Hyperphosphorylation and Aggregation of Tau in Laforin-deficient Mice, an Animal Model for Lafora Disease*
Lafora progressive myoclonus epilepsy (Lafora disease; LD) is caused by mutations in the EPM2A gene encoding a dual specificity protein phosphatase named laforin. Our analyses of Epm2a gene knock-out mice, which developed most of the symptoms of LD, reveal the presence of hyperphosphorylated Tau protein (Ser 396 and Ser 202) as neurofibrillary tangles (NFTs) in the brain. Intriguingly, NFTs were also observed in the skeletal muscle tissues of the knock-out mice. The hyperphosphorylation of Tau was associated with increased levels of the active form of GSK3β. The observations on Tau protein were replicated in cell lines using laforin overexpression and knockdown approaches. We also show here that laforin and Tau proteins physically interact and that the interaction is limited to the phosphatase domain of laforin. Finally, our in vitro and in vivo assays demonstrate that laforin dephosphorylates Tau; laforin is therefore a novel Tau phosphatase. Taken together, our study suggests that laforin is one of the critical regulators of Tau protein, that NFTs could underlie some of the symptoms seen in LD, and that laforin may contribute to NFT formation in Alzheimer disease and other tauopathies.
Lafora disease (LD) is an autosomal recessive and fatal form of progressive myoclonus epilepsy characterized by the presence of Lafora polyglucosan bodies in the affected tissues (1). The symptoms of LD include stimulus-sensitive epilepsy, dementia, ataxia, and rapid neurologic deterioration (1,2). LD is caused by mutations in the EPM2A gene encoding laforin, a dual specificity protein phosphatase, or in the NHLRC1 gene encoding malin, an E3 ubiquitin ligase (3-7). Both laforin and malin are ubiquitously expressed (3,5), are associated with the endoplasmic reticulum (4,7), form aggresomes upon proteasomal blockade (7), and clear misfolded proteins through the ubiquitin-proteasome system (8). Laforin has two functional domains: a phosphatase domain (dual specificity phosphatase domain; DSPD) and a carbohydrate binding domain (CBD) (9). The CBD targets laforin to glycogen particles and to Lafora bodies (9,10), and the DSPD of laforin dephosphorylates carbohydrate moieties (11). Recent studies have further shown that laforin and malin together regulate the cellular levels of PTG (protein targeting to glycogen), an adaptor protein, and that the loss of either malin or laforin results in increased levels of PTG that eventually lead to excessive glycogen deposition (12-14). Although this model explains the genesis of Lafora bodies, the molecular etiology of LD is yet to be understood. For example, unlike in the cell line study (12), the presence of Lafora bodies does not lead to neuronal cell death in the two murine models of LD (10,15), and no difference in the level of PTG was seen in laforin-deficient mice (16). However, widespread degeneration of neurons was seen in laforin-deficient mice even in the absence of Lafora bodies, suggesting that the polyglucosan bodies may not play a primary role in the epileptogenesis (15). The laforin dominant-negative transgenic mouse line also developed Lafora bodies but had no signs of neurodegeneration or epileptic seizures (10). Thus, the neurodegenerative changes are likely to underlie the etiology of some of the LD symptoms (1). The mouse model developed by knock-out of the Epm2a gene exhibited a majority of the symptoms known in LD, including ataxia, spontaneous myoclonic seizures, EEG epileptiform activity, and impaired behavioral responses (15). The knock-out animals showed a number of degenerative changes that include swelling and/or loss of morphological features of mitochondria, endoplasmic reticulum, Golgi apparatus, and the neuronal processes (15). Preliminary histochemical investigations have also suggested the possible presence of neurofibrillary tangles (NFTs) in the knock-out mice (17). In this study, we have characterized the biochemical properties of Tau protein in this animal model of LD and identified laforin as an interacting partner of Tau. Our study identifies laforin as one of the critical regulators of Tau protein and suggests that Tau pathology might underlie some of the symptoms seen in LD.
EXPERIMENTAL PROCEDURES
Mice and Tissue Harvesting-The characterization of laforin-deficient mice has been described previously (15). The animals were maintained at the RIKEN Brain Science Institute animal facilities according to the Institute guidelines for the treatment of experimental animals. Animals in the 4-, 6-, or 10-month age groups were sacrificed by cervical dislocation, and selected tissues were dissected and fixed in appropriate fixatives or quickly frozen in liquid nitrogen and stored at −80 °C until further analysis.
Tissue Extraction and Subcellular Fractionation-Brain and muscle tissues were homogenized in Tris-buffered saline containing protease and phosphatase inhibitors and used for immunoblotting analysis. The Sarkosyl-soluble and -insoluble fractions of NFTs were extracted as described (18).
Antibodies-The following monoclonal antibodies, obtained as gifts from Dr. Peter Davies, were used for detecting the Tau protein: CP13 for phospho-Ser 202 Tau, PHF1 for phospho-Ser 396 Tau, and TG5 for all forms of Tau. In addition, antibodies from Innogenetics (antibody AT8) and GenScript for the detection of phospho-Ser 202 and an antibody from Epitomics (antibody E178) for the detection of phospho-Ser 396 were also used. Antibodies for GSK3β, phospho-Ser 9 GSK3β, protein kinase B (AKT), and Ser 473 phospho-AKT were purchased from Cell Signaling Technology. Antibodies for protein phosphatase 2A (PP2A), Tyr 307 phospho-PP2A, cyclin-dependent kinase 5 (CDK5), Ser 159 phospho-CDK5, protein kinase A (PKA), Ser 96 phospho-PKA, and PP1 were purchased from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA). Anti-GFP and anti-Myc tag antibodies were purchased from Roche Applied Science, and anti-γ-tubulin, anti-FLAG, and anti-V5 antibodies were from Sigma. Anti-ubiquitin antibody was purchased from Dako, and the secondary antibodies were obtained from Jackson ImmunoResearch. Anti-laforin antibody was raised in rabbits using a synthetic peptide corresponding to amino acid residues 85-100 of the murine laforin sequence.
Immunohistochemical and Histopathological Analyses-Immunohistochemical analysis was done on formalin-fixed, paraffin-embedded sections that were reacted with the appropriate antibodies, as described previously (8,15). The sections were visualized for light microscopy using diaminobenzidine and an avidin-biotin complex kit (Vectastain ABC Elite; Vector Laboratories). For immunofluorescence staining, sections were processed with appropriate secondary antibodies conjugated with rhodamine or fluorescein isothiocyanate and visualized using an epifluorescence microscope, as described (8,15). Bielschowsky's silver staining was done on paraffin-embedded brain sections as described previously (19).
Ultrastructural Analysis-For electron microscopy studies, the Sarkosyl-insoluble materials, isolated from the laforin-deficient mice, were mildly sonicated and dispersed in phosphate-buffered saline. For negative staining, the samples were first adsorbed onto glow-discharged supporting membranes on 300-mesh grids, then treated with 2% uranyl acetate, dried, and observed with an FEI Tecnai 20 U Twin electron microscope. For immunogold labeling, the samples were prefixed by floating the grids on drops of 4% paraformaldehyde in 0.1 M phosphate buffer for 5 min. After washing, the grids were incubated with primary antibody followed by 10-nm colloidal gold-conjugated secondary antibody, processed for negative staining with 2% sodium phosphotungstic acid, and observed as described (20).
Expression Constructs-The expression vectors containing Myc-or GFP-tagged wild-type or mutant forms of laforin were described previously (7,8). Expression constructs for the FLAG-tagged laforin were generated by cloning the coding regions of the EPM2A gene into the pcDNA expression vector (Invitrogen). The short hairpin RNA knockdown constructs for the Epm2a gene were purchased from Open Biosystems and validated in one of our recent studies (8). The expression constructs for V5-tagged Tau and its mutant form were generously provided by Dr. Michael Hutton.
Cell Culture, Transfection, and Pull-down Assays-COS-7 or Neuro2A cells were grown in Dulbecco's modified Eagle's medium (Sigma) supplemented with 10% (v/v) fetal calf serum, 100 units/ml penicillin, and 100 µg/ml streptomycin. All cells were grown at 37 °C in 5% CO₂. Transfections were performed using Lipofectamine 2000 transfection reagent (Invitrogen) according to the manufacturer's protocol. Neuro2A cells were differentiated into neurons by culturing them in 1% fetal bovine serum as described (21). To establish the physical interaction between laforin and Tau proteins, we used an expression construct that codes for polyhistidine-tagged Tau (22). Lysates of cells that had expressed His-tagged Tau with the desired protein were incubated with Ni²⁺-affinity resin (Sigma) for 2 h at 4 °C and processed for pull-down assays as recommended by the manufacturer. Pulled-down products were detected by immunoblotting using specific antibodies.
In Vitro Dephosphorylation Assay-The histidine-tagged Tau protein, transiently overexpressed in COS-7 cells, was hyperphosphorylated by treating the cells with wortmannin and affinity-purified using nickel resins. Similarly, the His-tagged laforin or its mutant Q293L was expressed and purified as described (23). The nickel resin-bound Tau was mixed with wild-type laforin or its mutant in the phosphatase assay buffer (50 mM HEPES, pH 6, 50 mM NaCl, 5 mM EDTA, 50 mM β-mercaptoethanol) and incubated for 2 h at 37 °C. A control reaction was performed in parallel, wherein the Tau protein was incubated with nickel resins treated with cell lysates that did not express His-tagged laforin. The reaction products were finally mixed with SDS sample buffer, boiled, and analyzed by immunoblotting.
Immunoprecipitation-Tissue or cell lysates were preincubated with protein G-Sepharose (Bangalore Genei, India) for 2 h at 4 °C and then incubated with anti-laforin or anti-GSK3β antibody (as indicated) for 1 h at 4 °C. After incubation, protein G-Sepharose was used for precipitation. The beads were washed with lysis buffer four times and then eluted with SDS sample buffer for immunoblot analysis as described (24).
GSK3β Activity Assay-GSK3β activity was measured as described previously (25) after immunoprecipitation of GSK3β from 100 µg of protein. Immobilized immune complexes were washed twice with lysis buffer and twice with kinase reaction buffer and incubated with phospho-glycogen synthase peptide-2 substrate (Upstate Biotechnology) and [γ-32P]ATP for 30 min at 30 °C. After this incubation, an aliquot of each sample was spotted onto a phosphocellulose disc (Whatman 31ET CHR filter paper), air-dried, and washed three times in 0.75% phosphoric acid and once with acetone. Radioactivity on the phosphocellulose disc was counted in a β-counter (PerkinElmer Life Sciences).
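For readers who wish to reproduce the arithmetic behind such filter-binding readouts, the relative activity reduces to background subtraction and normalization of the scintillation counts. The following Python sketch illustrates that calculation; all CPM values, group names, and the background figure are hypothetical placeholders, not data from this study.

```python
# Sketch of a relative GSK3beta activity calculation from filter-bound
# [gamma-32P] counts. All CPM values, sample names and the background
# figure are hypothetical placeholders, not data from this study.

cpm = {
    "WT_brain": [4120, 3980, 4210, 4055],   # hypothetical quadruplicates
    "KO_brain": [7890, 8120, 7750, 8010],
}
background_cpm = 310  # hypothetical no-substrate control

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# Background-subtract, then express each sample relative to the WT mean.
corrected = {k: [c - background_cpm for c in v] for k, v in cpm.items()}
wt_mean = mean(corrected["WT_brain"])

for name, vals in corrected.items():
    rel = [v / wt_mean for v in vals]
    print(f"{name}: relative GSK3beta activity "
          f"{mean(rel):.2f} +/- {sd(rel):.2f} (n={len(vals)})")
```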
Immunoblotting Analysis-Protein samples were resolved by 10% SDS-PAGE and transferred onto nitrocellulose membranes (MDI, India) as described previously (7,8). Signal intensities of the immunoblots were quantitated using NIH Image software (ImageJ; National Institutes of Health).
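The quantitation step itself is simple image arithmetic: integrate pixel intensity over a band region, subtract a local background estimate, and normalize to a loading control. The Python sketch below illustrates this under stated assumptions; the synthetic blot, the ROI coordinates, and the choice of γ-tubulin as the control are illustrative only.

```python
import numpy as np

# Band densitometry in the spirit of the ImageJ quantitation step: integrate
# pixel intensity over a band ROI, subtract a border-based background
# estimate, and normalize the signal of interest to a loading control.
# The synthetic blot and ROI coordinates below are hypothetical placeholders.

def band_intensity(img, rows, cols):
    """Integrated ROI intensity minus a border-based background estimate."""
    roi = img[rows, cols].astype(float)
    border = np.concatenate([roi[0, :], roi[-1, :], roi[:, 0], roi[:, -1]])
    return float(roi.sum() - border.mean() * roi.size)

# Build a hypothetical blot: uniform background with two darker "bands"
# (stored inverted, so band pixels carry larger values).
blot = np.full((200, 400), 10.0)
blot[45:55, 40:80] += 120.0   # phospho-Tau band (hypothetical)
blot[145:155, 40:80] += 90.0  # gamma-tubulin loading control (hypothetical)

phospho_tau = band_intensity(blot, slice(40, 60), slice(30, 90))
tubulin = band_intensity(blot, slice(140, 160), slice(30, 90))
print(f"normalized phospho-Tau signal: {phospho_tau / tubulin:.2f}")
```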
RESULTS
The characterization of laforin-deficient mice was reported in one of our previous publications (15). The present investigation was carried out on the C57BL/6 isogenic line for the Epm2a gene knock-out, derived by back-crossing the F1 heterozygous mutants with C57BL/6 animals through 11 generations, and the animals were genotyped as described (15).
Neurofibrillary Tangles Observed in Laforin-deficient Mice-Our investigations on the neuropathological changes in the brain sections of the 10-month-old laforin-deficient mice suggested the presence of neurofibrillary tangles (NFTs), as revealed by Bielschowsky's silver staining (Fig. 1, A-C). This was subsequently confirmed by immunohistochemical staining with antibodies E178 (26) and AT8 (27,28), which specifically recognize Tau protein phosphorylated at the Ser 396 or Ser 202 residue, respectively (Fig. 1, D-I). Numerous neurons that were positive for the silver stain or the phospho-Tau antibodies were seen primarily in the hippocampus, thalamus, cerebral cortex, cerebellum, and brain stem of the laforin-deficient mice but not in the corresponding regions of the wild-type littermates (Fig. 1). Identical observations were made with antibodies CP13 and PHF1 as well (supplemental Fig. S1, A-D). The NFTs in laforin-deficient mice also stained intensely with ubiquitin antibody (supplemental Fig. S1, E-G). NFTs were not observed in the 2-, 4-, or 6-month-old knock-out mice analyzed.
In addition to the brain, Tau protein is known to be expressed in muscle tissues (29,30). We therefore checked for hyperphosphorylated Tau protein in the muscle tissues of the 10-month-old laforin-deficient mice. Phospho-Tau-specific antibodies identified immunoreactive cytoplasmic inclusions in the muscle sections from the knock-out mice but not the wild-type littermates (Fig. 1, J-M). Such inclusions were not seen in the muscle sections of 4- and 6-month-old knock-out mice.
Biochemical and Ultrastructural Characterization of Tau in Laforin-deficient Mice-Consistent with the immunohistochemical observations, immunoblot analysis of Tau protein from the 10-month-old animals showed a significant increase in the phosphorylation levels at the Ser 202 and Ser 396 positions, both in muscle and brain tissues of the laforin-deficient mice, as compared with the wild-type littermates (Fig. 2, A and B, and supplemental Fig. S2A). This difference, however, was not obvious in the 4-month-old mice (Fig. 2A). The phosphorylation levels of Tau were nearly the same in wild-type and heterozygous animals of the 10-month age group (supplemental Fig. S2C). Because Tau is known to form insoluble aggregates upon hyperphosphorylation (31,32), we further assessed the amount of Tau in the Sarkosyl-insoluble fractions derived from the brain and muscle tissues of the 10-month-old animals. Large amounts of insoluble, phosphorylated forms of Tau were recovered from the brain and muscle tissue lysates of the laforin-deficient mice as compared with lysates of wild-type littermates (Fig. 2C). The Sarkosyl-insoluble material recovered from the laforin-deficient mice was further investigated with transmission electron microscopy. The NFTs observed in the Sarkosyl-insoluble fraction appeared to be straight filaments of about 10-20 nm in diameter (Fig. 2D). Labeling with antibodies against phosphorylated Tau (Ser 396) revealed reasonably abundant Tau-containing filaments in the preparation (Fig. 2, E and F). Such filaments were not seen in the preparations obtained from the age-matched wild-type littermates (data not shown), and the gold particles were not seen when the primary antibody was omitted during immunodetection of the samples from the knock-out mice (data not shown). Taken together, the biochemical and ultrastructural analyses strongly suggest the presence of NFT-like Tau aggregates in the brain and muscle tissue of the laforin-deficient mice.
Changes in the Phosphorylation Status of Tau Kinases and Tau Phosphatases in Laforin-deficient Mice-Having established the difference in the phosphorylation levels of Tau protein in laforin-deficient mice, we next explored whether loss of laforin leads to changes in the phospho forms of key kinases and phosphatases that are known to regulate Tau. For this analysis, we selected six kinases and two phosphatases (see Fig. 3). Activation of GSK3β is known to phosphorylate Tau protein (33,34). The active and inactive forms of GSK3β were quantitated by examining the phosphorylation status of the Tyr 216 and Ser 9 residues using antibodies specific to these two phospho forms (35,36) and also by enzymatic assays using a peptide substrate (Fig. 3, A and B). Although the total level of GSK3β was comparable among the wild-type and laforin-deficient animals, the levels of the inactive form of GSK3β (phospho-Ser 9) were found to be significantly lower in the muscle and brain tissues of the 10-month-old knock-out mice (Fig. 3, A and D, and supplemental Fig. S2B). We therefore measured GSK3β activity by 32P labeling in an in vitro assay and found that GSK3β from laforin-deficient mice indeed shows increased activity as compared with that from age-matched wild-type mice (Fig. 3B). No difference in the phosphorylation level of the Tyr 216 residue was observed in the analyzed tissues of the two age groups (Fig. 3A).
Using a similar approach, we next examined how a few other regulators (kinases/phosphatases) might contribute to hyperphosphorylation of Tau protein in laforin-deficient mice. As shown in Fig. 3C, the levels of CDK5 and its active form (CDK5-P), PKA and its active form (PKA-P), protein kinase B (AKT) and its active form (AKT-P), PP2A and its phospho form (PP2A-P), and the total levels of PP1 were found to be similar between the wild-type and knock-out littermates of the 10-month age group. Taken together, these results suggest that the contributions of these enzymes to Tau phosphorylation were not affected by the loss of laforin.
Laforin Physically Interacts with and Dephosphorylates the Tau Protein-Since the loss of laforin led to the hyperphosphorylation of Tau, we next explored the possibility that laforin, being a protein phosphatase, directly interacts with and dephosphorylates Tau. For this, polyhistidine-tagged Tau was transiently coexpressed with green fluorescent protein (GFP)-tagged laforin or GFP in COS-7 cells, and lysates were analyzed by nickel affinity bead pull-down assays. As can be seen in Fig. 4, Tau was able to pull down GFP-laforin but not GFP, thus establishing a specific physical interaction between laforin and Tau proteins. We next determined the domain of laforin that interacts with Tau. For this, we created constructs that code for either the CBD or the DSPD of the laforin protein with the FLAG tag at the amino terminus (see Fig. 4C). His-tagged Tau was coexpressed with FLAG-tagged full-length laforin or its truncated forms (CBD or DSPD) in COS-7 cells and processed for nickel affinity pull-down assays. As shown in Fig. 4B, Tau was able to pull down the full-length laforin and the DSPD of laforin, but the truncated peptide carrying the CBD was not detected in the pulled-down products, suggesting that laforin interacts with Tau through its phosphatase domain. We also checked the interaction between endogenous Tau and laforin using a coimmunoprecipitation approach. For this, we used brain tissue lysates from 10-month-old animals, immunoprecipitated laforin using an anti-laforin antibody, and checked for the presence of Tau protein in the precipitated products. As shown in Fig. 4D, Tau was detected in the precipitated product from the wild-type tissue and not in the tissues from the laforin-deficient mice, confirming the interaction between laforin and Tau proteins and the specificity of the assay employed.
Having confirmed a direct physical interaction between laforin and Tau proteins, we next examined whether phosphorylated Tau is a substrate for laforin. For this, wild-type Tau was expressed either alone or with laforin or an empty vector (control) in COS-7 cells and treated with wortmannin, a known inducer of Tau hyperphosphorylation (37). Similarly, a Tau missense mutant (P301L), which is known to become hyperphosphorylated when overexpressed (31), was expressed either alone or with laforin. As shown in Fig. 5A, coexpression of laforin resulted in a significant reduction in the cellular levels of the Ser 396 phospho form of both wild-type and mutant Tau. Similarly, knockdown of laforin in a differentiated neuroblastoma cell line (Neuro2A) led to increased phosphorylation at the Ser 396 residue of endogenous Tau (Fig. 5B). To finally confirm that Tau is indeed a direct substrate of laforin, hyperphosphorylated Tau was purified and incubated with the wild-type or mutant form of laforin, and the phosphorylation status at the Ser 396 residue of Tau was evaluated by immunoblotting. Incubation of wild-type laforin with hyperphosphorylated Tau showed a significant reduction in the phosphorylation level at the Ser 396 residue as compared with reactions with the mutant laforin or the mock control (Fig. 5, C and D). These in vitro studies, in addition to replicating the findings in the laforin-deficient mice, establish that laforin indeed dephosphorylates the Ser 396 residue of Tau and is thus a Tau phosphatase. Knockdown of laforin or its overexpression did not alter the phosphorylation status of GSK3β at the Ser 9 position in the Neuro2A cell line (supplemental Fig. S2, D and E), suggesting that the observed difference in the phosphorylation status of GSK3β in the laforin-deficient mice could be a secondary effect due to the physiological changes associated with loss of laforin in mice.

FIGURE 2 (legend fragment; panels C-E). C, the detergent-insoluble Tau from brain and muscle tissues of 10-month-old animals was immunoblotted with antibodies raised against distinct forms of Tau (CP13, PHF1, and TG5), as indicated. D, electron microscopic analysis of Sarkosyl-insoluble NFTs isolated from the 10-month-old laforin-deficient mice, as revealed by negative staining with 2% uranyl acetate. Most NFTs appeared to be straight filaments. E, purified NFTs were immunolabeled with an antibody against Ser 396 phospho-Tau, followed by negative staining with phosphotungstic acid. The phospho-Tau antibody was detected with a 10-nm gold particle-conjugated secondary antibody. Scale bar, 50 nm.
FIGURE 3. Changes in kinases and phosphatases that regulate Tau phosphorylation.
A, lysates of brain and muscle tissues of 4-month-old (4 MO) or 10-month-old (10 MO) Epm2a knock-out (KO) or wild-type (WT) littermates were analyzed by immunoblotting with antibodies against the Ser 9 phospho, Tyr 216 phospho, or total form of GSK3β, as indicated. B, the GSK3β activity assay was done on the brain and muscle tissue of the 10-month-old Epm2a knock-out (KO) or wild-type (WT) littermates. The bar diagram shows the difference in the relative activity of GSK3β, as indicated. Each bar represents average values ± S.D. (n = 4; **, p < 0.005). C, similarly, lysates from the brain and muscle tissues of the 10-month-old animals were tested with the following Tau kinase and phosphatase antibodies: Ser 473 phospho-AKT (AKT-P), total AKT, Ser 159 phospho-CDK5 (CDK5-P), total CDK5, Ser 96 phospho-PKA (PKA-P), total PKA, Tyr 307 phospho-PP2A (PP2A-P), total PP2A, and total PP1. D, bar diagram showing the difference in the signal intensity of bands detected by the antibody specific to phospho-GSK3β in the brain and muscle tissues of the 10-month-old animals. Each bar represents average values ± S.D. (n = 4; **, p < 0.005).
FIGURE 4. Laforin physically interacts with Tau protein.
A, V5/His-tagged Tau was coexpressed with GFP-laforin or with GFP in COS-7 cells and processed for the pull-down assay using the nickel affinity resin. The pulled-down products (PD) and whole cell lysates (WCL) were immunoblotted (IB) and probed with anti-GFP and anti-V5 antibodies. B, the phosphatase domain of laforin interacts with Tau protein. V5/His-tagged Tau protein was coexpressed in COS-7 cells with FLAG-tagged laforin, the CBD of laforin, or the DSPD and processed for the pull-down assay using the nickel affinity resin. The pulled-down products and whole cell lysates were immunoblotted and probed with anti-FLAG and anti-V5 antibodies, as indicated. COS-7 cells that expressed FLAG-laforin only were processed for the pull-down assay as a control. As expected, anti-FLAG antibody did not detect any peptide in the pull-down product of this assay (last lane). C, schematic showing the domain organization of the laforin protein and its truncated forms used for the pull-down assays. The amino-terminal triangles represent the FLAG epitope. D, co-immunoprecipitation analysis demonstrating the interaction between endogenous laforin and Tau proteins. Tissue lysates from 10-month-old brains of wild-type (WT) or Epm2a knock-out (KO) mice were processed for immunoprecipitation with a rabbit polyclonal anti-laforin antibody, and the immunoprecipitated products (IP) were immunoblotted (IB) with an anti-Tau antibody raised in mice. Whole tissue lysates (input) were used as controls. Note the presence of an ~66-kDa Tau band (identified with an arrow) in the immunoprecipitated products from the wild-type but not the knock-out tissue lysates.
DISCUSSION
In this report, we demonstrate that the loss of laforin leads to the accumulation of hyperphosphorylated Tau as NFTs in the LD mouse model. The observed NFTs were ubiquitinated and detergent-insoluble, as known in Alzheimer disease (18,38,39), and such forms were abundant in the 10-month-old knock-out mice, suggesting a progressive deterioration of brain function. The regions that were positive for the NFTs correlated strongly with the sites of laforin expression (40). NFTs were not seen in the heterozygous littermates, suggesting that complete loss of laforin is required for NFT formation in LD. Curiously, Lafora bodies and neuronal cell death, the other two neuropathological changes observed in the knock-out mice, predate the NFT formation (15-17). Thus, the progressive onset of the LD-like symptoms observed in laforin-deficient mice seems to correlate well with the age-dependent deposition of NFTs in the brain (15). NFTs have also been reported in LD patients (41); thus, abnormal regulation of Tau protein appears to be one of the common neuropathological changes associated with LD in humans and mice. Intriguingly, straight Tau filaments have also been described in Alzheimer and Pick diseases (42,43). Thus, the NFTs are likely to underlie a subset of LD symptoms, such as dementia, which is also known in tauopathies (2).
One of the significant findings of the present study was the observation of NFTs in the muscle tissues of the laforin-deficient mice. NFTs in skeletal muscle are known in several forms of myopathies that are characterized by progressive muscle weakness and atrophy (44,45). Although muscular atrophy is known in human LD (2), inclusions other than Lafora bodies have not yet been reported in LD muscle. Pending such findings, our present observations, together with our earlier report on muscular weakness in this LD mouse model (15), suggest that the Tau-positive inclusions could underlie some of the deficits in muscle function seen in LD.
In determining which kinase or phosphatase is involved in the hyperphosphorylation of Tau in laforin-deficient mice, we show here that the level of the Ser 9-phospho (inactive) form of GSK3β was lower in the 10-month-old knock-out mice. However, no changes in protein level or phosphorylation level were observed for several other players that are known to regulate the phosphorylation of Tau protein. Thus, overactive GSK3β could be one of the triggers for the formation of NFTs in LD mice. Our observations on GSK3β phosphorylation in the laforin-deficient mice and in cellular models contradict the report of Wang et al. (46) that the Ser 9 residue of GSK3β is dephosphorylated by the laforin phosphatase and support an earlier report that GSK3β is not a substrate of laforin (11). Since there is a reduction in the phospho form of GSK3β in the knock-out mice, laforin probably acts upstream of this key enzyme. AKT, PKA, and PP1 are a few of the known regulators of the Ser 9 residue of GSK3β (36,48,49), and none of the three showed a significant change in phosphorylation level in the laforin-deficient mice. It would therefore be of interest to look for other regulators of GSK3β in the laforin-deficient mice.
Because laforin is a dual specificity phosphatase (4), we also tested whether Tau could be a substrate for laforin. We demonstrate here that laforin physically interacts with Tau and that the interaction appears to be limited to the phosphatase domain of laforin. Consistent with the findings in the laforin-deficient mice, we show here in cellular models that coexpression of laforin with Tau decreases phospho-Tau levels and that knockdown of laforin leads to an increase in the phospho form of Tau. Direct evidence for laforin being a Tau phosphatase came from the in vitro dephosphorylation assay. Thus, laforin might dephosphorylate Tau, at least at the Ser 396 residue, under appropriate physiological signals. Mutations resulting in the loss of laforin, its phosphatase activity, or its interaction with Tau would lead to hyperphosphorylation of Tau and to NFTs, as seen in the LD mice. It would now be of much interest to check the level and/or activity of laforin in tauopathies such as Alzheimer disease.
Tau phosphorylation reflects a critical balance between Tau kinase and Tau phosphatase activities. We show here that loss of laforin is associated with an increase in the level of the active form of GSK3β in the LD model. Thus, NFTs in LD may involve both the activation of a Tau kinase (GSK3β) and the inactivation of a Tau phosphatase (laforin). GSK3β is known to act on Tau either individually or as a complex in the Alzheimer disease condition (35,50). The activation of GSK3β in LD draws striking parallels with Alzheimer disease. Another element of Tau pathology in LD could be the Lafora polyglucosan bodies. Alterations in glucose metabolism are associated with abnormal Tau phosphorylation (47,51). Lafora bodies are thought to result from abnormal glycogen metabolic pathways (1,11,15,16); therefore, a role for these inclusions in the genesis of NFTs cannot be ruled out.
In summary, we demonstrate here that loss of laforin leads to Tau hyperphosphorylation and NFTs, that laforin could be a critical regulator of Tau phosphorylation, and that the abnormal hyperphosphorylation of Tau might underlie some of the symptoms in LD. This study thus provides novel insight into the molecular basis of LD and has important implications for the formation of NFTs in tauopathies.

FIGURE 5. Laforin dephosphorylates Tau protein at the Ser 396 residue. A, wild-type Tau or its mutant (P301L) was coexpressed with a construct for laforin or an empty vector in COS-7 cells, and the differences were analyzed by immunoblotting with PHF1 antibody, as indicated. COS-7 cells expressing wild-type Tau were treated with wortmannin to induce hyperphosphorylation of Tau. B, Neuro2A cells were transfected with empty vector (vector; lanes 1 and 2) or the short hairpin RNA interference construct to silence the Epm2a gene (RNAi-Epm2a; lanes 3 and 4), differentiated into neurons, and analyzed for changes in the phospho form of endogenous Tau by immunoblotting with PHF1. The efficiency of Epm2a knockdown was previously established by immunoblotting with anti-laforin antibody (8). C, Tau, laforin, and the laforin mutant were overexpressed in COS-7 cells, affinity-purified, and used for the in vitro dephosphorylation assay as indicated. Resins incubated with lysates of empty vector-transfected (pcDNA) cells were used as a control. The reactions were arrested and immunoblotted with the indicated antibodies, as described under "Experimental Procedures." D, bar diagram showing the difference in the signal intensity of bands detected by PHF1 antibody in the in vitro dephosphorylation assays done with wild-type laforin, its mutant (Q293L), or the control resin (pcDNA). Each bar represents average values ± S.D. (n = 3; **, p < 0.005).
Interferon regulatory factor 9 is critical for neointima formation following vascular injury
Interferon regulatory factor 9 (IRF9) has various biological functions and regulates cell survival; however, its role in vascular biology has not been explored. Here we demonstrate a critical role for IRF9 in mediating neointima formation following vascular injury. Notably, in mice, IRF9 ablation inhibits the proliferation and migration of vascular smooth muscle cells (VSMCs) and attenuates intimal thickening in response to injury, whereas IRF9 gain-of-function promotes VSMC proliferation and migration, which aggravates arterial narrowing. Mechanistically, we show that the transcription of the neointima formation modulator SIRT1 is directly inhibited by IRF9. Importantly, genetic manipulation of SIRT1 in smooth muscle cells or pharmacological modulation of SIRT1 activity largely reverses the neointima-forming effect of IRF9. Together, our findings suggest that IRF9 is a vascular injury-response molecule that promotes VSMC proliferation and implicate a hitherto unrecognized ‘IRF9–SIRT1 axis’ in vasculoproliferative pathology modulation.
Arterial intimal hyperplasia is a prevalent and severe pathophysiological process that contributes to the progression of atherosclerosis, in-stent restenosis and vein bypass graft failure 1 . In response to injury and other stimuli, vascular smooth muscle cells (VSMCs) proliferate, migrate and secrete extracellular matrix, thus forming a neointima 2 . During neointima formation, growth factors, such as transforming growth factor-β and platelet-derived growth factor (PDGF), are produced locally 3 , and vasculostabilizing factors, such as SIRT1, are attenuated 4 . Consequently, cell cycle genes, including proliferating cell nuclear antigen (PCNA) and Cyclin D1, are upregulated in VSMCs to facilitate cell growth 5 , and matrix metalloproteinases (MMPs) are highly expressed to facilitate VSMC migration from the media to the intima 6 . As a result, the blood vessels narrow or become occluded. Treatments aimed at suppressing neointima formation are extremely limited, however. Thus, further investigation into the pathophysiology and molecular mechanisms underlying neointima formation is urgently required.
Interferon (IFN) regulatory factors (IRFs) constitute a family of transcription factors with nine members (IRF1-IRF9) in mammals 7 . IRFs are well characterized as regulators of immunity and cell survival 8-10 . Accordingly, the ubiquitously expressed IRF9 mediates the effects of IFNs 11 . In response to IFN-α/β, IRF9 induces p53-dependent apoptosis, which indicates an antiproliferative role for IRF9 (ref. 12). Moreover, IRF9 has also been implicated in antitumour drug resistance and the promotion of cell growth 13 . However, the role of IRF9 in VSMC proliferation remains enigmatic. In addition to functions in immunity and cell fate decisions, our group recently revealed crucial roles for IRFs in the development of metabolic and heart diseases 14-19 . In particular, IRF9 was implicated in the regulation of hepatic steatosis, insulin resistance, cardiac hypertrophy and heart failure 16,19 . Hence, an investigation of the role of IRF9 in vascular biology and remodelling would improve our understanding of the intricate mechanisms of IRF9 action and hopefully inspire new strategies for treating many vascular diseases.
In this study, we determine a role for IRF9 in neointima formation. More importantly, we uncover a role for IRF9 as a negative transcriptional regulator of the vascular protective factor SIRT1 in response to injury. Correspondingly, modulating SIRT1 expression or its deacetylase activity completely reverses the neointima-forming effect of IRF9. Thus, our findings strongly suggest the existence of a previously unrecognized 'IRF9-SIRT1 axis' during vasculoproliferative pathology modulation.
IRF9 increases in VSMCs during neointima formation.
To investigate the involvement of IRF9 in neointima formation, we first examined its expression in human femoral artery specimens with or without in-stent restenosis. Using immunofluorescence staining, we observed that IRF9 expression was markedly increased in the neointima of stenotic arteries compared with normal arteries (Fig. 1a). The increased IRF9 was primarily localized in the nuclei of neointimal VSMCs, which were identified by α-smooth muscle actin (α-SMA) staining (Fig. 1a). To examine IRF9 expression in endothelial cells, we co-stained human specimens for CD31, an endothelial marker, and IRF9. We observed that some endothelial cells were destroyed after injury. In the remaining endothelial cells, IRF9 expression was minimal, and the expression levels were not significantly different between normal and restenotic arteries (Supplementary Fig. 1a). Subsequently, we treated rat aortic VSMCs (RASMCs) with platelet-derived growth factor-BB (PDGF-BB) and examined IRF9 expression. Before PDGF-BB treatment, IRF9 was primarily localized in the cytoplasm, whereas after PDGF-BB treatment IRF9 was primarily localized in the nucleus (Supplementary Fig. 1b). Western blot analysis revealed that IRF9 expression also increased after PDGF-BB stimulation (Supplementary Fig. 1c). These results indicate that IRF9 is a stress-responsive factor that is elevated and activated by PDGF-BB and translocates into the nucleus in VSMCs. To investigate whether IRF9 expression can be induced by vascular injury in vivo, we performed carotid artery wire injury surgery in mice (Supplementary Fig. 1d). Using immunofluorescence staining, we observed that IRF9 was primarily located in VSMCs, which were identified by α-SMA expression (Fig. 1b). The optical density (OD) of IRF9 in VSMCs increased at 14 days after injury and mildly decreased at 28 days after injury (Fig. 1b). The OD of IRF9 in neointimal VSMCs was significantly higher compared with the medial VSMCs at 14 and 28 days after injury (Fig. 1b). Next, we co-stained IRF9 with CD31 to examine IRF9 expression in endothelial cells. Similar to the results in human specimens, we observed that minimal IRF9 was expressed in CD31+ cells in mouse arteries and that IRF9 expression in endothelial cells did not significantly differ between the sham-operated and wire injury-operated groups (Supplementary Fig. 1e). Immunofluorescence staining revealed that the expression of the proliferation markers PCNA and Cyclin D1 was upregulated, whereas expression of the differentiation marker α-SMA was reduced after wire injury (Supplementary Fig. 1f). Western blot results confirmed that the injury induced IRF9 expression (Fig. 1c). At 7 and 14 days after the injury, IRF9 expression was markedly increased compared with its level before injury (Fig. 1c). Twenty-eight days after the injury, IRF9 expression mildly declined as the acute phase elapsed (Fig. 1c). The above results indicate that IRF9 is upregulated and activated in VSMCs during neointima formation.
IRF9 deficiency suppresses neointima formation. IRF9 induction in VSMCs upon vascular injury suggests a possible regulatory role for IRF9 in neointima formation. Hence, we performed carotid artery wire injury surgery on IRF9 global knockout (IRF9-KO) mice and wild-type (WT) controls to investigate this hypothesis. The absence of IRF9 in the arteries of IRF9-KO mice was verified with immunofluorescence staining (Supplementary Fig. 2). The neointima in the IRF9-KO mice was significantly thinner compared with the WT mice (Fig. 2a). The intima-to-media (I/M) ratio in the IRF9-KO arteries was only approximately half the ratio observed in the WT controls at 14 and 28 days post injury (Fig. 2a). Immunofluorescence staining and western blot analyses revealed that the expression levels of the proliferative genes Cyclin D1 and PCNA were downregulated in IRF9-KO mice (Fig. 2b,c). The migration of media-resident VSMCs to the intima contributes to neointima formation 6 . The western blot results revealed that MMP9 expression was inhibited in IRF9-KO mice, which indicates constrained cell migration (Fig. 2c). To facilitate proliferation and migration in response to injury and other stimuli, contractile smooth muscle cells (SMCs) are transformed into synthetic SMCs in a process referred to as phenotypic switching 3 . Immunofluorescence staining and western blot analyses revealed that the expression of SMC-specific genes (α-SMA, SM22α and Smoothelin) increased and that osteopontin (OPN) expression decreased in the arteries of IRF9-KO mice, indicating that IRF9 deficiency attenuated VSMC phenotypic switching (Fig. 2d,e). Given that intima growth is inhibited in IRF9-KO mice, we next questioned whether IRF9 overexpression would facilitate neointima formation. To eliminate the possible influence of cell types other than VSMCs, we generated a strain of SMC-specific IRF9 transgenic (SMC-IRF9-TG) mice in the C57BL/6J background. The transgenic construct included mouse IRF9 cDNA controlled by an SMC-specific mouse minimal SM22α promoter (Supplementary Fig. 3a). Four SMC-IRF9-TG mouse lines were successfully generated; line 4 exhibited the highest (4.1-fold) increase in IRF9 expression (Supplementary Fig. 3b), which was comparable to the level in WT arteries at 14 days post injury. Therefore, this line was used in the subsequent phenotypic evaluations. To validate that the SMC-IRF9-TG line is functional and specific in vivo, we co-stained IRF9 with α-SMA and CD31, respectively, in the arteries. We observed that almost all of the IRF9 localized in α-SMA+ cells. Furthermore, IRF9 expression in α-SMA+ cells in the TG group was significantly increased compared with the non-transgenic (NTG) group (Fig. 3a). By contrast, only minimal IRF9 expression was observed in CD31+ cells, and these levels were not significantly different between the NTG and TG groups at baseline (Supplementary Fig. 3c). Thus, IRF9 overexpression in SMC-IRF9-TG mice is effective and SMC-specific. At 14 and 28 days after arterial injury, the area of the newly formed intima and the I/M ratio were increased in the SMC-IRF9-TG mice compared with the NTG mice (Fig. 3b). Similar to the results in line 4 TG mice, exacerbated neointima areas were also observed in the injured arteries of line 1 and line 3 SMC-IRF9-TG mice compared with NTG controls (Supplementary Fig. 3d).
Cyclin D1, PCNA and MMP9 expression levels were increased and SMC-specific gene (α-SMA, SM22α and Smoothelin) expression levels were decreased in the IRF9-overexpressing arteries, as determined by immunofluorescence staining and western blotting (Fig. 3c-f). Considering that IRF9 is induced in VSMCs upon vascular injury, IRF9 appears to be a genuine mediator of neointima formation.
IRF9 promotes VSMC proliferation and migration. We observed that IRF9 mediated neointima formation induced by vascular injury. Thus, to confirm these in vivo results, we isolated and cultured primary VSMCs from IRF9-KO and SMC-IRF9-TG mice and subsequently mimicked the injury response in vitro by treating these cells with PDGF-BB. We evaluated SMC proliferation by measuring 5′-bromo-2′-deoxyuridine (BrdU) incorporation. After PDGF-BB stimulation, IRF9-KO VSMCs incorporated less BrdU, whereas SMC-IRF9-TG cells incorporated more BrdU than their corresponding WT control cells (Fig. 4a), which indicates that IRF9 facilitates a proliferative response upon PDGF-BB stimulation. We then modulated IRF9 levels by knocking down or overexpressing IRF9 in primary RASMCs and subsequently examining BrdU incorporation. We observed that the amount of BrdU incorporation was positively related to IRF9 expression in VSMCs (Fig. 4b). Finally, we modulated IRF9 levels in human aortic SMCs (HASMCs) and examined BrdU incorporation. Consistent with the results in mouse primary SMCs and in RASMCs, IRF9 facilitated a proliferative response upon PDGF-BB stimulation in HASMCs (Supplementary Fig. 4a-e). Next, to evaluate SMC migration, we used a modified Boyden chamber migration assay. The VSMC migration assay revealed that after PDGF-BB (20 ng ml⁻¹) stimulation, fewer IRF9-KO VSMCs migrated through the membrane than WT VSMCs; however, IRF9 overexpression significantly increased PDGF-BB-induced migration (Fig. 4c). Furthermore, a gelatin zymography assay indicated that MMP9 activity in the IRF9-KO group was significantly reduced compared with the WT group at 28 days post injury, whereas MMP9 activity in the SMC-IRF9-TG group was notably increased compared with the NTG group, indicating that IRF9 facilitates SMC migration (Fig. 4d). Consistently, western blot analyses revealed that VSMC proliferation, migration and phenotypic switching upon PDGF-BB stimulation were suppressed by IRF9 knockout, whereas IRF9 overexpression enhanced this response (Supplementary Fig. 5a,b). Using both in vivo and in vitro experiments, we determined that IRF9 promotes VSMC proliferation, migration and subsequently neointima formation in a cell-autonomous manner.
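As a note on the readout, BrdU incorporation is typically summarized as a proliferation index, the fraction of BrdU-positive nuclei, compared across groups as fold change over a control. A minimal Python sketch with hypothetical counts follows; none of the numbers or group labels are data from this study.

```python
# Sketch of a BrdU-incorporation readout: the proliferation index is the
# fraction of BrdU-positive nuclei, compared across genotypes as fold
# change over the untreated wild-type control. All counts are hypothetical.
counts = {
    # group: (BrdU-positive nuclei, total nuclei), hypothetical
    "WT": (52, 500),
    "WT + PDGF-BB": (180, 510),
    "IRF9-KO + PDGF-BB": (95, 490),
    "IRF9-TG + PDGF-BB": (260, 505),
}

baseline = counts["WT"][0] / counts["WT"][1]
for group, (pos, total) in counts.items():
    index = pos / total
    print(f"{group:20s} proliferation index {index:.2%} "
          f"(fold over WT: {index / baseline:.1f}x)")
```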
IRF9 suppresses SIRT1 transcription. The evident neointima-forming effect of IRF9 prompted us to investigate its underlying mechanism. A genome-wide microarray screen and ingenuity pathway analysis revealed that the mRNA levels of proliferation- and migration-related genes, such as MMP9, Cyclin D1, c-Fos and c-Jun, were significantly increased in the carotid arteries of SMC-IRF9-TG mice compared with the NTG controls at 14 days post injury (Supplementary Table 1). Our previous study determined that SIRT1 inhibits AP-1-dependent Cyclin D1 and MMP9 transcription, thereby suppressing VSMC proliferation and migration and preventing neointima formation 4 . Interestingly, SIRT1 and SIRT1-dependent molecular targets were also markedly altered in the SMC-IRF9-TG mice compared with NTG mice in response to wire injury (Supplementary Table 1). More importantly, real-time PCR further confirmed these results, indicating that SIRT1 plays crucial roles in the neointima formation induced by IRF9 overexpression (Supplementary Fig. 6). In agreement with our previous report, SIRT1 expression was decreased at both the protein and mRNA levels in the VSMCs of WT mice following arterial injury 4 (Fig. 5a-c). Notably, the reduction in SIRT1 expression in VSMCs after injury was markedly reversed in IRF9-KO mice, whereas the SIRT1 reduction was enhanced in SMC-IRF9-TG mice (Fig. 5a-c). These findings indicate that IRF9 may inhibit SIRT1 expression. In view of this possibility, we introduced different doses of the IRF9-Myc plasmid into the mouse VSMC line MOVAS and subsequently examined SIRT1 promoter activity using luciferase reporter assays. We observed that as IRF9-Myc levels increased, SIRT1 promoter activity was suppressed to a greater extent (Supplementary Fig. 7a). We then treated IRF9-KO, SMC-IRF9-TG and WT primary mouse VSMCs with PDGF-BB and examined SIRT1 promoter activity. As expected, SIRT1 promoter activity decreased upon PDGF-BB stimulation in WT cells; this reduction in activity was blocked in IRF9-KO cells but was more prominent in SMC-IRF9-TG cells (Fig. 5d). These results indicate that SIRT1 expression is suppressed by IRF9.
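As an aside on the readout used above, promoter-reporter activity of this kind is conventionally computed as the promoter-driven firefly luciferase signal normalized to a co-transfected control reporter and expressed relative to a reference group. The sketch below illustrates that normalization; the use of a Renilla control and all readings are assumptions for illustration, not values from these experiments.

```python
# Sketch of a dual-reporter promoter activity calculation: firefly
# luciferase (SIRT1-promoter-driven) normalized to a co-transfected
# control reporter, then expressed relative to the vector-only group.
# The Renilla control and all readings are hypothetical assumptions.
readings = {
    # condition: (firefly RLU, renilla RLU), hypothetical triplicates
    "vector": [(9800, 510), (10150, 495), (9920, 520)],
    "vector + PDGF-BB": [(5200, 505), (4980, 515), (5430, 500)],
    "IRF9 + PDGF-BB": [(2100, 490), (2260, 512), (1990, 508)],
}

baseline = None
for condition, wells in readings.items():
    ratios = [f / r for f, r in wells]
    mean_ratio = sum(ratios) / len(ratios)
    if baseline is None:  # the first group defines 100% activity
        baseline = mean_ratio
    print(f"{condition:18s} relative SIRT1 promoter activity: "
          f"{mean_ratio / baseline:.2f}")
```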
Considering that IRF9 primarily acts as a transcription factor, we reasoned that IRF9 may modulate neointima formation through the direct suppression of SIRT1 transcription. To investigate this possibility, we performed chromatin immunoprecipitation (ChIP) of Myc-IRF9 in the mouse VSMC line MOVAS followed by quantitative PCR of the SIRT1 promoter (−2,500 to +200 bp around the transcription start site; Supplementary Fig. 7b). To increase the specificity of the ChIP analysis, sequences containing at least two IRF-stimulated response element (ISRE) repeats (5′-GAAA-3′) were considered as potential IRF9-binding sites. Using bioinformatics approaches, a series of putative ISRE-binding sites, designated P1-P5, were identified. We observed that IRF9 ChIPs were enriched for the P3 region but not for any of the other regions, indicating that P3 contains the primary site for IRF9 binding (Fig. 5e). To further validate the importance of this binding site, the 5′-GAAA-3′ sequence in the P3 region of the IRF9-binding site was mutated to generate a mutant murine SIRT1 promoter (Mu-mSIRT1-luc). As expected, IRF9 retained the ability to repress the activity of the wild-type promoter (WT-mSIRT1-luc) but not the mutant SIRT1 promoter, underscoring the importance and necessity of this binding site (Supplementary Fig. 7c).
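The candidate-site selection described above, flagging regions that contain at least two 5′-GAAA-3′ ISRE repeats, can be expressed as a simple motif scan. A minimal Python sketch follows; the promoter sequence, window size and function names are hypothetical and do not reproduce the actual pipeline used in this study.

```python
import re

# Sketch of the candidate-site selection described in the text: scan a
# promoter fragment (e.g., -2,500 to +200 bp around the SIRT1 TSS) and
# flag windows containing at least two ISRE repeats (5'-GAAA-3').
# The sequence and window size below are hypothetical placeholders.
def find_isre_clusters(seq: str, min_repeats: int = 2, window: int = 60):
    """Return (start, end, n_repeats) for windows with enough GAAA motifs.

    Overlapping clusters are reported individually; this is fine for a
    first-pass scan whose hits are then tested experimentally by ChIP.
    """
    hits = [m.start() for m in re.finditer("GAAA", seq.upper())]
    clusters = []
    for i, start in enumerate(hits):
        in_window = [h for h in hits[i:] if h - start <= window]
        if len(in_window) >= min_repeats:
            clusters.append((start, in_window[-1] + 4, len(in_window)))
    return clusters

promoter = "ATGAAACCTGAAATTGGCCAATCGAAAGTGAAACCTAGGGCTTAAGCGTACG"  # placeholder
for start, end, n in find_isre_clusters(promoter):
    # Coordinates are relative to the start of the scanned fragment.
    print(f"candidate ISRE cluster at {start}-{end} ({n} GAAA repeats)")
```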
To verify that SIRT1 is a genuine downstream target of IRF9, we modulated SIRT1 levels in the different primary mouse VSMCs and examined cell proliferation. As mentioned earlier, BrdU incorporation induced by PDGF-BB stimulation was mitigated in IRF9-KO VSMCs, whereas it was enhanced in SMC-IRF9-TG VSMCs (Supplementary Fig. 7d). The induction of BrdU incorporation was restored by shSIRT1-mediated SIRT1 knockdown in IRF9-KO VSMCs, whereas the induction was abrogated by SIRT1 overexpression in SMC-IRF9-TG VSMCs (Supplementary Fig. 7d). Taken together, these results indicate that IRF9 directly inhibits SIRT1 transcription to promote VSMC proliferation.
The neointima-forming effect of IRF9 is mediated by SIRT1. Given that SIRT1 is a direct downstream target of IRF9, we next sought to examine whether SIRT1 mediates the effect of IRF9 on neointima formation in vivo. To accomplish this goal, we crossed IRF9-KO mice with SMC-specific SIRT1-KO (SMC-SIRT1-KO) mice (generated by crossing Sirt1 flox/flox mice with SM22-Cre mice) to generate SIRT1/IRF9 double knockout (DKO) mice (Supplementary Fig. 8a,b). We performed carotid artery wire injury surgery on the SMC-SIRT1-KO, IRF9-KO and DKO mice as well as the WT controls. Twenty-eight days after injury, the extent of neointima formation was examined. SMC-SIRT1-KO significantly aggravated neointima formation, even when IRF9 was also ablated (Fig. 6a). Consistent with the alterations in intima area and I/M ratio, SMC-SIRT1-KO abolished the reduction in PCNA and Cyclin D1 expression and the preservation of α-SMA and SM22α expression in the IRF9-KO vessels (Fig. 6b). Therefore, SIRT1 deficiency abolished the suppressive effect of IRF9 deficiency on neointima formation.
Given that SIRT1 ablation overcomes the vasculoprotective effect of IRF9 deficiency, we questioned whether a gain of SIRT1 expression could rescue the neointima-forming effect of IRF9 overexpression. We therefore simultaneously overexpressed SIRT1 and IRF9 in VSMCs by crossing SMC-specific SIRT1-TG (SMC-SIRT1-TG) and SMC-IRF9-TG mice (Supplementary Fig. 8c). We observed that the intima areas in SIRT1/IRF9 double transgenic (DTG) mice were significantly smaller than those in the WT mice and comparable to those in the SMC-SIRT1-TG mice (Fig. 6c); thus, the neointima-promoting effect of IRF9 overexpression can be rescued by SIRT1 overexpression. Accordingly, the changes in PCNA, Cyclin D1, α-SMA and SM22α expression in the SMC-IRF9-TG arteries were also reversed by SIRT1 overexpression (Fig. 6d). On the basis of these findings, we conclude that SIRT1 reduction is required for IRF9-mediated neointima formation.
The effects of IRF9 are dependent on SIRT1 activity. We found that genetic manipulation of SIRT1 effectively reverses neointima formation mediated by IRF9. SIRT1 is a class III deacetylase, and its effects are dependent on its enzymatic activity; therefore, we examined SIRT1 deacetylase activity in the arteries in our injury model. A decline in SIRT1 activity was observed after injury (Fig. 7a). Consistent with SIRT1 expression levels, SIRT1 deacetylase activity in mouse arteries and primary mouse VSMCs was also inversely related to IRF9 expression under the stimulation of wire injury or PDGF-BB (Fig. 7b,c). SIRT1 overexpression abolished the increase in BrdU incorporation triggered by IRF9 overexpression; however, when a dominant-negative SIRT1 (H363Y), which lacks deacetylase activity, was overexpressed, the increase in BrdU incorporation was unaltered (Fig. 7d). This finding indicates that the suppression of SIRT1 deacetylase activity is crucial for IRF9-mediated VSMC proliferation.
Our recent work has demonstrated that SIRT1 acts as a modulator of neointima formation in response to vascular injury through the regulation of AP-1-dependent targets 4 . Therefore, we then examined the status of AP-1 (c-Jun) deacetylation in different groups following vascular injury as well as in cultured vascular cells. At 28 days after injury in WT mice, c-Jun was acetylated. In contrast, IRF9-KO mice displayed a significant decrease in the expression of acetyl-c-Jun compared with WT mice subjected to wire injury (Fig. 7e), an effect that was reversed by the VSMC-specific overexpression of IRF9 in vivo (Fig. 7e). Our in vitro results also confirmed these findings (Fig. 7f).
To further confirm the effects of IRF9 on AP-1 activity, we examined whether IRF9 regulates the transcriptional activity of AP-1. We performed luciferase assays using an AP-1-luc reporter in primary cultured VSMCs. The cells were infected with AdIRF9 or AdshIRF9 and co-infected with or without AdSIRT1 or AdshSIRT1; the cells were then treated with PDGF-BB. PDGF-BB markedly stimulated AP-1 reporter activity, which was inhibited by infection with AdshIRF9 and further enhanced by infection with AdIRF9 (Fig. 7g). To assess whether IRF9-mediated AP-1 activity is SIRT1-dependent, we infected primary cultured vascular smooth muscle cells with AdshIRF9, AdshSIRT1 or both AdshIRF9 and AdshSIRT1 (Fig. 7g). Co-infection with AdshSIRT1 largely abolished the protective effect of IRF9 inhibition (Fig. 7g). Conversely, the IRF9-mediated increase in AP-1 activity was reversed by AdSIRT1 (Fig. 7g). These data suggest that IRF9-mediated AP-1 activity is dependent on SIRT1 signalling.
As the transcription factor AP-1 is important in the regulation of Cyclin D1 and MMP9 transcription 4 , we performed a transient transfection analysis to determine whether IRF9 regulates Cyclin D1 and MMP9 transcription. Using two luciferase reporter vectors under the control of the Cyclin D1 (−903 to +202) and MMP9 (−711 to +19) promoters, we found that PDGF-BB significantly increased the activity of the Cyclin D1 and MMP9 promoters and that these increases were blocked by AdshIRF9 and promoted by AdIRF9 (Supplementary Fig. 9a,b). Importantly, co-infection with AdSIRT1 largely abolished the promoting effects of IRF9 on Cyclin D1 and MMP9 promoter activities (Supplementary Fig. 9b). Furthermore, point mutations or deletions of the AP-1 DNA-binding site in the Cyclin D1 and MMP9 promoters significantly impaired the ability of IRF9 to augment the PDGF-BB-induced activity of these promoters (Supplementary Fig. 9c,d).
In addition to the in vitro experiments, we examined whether the modulation of SIRT1 activity would counteract the effect of IRF9 on neointima formation. We performed carotid artery wire injury surgery on IRF9-KO and WT mice; specifically, the mice were intraperitoneally injected with the SIRT1-specific inhibitor EX527 every day for 14 days, and the injured arteries were then examined on day 14. EX527 treatment enlarged the intima area (Fig. 8a), promoted PCNA and Cyclin D1 induction, and decreased α-SMA and SM22α expression (Fig. 8b) after injury in both the WT and IRF9-KO mice. Similarly, we treated SMC-IRF9-TG and NTG mice with the SIRT1-specific activator SRT1720 and examined neointima formation. Regardless of the genotype, SRT1720 treatment significantly reversed the intima area enlargement (Fig. 8c) and the gene expression alterations (Fig. 8d) caused by IRF9 overexpression after VSMC injury. Collectively, the loss- and gain-of-function studies described above indicate that IRF9-mediated neointima formation in response to vascular injury is dependent on SIRT1 deacetylase activity.
Discussion
Neointima formation is a common pathophysiological process that occurs during several severe vascular pathologies, including atherosclerosis, in-stent restenosis and vein bypass graft failure 1 . The proliferation and migration of VSMCs upon stimulation by endogenous and exogenous insults are of prime importance in this process 21 . In the present study, we uncovered a previously unrecognized 'IRF9-SIRT1 axis' in VSMCs that mediates neointima formation following vascular injury. Our results showed that IRF9 was markedly induced in VSMCs by arterial injury in vivo or PDGF-BB in vitro. The induction of IRF9 subsequently suppressed SIRT1 expression (and consequently reduced SIRT1 deacetylase activity) by directly binding to an ISRE in the SIRT1 promoter; by this means, IRF9 stimulated VSMC proliferation and neointima formation (Fig. 9). Consistent with this idea, we found that SMC-specific IRF9 overexpression worsened intimal hyperplasia, whereas IRF9-KO significantly alleviated injury-induced neointima formation. Importantly, genetic manipulation of SIRT1 in SMCs or pharmacological modulation of SIRT1 activity sufficiently reversed the neointima-forming effect of IRF9. On the basis of the results of the present study, we propose that the 'IRF9-SIRT1 axis' be considered a therapeutic target for the prevention of restenosis; this proposal warrants additional validation and investigation.
In seeking ways to effectively suppress pathological intimal thickening, tremendous efforts have been made in recent decades to elucidate the underlying mechanisms of neointima formation. Using microarrays, Zohlnhofer et al. 22 comprehensively compared the gene expression profile of human in-stent restenosis specimens with the profiles of normal artery specimens, cultured human SMCs and whole blood cells from patients with in-stent restenosis. In their study, 37 of the 223 differentially expressed genes, including IRF9, were involved in the activation of IFN signalling in the neointima 22 . In agreement with their results, we found that IRF9 expression increased during neointima formation. However, unlike other IRF family members (including IRF1 and IRF7), which are highly expressed in both neointima and blood cells from patients, IRF9 is exclusively elevated in the neointima according to Zohlnhofer et al. 22 The results provided by the authors suggest that IRF9 involvement in neointima formation is unlikely to be due to its presence in haematopoietic cells and that, instead, IRF9 function in VSMCs might be crucial. In the current study, we confirmed this hypothesis using immunofluorescence staining of an SMC marker, α-SMA, through which we explicitly determined that IRF9 was predominantly expressed in mouse arterial VSMCs after injury. In addition, by utilizing SMC-IRF9-TG mice and IRF9-KO mice, we found that SMC-specific IRF9 overexpression enhanced intimal hyperplasia, whereas IRF9 deficiency attenuated intimal thickening. Although the relative contribution of cells of different origins (for example, adventitial fibroblasts and haematopoietic stem cells) to neointima formation remains to be determined, our results strongly suggest that IRF9 function in SMCs is particularly important in neointima formation. To further resolve the mechanism of IRF9 function, we performed genome-wide microarray analyses using artery tissues from SMC-IRF9-TG and NTG mice. Notably, besides many proliferation- and migration-related genes, we found that SIRT1 and SIRT1-dependent molecular targets were also markedly altered in response to wire injury.
Subsequent in vivo and in vitro experiments validated that the neointima formation mediated by IRF9 was indeed dependent on its regulation of SIRT1, a modulator of neointima formation, in VSMCs.
SIRT1 is an NAD+-dependent class III deacetylase and the closest homologue of yeast Sir2 in the mammalian sirtuin family 23 . Through the deacetylation of histones and a myriad of transcription factors, SIRT1 regulates a broad range of biological processes 24,25 . Most studies have characterized SIRT1 as a survival factor that protects against aging and age-related diseases, including metabolic disorders, neurodegeneration, cancers and, importantly, cardiovascular diseases 26,27 . In addition, SIRT1 protects against vascular pathologies 20 . By deacetylating downstream targets, SIRT1 improves endothelial function, preserves blood vessel dilation, alleviates vascular inflammation and inhibits atheroma progression 28-32 . Notably, our previous study demonstrated that SIRT1 expression is downregulated in response to vascular injury and that SIRT1 overexpression in VSMCs prevents injury-induced neointima formation 4 . This vasculoprotective effect of SIRT1 is dependent on its inhibition of genes downstream of AP-1, namely Cyclin D1 and MMP9 (ref. 4), the main functions of which are to facilitate VSMC proliferation and migration, respectively. Although the decrease in SIRT1 expression in VSMCs mediates neointima formation following vascular injury, the underlying molecular mechanism by which SIRT1 is downregulated remained unknown. In the present study, we found that IRF9 is the upstream regulator of SIRT1 in the setting of neointima formation. We demonstrated that the induction of IRF9 after vascular injury leads to the suppression of SIRT1 expression in VSMCs and subsequent intimal hyperplasia. As a prominent mediator of energy metabolism, SIRT1 is induced by a low-energy state through CREB, FOXOs and PPARα/β/δ 33-37 and suppressed by a high-energy state through ChREBP, HIC1 and PPARγ 33,38,39 . However, little is currently known about the transcriptional regulation of SIRT1 in response to acute stresses other than energy fluctuations. Here we filled this knowledge gap by demonstrating that IRF9 negatively regulates SIRT1 expression upon arterial injury. After injury, IRF9 is elevated and binds to the ISRE in the SIRT1 promoter. Likely in cooperation with other, as yet unidentified nuclear factors, IRF9 then inhibits SIRT1 transcription in VSMCs. Furthermore, through multiple genetic and pharmacological gain- and loss-of-function studies, we clearly delineated an 'IRF9-SIRT1 axis' that mediates the intimal hyperplastic response to vascular injury.
In our previous study, we demonstrated that IRF9 inhibits the development of cardiac hypertrophy, and we determined that IRF9 directly interacts with myocardin to suppress its activity and the transcription of SRF (serum response factor) downstream genes in cardiomyocytes 19 . Considering that myocardin is also involved in the regulation of SMC phenotypic switching, we tested in the present study whether the downregulation of myocardin transactivation also mediates the neointima-forming effect of IRF9 in VSMCs. However, we failed to detect an interaction between IRF9 and myocardin by immunoprecipitation in cultured VSMCs. In addition, no significant change in CArG (SRF response element)-luciferase activity was observed upon IRF9 overexpression or knockdown. These results demonstrate that IRF9 has no effect on myocardin activity in VSMCs in vitro; whether myocardin is involved in IRF9-induced neointima formation in vivo remains to be determined in future studies. Although the expression of SMC-selective genes was observed to be downregulated by IRF9 in the present study, this could be a secondary effect of IRF9 promoting the proliferation and migration of VSMCs through the SIRT1/AP-1 pathway, as demonstrated here. In addition, SIRT1 was recently implicated in the preservation of VSMC differentiation 40 ; therefore, the suppression of SIRT1 by IRF9 may lead to decreased expression of SMC differentiation markers, which would also contribute to the neointima-promoting effect of IRF9.
Our finding that IRF9 mediates VSMC proliferation and intimal hyperplasia mirrors previous reports that IRF9 regulates cell survival. IRF9 was found to be overexpressed in approximately half of human breast and uterine tumour samples 13 , and IRF9 was also reported to confer resistance to antiproliferative drugs 13,41 . In contrast, IRF9 also possesses the pro-apoptotic function of type I IFNs by inducing p53 (ref. 12). Hence, IRF9 exhibits contradictory dual roles in the regulation of cell survival, and the role that dominates might strictly depend on the type of cell and stimuli. When stimulated by viral infection and type I IFNs, IRF9 tends to induce p53 expression and mediate pathogen-induced apoptosis 12 . However, as shown in our present work, when stimulated by vascular injury and growth factors (such as PDGF-BB), IRF9 tends to enhance VSMC proliferation and mediate neointima formation after injury. Most of the current literature describes IRF9 functions in relation to IFN-mediated immunity; thus, the immunity-independent roles of IRF9 are poorly understood. Recently, in addition to its immunoregulatory roles, our research has unveiled a versatile role for IRF9 in diverse pathologies. In hepatocytes, the decrease in IRF9 upon overnutrition aggravates hepatic steatosis and insulin resistance 16 . In cardiomyocytes, a pressure overload-induced IRF9 decrease leads to the development of cardiac hypertrophy and heart failure 19 . Our finding that IRF9 suppresses SIRT1 in VSMCs to mediate sterile neointima formation undoubtedly adds important new evidence for a fundamental role of IRF9 outside IFN-mediated immunity.
In summary, we demonstrate that IRF9 is a critical mediator of neointima formation following vascular injury. Under physiological conditions, VSMCs express trace levels of IRF9, which facilitates SIRT1-mediated gene silencing. However, upon injury, increased IRF9 inhibits SIRT1 expression, which in turn promotes VSMC proliferation and vascular narrowing. Our findings shed new light on the roles of IRF9 outside immunity and implicate the newly identified 'IRF9-SIRT1 axis' in the modulation of vasculoproliferative pathologies.
Methods
Human femoral artery samples. The human femoral artery samples were acquired from patients at the Peking Union Medical College Hospital, China. Informed consent was obtained from each participating patient. The procedures involving human tissue were approved by the Peking Union Medical College Hospital Institutional Review Board. The control samples were obtained intraoperatively from adjacent normal regions in femoral arteries from patients with lower limb arteriosclerosis obliterans undergoing bypass grafting. The in-stent restenotic arteries were collected from diseased femoral arteries removed during femoral artery bypass grafting.
The SIRT1 inhibitor EX527 (Tocris Bioscience, Bristol, UK, 2780) was dissolved in dimethyl sulfoxide (DMSO) at a stock concentration of 50 mM that was then diluted to 0.1-1.0 nM with ultrapure water (the final DMSO concentration was <2%). The freshly prepared EX527 (2 mg kg⁻¹) was injected intraperitoneally into the WT and IRF9-KO mice daily for 2 weeks. The NTG and SMC-IRF9-TG mice were treated with SRT1720 (Cat. No. S1129; Selleck, Houston, USA); the SIRT1 agonist was administered in parallel with EX527 at 100 mg kg⁻¹ d⁻¹ for 2 weeks. The same volume of DMSO was injected intraperitoneally into the control mice. The solutions were sterile-filtered through 0.2-µm syringe filters (Pall Corporation, MI, USA).
Carotid artery wire injury model. For the carotid artery injury mouse model, mice were anaesthetised with sodium pentobarbital (80 mg kg⁻¹, intraperitoneally). A midline neck incision was made, and the left carotid artery was carefully dissected under a dissecting microscope. The external carotid artery was ligated with an 8-0 suture immediately proximal to the bifurcation point. Vascular clamps were applied to interrupt the internal and common carotid arterial blood flow. A transverse incision was made immediately proximal to the suture around the external carotid artery. A guidewire (0.38 mm in diameter, No. C-SF-15-15; Cook, Bloomington, USA) was then introduced into the arterial lumen towards the aortic arch and withdrawn five times with a rotating motion. After the guidewire was carefully removed, the vascular clamps were removed and blood flow was restored. The skin incision was then closed. The sham littermate control mice underwent the same procedures without the arterial incision and injury. The animal tissues were collected at specific time points after surgery for morphological and biochemical assays.
Histological and morphometric analysis. At 0, 7, 14 or 28 days post injury, the mice were killed through an intraperitoneal injection of an overdose of sodium pentobarbital (150 mg kg⁻¹). The carotid arteries were harvested after perfusion of the circulation and fixed with 4% paraformaldehyde dissolved in PBS. The arteries were further formalin-fixed and embedded in paraffin. Serial cross-sections (3 µm) were produced from the entire region (~300 µm) at the bifurcation site of the left carotid artery. For morphometric analysis, the sections were stained with haematoxylin and eosin after deparaffinization and rehydration. The level of neointima formation was determined based on the intima areas and intima/media (I/M) ratios using the Image Pro Plus software (version 6.0, Media Cybernetics) by a single observer who was blinded to the treatment protocols. A mean value was generated from five independent sections of each artery sample.
Cell culture and adenovirus infection. HASMCs (human aortic smooth muscle cells) and the mouse VSMC line MOVAS were acquired from the American Type Culture Collection (ATCC). The primary VSMCs were isolated from the thoracic aortas of male C57BL/6J mice and Sprague-Dawley rats through enzymatic digestion. The cells were cultured in Dulbecco's modified Eagle's medium (DMEM)/F12 medium with 10% fetal bovine serum (FBS, SV30087.02; HyClone) and 1% penicillin-streptomycin in a 5% CO2/water-saturated incubator at 37 °C. The cells used in the experiments were passaged three to five times. AdshIRF9 and AdIRF9 were generated previously 19 . To knock down SIRT1 expression, three SureSilencing mouse shSIRT1 constructs were acquired from SABiosciences (KM05054G), and the construct that most significantly reduced the SIRT1 levels was used in further experiments (designated AdshSIRT1). AdshRNA was used as a nontargeting control. The primary mouse VSMCs and rat aortic SMCs (RASMCs) were infected with recombinant adenoviruses at a multiplicity of infection of 25 particles per cell for 24 h.
Cell proliferation measurement using BrdU. SMC DNA synthesis was evaluated based on the level of incorporated BrdU. Primary mouse SMCs (5 × 10³ per well) were seeded in a 96-well microplate. After growing to 60% confluence, the cells were serum-starved for 24 h and subsequently treated with 20 ng ml⁻¹ PDGF-BB (ProSpec; Rehovot, Israel) for 48 h. BrdU was added for the last 2 h of treatment. BrdU incorporation was determined using a cell proliferation ELISA kit (Roche Diagnostics, Mannheim, Germany) in accordance with the manufacturer's protocol.
Migration assay. SMC migration was assessed using a modified Boyden chamber 43 . In brief, murine aortic SMCs were trypsinized and washed. Once resuspended, ~5 × 10⁴ cells were added to the top wells of transwell-modified Boyden chambers in a 24-well transwell dish (a 6.5-mm polycarbonate membrane containing 8-µm pores; Corning, NY, USA) and allowed to attach for 30 min. SMCs were exposed to medium with or without PDGF-BB (20 ng ml⁻¹) added to the lower chamber for 6 h. The cells that migrated to the bottom of the membranes were fixed, stained with 0.1% crystal violet/20% methanol and counted. Five randomly chosen high-power fields (×200) in three independent experiments were used to calculate the average number of migrated cells. Images were quantified using the Image Pro Plus software.
Gelatin zymography. Mouse carotid arteries were harvested from the sham operation and 28-day post-injury groups, and the proteins were extracted with lysis buffer (50 mM Tris-HCl, pH 6.8, 10% glycerol and 1% SDS). Equal amounts of tissue extract protein (15 µg) were loaded on 10% SDS-PAGE gels containing 1% gelatin to detect gelatinase activity. After washing in 2.5% Triton X-100, the gels were incubated overnight in buffer (10 mM CaCl2, 0.01% NaN3 and 50 mM Tris-HCl, pH 7.5). Subsequently, the gels were stained with 0.2% Coomassie blue R-250 (Bio-Rad) for 2 h and then destained with 10% acetic acid and 40% methanol. Signals were recorded using a Nikon D700 digital camera.
SIRT1 activity measurement. SIRT1 activity was determined using a SIRT1 deacetylase fluorometric kit (Biomol International) according to the manufacturer's instructions. Briefly, SIRT1 was immunoprecipitated using a SIRT1 antibody (Santa Cruz Biotechnology) from carotid artery and primary mouse VSMC homogenates (200 µg of protein) in RIPA buffer. After a final wash, the SIRT1 substrate reagent and NAD+ were added to the SIRT1-conjugated beads and incubated at 37 °C for 80 min. The substrate-SIRT1 mixture was placed in a 96-well plate, and the developer reagent was added to the wells at 37 °C for 20 min. The plate was read using a spectrophotometer with an excitation wavelength of 405 nm.
RNA isolation and quantitative real-time PCR. Total RNA from mouse carotid artery tissue was isolated using TRIzol reagent (Roche, 11667165001). The cDNA synthesis reaction was performed using 2 µg of total RNA and a Transcriptor First Strand cDNA Synthesis Kit (Roche, 04897030001). The quantitative real-time PCR reactions were performed in 20-µl volumes (LightCycler 480 SYBR Green I Master Mix, 04887352001, Roche) using the LightCycler 480 Real-time PCR system (Roche) in accordance with the manufacturer's instructions. The samples were quantified by normalizing the gene expression level to that of the housekeeping gene β-actin and expressed as relative mRNA levels compared with the internal control. To confirm the microarray results, 36 proliferation- and migration-related genes were selected from Supplementary Table 1, and RNA isolated from the carotid arteries of NTG and SMC-IRF9-TG mice at 14 days post injury was used for real-time PCR. The real-time PCR primers are listed in Supplementary Table 3.
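The normalization to β-actin described above is commonly implemented with the 2^(-ΔΔCt) method; the short sketch below illustrates that arithmetic under this assumption (the method name and all Ct values here are ours for illustration, not stated in the source).

```python
# Relative mRNA quantification normalized to beta-actin: a minimal sketch
# assuming the common 2^(-ddCt) method. All Ct values are hypothetical.

def relative_expression(ct_gene, ct_actin, ct_gene_ctrl, ct_actin_ctrl):
    """Return fold change of a target gene versus the control group."""
    d_ct_sample = ct_gene - ct_actin            # normalize to housekeeping gene
    d_ct_control = ct_gene_ctrl - ct_actin_ctrl
    dd_ct = d_ct_sample - d_ct_control          # compare with internal control
    return 2 ** (-dd_ct)

# Example: a hypothetical target gene in an injured SMC-IRF9-TG artery
# versus an NTG control; prints ~4-fold upregulation.
print(relative_expression(ct_gene=24.1, ct_actin=16.3,
                          ct_gene_ctrl=26.0, ct_actin_ctrl=16.2))
```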
Statistical analysis. The data were analysed using the SPSS software, version 13.0 (SPSS Inc., Chicago, IL, USA), and are presented as mean values ± s.d. Comparisons between two groups were performed using an independent-samples t-test. Differences between multiple groups were determined using one-way analysis of variance (ANOVA) with least significant difference or Tamhane's T2 post hoc tests. P < 0.05 was considered statistically significant. | 8,755.6 | 2014-10-16T00:00:00.000 | ["Biology", "Medicine"] |
Fiber Optic Sensing for Geomechanical Monitoring: (2) Distributed Strain Measurements at a Pumping Test and Geomechanical Modeling of Deformation of Reservoir Rocks
In this study, distributed fiber optic sensing was used to measure strain along a 300-m-deep vertical well during a pumping test. The observed strain data were used in geomechanical simulation, in which a combined analytical and numerical approach was applied to provide scaled-up formation properties. The outcomes of the field test demonstrate the practical use of distributed fiber optic strain sensing for monitoring reservoir formation responses in different regions of sandstone-mudstone alternations along a continuous trajectory. They also demonstrate that sensitive, scaled rock properties, including the equivalent permeability and pore compressibility, can be well constrained by the combined use of water head and distributed strain data. In comparison with conventional methods, fiber optic strain monitoring enables fewer short-term tests to be designed to calibrate the parameters used to model the rock properties. The obtained parameters can be directly used in long-term geomechanical simulation of the deformation of reservoir rocks due to fluid injection or production at CO2 storage and oil and gas fields.
Introduction
Injection of fluids into underground formations, as in the case of geological sequestration of greenhouse gases, can alter the stress state in the porous skeleton of the medium as well as the formation pressure. Stress redistribution within and around the reservoir and caprock system may lead to geophysical changes, ground surface deformation [1,2], microseismicity, fault reactivation, and may even induce damaging earthquakes [3-5]. In recent years, following the rapid increase in applications such as injection of fluids into deep formations of the earth's crust, including geological sequestration of CO2, enhanced geothermal systems (EGS), the extraction of unconventional oil and gas, and wastewater disposal, injection-induced earthquakes and other risks related to injection-induced surface deformation and fault reactivation have attracted increased attention from scientists and the public alike [5-10].
Geomechanical effects induced by reservoir production can be particularly pronounced in stress-sensitive reservoirs, such as poorly compacted or fractured reservoirs [11]. For both monitoring and risk assessment purposes, it is important to assess the influence of injection operations so that the geomechanical implications, particularly in terms of the potential for fault reactivation, can be adequately assessed [12]. How to predict and control the risks associated with fault reactivation, including fluid leakage and damaging earthquakes, are key issues in allowing fluid injections to be carried out safely and accepted by the public. In these regards, geomechanics plays a role in bridging the gap from geophysics to engineering and is also a key issue in site selection, injection operation, and post-injection management in CO2 storage and other applications. Given that changes of deformation in a reservoir-caprock system are directly linked to changes in pore pressure and stress state, coupled fluid flow and geomechanical modeling is potentially a powerful tool to understand and predict the mechanical behaviors of the reservoir-caprock system. It can contribute to the characterization, prediction, and optimization of injection operations. In recent years, significant progress in reservoir-scale numerical modeling has been made for a number of purposes, such as quantification of challenges in geothermal development [13], analysis of induced seismicity by shale fracking [5], stimulation of heavy oil reservoirs [14], and CO2 geological storage [15].
In order to make accurate predictions, a geomechanical model first needs to be calibrated against geophysical and geomechanical data obtained from monitoring. As a new technology, continuous and distributed fiber optic sensing (DFOS) based on Brillouin and/or Rayleigh scattering has great potential in geotechnical monitoring at both laboratory and field scales [16-20]. Optical fiber sensors have advantages such as immunity against electromagnetic interference, low weight, small size, high sensitivity, and large bandwidth. By grouting fiber optic cables into multiple boreholes, a large amount of accurate, spatially resolved data can be obtained. Xue et al. [21] carried out a fundamental study on distributed strain measurements based on the Rayleigh and Brillouin frequency shifts of two different sandstone samples (Berea sandstone and Tako sandstone) under controlled hydrostatic confining and pore pressure in the laboratory. By comparing with strain data obtained from conventional strain gauges, they showed the potential use of deformation monitoring with the DFOS technology in reservoirs of complex lithology.
As a sequel to the study of fiber optic sensing for geomechanical monitoring by Xue et al. [21], this paper presents a field study of distributed strain measurement in a pumping test and also discusses the roles of distributed strain data and a combined analytical and numerical approach in providing the scaled-up formation properties for geomechanical modeling and analysis. Section 2 introduces a general modeling framework and the numerical simulation method used in the coupled thermal, hydrological, and mechanical (THM) analysis. Section 3 describes the field case study results from the pumping test. Finally, Section 4 summarizes the future practical uses of fiber optic sensing in geomechanical monitoring at CO2 storage and oil and gas fields.
General Framework of Geomechanical Modeling
The governing equations for heat and fluid flows and for mechanics are the equations for mass and energy conservation and for momentum balance. The mass and energy conservation equations are of a similar form [22]:

$$\frac{\partial m_j}{\partial t} + \nabla \cdot \mathbf{w}_j = q_j \qquad (1)$$

$$\frac{\partial m_\theta}{\partial t} + \nabla \cdot \mathbf{f}_\theta = q_\theta \qquad (2)$$

In the mass-conservation Equation (1), the subscript j denotes a particular fluid phase, m_j (kg/m³) is the unit fluid mass, w_j (kg/m²/s) is the unit mass flux, and q_j (kg/m³/s) is a unit source term of fluid phase j. In the energy-conservation Equation (2), the subscript θ denotes the heat component, and m_θ (J/m³), f_θ (J/m²/s), and q_θ (J/m³/s) are the unit heat, heat flux, and heat source, respectively. The volumetric flux of fluid phase j, $\mathbf{v}_j\,(\mathrm{m/s}) = \mathbf{w}_j/\rho_j$, is assumed to follow the multiphase Darcy's law:

$$\mathbf{v}_j = -\frac{k\,k_{rj}}{\mu_j}\left(\nabla p_j - \rho_j\,\mathbf{g}\right)$$

where g is the vector of gravitational acceleration, k (m²) is the absolute permeability, and k_rj, μ_j (Pa·s), and ρ_j (kg/m³) are the relative permeability, viscosity, and density of fluid phase j, respectively. The heat and heat flux are given by

$$m_\theta = (1-\phi)\,\rho\,C\,T + \phi \sum_j S_j\,\rho_j\,h_j, \qquad \mathbf{f}_\theta = -\lambda\,\nabla T + \sum_j h_j\,\mathbf{w}_j$$

where φ, ρ, C (J/kg/K), and λ (J/m/s/K) are the porosity, density, heat capacity, and thermal conductivity of the porous medium, respectively, and S_j and h_j (J/kg) are the saturation and specific enthalpy of fluid phase j, respectively. The linear momentum balance equation is given by

$$\nabla \cdot \boldsymbol{\sigma} + \rho_b\,\mathbf{g} = 0$$

where σ is the Cauchy total-stress tensor and ρ_b is the bulk density. Based on linear poroelasticity theory [23], the coupled governing equations for pore pressure (here, the pore space is saturated with a single "effective fluid") and solid mechanics can be written as [24]

$$(\lambda + \mu)\,\nabla(\nabla \cdot \mathbf{u}) + \mu\,\nabla^2 \mathbf{u} - \alpha\,\nabla p + \mathbf{f} = 0$$

$$\alpha\,\frac{\partial}{\partial t}(\nabla \cdot \mathbf{u}) + Q^{-1}\,\frac{\partial p}{\partial t} - \nabla \cdot (D\,\nabla p) = q$$

where u (m) and p (Pa) are the displacement vector and excess pore pressure, respectively; λ and μ are the Lamé coefficients; α is the dimensionless coefficient representing the change in pore pressure per unit change in bulk volume under drained conditions; Q⁻¹ (Pa⁻¹) is the bulk compressibility; and D is the Darcy conductivity. The f (N/m³) and q (m³/s) are the body force and fluid source, respectively. Assuming infinitesimal deformation, the strain tensor ε is given by the symmetric gradient of the displacement vector u:

$$\boldsymbol{\varepsilon} = \frac{1}{2}\left(\nabla \mathbf{u} + (\nabla \mathbf{u})^{\mathsf{T}}\right)$$

As seen from these equations, fluid flow, heat transfer, and the mechanical responses of the reservoir system are coupled with each other. Their solution usually requires a coupled modeling approach; thus, coupled THM analysis is important in geomechanical modeling [25].
Figure 1 shows a schematic of the geomechanical modeling framework, which contains a coupled THM simulation and history matching for calibration of the numerical model. Although full coupling schemes are more accurate, they are also more complex. On the other hand, sequential coupling schemes, in which fluid and heat flows and mechanics are explicitly coupled, may be sufficiently accurate for problems in which the coupling is relatively weak. The general steps of developing a numerical model for coupled THM simulation start with building a conceptual geological model from existing geological and geophysical data and then creating a numerical model with proper constitutive relations and scaled rock properties obtained through analytical approaches.
To calibrate the numerical model, the coupled THM simulation is run for history matching of the monitoring data by adjusting the input parameters, such as pore pressure (Pp), temperature (T), water saturation (Sw), and rock properties. An accurately calibrated numerical model can then be used for long-term prediction of the geomechanical integrity of a reservoir-caprock system.
Geological formations contain heterogeneities at scales down to the pore and grain scale; however, a practically useful model cannot fully account for this level of detail. There is thus a practical need to work with a scaled-up model that accounts for most of the important physics in subsurface geoscience. In a scaled-up model, an element represents a volume containing heterogeneities and fractures at scales smaller than the scale of the element, down to the pore and grain scale, and these features may be distributed uniformly or in a patchy fashion. Because there are uncertainties in scaling up such models, probability-based uncertainty analysis and prediction are also necessary.
Within the geomechanical modeling framework, the coupled THM simulation, which predicts injection- or production-induced changes in the rock properties, rock mass deformation, stress distribution, and fracture or fault stability, is the key element for ensuring the safety of operations. There are a number of research-oriented and commercial software choices for reservoir simulation and/or stress analysis. In this study, TOUGH2 and FLAC3D were selected for these purposes. TOUGH2 is a package of programs for multiphase reservoir simulation, and FLAC3D [26] is a commercial software package for geotechnical analysis. With sequential couplers [27], this combined TOUGH-FLAC approach has proven useful in analyses of deformation coupled with fluid flow within hard and soft rocks, including injection-induced seismicity in shale gas production [5], geological CO2 storage [28-30], and geothermal and natural analogue studies [31,32].
The permeability of a rock mass may change greatly as a result of deformation and fracturing [12,33]. In previous works, the permeability has been expressed as a function of the volumetric strain εv or the shear strain εs [32,34,35], for example in the form

$$\phi = \phi_0\,(1 + \varepsilon_v), \qquad k = k_0 \left(\frac{\phi}{\phi_0}\right)^{n}$$

where φ is the porosity, k (m²) is the permeability, φ₀ and k₀ are their respective initial values, and n is a constant integer.
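To make the sequential coupling and the strain-dependent property update concrete, here is a minimal, self-contained toy sketch: an explicit 1D pressure-diffusion step with frozen permeability, a drained strain update, and the power-law permeability relation above. All quantities are dimensionless and purely illustrative; this is not an interface to TOUGH2 or FLAC3D.

```python
import numpy as np

# Toy 1D sketch of explicit sequential H-M coupling in the spirit of
# TOUGH-FLAC: solve flow with permeability frozen, update strain from the
# new pressure, then update porosity/permeability from volumetric strain.
nx, dx, dt = 50, 1.0, 0.1
p = np.zeros(nx); p[0] = 1.0       # fixed-pressure (injection) boundary
k = np.ones(nx)                    # normalized permeability
phi0, K_b, alpha, n = 0.2, 10.0, 1.0, 3   # assumed illustrative values

for step in range(500):
    # Flow step: explicit diffusive update d/dx(k dp/dx), permeability frozen
    flux = 0.5 * (k[:-1] + k[1:]) * np.diff(p) / dx
    p[1:-1] += dt * np.diff(flux) / dx
    # Mechanics step: drained volumetric strain from the pore pressure change
    eps_v = alpha * p / K_b
    # Property update: porosity and permeability as functions of strain
    phi = phi0 * (1.0 + eps_v)
    k = (phi / phi0) ** n

print(p[::10].round(3))  # quasi-steady profile with enhanced near-well k
```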
Field Case Study of Pumping Test
In Japan, the major potential sites for CO2 storage are characterized as soft rock formations. Thus, the efficiency of the TOUGH-FLAC approach should be examined for such sites. To this end, a pumping test with distributed strain monitoring by fiber optic sensors [20] was conducted. In this study, the water level and strain data obtained from the pumping test were used for geomechanical modeling and analysis.
Mobara Test Site
The selected pumping test site is located in Mobara City (Chiba Prefecture), northern Boso Peninsula in the Kanto region of Japan, where there are many wells for agricultural use with depths of up to 200 m. The test site consisted of two pumping wells and two monitoring wells (Figure 2). A single-mode optical fiber cable was installed behind the casing of one of the monitoring wells (Well-a) and cemented in the annulus between the well casing and the geological formation. The two pumping wells (Well-3 and Well-4) are located at distances of 175 m and 280 m from Well-a, respectively. The other monitoring well (Well-b), located at a distance of 5.5 m from Well-a, was used to monitor the water level during the pumping test. Figure 3 shows a schematic view of the geological cross-section between the wells. Sediments at the test site, from the surface downward, belong to the Kasamori, Chonan, and Kakinokidai Formations, all of which belong to the upper Kazusa Group [36]. Kakinokidai Formation: Within the site, the Kakinokidai Formation is formed of sandy siltstone. Indistinct stratification is present in the upper part of the formation. Over a wider region, the formation coarsens upward and fines eastward.
Chonan Formation (approximately 120 m thick): The Chonan Formation is subdivided into two parts: the lower and upper parts. The lower part (approximately 40 m thick) is formed mostly of siltstone-dominated alternations with thin slumped beds. The upper part (approximately 80 m thick) is composed mostly of sand-dominated alternations with numerous slumped beds.
Kasamori Formation (154 m thick): The Kasamori Formation is composed mostly of highly bioturbated deposits such as sandy siltstones and silty sandstones. The Kasamori Formation is characterized by low permeability and can be considered a caprock layer at the site.
Optic Fiber Measurement
The fiber cable was custom-made for this study to improve the coupling strength between the casing and cement. The frequency shifts were measured using a Neubrescope NBX-8000 device (2012; Neubrex Co., Ltd., Kobe, Japan), which can measure both Rayleigh and Brillouin shifts, as in the experimental study [21]. Unfortunately, only the Rayleigh shifts were thoroughly recorded during the pumping test, because the signal-to-noise ratio of the Brillouin shift was too low for measuring the strain data. During water pumping, the Rayleigh frequency shift was recorded every 5 cm along the fiber cable, and the frequency shift measurement was repeated at 5-min intervals. Deformation of the target reservoir formation caused by water pumping was estimated from the Rayleigh frequency shift following the same approach demonstrated in the experimental study [21]. To examine the difference in hydraulic communication between Well-a (the fiber sensor well) and the two pumping wells, pumping at Well-4 was paused at 14:00 on 3 November. Pumping was then restarted at 10:00 on 4 November, concurrently with stopping pumping at Well-3. Compressive strain increased to 40 µε at depths from 150 to 220 m as pumping at Well-4 continued from 10:00 on 4 November through 11:00 on 11 November. The total volumes of water pumped out from Well-3 and Well-4 were 1,382.4 and 4,233.6 m³, respectively. After the termination of pumping at Well-4, the compressive strain decreased gradually over the measurement time period. Later in 2015, additional pumping tests were conducted separately at Well-3 and other wells to investigate the heterogeneity of the Chonan Formation.
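For orientation, the shift-to-strain conversion amounts to a simple scaling by a fiber-specific coefficient. The sketch below is a minimal illustration; the coefficient is a typical literature value for standard single-mode fiber near 1550 nm and is assumed here, not the calibration actually used in this study (for which see [21]).

```python
import numpy as np

# Minimal sketch: convert a Rayleigh spectral shift to strain by scaling
# with a fiber-specific coefficient. K_EPS is a typical value for standard
# single-mode fiber near 1550 nm and is an assumption, not the calibration
# used in this study; temperature changes also shift the spectrum and must
# be compensated in practice.
K_EPS = -0.15  # GHz per microstrain (assumed typical value)

def rayleigh_shift_to_microstrain(dnu_ghz):
    """Return strain in microstrain from a Rayleigh frequency shift in GHz."""
    return np.asarray(dnu_ghz, dtype=float) / K_EPS

# Example: a +6 GHz shift maps to -40 microstrain, i.e. 40 microstrain of
# compression, comparable in magnitude to the response reported above.
print(rayleigh_shift_to_microstrain(6.0))
```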
Data Preprocessing
The spatial and temporal sampling resolutions of the strain measurement during the pumping test were 5 cm and 5 min, respectively, and both were considered too fine for calibration of the numerical model. Therefore, a Bayesian method [37,38] was applied to fit the raw strain data. Splines with a fixed order of 3 were used, and the dimension number dim was determined by minimizing the Bayesian information criterion (BIC), defined by

$$\mathrm{BIC} = N \ln\!\left(\frac{Q}{N}\right) + \dim \cdot \ln N$$

where N is the number of data points and Q is the weighted error sum of squares from a least-squares spline approximation [38].
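As an illustration of this selection rule, the sketch below fits cubic splines of increasing dimension to a strain profile and keeps the dimension that minimizes the least-squares BIC; it is a schematic stand-in for the Bayesian fitting actually used, and the data here are synthetic.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Schematic stand-in for the BIC-based spline smoothing: cubic splines
# (order fixed at 3) with the number of interior knots chosen by minimizing
# BIC = N*ln(Q/N) + dim*ln(N). The strain profile below is synthetic.
rng = np.random.default_rng(0)
depth = np.linspace(0.0, 300.0, 6001)                    # 5-cm sampling, 300 m
trend = -40.0 * np.exp(-((depth - 185.0) / 35.0) ** 2)   # smooth microstrain
obs = trend + rng.normal(0.0, 3.0, depth.size)           # short-wavelength noise

best = None
for n_knots in range(2, 40):
    knots = np.linspace(depth[0], depth[-1], n_knots + 2)[1:-1]  # interior
    sp = LSQUnivariateSpline(depth, obs, knots, k=3)
    Q = sp.get_residual()                 # error sum of squares of the fit
    dim = len(sp.get_coeffs())            # number of spline coefficients
    bic = depth.size * np.log(Q / depth.size) + dim * np.log(depth.size)
    if best is None or bic < best[0]:
        best = (bic, n_knots, sp)

print("chosen interior knots:", best[1])
smoothed = best[2](depth)                 # resample at any interval
```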
As demonstrated by the examples shown in Figure 5, the short-wavelength noise and signals were removed by the Bayesian method. From the resulting spline functions, the strain data can be resampled at any sampling interval.
Geological and Numerical Models
As mentioned previously, the formations at the test site are alternations of mudstone, silt, and sandstone. Each individual layer is typically thin, as characterized by physical logging. Based on the observed strain data, it was decided to divide the geology into four major units: the Kasamori, upper and lower Chonan, and Kakinokidai Formations (Figure 3). For this type of geology, mixed from different rocks, the scaling up of rock properties is a key issue in geomechanical modeling and numerical simulations.
Two-dimensional Radial Flowing Model
The sedimentary layers in this region are quite stable and almost flat over a few tens of kilometers. Thus, it is valuable to make a rough estimate of some key parameters using simple models for which theoretical solutions are available. To this end, a two-dimensional (2D) radial flow model of an infinite and isotropic horizontal layer was tested. Under the assumption that Darcy's law holds, the theoretical solution for the pore pressure change is given by [39]

$$\Delta p(r,t) = \frac{Q\,\eta}{4\pi K H}\,W\!\left(\frac{r^2}{4 D t}\right) \qquad (12)$$

$$D = \frac{K}{\eta\,S_a}, \qquad S_a = \phi\,(\beta_{fl} + \beta_{pv}) \qquad (13)$$

where W is the exponential integral well function. In Equations (12) and (13), t (s) is the time since the start of pumping, and r (m) is the distance from the pumping well. Q (m³/s) is the pumping rate, H (m) and K (m²) are the thickness and permeability of the layer, D (m²/s) is the hydraulic diffusivity, η (Pa·s) is the dynamic viscosity of water, Sa (Pa⁻¹) is the unconstrained specific storage coefficient, and βfl (Pa⁻¹) and βpv (Pa⁻¹) are the compressibilities of the fluid and pores, respectively.
In the present problem, all parameters except K and βpv can be estimated from the borehole data. In this study, a history matching approach was applied to estimate the values of K and βpv. First, sensitivity tests were performed to determine whether K and βpv are in a tradeoff relationship. As shown in Figure 6, the predicted water level response during pumping and after shutdown is sensitive to both parameters. Thus, the permeability of the layer and the compressibility of the pores, which are poorly known and show large uncertainty at many operation sites, can be determined from the observed water level data. The parameters K and βpv can be determined by fitting the model predictions to the observed data using a grid-searching algorithm.
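A minimal sketch of such a grid search, built directly from Equations (12) and (13), is given below; the layer parameters are assumed round numbers and the "observed" series is synthetic, not the Mobara data.

```python
import numpy as np
from scipy.special import exp1  # well function W(u) = E1(u)

# Minimal grid search for permeability K and pore compressibility beta_pv
# using the 2D radial flow solution (Eqs. 12 and 13). All values below are
# assumed for illustration; the observed series is synthetic.
Q    = 480e-3 / 60.0   # pumping rate, m^3/s (480 L/min)
H    = 80.0            # layer thickness, m
r    = 175.0           # distance from pumping well, m
eta  = 1.0e-3          # water viscosity, Pa.s
phi  = 0.4             # porosity
b_fl = 4.6e-10         # fluid compressibility, 1/Pa
rho_g = 1000.0 * 9.81  # converts pressure (Pa) to water head (m)

t = np.linspace(600.0, 86400.0, 60)  # time since pumping started, s

def head_change(K, b_pv):
    Sa = phi * (b_fl + b_pv)             # specific storage coefficient, 1/Pa
    D = K / (eta * Sa)                   # hydraulic diffusivity, m^2/s
    dp = Q * eta / (4 * np.pi * K * H) * exp1(r**2 / (4 * D * t))
    return dp / rho_g                    # pressure change -> water level, m

obs = head_change(1.0e-13, 5.0e-10)      # synthetic "observed" data

Ks = np.logspace(-14, -12, 41)
bs = np.logspace(-11, -8, 61)
misfit = np.array([[np.sum((head_change(K, b) - obs) ** 2) for b in bs]
                   for K in Ks])
iK, ib = np.unravel_index(misfit.argmin(), misfit.shape)
print(f"best K = {Ks[iK]:.2e} m^2, best beta_pv = {bs[ib]:.2e} 1/Pa")
```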
The water level changes in the upper Chonan Formation during the initial pumping period from Well-3 and Well-4 were then considered. It was found that, for a single-well pumping test, the water level change observed at Well-2, located 630 m away from Well-3 (Figure 2), can be represented fairly well using a single-layer 2D radial flow model (Figure 7a). For multiple tests, when a single set of K and βpv values was used, the observed data were not well represented (Figure 7b). However, when different sets of K and βpv values were used, the fitting results greatly improved (Figure 7c), indicating lateral inhomogeneity in the physical properties. K and βpv values estimated in such a manner may be considered a type of "mean" or "equivalent" value and can be used as initial values in the coupled model.
A rough estimate of the bulk modulus of the Chonan Formation can also be obtained from the maximum change in the water level and the maximum fiber strain observed during the pumping test. Under drained conditions, the volumetric strain εv caused by a change of pore pressure ΔP is related to the bulk modulus Kb as

$$\varepsilon_v = \frac{\Delta P}{c\,K_b}$$

where c is a constant ranging from 1 to 3.
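For illustration only (the values here are assumed order-of-magnitude figures, not the measured maxima): a water level change of Δh ≈ 10 m corresponds to ΔP = ρgΔh ≈ 0.1 MPa, and combined with a fiber strain of εv ≈ 40 µε, the relation above gives Kb = ΔP/(c·εv) ≈ 0.8-2.5 GPa for c between 1 and 3, a range typical of soft sedimentary rocks.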
Three-dimensional Model for THM-coupled Simulation
The geometry of the numerical model for the THM-coupled simulation is shown in Figure 8. The far-field boundaries were placed at a distance of 5 km to approximate infinite boundaries. The in-situ stress was applied in all zones and as loads acting on the far-field boundaries. The model has dimensions of 10,000 m × 10,000 m × 300 m, centered at the fiber well. Roller boundary conditions were imposed on the four sides and the bottom of the model. The model is divided into grid elements with horizontal dimensions ranging from 20 to 2500 m, according to the distance from the pumping well, and depth dimensions of 10 or 20 m. The total number of active elements (zones in FLAC3D) is 23,896. To account for the inhomogeneity of the Chonan Formation, two sub-models, model-A and model-B, were examined. In model-A, the Chonan Formation is divided into two parts, L-2s and L-2n, with different permeability. In model-B, the permeability was assumed to decrease linearly along the dip direction of the layers (Figure 8).
History Matching
In the initial model used in the simulations, the permeability and pore compressibility of the aquifer (L-2) were obtained from the 2D radial flow solutions. The other parameters of the aquifer and the other layers were determined based on experimental results. Table 1 lists the optimal hydraulic and mechanical properties determined using the history matching approach. Aided by the aforementioned 2D radial flow analysis results, the determination of the optimal parameters required only a few tuning steps. Figure 9a,b shows the water level and strain, respectively, from the observed data and simulated results. The water level data were represented very well by the numerical results. The observed strain data showed small-scale fluctuations, corresponding to the sand-silt alternation structures of the Chonan Formation. The mean strain data were also represented fairly well by the numerical results.
Concluding Remarks
The key questions in geomechanical monitoring at oil and gas fields and geological CO2 storage sites are: (1) at what depth, and how much, does strain occur in the subsurface due to fluid production or injection? (2) How should ground surface uplift or subsidence be interpreted as a result of deformation in the subsurface? In this study, distributed fiber optic strain measurement was carried out along the wellbore during a pumping test. The optical fiber installed behind the well casing clearly detected the deformation resulting from pumping water from the target formation. The compressive strains corresponded well to the reservoir rock properties; a higher strain magnitude was observed in the sand-dominated sediment layer during pumping. This field application demonstrated the practical use of the distributed strain sensing technology for geomechanical monitoring, which was proposed by Xue et al. [21] based on laboratory experiments on core samples. The distributed strain sensing technology not only helps us to understand the development of subsurface deformation caused by fluid injection or production; it also enables us to relate ground surface subsidence or uplift to the subsurface deformation, as well as to interpret how the subsurface deformation migrates to the ground surface.
In this study, the observed distributed strain data, in combination with water level data, were used to estimate the scaled-up hydraulic and mechanical properties required in geomechanical modeling and analysis. Geological materials generally contain inhomogeneous structures over a wide range of scales, from grains and pores to entire reservoirs; the scaling up of permeability and elastic moduli is therefore commonly required in reservoir simulations and rock mechanics. Numerical simulations are widely conducted to obtain the scaled-up elastic stiffness tensors [40,41] and permeability [42]. The scaled-up parameters are expected to be equivalent stiffness tensors that carry the fine-scale contributions onto the coarser geomechanical mesh. However, such numerical simulations for scaling up rely on unknown parameters, such as a statistical model of heterogeneities and fractures, and thus must be examined with observed data taken into consideration. The present study demonstrates that distributed strain data are straightforward and very useful for this purpose.
When processing the fine-resolution fiber optic strain data obtained from the sandstone-mudstone alternation along a continuous trajectory, we confirmed that Bayesian curve-fitting is an effective approach for modeling purposes. As a first-order approximation, the simplified analytical 2D radial flow model of an infinite and isotropic horizontal layer provides an effective and quick way to roughly estimate the permeability and thickness of the target formation by history matching the water level change at the monitoring well during single-well pumping. Multi-well pumping test results can be used to quantify permeability anisotropy in horizontal planes within a formation. It has also been demonstrated that the scaled-up, sensitive rock properties, including the equivalent permeability and pore compressibility of the reservoir, can be well constrained by the combined use of water level and distributed strain data. More generally, this study illustrates that the combination of analytical and numerical methods provides a promising approach for the efficient optimization of scaled-up formation properties. In comparison with conventional methods, distributed fiber optic strain monitoring allows fewer short-term tests to be designed for the calibration of parameters describing the rock properties, which can then be used in long-term geomechanical simulations.
Figure 1 .
Figure 1. Framework of geomechanical modeling with coupled thermal, hydrological, and mechanical (THM) simulation and history matching using distributed strain data, for which no geophysical post-processors are required.
Figure 2 .
Figure 2. Map of the test site and configuration of wells for pumping, fiber sensor monitoring, and water head measurement.
Figure 3 .
Figure 3. Profile showing well locations and simplified geological columnar sections.
Figure 4
Figure 4 shows the water pumping histories from Well-3 and Well-4, the change of water level measured at Well-b, and the strain estimated from the fiber optic sensing at Well-a. Pumping started simultaneously at Well-3 and Well-4 at 10:00 on 2 November 2015, at a pumping rate of 480 L/min. Several hours after the start of pumping, warm colors appeared in the upper layer of the Chonan Formation. The yellow and red colors indicate compressive deformation of the reservoir rocks. Variations in the magnitude and timing of compressive strain with depth in the Chonan Formation suggested different responses of the sandstone-mudstone alternation.
Figure 4 .
Figure 4. Pumping history and vertical strain observed by the fiber sensor. The spatial and temporal sampling intervals are 5 cm and 5 min, respectively.
Figure 5 .
Figure 5. Example of smoothed profiles of strain data obtained using a Bayesian method of spline fitting. The order of the spline functions was fixed at 3, and the dimension was determined by minimizing the Bayesian information criterion (BIC).
Figure 6 .
Figure 6. Water head changes estimated from the theoretical solution of a 2D radial flow in a homogeneous and isotropic infinite horizontal layer for pumping at a rate of Q = 480 L/min, calculated at a distance of r = 175 m. (a) Results for different permeability (K) and hydraulic diffusivity (D). (b) Results for different permeability (K) and layer thickness (H).
Figure 7 .
Figure 7. Water head changes estimated from the theoretical solution of a 2D radial flow in a homogeneous and isotropic infinite horizontal layer. (a) Results of a single test. (b) Results of multiple tests in two wells; water head data were fitted by a single model. (c) Same as (b), but water head data were fitted by a combination of two single models to account for the effect of spatial inhomogeneities. The observed data (dashed line) were represented very well by the theoretical solution with the optimal values of the permeability K of the layer and the compressibility βpv of the pores.
Figure 8 .
Figure 8. Geological and numerical models for simulation of the pumping tests at Well-3 and Well-4.
Figure 9 .
Figure 9. Observed and simulated (a) water head and (b) vertical strain (εzz). Both the water head and strain data were represented fairly well by the numerical results.
Table 1 .
Major hydraulic and mechanical properties. | 5,916.4 | 2019-01-26T00:00:00.000 | ["Engineering", "Geology"] |
A model for hypermedia learning environments based on electronic books
Designers of hypermedia learning environments could take advantage of a theoretical scheme which takes into account various kinds of learning activities and solves some of the problems associated with them. In this paper, we present a model which inherits a number of characteristics from hypermedia and electronic books. It can provide designers with the tools for creating hypermedia learning systems, by allowing the elements and functions involved in the definition of a specific application to be formally represented. A practical example, CESAR, a hypermedia learning environment for hearing-impaired children, is presented, and some conclusions derived from the use of the model are also shown.
I. Introduction
Current hypermedia learning environments do not have a common development basis. Their designers have often used ad-hoc solutions to solve the learning problems they have encountered. However, hypermedia technology can take advantage of employing a theoretical scheme (a model) which takes into account various kinds of learning activities and solves some of the problems associated with its use in the learning process. The model can provide designers with the tools for creating a hypermedia learning system, by allowing the elements and functions involved in the definition of a specific application to be formally represented. This paper outlines some basic principles of computer-supported learning and the problems related to the use of hypermedia learning systems. It then summarizes a number of hypermedia and electronic-book models, which represent the basis for the development of the theoretical model presented later, and it describes CESAR, a hypermedia learning environment for hearing-impaired children which is based on this model. Lastly, it reports some conclusions drawn from the definition of the model and the development of CESAR.
Learning and hypermedia
Hypermedia systems, being non-linear nets of information, offer a representation similar to human knowledge, and can thus be considered useful learning tools. However, hypermedia efficiency in computer-aided environments is not proven, nor is hypermedia devoid of problems (Reader and Hammond, 1994), a common one being related to a user's ability to navigate freely within the information offered. This problem suggests that there should be a change in the design of hypermedia courseware [7], with a view to imposing some navigation constraints. On the one hand, unnecessary links must be avoided. On the other hand, as Haga and Nishino (1995) suggest, students should be prevented from excessively deepening their knowledge of a subject, considering three as the maximum number of levels (depth of hyperlinks) that should be allowed. Structuring information can be considered contrary to the hypermedia philosophy, but students can still benefit from structured courseware. For example, Beltran (1993) proposes a courseware structure made up of courses, examples and practice. Courses will include information about the main subjects; examples will be particular or simplified cases related to the courses; and practice will consist of problems and presentations which require creative involvement.
Another problem is that of determining the kinds of tasks to be provided to the student and how these activities will integrate into the learning process. A computer-aided learning system might include tasks such as reading, creative writing, problem resolution and self-evaluation. Practical tasks may also be required, for instance editing information, gathering, annotating and restructuring the material, and marking useful sections (Page, 1991). In any case, it is important to design an environment which combines complementary activities, such as active-passive, creative-reactive and directed-explanatory.
Matching the instructional approach with student learning objectives is yet another problem. Each student's motivation to learn is different. Moreover, society and his/her social role are factors that directly influence what, how and when he/she learns. Other important issues also bear upon student learning, such as age, sex, educational level, prior training, ethnic background, cultural heritage, level of initial motivation, personality and physical abilities (Barker, 1993). The learning style of each student can suddenly change, for example due to his/her emotions, the type of courseware being used or the current stage of a system. Consequently, the environment must be personalized so that the learning method is adapted to each student's style and abilities (Allinson and Hammond, 1990; Barker, 1993). Some experiments have been based on this idea; an example is AnatomTutor (Beaumont and Brusilovsky, 1995), a hypermedia learning system for medicine that is adapted to the user's needs using artificial-intelligence techniques.
A further issue in developing hypermedia systems is the design of the user interface. In fact, a good user interface can help to solve many of the problems stated above. In this respect, the use of metaphors and stories may be appropriate. Metaphors provide a way of encapsulating system facilities, minimizing the cognitive overhead and maximizing system transparency and ease of use (Vaananen, 1995). For example, the book metaphor allows a familiar learning space to be created; this kind of metaphor has already been implemented in the ABC Book for Early Learners (Barker and Giller, 1990), a hypermedia system for learning the alphabet. Equally, the intrinsic characteristics of stories make them a powerful mechanism for solving some of the hypermedia problems, in particular disorientation and system control. Stories also serve as a natural context where students can acquire and relate knowledge. For instance, The Jasper series (Barron and Kantor, 1993) uses video tapes of a complex story to show mathematical concepts, and CyberBuch (Chun and Plass, 1995) puts together the story and book metaphors to help readers understand German texts.
The problems mentioned above, and the advantages of using a book metaphor and a story have been taken into account in developing the model presented in the next section.
Designing hypermedia learning systems
This section presents a model, based on the book metaphor and the idea of a story, which inherits a number of characteristics from hypermedia and electronic-book models.
Hypermedia models
According to the Diccionario de la Lengua Espanola (1992), a model is a theoretical scheme of a complex system or reality that facilitates its comprehension and the study of its behaviour. In particular, a hypermedia model gives unambiguous definitions of the elements and relationships needed to represent any application which uses this technology. A number of models have been developed, such as HAM, Dexter and Labyrinth, which allow designers to describe hypermedia applications logically. The HAM hypermedia model (Campbell and Goodman, 1988; Delisle and Schwartz, 1996) highlights some key features such as version control, the use of filters to retrieve information, and data access control.
The Dexter model (Halasz and Schwartz, 1990; 1994) introduces the concept of anchors and the separation between nodes and their information. This model has been improved in order to take account of collaborative learning environments (Min and Rada, 1993) and the use of multimedia information (Hardman et al., 1993; 1994).
Labyrinth (Diaz, 1995) is a model for the design of collaborative hypermedia systems that defines seven elements (nodes, links, contents, anchors, users, events and attributes) and a set of operations to express both static structure and dynamic behaviour. The model separates the structure (i.e. information holders or nodes) from the contents (i.e. information pieces), providing information-sharing by reference instead of by copy. This feature also makes it possible to assign attributes and dynamic behaviour to particular content items using events, which are independent elements. It generalizes the link definition, allowing distinct kinds of links to be modelled (e.g. bi-directional links, calculated or virtual links, and conditional links). In addition, Labyrinth includes mechanisms for controlling access to a hyperdocument by users, be they individuals or groups: constraints can be put on the ability to edit and personalize it, and version control can be defined, either over the whole hyperdocument or over particular elements of it.
However, all such hypermedia models are very general and do not take into account the specific characteristics (elements as well as functionalities) of learning environments.
Electronic-book models
Since the book metaphor appears to be most suitable for presenting electronic books (Benest and Duric, 1990; Barker, 1992; Catenazzi and Sommaruga, 1994), some features of electronic-book models must be considered. Electronic books present many features which make them close to hypertext systems. However, hypertext models are based on the classical definition of hypertext structures (nodes and links) rather than on the concept of pages and page components which characterizes electronic books. For this reason, it is important to consider models which have been explicitly defined for electronic books. Barker (1992) presents a set of three high-level models, including a conceptual model, a design model, and a fabrication model. The conceptual model, intended for the end-user, is composed of a series of pages of reactive and dynamic information, which support two primary functions: book control and information display. The design and fabrication models are intended for designers and producers of electronic books. The design model includes the formulation of the end-user interfaces, the book and page structure, the content of the book, and the nature of the reader services. The fabrication model describes the relationships between the various stages of system development, from specification of the content and structure of the book to the final book distribution.
These models are high-level models which describe, from a conceptual point of view, several aspects of electronic-book production, design and use. A more complete and formal model for electronic books is the hyperbook model (Catenazzi and Sommaruga, 1994), in which an electronic book is seen as an interactive and dynamic system, i.e. a system which can evolve from one state to another. The hyperbook model is defined in terms of structural and functional components. The structural components reflect the book's subdivision into pages, and the pages' subdivision into elements such as text or figures. The functional aspect is indispensable for describing the use of a dynamic and interactive system. In particular, a number of operators, which represent reader services (e.g. orientation, navigation, personalization, history and searching), allow a user to change the system state. This simple model is intuitive, general and easy to extend.
A model for hypermedia learning systems
The model presented below can be considered as a basis for the development of hypermedia learning systems. It offers a common framework where designers will be able to develop their systems, focusing on the educational material and on the design of help facilities, and ignoring inherent hypermedia problems.
The EBNF (Extended Backus-Naur Form) notation (Sethi, 1989) has been used for the definition of the model. Items ending with the suffix 'id' are used to identify particular attributes of elements. Some elements are not completely specified, since their definition depends on the development of a specific application; they are therefore formalized for a concrete example in Section 4 below. Table 1 shows the different symbols and their meaning.
Table 1: EBNF notation

< >   Begin and end symbols that enclose non-terminal elements.
" "   Begin and end symbols that enclose terminal elements.
::=   Decomposition symbol, where the left part is made up of the elements in the right part.
|     Exclusive symbol (OR).
{ }   Repetition symbol; the enclosed elements are repeated from 0 to N times.
( )   Associative symbol.
[ ]   Optionality symbol; the enclosed elements are optional.
The learning environment is defined as a library, with which several users interact, and a series of peripheral elements which support the learning process. The contents are independent elements, so that the same content can be used in different environment elements to promote relations among knowledge domains (Page, 1991). Events are relevant facts that occur in the environment. They are defined by the tutor or by the system and allow, for example, contents to be synchronized or stimuli to be presented to the students. Moreover, the environment components are inter-related. Thus, the learning environment is defined by:

<learning_environment> ::= <library> (<user> {<user>}) {<peripheral_element>} (<content> {<content>}) {<events>} <relation>

This definition states that the environment is composed of a library which has at least one user and one content, and where peripheral elements and events can be included. Finally, a set of relations among these elements is defined.
Elements definition
The central axis of the environment is the set of books in the library, each composed of a story and a set of trainings. The use of the book allows a natural environment to be created where the student knows where he/she is, where he/she can go, and what he/she can do, since he/she is familiar with physical books. The library definition is:

The story must be sequential, beginning with the front cover, followed by several content pages, and ending with the back cover. According to Haynes's (1990) recommendations, intellectual-property information must be included in order to preserve the book copyright.
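A plausible EBNF sketch of the library definition referenced above, inferred from the prose (the element names are assumptions rather than the paper's own):

    <library> ::= <book> {<book>}
    <book> ::= <story> (<training> {<training>})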
Each content page can have a set of exercises tied to it for each training; these are oriented towards acquiring a particular knowledge or skill. In addition, bookmarks can be added in the content and copyright pages. The story definition is:

Trainings are used as a learning aid and cover the topics considered relevant by the tutor (e.g. mathematics, history). Each one contains exercises for a particular category. For example, a training in mathematics may contain exercises in the Real Numbers category. These exercises fundamentally formulate questions that students answer using different strategies. Strategies give the student the possibility of solving the same exercise in different ways, since the presentation (i.e. how contents are shown), the interaction (i.e. how the system will respond to the student's interaction) and the assessment (i.e. how to determine if the answer is correct) can be modified from one strategy to another by the tutor or an intelligent system. Each strategy has a simulation that acts as a stimulus and indicates how the exercise must be solved. The training composition is:

User is included in the model as a component that must be instanced according to the objectives and characteristics of the concrete system. This element allows the interaction with the system to be established for each person.
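Plausible EBNF sketches of the story and training definitions referenced above (all element names are assumptions inferred from the description):

    <story> ::= <front_cover> <copyright_page> (<content_page> {<content_page>}) <back_cover>
    <content_page> ::= <content_page_id> {<exercise_id>} {<bookmark>}
    <training> ::= <training_id> (<category> {<category>})
    <category> ::= <category_id> (<exercise> {<exercise>})
    <exercise> ::= <exercise_id> (<strategy> {<strategy>})
    <strategy> ::= <strategy_id> <presentation> <interaction> <assessment> <simulation>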
Peripheral elements allow a number of additional activities to be included and can be used in any book in the library. They are classified into external elements and tools, according to the activities they support, using the taxo_id attribute. The first category includes those that mainly support explanatory activities; external elements can be, for example, reference books, dictionaries, and help manuals. The second category, tools, supports active and creative activities, allows the user to develop a personal framework, and handles the information transfer from/to the book. Some examples of tools are notebooks and calculators.
The different peripheral elements are classified using the attribute class_id. The contents refer to particular information included in the environment. Each one is defined by a type and a length, specified in the spatial as well as in the temporal axis. Events are included in the model to represent reactive and dynamic behaviour of the multimedia information. In addition, events can be used to model any other kind of conditional action, for example to activate a process depending on the system status. An event is defined as:

There are some decisions that must be taken for each student: the contents to be presented (rUC relation); the peripheral elements that can be used (rUP relation); the exercises that can be done (rUT relation); and the events that can be enabled (rUE relation). These relationships are specified as follows:

The model also offers a set of basic functions. Table 2 lists the navigation functions. Tables 3, 4, 5, 6 and 7 include functions that can be applied to the model elements.
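Plausible sketches of the event definition and the per-student relations referenced above (element names are assumptions):

    <event> ::= <event_id> <condition> <action>
    <relation> ::= {<rUC>} {<rUP>} {<rUT>} {<rUE>}
    <rUC> ::= <user_id> <content_id>
    <rUP> ::= <user_id> <peripheral_element_id>
    <rUT> ::= <user_id> <exercise_id>
    <rUE> ::= <user_id> <event_id>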
Table 2: Navigation operations
• Go to the library from everywhere.
• Go to a particular book from the library.
• Go to the front cover of the current book.
• Go to the back cover of the current book.
• Go to a particular page of the current book.
• Go to the next page of the current book.
• Go to the previous page of the current book.
• Go to the training list of the current book.
• Go to the category list of a particular training.
• Go to the exercise list of a particular page.
• Return from the exercise to the last page activated.
• Activate a link.
• GoBookMark: activate a bookmark.
Using the model: the design of CESAR
CESAR (Aedo et al, 1995) was designed starting from the model presented in the previous section. It is a hypermedia learning environment which aims to help hearing-impaired children to acquire the necessary skills in sign and written languages. The system introduces the deaf child to the story structure and provides him or her with the necessary experience through the use of stories. In the system description, we include those parts of the model which have been left open because they are dependent on the specific system and on its functionalities.
Each book in the library consists of two parts: the story and the training. A combination of different media (text, image, and video) is used to represent the book contents in both parts. Therefore, the content definition is completed by instancing the type of information that can be included:

«type_id» ::= "image" | "text" | "video"

The story is based on the book metaphor and consists of a sequence of pages which can be classified according to three different types. The first type covers the information contained in the presentation pages of the story: the front and back covers. A front-cover page is seen in Figure 2, where the video shows a narrator signing the story. The second type corresponds to those pages which contain information about the book production and copyright, for example the name of the publisher. The last type contains the story pages. In each page the story is displayed in text, graphic and video forms.
CESAR currently has one training part, addressed towards learning sign and written language. Starting from the model presented in the previous section, from conversations with teachers involved in hearing-impaired children's education, and from the teaching methodology defined in Aedo et al (1994), the global structure of the training was designed. The training consists of a number of exercises which belong to the following categories:
• Vocabulary category, which allows the child to extend his/her vocabulary with the terms used in a concrete story;
• Understanding Questions category, intended to facilitate the child in the task of analysing and understanding the story contents, either through video or through the written text;
• Regulators category, which motivates the child toward learning and interiorizing the syntactic structure of the language, by using questions which contain terms such as Who, Where, When, What, Which, Why, and How;
• Values, Motives, and Consequences category, which allows the child to realize the existence of different social values;
• Narrative Structure category, which aims to create a logical internal speech to help a deaf child relate his/her experience, and interiorize the story contents;
• Expressive Elements category, which is useful for enriching information and communication, independently from the language used to communicate;
• Grammatical Elements category, in which, by using the formal structure of the book, different kinds of exercise are defined to help a child incorporate these morphosyntactic elements in his/her colloquial language.
By instancing the general model, the result is:

«training_id» ::= "linguistic competence"
«category_id» ::= "vocabulary" | "understanding_questions" | "regulators" | "values_motives_conseq" | "narrative_structure" | "expressive_elements" | "grammatical_elements"

More than 50 different strategies have been designed according to the particular contents to be presented to the child, and to his/her specific needs. In Figure 3, an exercise in the Narrative Structure category is shown. The exercise consists of three different images that the child has to organize, following the same sequence as they appear in the story. The learning style takes into account three different levels of the learning process, considering the specific problems of a child at any of these levels, and presenting information depending on the specific level. The three levels are as follows.
1. The initial level, where the child is still acquiring the phonemes of a language, and where he/she is not able to read and does not know sign language. Children who belong to this level are between four and seven years old. Sign language, supported by images, is given priority.

2. The intermediate level, where the process of acquiring the phonemes of the language has been completed, and the child is in a phase of extending his/her vocabulary and creating simple linguistic structures. The child can read, although he/she has difficulties in understanding, and is starting to know sign language. Children who belong to this level are between seven and nine years old. Written text, supported by sign language, is given priority.

3. The highest level, where the child has acquired more linguistic and reading skills but needs to improve and strengthen his/her linguistic structures. Children who belong to this level are between 9 and 12 years old. The sign-language text, supported by explanatory activities, is given priority.
The child's characteristics have been formalized, maintaining information about his/her name and learning level, and about the operations he/she is allowed to accomplish with respect to the narrator and the bookmarks. The result of this formalization is:

The child can write, draw and create animations in the personal notebook, which is unique to each child. He/she can create his/her own information starting from the book contents, thus favouring the learning process and encouraging him/her to use his/her imagination.
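A plausible sketch of the elided formalization above, based on the stated attributes (element names and terminal values are assumptions):

    <user> ::= <user_id> <name> <level> [<narrator_allowed>] [<bookmarks_allowed>]
    «level_id» ::= "initial" | "intermediate" | "highest"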
Table 8: The notebook operations
• GoNextNote: go to the next page of the notebook.
• GoPrevNote: go to the previous page of the notebook.

The drawing-tools box offers a number of mechanisms for creating and changing information, both in the personal notebook and in the book. These mechanisms can be defined as actions which are performed if particular conditions are fulfilled, i.e. they can be represented using the concept of events previously formalized.
«taxo_id» : := "tool" «class_id» ::= "drawing_box" In Table 9, the operations which are available in the drawing-tools box are listed.-BrowserMode Used to go back to the normal mode, where the child can use the functionalities of the environment SelectObject Allows the child to select an object shown on the screen, e.g. a portion of text RemoveObject Allows the child to erase the selected object DrawPixel Allows the child to draw using the pencil.
DrawText
Allows the child to write using the keyboard.
DrawLine
Allows the child to draw a line.
CopyObject
Allows the child to copy the selected object PasteObject Allows the child to paste a copied object in any location of the screen.The dictionary is an external element which supports explanatory activities and helps the child to advance in the language-learning process.The dictionary consists of words with different meanings, each of which includes a description and a sample situation.This information is available both in written and sign language.In addition, each word-entry contains its labial form.The dictionary can be also used as an independent object.Its use is recommended to the children in the intermediate and highest levels.The operations available in the dictionary are listed in Table 10.PreyTerm Go to the previous term following the alphabetic sequence of the dictionary.
SearchTerm
Look for a particular term in the dictionary.
ShowLabialFonm
Show the labial form of the selected term.
CloseLabialForm
Close the labial form if it has been activated.
ShowSignedForm
Show the sign form associated to a definition or a meaning.
CloseSignedForm
Close the sign form associated to a definition or a meaning.
Table 10:The operations available in the dictionary CESAR was evaluated using an expert technique called 'jogthrough' (Rowley and Rhoades, 1992) which can achieve very good results.In particular, the evaluation involved the system functionalities (the ease of use, the navigation tools, etc.), and the learning tools provided by the system (the design of the exercises, the contextualization of the learning process, etc.).
Conclusions
The hypermedia learning-environment model presented in this paper can be used as a basis for designing hypermedia learning systems for a number of reasons.
Firstly, this model is adaptable. It is possible, for instance, to change the page contents, and thus substantially the story, while maintaining the same logical scheme. It would also be possible to achieve different learning objectives with the same story by changing the training.
Secondly, the use of the book concept in the model allows the main hypermedia problems mentioned in Section 2 above to be taken into account and solved in the system-design phase. Our experience in the development and evaluation of CESAR confirmed that the use of the story allows its sequentiality to be employed as a natural way of navigating. Moreover, a student takes advantage of working with a known object, the book, in which the services are limited not by the system designer but by the environment itself.
Thirdly, the training allows the child to acquire, assimilate and associate knowledge and ideas by doing a number of exercises designed to achieve a specific objective. In the CESAR evaluation (Aedo et al, 1996), participants confirmed that training is an indispensable process which supports learning and allows the required competence to be achieved. They argued that this is useful not only for learning written and sign language, but also for acquiring knowledge of a subject.
Fourthly, the inclusion in the model of peripheral elements allows the system designers to create contextualized help which complements the learning process. The CESAR evaluation confirmed that it is profitable to use a dictionary which allows a hearing-impaired child to acquire new words and concepts, and to employ tools for helping the child develop new knowledge and ideas.

Finally, the model highlights the need for adapting the environment to the learning style of the child. In a specific system, this adaptation can be accomplished by using techniques such as expert systems or frame-oriented systems. These techniques allow the instructional objective of the system to approach the learning objective of the child, adapting the contents to the user.
Figure 3: An exercise in the Narrative Structure category

CESAR takes into account the peripheral elements considered in the model. There are two kinds of tool to accomplish active and creative activities, which extend the book metaphor to include the desktop metaphor: the personal notebook and the drawing-tools box.
Table 2: Navigation operations
Table 3: Operations on the links
Table 4: Operations on …
Table 5: Operations on …
Table 6: Operations on … (activate the simulation for an exercise and a strategy; activate the presentation for an exercise and a strategy; activate the interaction for an exercise and a strategy; activate the assessment for an exercise and a strategy)
Table 7: Operations on peripheral elements (ActivatePeripheral, ClosePeripheral)
Table 8: The notebook operations
Table 9: The operations provided by the drawing-tools box

| 5,972.4 | 1997-01-01T00:00:00.000 | ["Computer Science", "Education"] |
Tailoring the Band Structure of Twisted Double Bilayer Graphene with Pressure
Twisted two-dimensional structures open new possibilities in band structure engineering. At magic twist angles, flat bands emerge, which gave a new drive to the field of strongly correlated physics. In twisted double bilayer graphene, dual gating allows changing of the Fermi level and hence the electron density, and also allows tuning of the interlayer potential, giving further control over band gaps. Here, we demonstrate that by application of hydrostatic pressure, an additional control of the band structure becomes possible due to the change of tunnel couplings between the layers. We find that the flat bands and the gaps separating them can be drastically changed by pressures up to 2 GPa, in good agreement with our theoretical simulations. Furthermore, our measurements suggest that, in finite magnetic field, pressure opens a topologically nontrivial band gap at the charge neutrality point at zero displacement field.
Transport measurements
Transport measurements were carried out in four-terminal and two-terminal geometries with a typical AC voltage excitation of 0.1 mV, using a standard lock-in technique at 177.13 Hz. The device was cooled down six times: twice at zero pressure, three times at p = 2 GPa and once at p = 1 GPa. Twice at p = 2 GPa we measured in a two-terminal geometry, while for the rest of the cooldowns we measured in a four-terminal geometry, as depicted in Fig. S1b.

Figure S2: Comparison between the measured gap energies at two different cooldowns at ambient pressure. Cooldown 1 is measured before applying the pressure and cooldown 2 is measured after releasing the pressure.
Gate voltage n-D conversion
The charge density (n) and electric displacement field (D) are related to the top and bottom gate voltages by

n = α_TG V_TG + α_BG V_BG + n_0,
D = e (α_TG V_TG − α_BG V_BG) / (2 ε_0) + D_0,

where α_TG and α_BG are the lever arms for the top and bottom gate, respectively, ε_0 is the vacuum permittivity, V_TG and V_BG are the top and bottom gate voltages, respectively, n_0 is the carrier density when the gate voltages are set to zero, and D_0 is a built-in offset electric field. V_TG0 and V_BG0 are the values of the top and bottom gate voltages at the point of zero density and zero displacement field.
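A minimal numerical sketch of this conversion, assuming the dual-gate relations as written above and the p = 0 GPa lever arms of Table S1 below (the function name and unit conventions are ours):

    EPS0 = 8.854e-12   # vacuum permittivity (F/m)
    E = 1.602e-19      # elementary charge (C)

    def gates_to_nD(v_tg, v_bg, a_tg=3.38e15, a_bg=4.54e15,
                    n0=2.3e15, d0=0.016):
        """Convert gate voltages (V) to carrier density n (1/m^2)
        and displacement field D (V/nm); lever arms in 1/(V m^2)."""
        n = a_tg * v_tg + a_bg * v_bg + n0
        # D expressed as D/eps0 in V/m, then converted to V/nm
        d = E * (a_tg * v_tg - a_bg * v_bg) / (2 * EPS0) * 1e-9 + d0
        return n, d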
Table S1: Extracted lever arms and offsets at p = 0, 1 and 2 GPa.

α_TG (10^15 V^-1 m^-2): 3.38(5), 3.75(5), 3.84(5)
α_BG (10^15 V^-1 m^-2): 4.54(6), 4.94(7), 4.91(8)
n_0 (10^15 m^-2): 2.3(2), 2.5(2), 2.4(2)
D_0 (V/nm): 0.016(5), 0.018(5), 0.017(5)

The lever arms were obtained from the gate-gate maps (Fig. S3a) and from the quantum oscillations in magnetoconductance measurements (Fig. S4). The extracted lever arms are shown in Table S1. The pressure dependence of the lever arms originates from the compression of the dielectrics and the pressure-dependent dielectric constant of the hBN, as already reported in Refs. S2, S3. Within the margin of error, the extracted lever arms were the same at the same pressures in different cooldowns.

At certain magnetic fields, the magnetic length is commensurable with the lattice periodicity, which results in the recovery of the translational symmetry; thus the electrons effectively feel zero magnetic field. This results in an oscillation of the resistance called the Brown-Zak (BZ) oscillation. S4,S5 A common method to determine the twist angle (ϑ) in superlattice structures such as TDBG via transport measurements is to calculate it from the BZ oscillations. The BZ oscillations have maxima in the conductance when the magnetic flux through a superlattice unit cell equals φ_0/q, where φ_0 = h/e is the flux quantum, q is an integer number and A_s is the area of the superlattice unit cell, which is given by A_s = √3 a^2 / (8 sin^2(ϑ/2)), where a is the lattice constant of the graphene. We determined the twist angle (ϑ = 1.067° ± 0.003°) from our data at different pressures and found the values identical within the uncertainty of our measurements.
We also estimated the twist angle inhomogeneity from the width of the resistance peaks at n = ±n_s in Fig. S7, panels a, d and f, at different pressures, and it varied in a similar range: at p = 0 GPa ϑ is between 1.06° and 1.09°, at p = 1 GPa between 1.06° and 1.1°, and at p = 2 GPa between 1.05° and 1.08°. In regions containing substantial twist angle inhomogeneity, this inhomogeneity could result in smaller measured gap values, but it does not change our results qualitatively.

At the CNP we also estimated the gaps with bias measurements. We measured the two-terminal resistance of the device with the lock-in technique at low frequency and applied a DC voltage bias (V_B) to the sample. If the applied bias voltage is greater than or equal to the gap, transport processes become available. We performed bias measurements over a small range of n at fixed D (see Fig. S6a). First the maximum resistance (R_max) was determined (occurring in Fig. S6a at n = 0 and V_B = 0). We defined the gap value as the point where the resistance drops to 15% of the maximum resistance value, also taking into account a background value. To determine the gap size, the following procedure was followed. At the gap (n = 0, V_B = 0) we took the peak value (R_max), while at a slightly different density, where the resistance R is virtually independent of V_B, we took the background value R_gapless. We approximated the gap energy with e|V_B|, where V_B = (V_B+ + V_B−)/2 with R(±V_B±) = R_gapless + 0.15(R_max − R_gapless), as shown in Fig. S6b.
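A sketch of the 15% threshold criterion just described (the array handling is illustrative; real traces would typically need smoothing):

    import numpy as np

    def gap_from_bias(v_b, r, r_gapless):
        """Estimate the gap (eV) from a bias trace R(V_B) taken at the
        resistance peak: find the positive and negative biases where R
        drops to R_gapless + 0.15 (R_max - R_gapless), then average."""
        r_max = r.max()
        threshold = r_gapless + 0.15 * (r_max - r_gapless)
        below = v_b[r <= threshold]
        v_plus = below[below > 0].min()    # V_B+
        v_minus = -below[below < 0].max()  # |V_B-|
        return (v_plus + v_minus) / 2      # gap ~ e|V_B| in eV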
Band structure calculation
The band structure of TDBG is calculated using the non-interacting Bistritzer-MacDonald model S6 applied to the TDBG with the parameters from Ref. S7. In our calculations we used a configuration space with a cutoff in momentum space with a radius up to 4|b_s| = 32 sin(ϑ/2)/(√3 a), using Hamiltonian matrices of size 648×648. In the model for the low-energy Hamiltonian of the BLG (δ_l,l' H_ls,l's'(k)) we included the remote hoppings and used the following parameters: γ_0 = 3.1 eV for the intralayer nearest-neighbor hopping, γ_1 = 3ω(p) eV for the interlayer coupling between orbitals on the dimer sites, γ_3 = 0.283 eV for the interlayer coupling between orbitals on the non-dimer sites, γ_4 = 0.138 eV for the interlayer coupling between dimer and non-dimer orbitals, and δ = 0.015 eV for the on-site energy difference between dimer and non-dimer sites. S7,S8 The matrix elements between the BLGs ([1 − δ_l,l'] H_ls,l's'(K)) are given by the pressure-dependent interlayer couplings ω(p) and ω'(p), taken from Table I of Ref. S7, with A = 0.0546, B = 0.0044, C = 0.0031 for ω and A = 0.0561, B = 0.0018, C = 0.0018 for ω'.
The external electric field modifies the layer potentials u_i, where i = {1, 2, 3, 4} labels the four layers of graphene from top to bottom. The effect of the external electric field is modelled with u_1 = −u_4 and u_2 = −u_3; we introduce u as the potential difference within each BLG and u' as that between the BLGs, with u_1 = (u + u')/2 and u_2 = (−u + u')/2. In the calculations, we modeled the external electric field with equal interlayer potential drops (u' = 2u) and neglected the quantum capacitance corrections and the electron-electron interactions, which could modify the low-energy flat bands.
Magnetic field dependence
In Fig. S7 we present n-D resistance maps at B_z = 0 T, 1 T and 2 T at 1.5 K. At ambient pressure a gap opens at the charge density n = n_s/2, which corresponds to a correlated insulating phase. S9-S11 In panels d, e and f, g the measurements are shown for 0 and 2 T at 1 GPa and 2 GPa, respectively. Under pressure this gap disappears for all applied B_z fields.
In Fig. S8 we show resistance maps at various in-plane magnetic fields (B x ) at p = 2 GPa.
Comparing B_x = 0 (panel a) and B_x = 3 T (panel b), it is visible that the effect of the in-plane magnetic field is negligible. At a finite perpendicular magnetic field, the effect of applying an in-plane field is also negligible. In thermal activation measurements, B_x likewise had a negligible effect; these measurements are discussed in the main text. We note that our model does not include the effect of magnetic fields, and a more advanced model capable of handling that goes beyond the scope of this manuscript.

Figure S8: Four-probe resistance of the TDBG as a function of n and D measured at 2 GPa in different in-plane and out-of-plane magnetic fields. (a) presents the data for zero magnetic field, (b) for B_x = 3 T in-plane magnetic field, (c) for B_z = 2 T vertical magnetic field, whereas (d) shows the measurement in B_z = 2 T and B_x = 2 T.

Temperature dependence at half and three-quarter fillings

Fig. S10 shows the temperature dependence of the correlated states at half and near three-quarter fillings, which are similar to what was observed in Refs. S10, S11, S13, S14. At finite pressure the correlated states disappear, as shown in Fig. S7.
Comparison between the measurement and the model
To compare experimental findings with our calculations we used the relation D/ε_0 = ε u/(e d), where d = 0.33 nm is the interlayer distance of bilayer graphene and ε is the relative dielectric constant of bilayer graphene. We also used the same conversion between the top layer of the bottom BLG and the bottom layer of the twisted, top BLG. A qualitatively good agreement at the CNP is achieved using ε = 5, which is depicted in Fig. S11, where the experiments and the theory are shown in the same figure.

| 2,339.4 | 2021-08-17T00:00:00.000 | ["Physics"] |
He Said, She Said: Style Transfer for Shifting the Perspective of Dialogues
In this work, we define a new style transfer task: perspective shift, which reframes a dialogue from informal first person to a formal third person rephrasing of the text. This task requires challenging coreference resolution, emotion attribution, and interpretation of informal text. We explore several baseline approaches and discuss further directions on this task when applied to short dialogues. As a sample application, we demonstrate that applying perspective shifting to a dialogue summarization dataset (SAMSum) substantially improves the zero-shot performance of extractive news summarization models on this data. Additionally, supervised extractive models perform better when trained on perspective shifted data than on the original dialogues. We release our code publicly.
Introduction
Style transfer models change surface attributes of text while preserving the content. Previous work on style transfer has focused on controlling the formality, authorial style, and sentiment of text (Jin et al., 2022). We propose a new style transfer task: perspective shift from dialogue to 3rd person conversational accounts ( §2). In this task, we seek to convert from an informal 1st person transcription of the dialogue to a 3rd person rephrasing of the conversation, where each line captures the information of a single utterance with relevant contextualizing information added. Table 1 demonstrates an example conversion and its perspective shifted version.
This task is challenging because it requires the interpretation of many discourse phenomena. In dialogue, speakers commonly use 1st and 2nd person pronouns and casual speech. Speakers also convey their own emotions and opinions in their speech. Converting a multi-party conversation to a single-perspective rephrasing requires pronoun resolution, formalization, and attribution of emotion/stance markers to individuals. While coreference resolution, stance detection, and formalization are often treated as separate tasks, the signal for these objectives is commingled in the dialogues. A pipeline approach would discard information necessary for any one task in the completion of the other two.
We create a dataset for this task by annotating dialogues from the SAMSum corpus (Gliwa et al., 2019), a dialogue summarization corpus of synthetic text message conversations ( §3). For each conversation, annotators rephrase the utterances line-by-line into one or more sentences in 3rd person. Unlike a summary, which condenses information to highlight the most important points, the goal of this transformation is to capture as much of the information from the original utterance as possible in a more standardized form.
We fine-tune BART on this dataset as a supervised baseline under several different problem formulations, and we experiment with incorporating formality data into the training process ( §4). As a motivating use case, we demonstrate that extractive summarization over perspective-shifted dialogue is more fluent and has higher ROUGE scores than extractive summarization over the original dialogues ( §5). This trend holds for zero-shot performance of extractive summarization models trained on news corpora and for fully supervised training on model-generated perspective shift data.
Perspective shift can be a useful operation for extractive summarization when annotation time is limited; when additional data from out-of-domain is available; when the exact length and content of the summary is not known at annotation time; or when high faithfulness is important to the end task, but fluency is also a concern ( §5.3).

Task definition

Given a dialogue and a selected utterance, the goal of the task is to rewrite that utterance as a formal third person statement. Four operations are required to accomplish this change: coreference resolution, syntactic rewriting, formalization, and emotion attribution. Table 1 shows an example conversation and perspective shift, demonstrating each of these challenges.
First-person singular and second-person pronouns are usually easily resolved in a conversational context-first-person singular refers to the speaker, while second-person pronouns generally refer to the other conversational parties-plural first-person pronouns can be less obvious to resolve. When a party in a conversation uses the pronoun "we," this plural may be referring to the other parties in the conversation, some but not all of the parties in the conversation, or a party not present in the conversation, e.g. in the utterance "I need to talk to my husband. We might have other plans." In our hand-annotated dataset, we resolve these pronouns wherever possible; if it is not clear what group the pronoun refers to, we resolve the pronoun as referring to "<the current speaker> and others," e.g. "Laura: we are busy" becomes "Laura and others are busy". Other entities in the text may also be difficult to resolve, such as those defined only at the beginning of the conversation, many turns prior to the current reference.
Syntactic rewriting is the problem of converting the syntax of the utterance to reflect 3rd rather than 1st person. This may involve re-conjugating verbs, e.g. converting "Sam: I am busy" to "Sam is busy." Formalization and emotion attribution are related problems, as much of the emotion and stance information in the text is contained in informal phrases, unconventional punctuation, and emojis (Tagg, 2016). Typical formalization eliminates these markers without replacement (Rao and Tetreault, 2018). However, this makes formalization a highly lossy conversion, which may be undesirable for downstream tasks. We aim to limit the information lost in the perspective shift operation by encoding the meanings of such informal language in the output. Often this takes the form of an adverb (e.g. "Sam angrily says") or a short descriptive sentence (e.g. "Cam is amused"). This requires interpretation of the informal elements of the text.
Clearly, this task is far more complex than simply swapping pronouns for speaker names. We curate a dataset for the perspective shift operation.
Dataset creation
The dataset is an annotated subset of the SAMSum (Gliwa et al., 2019) dataset for dialogue summarization. SAMSum is a dataset of simulated text message conversations, ranging from 3 to 30 lines in length and with between 2 and 20 speakers. The dataset consists of 314 conversations from the train set, 368 conversations from the validation set, and 151 conversations from the test set 2 . We set aside the 151 conversations from test as a test split and use the other 682 conversations as training and validation data.
Annotators were instructed to convert each utterance individually to a formal 3rd person rephrasing, while preserving as much of the tone of the utterance as possible. Annotators were required to insert the speaker's name in each rewritten utterance and remove all 1st-person pronouns. Annotators were also asked to standardize grammar, remove questions, and add additional context (e.g. descriptive adverbs) to convey emotions previously expressed by emoticons. Further information about annotator selection and pay, as well as a full copy of the annotation instructions, is available in Appendix D.
Dataset statistics
The perspective shifted conversations differ from the original in several ways. The number of turns in each conversation is preserved, but the average turn length varies: for the perspective shifts, the mean number of words per turn is 11.0, while the mean for the original dialogues is 8.4. (Note that the simplest heuristic would increase each utterance's word count by 1, as the colon next to the speaker name is swapped out with the word "says").
The average word-wise edit distance between original and perspective-shifted utterances is 8.5 words. This is partially due to the insertion of a dialogue tag (e.g. "says") in each utterance, the removal of emojis (average 0.1 per utterance), and the resolving of first and second person pronouns (average 0.9 per utterance). The part of speech 3 distribution of the conversations also changes, with a strong (65.8%) decrease in interjections and a slight (5.1%) decrease in adjectives and adverbs. However, in utterances that contain at least one emoji, the number of adjectives and adverbs present increases 12.8%. This is consistent with the annotation guidelines, which instruct annotators to capture the meaning of informal markers such as emoji with descriptors.
Formulation of the Prediction Problem
Methods We consider several formulations of the perspective shifting task as a prediction problem with different input and output styles. Below, the first three approaches formulate the problem as a line-by-line task: each input example consists of the full conversation with one utterance designated as the utterance to be perspective shifted. The fourth approach below formulates the problem as a conversation-level task in which the entire conversation is perspective shifted at once.
1. no context: The input to the model is the utterance u_t, and the output is the perspective shifted version, y_t.

2. left context only: The input is the dialogue up to and including utterance u_t, and the output is the perspective shifted version, y_t. A [SEP] token delimits the left context, u_1, . . . , u_{t−1}, from the utterance u_t.

3. left and right context: The input is the full conversation, with [SEP] tokens around the utterance u_t, and the output is the perspective shifted version, y_t.

4. conversation-level: The input is a complete dialogue u_1, . . . , u_T, and the output is a complete perspective shift y_1, . . . , y_T.
For each formulation, we finetune a BART-large (Lewis et al., 2020) model for 15 epochs, using early stopping, an effective batch size of 8, and a learning rate of 5e-5.
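A sketch of how an input might be serialized for the left and right context formulation; the paper specifies [SEP] delimiters around u_t but not the exact serialization, so the details below are assumptions:

    def build_input(utterances, t, sep="[SEP]"):
        """Serialize a dialogue for perspective shifting utterance u_t:
        the full conversation with u_t delimited by separator tokens.
        `utterances` is a list of 'Speaker: text' strings."""
        left = " ".join(utterances[:t])
        right = " ".join(utterances[t + 1:])
        return f"{left} {sep} {utterances[t]} {sep} {right}".strip()

The left context only formulation corresponds to dropping the right span and keeping a single separator before u_t.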
Results ROUGE 1/2/L scores and BARTScore for each model are listed in Table 2. The no context model treats this as a purely utterance-level task, but fully precludes the addition of context from other utterances. This means that second-person and first-person plural pronouns cannot be resolved clearly. While this model scores quite highly on all 4 metrics, we observe a high rate of named entity hallucination in the converted outputs. For instance, for the input utterance "Hannah: Hey, do you have Betty's number?", the no context model outputs "Hannah asks John if he has Betty's number." However, the other conversational partner in this dialogue is "Amanda," not "John." Because the gold perspective shifts were annotated with the full conversation available for reference, this model often hallucinates to fill in named entity slots that it does not have the context to resolve. By contrast, the conversation-level model has the clear advantage of referencing the entire conversation at generation time. However, the model is not required to produce the same number of lines as the input and must learn this property during training. We conjecture that this is the reason for this model's weaker performance relative to the left and right context model. Additionally, if the model generates more or fewer lines than the input dialogue, this can be a conflating factor in the extractive summarization example we discuss in Section 5. If the model generates fewer lines than the input, it has performed some part of the summarization process by abstracting the input into a shorter output; if it has generated more lines than the input, it has produced a harder problem for the extractive summarization system by creating more lines to choose the summary from. Because of this model's weaker performance and this conflating factor, we restrict our remaining experiments in this paper to models that perspective shift one utterance at a time.
The model with left context only mimics how a human might read the conversation for the first time, from top to bottom. This choice of model also imposes the constraint that the output is the same number of lines as the input, as desired. However, the dialogues frequently contain cataphora, especially in the start of the conversations, where the first speaker may be addressing a second speaker who has not yet spoken. For instance, in the example "Hannah: Hey, do you have Betty's number?", this is the first utterance of the dialogue. A model with only left context cannot resolve the word "you" here any better than the no context model.
The left and right context model addresses this concern by providing the full conversation as input, but restricting the output generation to a perspective shift for a single (marked) utterance. This imposes the output length constraint without sacrificing contextual information. This model performs best on all 4 metrics. As the scores for the left and right context and no context models are relatively close, we conduct a human evaluation comparing these two cases. In our blind comparison of 22 conversations, the left and right context model was preferred over the no context model 86% of the time (2 annotators, Cohen's kappa 0.62).
The conversation-level model may be a good choice for some applications, where output length is less important to the downstream task. This model has a higher degree of abstractiveness, which can lead to increased fluency but also increased hallucination. For tasks where this is a concern, the left and right context model achieves reasonable fluency while adhering more closely to the task, as measured by the automatic metrics.
Formality and Perspective Shift
Approaches We observe that the perspective shifting task requires a high degree of formalization. We consider several models ranging from simple rule-based approaches to those relying on an external formalization dataset in order to better understand the role of formalization in perspective shifting. The external dataset we consider is the Grammarly Yahoo Answers Formality Corpus (GYAFC) (Rao and Tetreault, 2018): a dataset of approximately 100,000 lines from Yahoo Answers and formal rephrasings of each line.
Our core method is the BART model trained under the left and right context formulation (PS ONLY).
We also consider a heuristic baseline (RULES-BASED HEURISTIC).
For each message, we prepend the speaker's name and the word "says" to the utterance. We replace each instance of the pronoun "I" in the message with the speaker's name. After observing that most messages are not well-punctuated, we also append a period to the end of each utterance. While this heuristic is simple and ignores many pronoun resolution conflicts, it has the clear advantage of being highly efficient.
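The heuristic is simple enough to state directly in code; a sketch (the contraction rules are our own reading of the example outputs, not a stated part of the heuristic):

    import re

    def heuristic_shift(speaker, message):
        """Prepend '<speaker> says', replace first-person 'I' (and the
        contractions I'm / I've) with the speaker's name, and ensure
        final punctuation."""
        text = re.sub(r"\bI'm\b", speaker + " is", message)
        text = re.sub(r"\bI've\b", speaker + " has", text)
        text = re.sub(r"\bI\b", speaker, text)
        if not text.rstrip().endswith((".", "!", "?")):
            text = text.rstrip() + "."
        return speaker + " says " + text

    # heuristic_shift("Igor", "I've got so much to do and I'm so demotivated")
    # -> "Igor says Igor has got so much to do and Igor is so demotivated."
    # (agreement errors like these are expected from the heuristic)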
We incorporate the GYAFC corpus as part of our training regime by finetuning on the formalization task prior to finetuning on perspective shift (FORMALITY + PS). Finally, we perform an ablation by finetuning BART for formalization on the GYAFC corpus, then attempting zero-shot transfer to the perspective shifting task. As input for this model at test time, we provide either the original dialogues (FORMALITY ONLY) or the output of the rules-based heuristic (HEURISTIC + FORMALITY).
Results
We evaluate each approach on ROUGE 1/2/L (Lin, 2004) and BARTScore (Yuan et al., 2021). The scores are in Table 3, and example outputs are in Table 4.
At first glance, perspective shift is a task closely related to formalization. However, the addition of formalization data leads to a slight decrease in model performance. This may be due to the formality data biasing the model toward minimal rephrasings, as there is generally relatively low edit distance between the informal and formal sentences in the formality corpus used (Rao and Tetreault, 2018). However, for high performance on perspective shift, the addition of clarifying words, emotion attributions, and pronoun substitutions is necessary; these are high-edit-distance operations that are not observed frequently in the formality data.
Formalization without any additional training for perspective shift is, as expected, far weaker than the perspective-shift-only model.
The rules-based heuristic appears competitive in ROUGE, but both the BARTScore scores and a manual inspection of the output reveal that this approach is lacking.
In the next section, we explore a downstream task: extractive summarization. For all extractive summarization experiments, we use model-generated perspective shift data from the perspective-shift-only model. We train a model on only the validation-set PS data to generate perspective shifts for the train set of SAMSum, and we train a model on only the train-set PS data to generate perspective shifts for the validation set of SAMSum.
Application: Extractive summarization
In the extractive summarization setting, phrases or sentences are taken directly from the input and composed into a summary. This is a clear failure case for dialogue, where sentences in the input are in first person and often pose questions or corrections to previous utterances; knowledge of other speakers in the dialogue can be necessary to contextualize the information. Summaries should present an overview of a conversation that incorporates global contextual information; generally, these summaries are also expected to be in third person.
Extraction over a perspective-shifted dialogue does not suffer from many of the same problems as extraction over an original dialogue. The text in a perspective shifted dialogue is in formal third person, which matches the desired style of the summary text. While individual sentences of the perspective shifted dialogue correspond directly to individual utterances in the dialogue, the coreference resolution involved in the perspective shift step means that these sentences are less interdependent than the dialogue turns. In many respects, perspective shift should make the task of dialogue summarization easier.
Oracle Extraction after Perspective Shift
Methods This intuitive result is confirmed by the performance of an oracle extractive model. Given both the input and the summary, the oracle model is tasked with choosing a combination of k utterances from the input to maximize ROUGE. Table 5 shows the performance of an oracle extractive model over the original SAMSum dialogues and the perspective shifted versions. For comparison, a simple extractive baseline (choosing the three longest utterances) and a strong abstractive model are also reported.
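A sketch of the oracle, assuming the rouge-score package and ROUGE-1 F1 as the maximized quantity (the paper says only that the oracle maximizes ROUGE):

    from itertools import combinations
    from rouge_score import rouge_scorer

    _scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

    def oracle_extract(utterances, reference, k=3):
        """Exhaustively pick the k utterances whose concatenation
        maximizes ROUGE-1 F1 against the reference summary."""
        best_idx, best_f = None, -1.0
        for idx in combinations(range(len(utterances)), k):
            cand = " ".join(utterances[i] for i in idx)
            f = _scorer.score(reference, cand)["rouge1"].fmeasure
            if f > best_f:
                best_idx, best_f = idx, f
        return [utterances[i] for i in best_idx], best_f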
Results Clearly, the potential (best-case) performance of a model over the perspective shifted dialogues is better; the oracle scores over perspective shifted dialogues even approach the scores of the abstractive model.
Zero Shot and Supervised Extractive Summarization
Train/Test Regimes A common summarization domain is news articles, due to the relatively wide availability of data. We use an extractive summarizer trained on the CNN/DM news summarization corpus for zero-shot summarization over the original SAMSum dialogues and over a perspective-shifted version of the dialogues. We also consider the fully supervised case; we train models using the PreSumm architecture for extraction over the original SAMSum dialogues and over the perspective-shifted dialogues.

From Table 4 (example outputs):
I've got so much to do at work and I'm so demotivated.
RULES-BASED HEURISTIC: Igor says Shit, Igor has got so much to do at work and Igor is so demotivated.
HEURISTIC + FORMALITY: Igor says, "Shit, Igor has so much to do at work and Igor is so demotivated."
GOLD: Igor has too much work and too little motivation.

Table 5: Performance of oracle extractive models as compared to the best extractive baseline from the SAMSum paper (longest-3) and a competitive abstractive system (BART-large, averaged over 5 random restarts).
Results Results across all models are in Table 6. The zero-shot model scores higher than the supervised model for SAMSum, which at first appears unintuitive. We credit this to two factors. First, the CNN/DM training set contains approximately 21x more training examples than the SAMSum train set, allowing the model increased generalizability to an unseen test set. Second, the summaries in the CNN/DM dataset are often several sentences, while the summaries in the SAMSum dataset tend to be a single sentence. The CNN/DM model's bias toward longer summary length may artificially inflate ROUGE scores, as the model selects more utterances for the output. Despite these factors, the supervised model over perspective shifted data outperforms the zero-shot model over the same data. Perspective shift is useful as an operation to bring the dialogue domain closer to the news domain. This drastically improves zero-shot transfer.

The zero-shot model over perspective shifted data performs better than the fully supervised model trained over the original dialogues. In a low-data setting, where annotating the entire dataset for summarization may be cost-prohibitive, perspective shift can serve as an alternative annotation goal. The perspective shift model used to generate the test data in Table 6 was trained on 545 dialogues (with a validation set of 137 dialogues); by contrast, annotating the entire train and validation sets for summarization would require annotating 15,550 conversations, a more than 20-fold increase in annotation effort.
Analysis of Hallucination
One of the oft-cited benefits of extractive summarization is that models that copy text directly from the input are less likely to present factually incorrect summaries (Ladhak et al., 2022). Clearly, perspective shifting introduces a rephrasing step into the summarization pipeline. A natural concern is the potential presence of "cascading errors," where errors in the perspective shifting process lead to hallucinatory extractive summaries. We randomly select 100 conversations and associated summaries from the perspective-shift-then-extract model and a standard abstractive finetuned BART model. We then ask 2 annotators to label each for faithfulness, ranking the summary -1 if it describes information that contradicts the conversation, 0 if it contains information that cannot be verified or falsified by the conversation, and 1 if all information stated in the summary is derived from the conversation. Cohen's kappa between these two annotators was 0.49, with annotators disagreeing on 12.6% of summaries. For cases where the annotator scores differ, we ask a 3rd annotator to label the conversation and choose the majority opinion. Results of this evaluation are in Table 7. While perspective shift introduces some hallucinations into the dataset, the rate of hallucination is far lower than for abstractive models.
In the 100 randomly selected conversations, we observe 5 hallucinations introduced by the perspective shifting operation that influence the downstream summaries. In the same conversational sample, 22 summaries from the abstractive model contain hallucinations, commonly in the form of incorrectly attributing actions to entities or negating the implications from the original conversations. Here, we define a hallucination as a statement that is not verified by the source text. Some hallucinations are directly contradictory with the source material (contradictions); there are 3 such contradictions in the extractive summaries and 18 such contradictions in the abstractive summaries.
Fluency
Extractions from text message dialogues are not normally conducive to forming a fluent summary. Each message has its own speaker who may use first person pronouns. Additionally, messages often contain slang or emojis, which are not appropriate for a formal summary. Perspective shifted dialogues are more formally written and describe the conversation from a single frame of reference.
To compare the fluency of extraction from original dialogues and perspective shifted dialogues, we calculate the perplexity of the output summaries for each model. We measure perplexity using GPT-2 (Radford et al., 2019), which is not used to generate any of the outputs. The extractions from the perspective shift dialogues have an average perplexity of 31.07, while the extractions from the original dialogues have an average perplexity of 48.77. Example outputs from each model are in Appendix B.
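A sketch of the perplexity measurement, assuming the HuggingFace GPT-2 implementation (the model size is unspecified in the paper; the base model is used here):

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def perplexity(text):
        """Perplexity of `text` under GPT-2: the exponential of the
        mean token-level cross-entropy."""
        enc = tok(text, return_tensors="pt")
        with torch.no_grad():
            loss = lm(enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()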
Similarly to extract-then-abstract systems, perspective shifting represents a compromise between the strong faithfulness of extraction and the improved fluency of abstraction.
Discussion
Another possible application of perspective shift for summarization is in query-specific summarization, where there is not a single canonical summary at training time. Instead, a relevant span is selected and summarized based on a user query. Query-specific summarization has been applied to dialogue-based domains, such as meeting summarization (Zhong et al., 2021). In these domains, we conjecture perspective shift may make the choice of an extractive summarization model feasible, allowing for greater interpretability and faithfulness of outputs.
Perspective shift also appears to be a less effortful task for annotators than summarization. We ask a crowdworker to perform perspective shift and summarization annotation for 5 hours each over different sets of dialogues. The annotator gave this unsolicited feedback: [Summarization] is a completely different task in that it takes a lot more mental capacity, paraphrasing complete conversations into a concise synopsis. I need to take a break! 5 This annotator was able to summarize conversations at a faster hourly rate than perspective shifting, but reported that the perspective shift task was more enjoyable.
We discuss perspective shift for different dialogue subdomains briefly in Appendix C.
Related Work
Style Transfer The most similar style transfer task is formalization, which has attracted attention as a standardization strategy for noisy user-generated text. Formalization can be performed as a supervised learning task, and supervised approaches often use the parallel sentence pairs from the Grammarly Yahoo Answers Formality Corpus (Rao and Tetreault, 2018). More commonly, however, formalization is performed as a semi-supervised task (Chawla and Yang, 2020).

Another related style transfer task is the 3rd to 1st person rephrasing task proposed by Granero Moya and Oikonomou Filandras (2021). This task is evaluated with exact-match accuracy, and their best model achieves 92.8% accuracy on the test set. We conjecture perspective shift is a more difficult task because of its many-to-one nature, as well as the additional emotion attribution and formalization required.
Speaking-style transformation Speaking-style transformation is a task which seeks to transform a literal transcription of spoken speech into one that omits disfluencies, filler words, repetitions, and other characteristics of speech that are undesirable in written text. This task attracted notice particularly in the statistical machine translation community (Neubig et al., 2020). It differs from perspective shift in several respects: the focus of speaking-style transformation is on removing disfluencies, whereas perspective shift aims to preserve information that may be conveyed by the informal style of the text; perspective shift requires complex coreference resolution and utterance contextualization, while speaking-style transformation leaves references unresolved; and perspective shift is applied to text post-hoc, while speaking-style transformation may be performed over transcripts or in an online setting, during speech transcription.

Dialogue summarization

(2021) extract conversational structure from several views to feed into a multi-view decoder. Another approach to modeling the differences between dialogues and well-structured text is to use auxiliary tasks during training (Liu et al., 2021a). In work concurrent with this paper, Fang et al. (2022) propose a narrower utterance rewriting task for dialogue summarization, swapping some pronouns in the text for speaker names; however, this task does not allow for full rephrasings of the text or produce output that is in third person, making it unsuitable for extractive summarization.
Domain adaptation for summarization Another popular direction for dialogue summarization is domain adaptation to dialogue, primarily by pretraining models on additional dialogue data. Khalifa et al. (2021) pretrain BART on informal text before training on SAMSum, observing improvement when pretraining on dialogue corpora but not when training on Reddit comments. Yu et al. (2021) study the effectiveness of adding an additional phase of pretraining to improve domain adaptation, in which they either train on a news summarization task, continue pretraining (using the standard reconstruction loss) for an in-domain dataset, or continue pretraining on a smaller dataset of unlabeled input dialogues from the training set. Zou et al. (2021) pretrain an encoder on dialogue and a decoder on summary text separately before training the two together on a summarization objective. While these approaches improve performance on dialogue summarization, particularly in a lowerresource setting, they largely require pretraining at a large computational cost.
Conclusion

Perspective shift is a new, non-trivial style transfer task that requires incorporation of coreference resolution, formalization, and emotion attribution. This paper presents a preliminary dataset for this task that includes interpretation of the meanings of conventional text abbreviations, emojis, and emoticons. The baselines presented in this paper are sufficient for downstream performance on a summarization task, but may be further improved by modeling the unique challenges of this task direction.
In addition to being a challenging task, perspective shift is a useful operation for dialogue summarization. Perspective shift can act as a tool for domain adaptation by shifting dialogue into a form more similar to common summarization domains (e.g. news). For extractive systems, dialogue summarization is largely infeasible because outputs will not be fluent. Perspective shift allows for fluent extractive summaries. This differs from a more traditional extract-then-abstract approach because the "abstraction" (perspective shifting) step can benefit from the full document context. In a domain such as dialogue, where many utterances are strongly conditioned on the prior context of the conversation, this allows for more faithful rephrasings. When coupled with an extractive system, this perspective-shifting-based paradigm allows for the creation of more interpretable, less hallucinatory summarizations when compared to an abstractive model.
Other potential applications of perspective shift include direct application in abstractive summarization; in related tasks such as key point analysis (Bar-Haim et al., 2020), which often rely on dialogues as inputs; and for summarization of texts which contain partial dialogues, such as novels. The general strategy of transforming the input to adapt to a new domain rather than changing the model or pretraining paradigm is a promising direction because of the ease of annotation and relatively low computational cost.
Limitations
Perspective shift requires the modeling of informal language, a challenging task. The meaning of informal language can vary across communities (Jørgensen et al., 2015), age groups (Rickford and Price, 2013), and time (Jin et al., 2021), making generalization of these results more difficult. This is also an inherently lossy conversion; though we take steps to minimize the loss of emotion and stance information, the nuances of this information may still be discarded.
The perspective shift process also discards most of the discourse information available in the original dialogue. By performing perspective shift prior to dialogue summarization, we take a simplistic view of dialogue as a linear collection of first-person statements without considering underlying structure. While this approach proved effective, we believe that the best possible performance on this task may be constrained by this simplifying assumption.
Original | Perspective shifted
 | Yeah, Crystal is just going brankrupt.
Irene: Let me take him, I also promise to buy him something | Irene tells Crystal that she will take her son shopping with her, and that she also will buy him something.
Crystal: you really wanna do that? | Crystal asks if Irea really wants to do that.
Irene: why nont? I'm his aunt! | Irene asks why not, she's his aunt.
Crystal: well yeah it's just such a drag | Yeah, Crystal says, it is a drag.
Irene: you were always a bore when shopping :P just let me take the little man | Irene tells Crystal that she was always a bore when shopping, and that she should just let her take the little man.
Irene: well have fun! | Irene tells her to have fun.
Crystal: ok | Crystal agrees.
While we present the perspective shift task using text message conversations as an example, there is a wide variety of subdomains within dialogue. We apply the perspective shift operation to two other domains (roleplaying game transcripts and media interviews) using the model trained only on data from the text message conversation domain. While the model effectively perspective shifts most short utterances, the largest issues we observed in inspecting these outputs are as follows: 1. Long utterances: The perspective shift model performs poorly when utterances are very lengthy, as this is very uncommon in the SAMSum dataset (average utterance length: 8.4 words). This leads to repetition and degraded performance, especially when several long utterances occur in sequence.
2. Domain differences in formatting: Differences such as multi-word speaker names or adding sound effects in parentheses are not captured effectively by the model, as they were not encountered at training time.
While we leave improving perspective shift over long utterances to future work, we provide examples of perspective shifts from two different domains to demonstrate these potential pitfalls for other researchers. These are model-generated perspective shifts, produced by the model trained only on perspective shift for SAMSum dialogues.
C.1 CRD3
CRD3 is a dataset of Dungeons & Dragons roleplaying game transcripts (Rameshkumar and Bailey, 2020). Dungeons & Dragons is a collaborative roleplaying game where multiple players describe the actions and dialogue of their characters as the team explores an open-ended world. While each session of the game consists of several thousand turns of dialogue, the CRD3 dataset sections the sessions into smaller chunks with aligned summaries. For brevity, we present only a chunk of a session, in Table 12. The SAMSum perspective shift model serves as a reasonable baseline for this dataset, though in-domain data would likely further improve performance.
C.2 MediaSum
MediaSum is a dataset of NPR and CNN media interview transcripts (Zhu et al., 2021). The average turn length in this dataset is substantially longer: 37.5 words for the NPR transcripts and 53.1 words for the CNN transcripts. Correspondingly, the model-generated perspective shift is worse; the model generates repetitious content in the perspective shift. The model also performs poorly on multi-word speaker names, which are a rarity in SAMSum as well. A snippet of an interview appears in Table 13.
Table 12: CRD3 example.
Original | Perspective shifted
MATT: Okay. You take your first step and you watch something drift across the entrance from wall to wall. Some faint glowing figure-and is gone. | Matt says that when you take your first step and you watch something drift across the entrance from wall to wall, some faint glowing figure-and Talesin tells her that he is going to do another one.
MATT: Okay. Give me the specifications on that one. | Matt agrees and asks for the specifications.
TALIESIN: All right. Eyes of The Grave. As an action you know the location of any undead within 60 feet of you that isn't behind total cover and isn't protected from divination magic until the end of your next turn. | Talyiesin agrees and mentions that it is the Eyes of the Grave, and that as an action, it can alert the location of any undead within 60 feet or less that isn't covered by total cover and isn't protected from divination magic until the end of the next turn.
MATT: Okay, got you. How long does it last? | Matt tells her that he got her.
TALIESIN: Six seconds. | Talesin says it takes six seconds.
MATT: Okay. We'll say for the purposes of this, this is a reaction to seeing this figure pass by. You definitely get an undead sense from whatever this figure is, and then it merges with the wall and is gone. It appeared to be loosely humanoid in the brief glimpse you saw. | Matt tells her that they will say that for the purposes of this, it is a reaction to seeing the figure pass by, and that you definitely get an undead sense from whatever the figure is, and then it merges with the wall and is gone.
TALIESIN: We have undead. | Talyiesin says they have undead.
TRAVIS: No. | Trevor says no.
SAM: It's a ghost ship! | Sam says it is a ghost ship.
MARISHA: Is it going to be your crew? | Marisha asks if it is going to be Ducey's crew.
TRAVIS: I don't know! | Trevor doesn't know.
MARISHA: Wait, we're not on your ship anymore. | Marisha asks them to wait. She tells them that they are not on Ducey's ship anymore.
TRAVIS: Now we're in the mouth of a beast! Ducey, what's going on, brother? | Trevor tells her that they are now in the mouth of a beast, and asks Ducey what is happening.
TALIESIN: I don't know, I'm literally blind. I want to light this- | Talesin doesn't know, she is literally blind.
Table 13: MediaSum example.
Original | Perspective shifted
MADELELEINE BRAND, host: OK, here's some good news on the jobs front for both men and women. A new survey out today from the employment firm Manpower finds that about a quarter of employers will add jobs this summer. That's for adults, but for teenagers this summer's job market is shaping up to be the weakest in more than 50 years. | MADELEINE says that there is some good news on the job market for both men and women. A new survey out today from the employment firm Manpower finds that about a quarter of employers will add jobs to the workforce this summer, but for teenagers, the summer job market is the weakest in more than 50 years.
ALEX COHEN, host: So, how do you get your teenage kids not to spend the entire summer glued to the couch? You're about to get some tips from Michelle Singletary. She's Day to Day's personal finance contributor. Hi, Michelle! | Alexo Cohen, host, tells Michelle that the types of jobs that teens tend to go for in retail, fast food, and the like, that they still need people. He says that oftentimes, listen, you may not get a job at the beginning of the summer, but hold on because in late summer, when some college students are going back to school and perhaps some of the people who lost their
MICHELLE SINGLETARY: Hi! | MICHelle says hello.
ALEX COHEN, host: So why is the summer job market so hard for teens this year? | Alex Cooper says that the types of jobs that teens tend to go for in retail, fast food, and the like, they still need people. He tells Michelle that they've got a tough economy and that they have a lot of college graduates going into the job market and taking jobs that would traditionally go to teens, like in restaurants and retailers, but that for teenagers, the summer job market is the weakest in more than
MICHELLE SINGLETARY: Lot of things going on right now. We've got a tough economy. We've got a lot of college graduates going into the market. We have people who are losing their jobs and taking jobs that would traditionally go to teens, like in restaurants and retailers. And we have a lot of older people holding on to their jobs and not retiring because they can't afford to retire. And that puts teens at the end of the line when it comes to these types of jobs. | MICHelle says that there are a lot of things going on in the job market right now, and that they have a tough economy. She says that they've got a number of college graduates going into the workforce and taking jobs that would traditionally go to teens, like in restaurants and the like, but that for teenagers, the summer job market is the weakest in more than 50 years.
| 8,772.4 | 2022-10-27T00:00:00.000 | ["Computer Science"] |
The Beneficial Effects of Prenatal Biotin Supplementation in a Rat Model of Intrauterine Caloric Restriction to Prevent Cardiometabolic Risk in Adult Female Offspring
Numerous studies indicate that intrauterine growth restriction (IUGR) can predispose individuals to metabolic syndrome (MetS) in adulthood. Several reports have demonstrated that pharmacological concentrations of biotin have therapeutic effects on MetS. The present study investigated the beneficial effects of prenatal biotin supplementation in a rat model of intrauterine caloric restriction to prevent cardiometabolic risk in adult female offspring fed fructose after weaning. Female rats were exposed to a control (C) diet or global caloric restriction (20%) (GCR), with biotin (GCRB) supplementation (2 mg/kg) during pregnancy. Female offspring were exposed to 20% fructose (F) in drinking water for 16 weeks after weaning (C, C/F, GCR/F, and GCRB/F). The study assessed various metabolic parameters including Lee’s index, body weight, feed conversion ratio, caloric intake, glucose tolerance, insulin resistance, lipid profile, hepatic triglycerides, blood pressure, and arterial vasoconstriction. Results showed that GCR and GCRB dams had reduced weights compared to C dams. Offspring of GCRB/F and GCR/F dams had lower body weight and Lee’s index than C/F offspring. Maternal biotin supplementation in the GCRB/F group significantly mitigated the adverse effects of fructose intake, including hypertriglyceridemia, hypercholesterolemia, hepatic steatosis, glucose and insulin resistance, hypertension, and arterial hyperresponsiveness. This study concludes that prenatal biotin supplementation can protect against cardiometabolic risk in adult female offspring exposed to postnatal fructose, highlighting its potential therapeutic benefits.
Introduction
Metabolic syndrome (MetS) is a cluster of cardiometabolic risk factors that include abdominal obesity, insulin resistance, hypertension, hepatic steatosis, and dyslipidemia. MetS is significantly associated with an increased risk of developing type 2 diabetes and cardiovascular diseases [1]. These chronic non-communicable diseases (NCDs) constitute the leading cause of morbidity and mortality worldwide [2]. Susceptibility to many NCDs in adults, including MetS, could originate in early life, through what is known as the "developmental origins of health and disease" (DOHaD) [3,4].
Maternal nutritional status during pregnancy is an important determinant of fetal growth and development, and its deterioration can lead to the development of MetS. Epidemiological and animal studies have consistently demonstrated a negative relationship between the maternal environment and adult metabolic diseases [3,4]. If the fetus is vulnerable to the maternal environment during intrauterine development, then the offspring of parents with MetS are at higher risk of developing this disease, regardless of sex or age [5]. Furthermore, maternal undernutrition is linked to the induction of intrauterine growth restriction (IUGR) and is one of the leading causes of perinatal morbidity, affecting approximately 7% to 15% of pregnancies worldwide [6]. Therefore, low birth weight is related to "fetal programming", which may determine the onset of future health problems [6,7]. Fetal programming allows the new organism to maintain homeostasis under inadequate conditions by redistributing nutrients to the most vital organs, resulting in altered organogenesis. While this may be beneficial for short-term survival, it may become detrimental later in life if there is a mismatch between the prenatal and postnatal environments [3,4,7]. In addition, several studies of developmental insults in animal models have shown sex differences in the response to developmental programming [8]. Thus, when deprivation is followed by abundance, catch-up growth occurs, predisposing individuals to the development of metabolic diseases [9,10].
Treatment strategies for MetS include weight loss, adoption of a healthy lifestyle, and pharmacological agents, but complete recovery has not been reported. In addition to their side effects, drug treatments are not always satisfactory for the control of metabolic complications [11]. Therefore, it is essential to explore therapeutic alternatives that are both more effective and easier to adopt. Abundant studies have provided evidence of the beneficial effects of vitamins at pharmacological concentrations for the management of MetS [12].
Biotin is a water-soluble B-complex vitamin that acts as a prosthetic group of carboxylases, which play a key role in a variety of biochemical functions, including the metabolism of carbohydrates, fatty acids, and amino acids [12][13][14][15]. Numerous preclinical and epidemiological studies have shown that biotin at pharmacological concentrations (2-100 mg, 30 to 650 times the daily requirement) has hypoglycemic, hypotriglyceridemic, and antihypertensive effects [12,13,15]. Additionally, no toxic effects of biotin have been reported at these concentrations, and its effects are related to modifications at the transcriptional, translational, and post-translational levels [12,15]. However, the effect of biotin on fetal programming has not been studied. Therefore, this study aims to investigate the beneficial effects of prenatal biotin supplementation in a rat model of intrauterine caloric restriction to prevent cardiometabolic risk in adult female offspring exposed to a chronic fructose diet postnatally.
Results
2.1. Food Intake and Body Weight in Dams and Body Weight Gain, Caloric Intake, Feed Conversion Ratio, and Lee's Index in Female Offspring
Dams subjected to global caloric restriction exhibited a significant 13% decrease in body weight during gestation compared to their control counterparts. No significant differences were observed between the GCRB and C groups. The female offspring were fed for 16 weeks according to their group assignment (C, C/F, GCR/F, GCRB/F), starting at 21 days of age. In all groups that consumed fructose, food consumption decreased compared to the control group. The progeny in the GCR/F and GCRB/F groups displayed significantly lower weaning weights compared to the control groups (C and C/F). While the GCR/F and GCRB/F groups had similar final body weights, both were reduced relative to the control groups. Interestingly, the C/F group displayed increased body weight and Lee's index compared to the restriction-diet groups. Weight gain was greater in the fructose-fed groups from undernourished mothers compared to the control group, as was caloric intake. The feed conversion ratio (FCR) was lower in the fructose-fed groups originating from undernourished mothers. Litter size remained unaffected by nutritional restriction, and maternal nursing behavior was consistent across groups (Table 1).
Table 1 note. Food intake: for dams, the average total intake over the course of gestation (21 days) was measured, while for offspring, the average intake over the last 8 weeks was recorded. Values are presented as means ± SEM. Dams: n = 4; offspring: n = 8. Means in a row with superscripts without a common letter differ, p < 0.05. For dams, a one-way ANOVA was performed, and the Tukey post hoc test was used to determine significant differences between treatment groups in the offspring groups. C = control; GCR = global caloric restriction; GCRB = global caloric restriction + biotin; C/F = control/fructose; GCR/F = global caloric restriction/fructose; GCRB/F = global caloric restriction + biotin/fructose.
Intraperitoneal Glucose Tolerance and Insulin Resistance Testing
With respect to glucose homeostasis, following the 16-week post-weaning period, the GCR/F and C/F groups exhibited elevated glucose levels at various time points, whereas biotin administration to the dams effectively countered this effect in the GCRB/F offspring group, aligning their glucose levels with those of the C group (Figure 1a,b). Similarly, biotin improved insulin resistance, reducing serum glucose levels in the GCRB/F group at specific time intervals (Figure 1c,d).
Serum Lipid Profile Parameters
Regarding lipid profiles, the GCR/F group exhibited dyslipidemia characterized by a noteworthy increase in serum triglyceride, cholesterol, and LDL levels, and a significant decrease in serum HDL levels when compared to the C group (Figure 2). In the C/F group, there was an increase in triglyceride levels compared to the other groups. In the group of offspring from mothers with caloric restriction but supplemented with biotin, the triglyceride, cholesterol, LDL, and HDL values were similar to those of the C group (Figure 2).
Hepatic Triglyceride Quantification
We observed significantly higher hepatic triglyceride levels in the GCR/F group compared to all other groups. The C/F group showed a trend towards higher hepatic triglyceride content compared to the C and GCRB/F groups, although this difference did not reach statistical significance. In contrast, the GCRB/F group exhibited a hepatic triglyceride concentration comparable to that of the C group (Figure 3).
Arterial Blood Pressure Measurement
An increase in systolic and diastolic blood pressure (BP) was observed in the GCR/F group compared to the other groups, and BP was slightly higher in the C/F group compared to the C and GCRB/F groups. Interestingly, the GCRB/F group exhibited BP values following a pattern similar to those of the C group (Figure 4).
Measurement of Vascular Reactivity
Offspring fed with fructose from calorie-restricted dams (GCR/F) exhibited concentration-dependent vascular responses to phenylephrine in isolated thoracic aortic rings, both with (Figure 5a) and without endothelium (Figure 5b), resulting in heightened reactivity compared to the other groups. Offspring from dams with unrestricted diets but fructose-fed (C/F group) displayed slightly increased aortic contractile responses to phenylephrine (PE), although these were lower than in the GCR/F group. Notably, the GCRB/F group, born to calorie-restricted but biotin-supplemented dams, exhibited aortic contractile responses similar to those of the C group (Figure 5a,b).
Discussion
A growing body of research shows that maternal malnutrition during pregnancy increases the risk of metabolic syndrome in offspring [4][5][6]. The current study represents a pioneering investigation into the therapeutic potential of maternal biotin supplementation in a rat model of malnutrition. The results indicate that adult female offspring of malnourished mothers supplemented with biotin experienced a reversal of comorbidities associated with MetS, including insulin resistance, dyslipidemia, hepatic steatosis, hypertension, and arterial hyperresponsiveness. Furthermore, the weight of the biotin-treated mothers was not affected by caloric restriction. Therefore, prenatal biotin supplementation has the potential to serve as a preventive strategy for MetS induced by global caloric restriction during pregnancy.
In this study, dams from the caloric-restricted group (GCR) had lower final body weights than the other groups, consistent with reports that caloric restriction during pregnancy leads to maternal weight loss [16][17][18][19]. Prenatal biotin supplementation maintained maternal weight gain at levels close to control values. Biotin probably compensates for the metabolic imbalance, suggesting that this treatment may have a protective effect against intrauterine malnutrition. Furthermore, all litters reached full gestation, and litter size was unaffected.
The extent of fetal growth retardation achieved by maternal diet restriction was akin to that witnessed in other rat models of IUGR [16][17][18]. Changes in body weight gain and Lee's index were noted at 16 weeks in the C/F group, with all groups exhibiting similar final weights due to catch-up growth. Many animal studies have shown a strong association between suboptimal nutrition during fetal life and postnatal catch-up growth, with lasting adverse effects across a broad phenotypic spectrum, including metabolic syndrome [20,21]. However, the introduction of fructose to promote rapid catch-up growth did not induce obesity in the GCR/F and GCRB/F groups. This divergence may stem from the active behavior and heightened metabolic rate of Wistar rats [22]. Nevertheless, it became apparent that catch-up growth exacerbated the adverse effects of maternal restriction [20,21,23]. Fructose consumption resulted in diminished food intake, consistent with prior findings [14,24]. This decrease is likely attributable to the fact that the extra high-fructose diet increased total caloric intake [14,22], as observed in the fructose-consuming groups, and metabolic programming did not modify this trend. However, a lower feed conversion ratio (FCR) was observed in the fructose-consuming groups compared to the control, suggesting greater feed efficiency, as these groups consumed less food while achieving a higher weight gain per day.
Several studies have shown that chronic fructose consumption is linked to insulin resistance and metabolic syndrome [24]. Furthermore, progeny exposed to IUGR exhibit heightened insulin resistance, with impaired insulin sensitivity linked to low birth weight [25]. In our study, we observed that in the C/F group, fructose increased serum glucose concentrations in both the glucose tolerance and insulin resistance curves, in agreement with other studies [14,22]. This effect was more pronounced in the GCR/F group. Conversely, biotin treatment significantly reduced fasting glucose levels and improved glucose tolerance in the GCRB/F group. Numerous studies have shown that pharmacological doses of biotin have hypoglycemic effects, both in patients and in murine models [12,15]. This effect is related to a decrease in the expression of gluconeogenic genes in the liver (PEPCK, glucose-6-phosphatase, and the transcription factors FoxO-1 and HNF-4-alpha), together with a positive impact on hepatic and pancreatic glucokinase mRNA expression and activity. Furthermore, biotin positively affects pancreatic endocrine function, gene expression, and insulin secretion, ultimately improving insulin sensitivity [12,15]. In the rat liver, biotin has been shown to reduce lipoperoxidation, which may influence the production of reactive oxygen species [14].
Other characteristic components of MetS are dyslipidemia and hepatic steatosis [1,11], both observed with chronic fructose consumption [22,24]. This is due to an increase in hepatic lipogenesis and subsequent lipid accumulation, caused by an imbalance between hepatic lipid acquisition and removal [24,26]. In the present study, IUGR rats (GCR/F group) showed a significant increase in serum and hepatic triglyceride concentrations, an increase in LDL, and a decrease in HDL levels. The fructose-fed control group exhibited similar behavior but with lower values than the GCR/F group. Notably, female rats demonstrated greater resistance to the development of hepatic steatosis, probably due to the influence of estrogens on hepatic metabolism [27,28]. However, biotin supplementation in the GCRB/F group decreased hepatic and serum triglyceride concentrations to levels similar to those of the control group. In addition, biotin decreased LDL levels and increased HDL levels. Pharmacological concentrations of biotin have hypotriglyceridemic effects, achieved by reducing the expression of lipogenic genes and the abundance of their proteins in the liver (SREBP1, acetyl-CoA carboxylase-1, fatty acid synthase, and pyruvate kinase), as well as in adipose tissue (SREBP1, acetyl-CoA carboxylase-1, fatty acid synthase, glucose-6-phosphate dehydrogenase, phosphofructokinase-1, and PPAR-gamma) [12,14,15]. Moreover, biotin supplementation decreases serum free fatty acid concentrations [29], blunting lipogenesis and lipoprotein export. Biotin also activates AMPK, a pivotal kinase governing lipid synthesis and oxidation, closely associated with conditions linked to MetS [30]. Moreover, biotin supplementation in mice was observed to heighten glucagon expression and secretion without concurrent alterations in fasting blood glucose levels [31]. This elevation has been associated with acutely enhanced hepatic lipid clearance and suppressed de novo lipogenesis [32].
Multiple preclinical and epidemiological studies provide strong evidence linking adverse intrauterine environments and vascular function alterations to an increased risk of cardiovascular disease, including hypertension, in adulthood [10,16,17]. Hypertension is a crucial component of MetS [1,11], and several studies have shown that long-term fructose administration in rats results in arterial hypertension [14,24]. We found increased arterial pressure in the C/F group, which was even more pronounced in the GCR/F group compared to the C group. However, blood pressure levels in the GCRB/F group were similar to those in the C group. Similarly, the contraction in arteries with and without endothelium followed a pattern similar to the blood pressure results. Previous studies by Watanabe (2008) [33] and Aguilera (2018, 2019) [13,14] support the hypotensive and vasorelaxant effects of biotin in genetically hypertensive rats and in rats with MetS, respectively. The vasorelaxant effect of biotin is proposed to be endothelium- and nitric oxide-independent, involving guanylate cyclase activation [33] and alteration of Ca2+ channels [13]. Studies in human patients and experimental models reveal specific mechanisms related to programmed hypertension in the kidney, such as a reduced nephron number, oxidative stress, epigenetic regulation, activation of the renin-angiotensin system (RAS), and sodium transporters. Renal programming is recognized as a key driver of programmed hypertension [34]. Malnutrition is also known to raise hypertension risk through mechanisms such as abnormal vascular function and stimulation of the RAS [35]. As a result, biotin could potentially modulate various systems involved in blood pressure regulation, including the kidney, the renin-angiotensin system, stress-induced stimulation of the hypothalamic-pituitary-adrenal axis, insulin resistance, and endothelial damage [14,35].
Epigenetic modifications are essential in linking a mother's nutrition with the metabolic health of her offspring. The "developmental origins of health and disease" theory suggests that non-genetic changes in physical traits result from epigenetic modifications such as DNA methylation, histone acetylation, and microRNA expression [36,37]. Biotin's influence on gene expression could involve a specific signaling pathway, to which histone biotinylation may contribute [12,15]. Therefore, our research group is currently conducting experiments to evaluate possible epigenetic modifications of specific genes related to carbohydrate and lipid metabolism through the activation of the PKG signaling pathway or histone biotinylation.
Differential responses to early-life programming based on sex have been observed in various studies, with effects varying depending on the timing of famine exposure during gestation. Studies focusing on females revealed metabolic syndrome-related alterations under malnutrition conditions [27,28]. Therefore, we decided to focus on studying the response in females. Moreover, considering the predominantly male-centric focus of prior investigations of the pharmacological effects of biotin, it is necessary to ascertain the extent to which these effects hold regardless of sex. In our study, we observed outcomes in females analogous to those reported in males with regard to the antihypertensive, hypolipidemic, and hypoglycemic effects of biotin. This observation suggests a negligible influence of female hormones on the metabolic effects of biotin on lipid and carbohydrate metabolism. Nevertheless, given the pronounced physiological and metabolic differences between the sexes, future studies should undertake comparative analyses of biotin's effects across sexes. These differences are likely to influence the differing rates of cardiometabolic risk and susceptibility to disease development in men and women.
In summary, this exploratory study demonstrates for the first time that prenatal biotin supplementation in a rat model of intrauterine growth restriction exerts a protective effect against cardiometabolic risk in female offspring exposed to postnatal fructose feeding. These risks encompass hyperglycemia, insulin resistance, dyslipidemia, hepatic steatosis, hypertension, and arterial hyperresponsiveness, emphasizing its potential therapeutic efficacy.
Experimental Animals
Rats were handled in accordance with the established Mexican guidelines for the use and care of laboratory animals (NOM-062-ZOO-1999), as approved by the Biosecurity Committee of the Biological Chemistry Research Institute/UMSNH. The animals were kept under standard environmental conditions (temperature: 25 ± 2 °C; 12 h light/dark cycle) and were fed a standard diet (calories provided as percentages: 58.3 carbohydrate, 13.1 fat, and 28.5 protein) containing biotin at a concentration of 0.00003 g/kg (Labdiet 5012, LandOLakes, Inc., Arden Hills, MN, USA).
Experimental Design
Female and male rats (3 months old) were weight-matched and housed together at a 2:1 ratio for mating for 5 days. If a vaginal plug was detected, the female was single-housed, and that time point was recorded as embryonic day 0.5. Thereafter, the dams were randomly divided into three groups (n = 4) and housed in standard cages with two rats per cage. They were placed in one of the following diet groups: a control group (C) receiving a standard diet ad libitum; a global caloric restriction group (GCR), in which food intake for the restricted rats (20%) was calculated based on the amount consumed by their respective pair mates in the ad libitum-fed group; or a global caloric restriction plus biotin group (GCRB), which received the same restriction together with biotin (Sigma-Aldrich, St. Louis, MO, USA) at a dose of 2 mg/kg body weight (IP), approximately 35 times the daily requirement for rats [13,15]. The amount of food given to the rats pair-fed at 20% of energy intake was calculated using the following formula: [(food consumed by the ad libitum-fed pair mate during the previous day / weight of this rat on the previous day) × 0.2 × (current weight of the rat for which the food was estimated)]. The biotin was dissolved in PBS buffer (pH 7.4) and administered daily throughout the gestational period. The dose of biotin was based on previous studies that showed significant antihyperglycemic, antihyperlipidemic, and antihypertensive effects [12,14,15]. Male rats used for mating (average 250 ± 10 g) received the standard diet ad libitum. After birth, the litter size was adjusted to 9 pups on postnatal day (PD) 3 to avoid nutritional bias. Dams nursed the pups until weaning at PD 21 and fed freely on the standard diet during lactation. The number of offspring per litter was quantified, and body weights were recorded on day 21 of postnatal life to prevent maternal rejection. The pups were weaned and divided into three groups (n = 8 per group), with two offspring per dam according to the type of maternal diet (C, GCR, and GCRB), and housed in standard cages with two rats per cage. Female offspring, randomly selected from each litter, were used in all subsequent experimental procedures; their estrous cycle was not monitored. For 16 weeks, all groups except the control group (C) were provided with a 20% fructose (F) solution in tap water ad libitum to induce catch-up growth. Fructose consumption was quantified by measuring the daily water intake of rats receiving 20% fructose (w/v); from the total volume of water consumed, the amount of fructose consumed per rat in grams, and the corresponding caloric intake in kilocalories (kcal), were calculated. Simultaneously, the rats were fed the standard diet. The offspring were categorized as C, GCR/F, and GCRB/F, and a control group (C/F) comprising offspring from mothers fed the standard diet was also included. After this period, the rats underwent a 12 h fast and were subsequently euthanized following established guidelines (Figure 6). Blood samples were collected through cardiac puncture after anesthesia (sodium pentobarbital, 55 mg/kg IP). Liver tissue was excised and preserved at −80 °C for subsequent analysis.
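As a worked illustration of the pair-feeding rule quoted above, the snippet below implements the formula exactly as printed (including the 0.2 factor); the numeric example and variable names are ours, not values from the study.

```python
def restricted_ration(pair_intake_g, pair_weight_g, restricted_weight_g,
                      factor=0.2):
    """Daily ration (g) for a pair-fed restricted dam:
    (pair mate's intake / pair mate's weight) x factor x restricted dam's weight."""
    return (pair_intake_g / pair_weight_g) * factor * restricted_weight_g

# Illustrative numbers: a control dam ate 25 g at 280 g body weight;
# the restricted dam currently weighs 260 g.
print(round(restricted_ration(25.0, 280.0, 260.0), 2))  # 4.64 g
```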
Assessment of Food Intake, Body Weight, Body Weight Gain, Caloric Intake, Feed Conversion Ratio, and Lee's Index
Weights and food intake of dams were measured three times per week during gestation (21 days), and those of pups weekly during the last 2 months of treatment. We calculated the average total food intake over the course of the 21 days for dams and over the last 8 weeks for pups. To determine food intake in dams and pups, the quantity of feed remaining was subtracted from the quantity of feed provided within a 24 h interval. Body weight gain was calculated by subtracting the body weight of the rat at weaning on postnatal day (PD) 21 from its weight on the last day of the 16-week period. Daily body weight gain was determined by dividing the total body weight gain by the number of days from week 8 to the last day of the 16-week period. Caloric intake was calculated as follows: caloric intake (kcal/day) = daily food intake (g) × total energy content of the food (kcal/kg) / 1000 + the kcal equivalent of 20% fructose consumption based on water intake per rat. The energy provided by the standard diet was 3200 kcal/kg. The feed conversion ratio (FCR) was calculated by dividing the feed intake of the offspring by their average daily body weight gain over the last 8 weeks. Lee's index was calculated using the following formula: Lee's index = cube root of body weight (g) / nose-to-anus length (cm) [38].
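The snippet below restates the three offspring metrics defined above as code; the 3200 kcal/kg energy density is taken from the text, while the input numbers are made-up illustrations.

```python
def caloric_intake_kcal(food_g_per_day, fructose_kcal_per_day,
                        feed_energy_kcal_per_kg=3200):
    # kcal from chow plus kcal from the 20% fructose solution (from water intake).
    return food_g_per_day * feed_energy_kcal_per_kg / 1000 + fructose_kcal_per_day

def feed_conversion_ratio(feed_intake_g, daily_gain_g):
    # Lower FCR means more weight gained per gram of feed consumed.
    return feed_intake_g / daily_gain_g

def lee_index(body_weight_g, nose_to_anus_cm):
    # Cube root of body weight (g) divided by naso-anal length (cm).
    return body_weight_g ** (1 / 3) / nose_to_anus_cm

print(caloric_intake_kcal(15.0, 20.0))   # 68.0 kcal/day
print(round(lee_index(250.0, 20.0), 3))  # 0.315
```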
Assessment of Intraperitoneal Glucose Tolerance and Insulin Resistance Testing
After the 16-week post-weaning treatment period, for the glucose tolerance test, rats were fasted for 10 h and given a single dose of glucose (2 g/kg IP). Blood samples were obtained by tail nick, and glucose concentration was determined at 0, 30, 60, and 120 min after the glucose injection using a glucometer (Accutrend Plus, ROCHE, Basel, Switzerland). For the insulin test, fed rats were injected IP with 1 IU/kg body weight of regular human insulin (Eli Lilly, Indianapolis, IN, USA), and blood glucose levels were measured at 0, 30, 60, and 120 min after the insulin injection by the same method.
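The area-under-the-curve values reported in Figure 1b,d can be obtained from these four sampling times by trapezoidal integration, as sketched below with hypothetical glucose readings.

```python
import numpy as np

times = np.array([0, 30, 60, 120])        # minutes after injection
glucose = np.array([90, 180, 150, 110])   # mg/dL, hypothetical readings

# Trapezoidal rule over unevenly spaced time points; units: mg/dL x min.
auc = np.sum((glucose[:-1] + glucose[1:]) / 2 * np.diff(times))
print(auc)  # 16800.0
```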
Serum Triglyceride, Total Cholesterol, LDL, and HDL Concentration Analysis
Fasting serum triglyceride, total cholesterol, LDL, and HDL concentrations were determined using enzymatic colorimetric kits (Spinreact, Girona, Spain) and measured spectrophotometrically, following the manufacturer's instructions.
Measurement of Hepatic Triglyceride
Total triglycerides were extracted from 100 mg of frozen liver. The samples were homogenized in 1 mL of a solution containing 5% Triton X-100 in PBS buffer using a Polytron (Kinematica AG, Littau, Switzerland), and hepatic triglyceride concentrations were assessed using a colorimetric enzymatic method based on the protocol outlined by Aguilera et al. in 2012 [30]. Briefly, samples were heated in a water bath at 50-60 °C for 2-5 min, then cooled to room temperature, and the process was repeated to solubilize the triglycerides. The samples were diluted 1:5 in distilled water and centrifuged for 15 min at maximum speed. A commercial GPO-POD enzymatic-colorimetric kit (Spinreact) was used for the analysis according to the manufacturer's instructions.
Blood Pressure Measurement
Prior to euthanasia, systolic (SBP) and diastolic (DBP) blood pressure were measured using a tail-cuff plethysmography method (CODA tail-cuff blood pressure system, Kent Scientific Corporation, Torrington, CT, USA) in conscious animals, following a previously described procedure [14].Each blood pressure reading was obtained by averaging three consistent measurements (with a variation of less than 5 mmHg) per rat.The data represent the average of 4 measurements per week over the last month.
Measurement of Vascular Reactivity in Aortic Rings
The experiments were conducted following the procedures described in previous studies [13,14]. The aortic rings were examined with the endothelium intact or removed. Briefly, the aortic rings were equilibrated until reaching a resting tension of 3.0 g and sensitized with a submaximal concentration of phenylephrine (0.1 µM) for 30 min. Concentration-response curves to phenylephrine were generated by adding cumulative concentrations ranging from 1 × 10^-9 to 1 × 10^-5 M. Contraction was measured using isometric force transducers (Grass FT03, Astro-Med, West Warwick, RI, USA) connected to an MP100 data acquisition system (Biopac Systems, Inc., Goleta, CA, USA).
Statistical Analysis
The results were analyzed using a one-way ANOVA to estimate the impact of the maternal restriction diet and the post-weaning diet as independent variables. The effect of biotin on the offspring (a mixture composed of the offspring of the four mothers per group) and the fructose groups was evaluated using a post hoc Tukey test to determine differences between the four offspring groups when a statistically significant interaction between birth weight and diet was observed. The data for the glucose tolerance tests and lipid measurements were analyzed using repeated-measures analysis. Data are presented as means ± SEM, with statistical significance assigned to outcomes displaying p < 0.05. The entire analysis was performed using SigmaPlot 11.0 software.
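For readers who want to reproduce this design in open-source tooling, the sketch below mirrors the one-way ANOVA with Tukey's post hoc test on simulated group data; the original analysis was run in SigmaPlot, and the numbers here are placeholders.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# n = 8 per offspring group, with made-up group means and SD = 5.
groups = {g: rng.normal(mu, 5.0, size=8)
          for g, mu in [("C", 100), ("C/F", 115), ("GCR/F", 130), ("GCRB/F", 102)]}

f_stat, p = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4g}")

# Tukey's HSD on the pooled observations with their group labels.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 8)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```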
Conclusions
In summary, this exploratory study demonstrated, for the first time, that prenatal biotin supplementation in a rat model of intrauterine growth restriction exerts a protective effect against certain alterations related to metabolic syndrome. These alterations include hyperglycemia, insulin resistance, dyslipidemia, hepatic steatosis, hypertension, and arterial hypercontraction in female offspring challenged with postnatal fructose feeding.
Figure 1. Effect of biotin on blood glucose. (a) Intraperitoneal glucose tolerance test. (b) Area under the curve (AUC) during the intraperitoneal glucose tolerance test. (c) Intraperitoneal insulin resistance test. (d) Area under the curve during the intraperitoneal insulin resistance test. Data (n = 8) are presented as means ± SEM. Means without a common letter differ, p < 0.05. Statistical significance was determined by repeated-measures analysis, followed by the Tukey post hoc test to determine significant differences between treatment groups.
Figure 2. Serum lipid profile. Serum concentrations of triglycerides (TG), cholesterol (CHOL), high-density lipoprotein (HDL), and low-density lipoprotein (LDL). Data (n = 8) are presented as means ± SEM. Means without a common letter differ, p < 0.05. Statistical significance was determined by repeated-measures analysis, followed by the Tukey post hoc test to determine significant differences between treatment groups.
Figure 3. Hepatic lipid concentration. Data (n = 8) are presented as means ± SEM. Means without a common letter differ, p < 0.05. Statistical significance was determined by one-way ANOVA, followed by the Tukey post hoc test to determine significant differences between treatment groups.
Figure 4. Blood pressure (systolic and diastolic values). Data (n = 8) are means ± SEM. Means without a common letter differ, p < 0.05. Statistical significance was determined using one-way ANOVA, followed by the Tukey post hoc test to determine significant differences between treatment groups.
Figure 5. Phenylephrine cumulative concentration-response curves in aortic rings. Curves were generated with endothelium (a) and without endothelium (b). Means without a common letter differ, p < 0.05. Statistical significance was determined using one-way ANOVA and Tukey's post hoc test.
Table 1. Quantification of various parameters in dams and female offspring.
| 8,008 | 2024-08-01T00:00:00.000 | ["Medicine", "Biology"] |
An evolutionary trajectory planning algorithm for multi-UAV-assisted MEC system
This paper presents a multi-unmanned aerial vehicle (UAV)-assisted mobile edge computing system, where multiple UAVs are used to serve mobile users. We aim to minimize the overall energy consumption of the system by planning the trajectories of the UAVs. To plan the trajectories, we need to consider the deployment of the hovering points (HPs) of the UAVs, their association with UAVs, and their visiting order for each UAV. The problem is therefore very complicated, as it is non-convex, nonlinear, NP-hard, and mixed-integer. To solve it, this paper proposes an evolutionary trajectory planning algorithm (ETPA), which comprises four phases. In the first phase, a variable-length GA is adopted to update the deployment of HPs for the UAVs. Then, redundant HPs are removed by the remove operator. Subsequently, a differential evolution clustering algorithm is adopted to cluster HPs into different clusters without knowing the number of clusters in advance. Finally, a GA is proposed to construct the order of HPs for each UAV. The experimental results on a set of eight instances show that the proposed ETPA outperforms the compared algorithms in terms of the energy consumption of the system.
Introduction
With the development of mobile communication systems, a huge number of resource-intensive and latency-sensitive applications are emerging, such as virtual reality and online gaming. Such applications are usually sensitive to latency and require huge computational resources. However, due to the limitations of mobile users' (MUs) devices, it is very difficult to execute these tasks on the devices themselves.
Mobile edge computing (MEC) is a promising technology to address the above-mentioned issue. It can provide services with low latency and high reliability near or at MUs: the tasks of MUs are executed at a nearby edge cloud, and the results are sent back to the MUs (Asim et al. 2020). Due to the shorter physical distance between the MEC server/edge cloud and the MUs, it consumes less energy compared to mobile cloud computing. However, it still falls short of fulfilling the requirements of MUs, as the location of the edge cloud is usually fixed and cannot be adjusted flexibly according to the requirements of MUs. In particular, it cannot provide timely services during a natural disaster, as the terrestrial communication links may be broken or lost.
To satisfy this ever-increasing demand, the unmanned aerial vehicle (UAV) is regarded as one of the most promising technologies to achieve these ambitious goals. Compared to traditional communication systems that utilize terrestrial fixed base stations, UAV-aided communication systems are more cost-effective and likely to achieve a better quality of service due to their appealing properties of flexible deployment, fully controllable mobility, and low cost. In fact, with the assistance of UAVs, system performance (e.g., data rate and latency) can be significantly enhanced by establishing line-of-sight communication links between UAVs and MUs. In addition, by dynamically adjusting their flying and hovering locations, UAVs are capable of improving communication performance in wireless networks.
Recently, due to the above-mentioned advantages, UAVs have been extensively used in various fields, such as wireless communication (Zaini and Xie 2019; Mozaffari et al. 2019), military applications (Low et al. 2017; Zeng et al. 2016), surveillance and monitoring (Olsson et al. 2010; Yuan et al. 2016), delivery of medical supplies (Gupta et al. 2020), and rescue operations (Gomez et al. 2015; Merwaday and Guvenc 2015). Very recently, UAVs have been used to enhance the capabilities of MEC systems. For example, one work studied a multi-UAV-enabled MEC system, where several UAVs are deployed as flying edge clouds for large-scale MUs. Zhang et al. (2020) proposed a UAV-assisted MEC approach for efficient multitask scheduling to minimize completion time. Garg et al. (2018) studied the application of a UAV-empowered MEC system to cyber-threat detection in smart vehicles.
Moreover, to fully exploit the potential of UAV-assisted MEC systems, some researchers have studied appropriate path planning and trajectory design for UAVs. For instance, one work proposed a multi-agent deep reinforcement learning-based trajectory planning algorithm for a UAV-aided MEC framework, where several UAVs with different trajectories fly over the target area and support the ground MUs. Wu and Zhang (2018) studied a practical scenario of UAVs in an orthogonal frequency-division multiple access (OFDMA) system; they proposed an iterative block coordinate descent approach for optimizing the UAV's trajectory and OFDMA resource allocation to maximize the minimum average throughput of MUs. Diao et al. (2019) optimized joint trajectory and data allocation to minimize the maximum energy consumption. Jeong et al. (2018) studied bit allocation and trajectory planning under latency and energy budget constraints. Hu et al. (2019) developed a UAV-assisted relaying and MEC system, where the UAV can act as the MEC server or as a relay; they proposed a joint task scheduling and trajectory optimization algorithm to minimize the weighted sum energy consumption of UAVs and MUs subject to task constraints. Yang et al. (2019) addressed the sum power minimization problem for a UAV-enabled MEC network. Another work studied a multi-UAV-assisted MEC system, where the UAVs act as edge servers to provide computing services for Internet of Things devices. Zeng et al. (2019) proposed an efficient algorithm to optimize the trajectory of a UAV, including the hovering locations and durations; they formulated the problem as a traveling salesman problem to minimize the energy consumption of the UAV. Asim et al. (2021) studied a multi-UAV-assisted MEC system and proposed a novel genetic trajectory planning algorithm with variable population size to minimize the energy consumption of the system. Xu et al. (2021) investigated the computing delay issue in multi-UAV-assisted MEC systems, aiming to minimize the task completion time; specifically, they considered both the partial offloading and binary offloading modes by jointly optimizing the time slot size, terminal device scheduling, computation resource allocation, and the UAVs' trajectories. Tun et al. (2021) proposed a UAV-aided MEC system in which the energy consumption of the Internet of Things devices and the UAVs during task execution is jointly minimized by optimizing the task offloading decisions, the resource allocation mechanism, and the UAV's trajectory. Ji et al. (2020) investigated joint resource allocation and trajectory design for UAV-assisted MEC systems, jointly optimizing resource allocation and the UAV trajectory to minimize the weighted sum energy consumption of the UAV and user devices.
From the above introduction, it is clear that a variable number of UAVs has rarely been considered in existing studies, yet deploying an appropriate number of UAVs can improve the system's performance. The main contributions of this paper are summarized as follows: • A new multi-UAV-assisted MEC system is proposed and formulated to minimize the energy consumption of the system by considering the deployment (the number and locations) of hovering points (HPs), the number of UAVs and their association with HPs, and the order of HPs. • The deployment of HPs is addressed by proposing a genetic algorithm (GA) with variable-length individuals. Specifically, evolutionary operators such as crossover and mutation are modified to handle variable-length individuals. • An evolutionary trajectory planning algorithm (ETPA) is proposed that consists of four phases. First, a variable-length GA (VLGA) (Ting et al. 2009) is adopted to optimize the deployment of HPs. Subsequently, redundant HPs that have no MUs to serve are removed by the remove operator. After that, UAVs are associated with HPs via a differential evolution clustering (DEC) algorithm (Mostapha 2015). Finally, a GA is adopted to construct the order of HPs for the UAVs. The remainder of this paper is organized as follows. In Sect. 2, we introduce the system model, including the problem formulation of the proposed system. Section 3 presents the details of our proposed algorithm, ETPA. In Sect. 4, the experimental studies are discussed. Finally, Sect. 5 concludes this paper.
System model
Each UAV flies over the MUs to collect their data. We assume that a UAV hovers at certain points for some time, during which MUs can send their sensing data to it. We assume UAV $j$ hovers over $t \in T_j = \{1, 2, \ldots, T_j\}$ HPs. Therefore, one has the binary association indicator $a_{ij}[t] \in \{0, 1\}$,
where $a_{ij}[t] = 1$ denotes that the $i$-th MU decides to send its sensing data to the $j$-th UAV at the $t$-th HP, while $a_{ij}[t] = 0$ indicates otherwise. Each MU should choose exactly one UAV at each HP to send its sensing data, and we assume that an MU always sends its data to the closest UAV at each HP $t$. Assume that at each HP $t$, the $j$-th UAV can accept at most $U_j$ MUs. We further assume that the $i$-th MU collects $D_i$ amount of data, which it intends to send to the UAV. The UAV may stop at $T_j$ points in the air, where each stop lasts at most $T_{max}$ seconds, with $T_{max}$ a fixed value.
Then, the time to send the data from the MU to the UAV at the $t$-th HP is $t^{send}_{ij}[t] = D_i / r_{ij}[t]$, where $r_{ij}[t]$ is the data rate given by (14). Also, define $F_i$ as the number of CPU cycles this task needs for processing. Then, the processing time of the data in the UAV is $t^{proc}_{ij}[t] = F_i / f_{ij}[t]$, where $f_{ij}[t]$ is the computation capacity of the UAV assigned to each data-processing procedure, with $f_{ij}[t] \le f_{max}$, where $f_{max}$ is the maximal computing power the UAV can provide to each MU. Assume the coordinate of the $i$-th MU is $(x_i, y_i)$ and the coordinate of the $j$-th UAV at the $t$-th HP is $(X_j[t], Y_j[t], H)$. The UAV's trajectory can be characterized by a sequence of locations $q_j[t] = (X_j[t], Y_j[t])$. In addition, all UAVs start from the same initial position $q[0]$ and return to $q[0]$ after visiting all their HPs. Also, the distance between consecutive HPs satisfies $\lVert q_j[t] - q_j[t-1] \rVert \le S_{max}$, where $S_{max} = V_{max} \cdot T_{max}$ is the maximum horizontal distance the UAV can travel and $V_{max}$ is its maximum speed. The horizontal distance between the $i$-th MU and the UAV is $\sqrt{(X_j[t]-x_i)^2 + (Y_j[t]-y_i)^2}$, and the distance between the $i$-th MU and the UAV at the $t$-th HP is $d_{ij}[t] = \sqrt{(X_j[t]-x_i)^2 + (Y_j[t]-y_i)^2 + H^2}$. Then, the channel power gain can be given as $h_{ij}[t] = \beta_0 / d^2_{ij}[t]$, where $\beta_0$ denotes the channel power gain at the reference distance of 1 m.
If MUs decide to offload to the UAVs, the data rate can be given as $r_{ij}[t] = B \log_2\left(1 + p^{ue}_i h_{ij}[t] / \sigma^2\right)$, where $\sigma^2$ is the noise power and $p^{ue}_i$ is the transmission power, constrained by its maximum allowed value. The energy consumption of the $i$-th MU for sending data to the $j$-th UAV at the $t$-th HP is $e_{ij}[t] = p^{ue}_i \, t^{send}_{ij}[t]$, and the whole energy consumption of all MUs is the sum of these terms over all MUs, UAVs, and HPs. Assuming the flying energy of a UAV is proportional to its flying distance/flying time, the flying energy can be calculated from the total path length divided by $V_{max}$. For the hovering energy, one has $E^{hov}_j = P_H \sum_t \tau_j[t]$, where $P_H$ denotes the hovering power of the UAV and $\tau_j[t]$ the hovering duration at the $t$-th HP.
The whole energy consumption of all UAVs is expressed as the sum of their flying and hovering energies plus a fixed term per UAV, where $C$ is the fixed cost, including takeoff, landing, and maintenance costs, incurred for each added UAV. Then, we have the following optimization problem.
P: minimize (21(a)) subject to the system constraints, where the objective function is the sum of the hovering energy and flying energy of the UAVs, and constraints C8 and C9 give the lower and upper bounds of the X-axis and Y-axis, respectively.
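To make the objective concrete, the following is a minimal sketch of how the total energy of P might be evaluated for a candidate solution. All numeric constants (channel gain, noise power, bandwidth, powers, the fixed cost) are illustrative assumptions rather than values from the paper, and for brevity the sketch lets every MU upload at every HP instead of masking with the association variables $a_{ij}[t]$.

```python
import numpy as np

# All constants below are illustrative assumptions, not values from the paper.
BETA0 = 1e-3     # channel power gain at the 1 m reference distance
SIGMA2 = 1e-10   # noise power sigma^2 (W)
B = 1e6          # channel bandwidth (Hz), assumed for the Shannon rate
H = 100.0        # UAV flight altitude (m)
P_HOV = 59.2     # hovering power P_H (W)
P_FLY = 50.0     # flying power (W); flying energy ~ flight time, as stated
V_MAX = 20.0     # maximum UAV speed (m/s)
C_FIXED = 1e3    # fixed cost C per UAV (takeoff, landing, maintenance)

def rate(mu_xy, hp_xy, p_ue):
    """Uplink rate between an MU and a UAV hovering over hp_xy."""
    d2 = (mu_xy[0] - hp_xy[0])**2 + (mu_xy[1] - hp_xy[1])**2 + H**2
    return B * np.log2(1.0 + p_ue * (BETA0 / d2) / SIGMA2)

def total_energy(mus, trajectories, p_ue, data_bits):
    """Objective of P: MU transmit energy + UAV flying/hovering energy + M*C."""
    total = C_FIXED * len(trajectories)            # one fixed cost per UAV
    for hps in trajectories:                       # one HP sequence per UAV
        hps = np.asarray(hps, dtype=float)
        closed = np.vstack([hps, hps[:1]])         # tour returns to its start
        dist = np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
        total += P_FLY * dist / V_MAX              # flying energy
        for hp in hps:
            for i, mu in enumerate(mus):           # simplification: every MU
                t_send = data_bits[i] / rate(mu, hp, p_ue)  # uploads at every HP
                total += p_ue * t_send             # MU transmission energy
                total += P_HOV * t_send            # UAV hovers while receiving
    return total
```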
Motivation
By analyzing the proposed system model and the problem formulation in Sect. 2, it is clear that (21(a)) is a non-convex, nonlinear, NP-hard optimization problem. It cannot be solved by traditional optimization methods due to the following challenges.
• To solve (21(a)), we need to jointly decide the number of UAVs, the number of HPs and their locations, which MU sends data to which HP, which UAV visits which HPs, and in which order each UAV visits its assigned HPs. It is therefore a highly complex problem.
• (21(a)) contains the integer decision variables $M$ and the number of HPs $T_j$ for UAV $j$, the binary variables $a_{ij}$, and the continuous variables ($X_j$ and $Y_j$). Therefore, it is a mixed-variable problem, which is challenging to solve (Liao et al. 2014). • Since the number of UAVs is unknown a priori, clustering the HPs requires an unsupervised scheme (i.e., an initialization-free/parameter-free clustering algorithm) that can group closely spaced HPs into clusters automatically and simultaneously find the optimal number of clusters/UAVs (Sinaga and Yang 2020).
In this paper, we proposed an algorithm called ETPA to design the trajectories of UAVs. The proposed algorithm consists of four phases: the deployment of HPs, removing redundant HPs, the association between UAVs and HPs, and the order of HPs for UAVs.
The main technical advantages of the proposed algorithm are as follows.
• ETPA accounts for the strong coupling among the deployment of HPs, the association between UAVs and HPs, and the order of HPs: it plans the trajectories of the UAVs at each iteration through four phases, namely updating the deployment of HPs, removing redundant HPs, associating UAVs with HPs, and constructing the optimal trajectories for the UAVs. • In ETPA, the deployment of HPs is solved by using the VLGA of Ting et al. (2009). Each individual represents a whole deployment; thus, the whole population represents a set of deployments. Since the length of individuals is variable, we modified the common crossover and mutation operators to handle variable-length individuals when updating the deployment of HPs. • The optimization problem (21(a)) includes mixed decision variables, i.e., integer, binary, and continuous variables. By analyzing the problem, we transformed it into subproblems so that no mixed variables are involved, and solved each subproblem independently with an efficient algorithm.
ETPA
The framework of ETPA is given in Algorithm 1. In the initialization, the locations of HPs are produced randomly, forming an initial population $POP = \{(X_1, Y_1), (X_2, Y_2), \ldots, (X_{max}, Y_{max})\}$. Subsequently, redundant HPs are removed, to restrict UAVs from visiting HPs that serve no MU, by the procedure given in Algorithm 3. Next, the DEC algorithm in Algorithm 4 is adopted to group HPs into clusters, and a UAV is assigned to each cluster. Afterward, the GA in Algorithm 5 is adopted to construct the order of HPs in each cluster. $POP$ is then evaluated via Eq. (21(a)); if it is feasible, the initial population has been generated successfully; otherwise, the initialization is repeated until a feasible population is found or the number of fitness evaluations ($FEs$) reaches the maximum ($FEs_{max}$).
The deployment of HPs
For the deployment of HPs, the VLGA of Ting et al. (2009) is adopted. GA is a simple, popular, and effective EA that has been successfully applied in many fields (Asim et al. 2018, 2017a; Mashwani and Salhi 2012; Mashwani et al. 2021). More specifically, different from Ting et al. (2009), tournament selection (Goldberg and Deb 1991), simulated binary crossover (SBX) (Deb and Agrawal 1995; Deb and Beyer 1995; Deb et al. 2007), and polynomial mutation (Deb and Beyer 1996) operators are adopted in ETPA to generate an offspring population $POP_{off}$ (i.e., locations of new HPs). The individuals of $POP_{off}$ are used to update the parent population $POP$ (i.e., the locations of HPs).
Each individual in the GA represents a whole deployment of HPs; therefore, the whole population represents a set of deployments, and the number of HPs equals the length of an individual. Since the lengths of individuals in $POP$ are not the same, i.e., variable, they vary during evolution as the number of HPs is updated: an individual's length can increase, stay unchanged, or decrease by applying an SBX designed for variable-length individuals. Using Algorithm 2, we construct the offspring population $POP_{off}$. More specifically, we designed a special scheme to apply the SBX operator (Ting et al. 2009) to variable-length individuals. First, the lengths of the two individuals are compared. If, for example, individual 1 has four substrings and individual 2 has five, the shorter individual is chosen as parent 1 to deal with the unequal lengths. The substrings of parent 1 are then mapped randomly to substrings of the longer individual (parent 2), and SBX is performed on the four pairs of mapped substrings of parents 1 and 2 (Ting et al. 2009).
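A minimal sketch of this variable-length SBX scheme is given below, assuming each individual is a list of (x, y) HP substrings; the function names and the distribution index eta are our own illustrative choices, not from Ting et al. (2009).

```python
import random

def sbx_pair(x1, x2, eta=15.0):
    """Standard SBX on one pair of real values."""
    u = random.random()
    beta = (2 * u) ** (1 / (eta + 1)) if u <= 0.5 else (1 / (2 * (1 - u))) ** (1 / (eta + 1))
    c1 = 0.5 * ((1 + beta) * x1 + (1 - beta) * x2)
    c2 = 0.5 * ((1 - beta) * x1 + (1 + beta) * x2)
    return c1, c2

def variable_length_sbx(parent1, parent2, eta=15.0):
    """SBX for variable-length individuals (lists of (x, y) HP substrings).

    The shorter parent's substrings are mapped randomly onto substrings of
    the longer parent; SBX is applied pairwise, and unmapped substrings of
    the longer parent are copied unchanged.
    """
    short, long_ = sorted((parent1, parent2), key=len)
    mapping = random.sample(range(len(long_)), len(short))
    child_s, child_l = [], list(long_)
    for i, j in enumerate(mapping):
        (x1, y1), (x2, y2) = short[i], long_[j]
        cx1, cx2 = sbx_pair(x1, x2, eta)
        cy1, cy2 = sbx_pair(y1, y2, eta)
        child_s.append((cx1, cy1))
        child_l[j] = (cx2, cy2)
    return child_s, child_l
```

Note that the two children inherit the lengths of their respective parents, so the number of HPs encoded in the population can drift as selection favors shorter or longer deployments.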
If the new population were composed only of the newly created descendants, the old population's best individual might be lost. To eliminate this deficiency, a further operator, so-called elitism, is used. This operator ensures that the previous population's best individual enters the new population without any modification; thus, the best solution found so far survives throughout the evolutionary process.
Algorithm 3 Removing HPs with no MU
1: $U$ ← Find the unique associations between MUs and HPs;
2: $D$ ← Find the set difference between the index set of HPs/$POP$ and $U$;
3: $UpdatedPOP$ ← Update HPs by removing the HPs with indexes $D$ from $POP$;
Removing redundant HPs
After associating MUs with their closest HPs via Eq. 3, some redundant HPs remain that have no MUs associated with them. We update the number of HPs by removing these redundant HPs using Algorithm 3. First, we find the unique associations $U$ between MUs and HPs (line 1); then, we calculate the set difference $D$ between the index set of HPs/$POP$ (which runs from 1 to size($POP$)) and $U$ (line 2); and finally we remove the HPs with indexes in $D$ from $POP$ (line 3). By removing redundant HPs, we restrict UAVs from visiting them, so flying energy is saved; in addition, this shortens the running time of ETPA.
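A compact sketch of Algorithm 3, assuming HPs and MUs are given as 2-D coordinate arrays and the MU-HP association follows the closest-HP rule of Eq. 3:

```python
import numpy as np

def remove_redundant_hps(hps, mus):
    """Drop HPs that no MU is associated with (Algorithm 3 sketch).

    Each MU is associated with its closest HP; HPs whose index never
    appears in that association (the set difference D) are removed.
    """
    hps = np.asarray(hps, dtype=float)          # shape (num_hps, 2)
    mus = np.asarray(mus, dtype=float)          # shape (num_mus, 2)
    # distance matrix of shape (num_mus, num_hps)
    d = np.linalg.norm(mus[:, None, :] - hps[None, :, :], axis=-1)
    used = np.unique(d.argmin(axis=1))          # U: unique associated HP indexes
    redundant = np.setdiff1d(np.arange(len(hps)), used)  # D: set difference
    return np.delete(hps, redundant, axis=0)
```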
Association between UAVs and HPs
In this section, we group HPs into clusters, and a UAV is then associated with the HPs of each cluster. However, since the number of UAVs is unknown, we need a clustering algorithm that does not require the number of clusters/UAVs in advance. Clustering can be stated as a particular kind of NP-hard grouping optimization problem (Falkenauer 1998), so it can be solved by optimization algorithms and metaheuristics. Specifically, evolutionary algorithms (EAs) are widely used for solving NP-hard problems and provide near-optimal solutions in reasonable time (Hruschka et al. 2009); consequently, a large number of EAs for clustering have been proposed in the past. EAs optimize an objective function (the so-called fitness function) that guides the evolutionary search (Hruschka et al. 2009). ETPA adopts the DEC algorithm of Mostapha (2015) to automatically cluster HPs. Specifically, DE/rand/1 and binomial crossover (Qin et al. 2009) are used to produce offspring. Like other EAs, DEC is driven by a fitness function, computed here using the Davies-Bouldin index (DBI) (Davies and Bouldin 1979). The DBI is a function of the ratio of the within-cluster scatter to the between-cluster separation (Bandyopadhyay and Maulik 2002). The scatter within cluster $C_i$ is computed as

$$S_{i,q} = \left( \frac{1}{n_i} \sum_{x \in C_i} \lVert x - z_i \rVert^q \right)^{1/q},$$

where $S_{i,q}$ is the $q$th root of the $q$th moment of the HPs in cluster $i$ with respect to their mean and measures the dispersion of the HPs in cluster $i$. In particular, $S_{i,1}$ is the average Euclidean distance of the vectors in cluster $i$ to its centroid $z_i$, defined as

$$z_i = \frac{1}{n_i} \sum_{x \in C_i} x,$$

where $n_i$ is the cardinality of $C_i$, i.e., the number of HPs in cluster $C_i$. The Minkowski distance of order $t$ between clusters $C_i$ and $C_j$ is defined as

$$d_{ij,t} = \left( \sum_{k} \lvert z_{ik} - z_{jk} \rvert^t \right)^{1/t}.$$

The DBI is then defined as

$$\mathrm{DBI} = \frac{1}{K} \sum_{i=1}^{K} R_i, \qquad R_i = \max_{j \ne i} \frac{S_{i,q} + S_{j,q}}{d_{ij,t}},$$

where $K$ is the number of clusters. The objective is to minimize the DBI to obtain a proper clustering of the HPs.
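The following sketch computes the DBI exactly as defined above (it assumes at least two clusters). For the common special case q = 1, t = 2, scikit-learn's `davies_bouldin_score` computes the same quantity.

```python
import numpy as np

def davies_bouldin(hps, labels, q=1, t=2):
    """Davies-Bouldin index of a clustering of HPs (lower is better).

    S_i is the q-th root of the q-th moment of cluster members about the
    centroid (q=1 gives the mean Euclidean distance); d_ij is the Minkowski
    distance of order t between centroids. Assumes >= 2 clusters.
    """
    hps = np.asarray(hps, dtype=float)
    labels = np.asarray(labels)
    ids = np.unique(labels)
    z = np.array([hps[labels == k].mean(axis=0) for k in ids])  # centroids
    s = np.array([
        np.mean(np.linalg.norm(hps[labels == k] - z[i], axis=1) ** q) ** (1 / q)
        for i, k in enumerate(ids)
    ])
    K = len(ids)
    dbi = 0.0
    for i in range(K):
        r = [(s[i] + s[j]) / np.linalg.norm(z[i] - z[j], ord=t)
             for j in range(K) if j != i]
        dbi += max(r)                       # R_i: worst-case cluster overlap
    return dbi / K
```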
The DEC algorithm is explained in Algorithm 4. First, for each individual in the population $POP$, a random number $j$ in the range $[j_{min}, j_{max}]$ is generated. This individual is assumed to represent the centers of $j$ clusters. To initialize these centers, $j$ HPs are chosen randomly from the set of HPs and distributed randomly in $POP$. After that, the DBI is calculated using Eq. (24). Subsequently, the offspring population is generated using the DE operators, and the new population is evaluated using Eq. (24). The population with the minimum DBI is selected as the parent population for the next iteration. This process continues until the maximum number of iterations $MaxIter$ is reached. Finally, the solution with the minimum DBI is selected as the best solution; hence, the number of clusters together with a proper clustering is obtained (i.e., $C_j$ clusters are obtained, where $j$ is the number of clusters).
The order of HPs
In this subsection, we design the optimal trajectories for all UAVs. This problem can be treated as a traveling salesman problem. In ETPA, we use a GA to construct the optimal order of HPs for all UAVs; GA is a popular EA that ensures good convergence in solving the traveling salesman problem (Larrañaga et al. 1999). Specifically, Swap, Flip, and Slide operators are used in the GA to produce offspring populations. The implemented operators are given below; a minimal code sketch of them follows the list.
• Swap: selects two HPs and swaps them. The selected HPs can belong to the same or different routes. • Flip/Inversion: selects a sub-route and reverses the visiting order of the HPs belonging to it. • Slide/Insertion: selects an HP and inserts it at another position. The route where it is inserted is selected randomly; with a given probability, a new route containing only this HP may be created.
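As a minimal sketch, the three operators can be written as follows for a route represented as a Python list of HP indices; the function names and the copy-then-mutate style are our own illustrative choices.

```python
import random

def swap(route):
    """Swap: exchange two randomly chosen HPs."""
    r = route[:]
    i, j = random.sample(range(len(r)), 2)
    r[i], r[j] = r[j], r[i]
    return r

def flip(route):
    """Flip/Inversion: reverse the visiting order of a random sub-route."""
    r = route[:]
    i, j = sorted(random.sample(range(len(r)), 2))
    r[i:j + 1] = reversed(r[i:j + 1])
    return r

def slide(route):
    """Slide/Insertion: remove one HP and reinsert it at a random position."""
    r = route[:]
    hp = r.pop(random.randrange(len(r)))
    r.insert(random.randrange(len(r) + 1), hp)
    return r
```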
As can be seen in Algorithm 5, the algorithm requires two inputs: the coordinates of the HP locations and the distance matrix containing the traveling distances among HPs. Furthermore, it requires some parameters, such as the population size, the maximum iteration number, and some additional constraints. After these steps, the initial population is created, consisting of randomly generated individuals. The fitness function simply sums the overall route lengths of each UAV within an individual. As can be seen in Algorithm 6, the selection is tournament selection, where the tournament size, i.e., the number of individuals competing for survival, is 8; the population size must therefore be divisible by 8. The winner of the tournament is the member with the smallest fitness; this individual is selected for new individual creation and enters the new population without any modification. After selecting parents from the population, the GA's operators, i.e., Swap, Flip, and Slide, given in Algorithm 6, are applied to produce the offspring population. The population with the minimum tour (i.e., minimum distance) is selected as the parent population for the next iteration. Finally, the best routes/solutions are obtained for the UAVs.
Experimental settings
The parameter settings of the proposed multi-UAV-assisted MEC system are presented in Table 1. We tested eight instances with up to 200 MUs to evaluate the performance of ETPA. We assumed that all MUs are distributed randomly in a 1000 m × 1000 m square region. The maximum number of fitness evaluations ($FEs_{max}$) is set to 5000, and 20 independent runs are performed for each algorithm. The mean energy consumption and the standard deviation of the proposed system over the 20 runs are denoted by mean EC and Std Dev, respectively. Furthermore, we performed the Wilcoxon rank-sum test at the 0.05 significance level. In the experimental results, ↑ and ↓ indicate that ETPA performs significantly better or worse than its competitors, respectively, and a neutral marker indicates similar performance.
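As a sketch, the per-instance significance test described above might be implemented as follows; the helper name and the marker characters stand in for the paper's ↑/↓ convention, and lower EC is taken as better.

```python
from scipy.stats import ranksums

def compare(etpa_ecs, rival_ecs, alpha=0.05):
    """Wilcoxon rank-sum test on per-run energy consumption (20 runs each).

    Returns '+' if ETPA is significantly better (lower EC), '-' if it is
    significantly worse, and '=' if no significant difference is found.
    """
    stat, p = ranksums(etpa_ecs, rival_ecs)
    if p >= alpha:
        return '='
    mean = lambda xs: sum(xs) / len(xs)
    return '+' if mean(etpa_ecs) < mean(rival_ecs) else '-'
```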
Algorithm 6 GA operators: Flip, Slide, and Swap
1: while $i < popsize$ do
2:   Apply tournament selection with tournament size 8 to select 8 individuals from $POP$;
3:   for $k := 1$ to $8$ step $1$ do
4:     Flip ← Apply Flip to flip 2 HPs;
5:     Swap ← Apply Swap to transpose HPs from two random individuals;
6:     Slide ← Apply Slide to slide the HPs of a random individual;
7:   end for
8:   $i = i + 8$;
9: end while
10: OUTPUT: new population $POP_N$;
Effectiveness of the deployment of HPs
The deployment of HPs is addressed by proposing a GA with variable-length individuals. To prove its effectiveness, we replaced the proposed GA in ETPA with DEVIPS and developed a variant called DEVIPS-ETPA, in which the deployment of HPs is updated by using DEVIPS. The experimental results of ETPA and DEVIPS-ETPA are presented in Table 2 and show that the proposed ETPA outperforms DEVIPS-ETPA in terms of mean EC. Furthermore, as summarized at the bottom of Table 2, ETPA provides better statistical results than DEVIPS-ETPA. Moreover, Fig. 2 shows that ETPA converges faster than DEVIPS-ETPA and maintains better performance during evolution. The better performance of ETPA is attributed to its variable-length GA, which quickly finds the appropriate number of HPs, leading to the performance improvement. A further comparison against the variant ETPA-W is given in Table 3, which shows that ETPA performs better than ETPA-W in terms of mean EC on all eight instances. In addition, ETPA provides statistically better results than ETPA-W, as can be seen at the bottom of Table 3; Fig. 3 further illustrates this comparison.
Effectiveness of the association between UAVs and HPs
To associate UAVs with HPs, this paper adopts the DEC algorithm given in Algorithm 4. To show the effectiveness of this association, we replaced DEC with the K-means algorithm (Jain 2010) and designed a variant called Kmeans-ETPA. The experimental results of ETPA and Kmeans-ETPA are listed in Table 4 and show that ETPA performs better than Kmeans-ETPA in terms of mean EC on all eight instances. In addition, ETPA provides statistically better results than Kmeans-ETPA, as can be seen at the bottom of Table 4. To further evaluate its effectiveness, Fig. 4 presents the evolution of the mean EC of ETPA and Kmeans-ETPA on the eight instances, showing that ETPA converges faster than Kmeans-ETPA and maintains better performance during evolution. The reason why ETPA performs better than Kmeans-ETPA is straightforward: the DEC algorithm in ETPA groups closely spaced HPs into the same cluster automatically, without knowing the number of clusters, which reduces the EC of the system. In addition, it can predict the optimal number of UAVs, which reduces extra cost and improves system performance.
Effectiveness of GA
To construct the order of HPs for UAVs, this paper adopts the GA in Algorithm 5. To show its effectiveness, we replaced it with a greedy ordering strategy, yielding a variant called ETPA-Greedy; the experimental results of ETPA and ETPA-Greedy are listed in Table 5 and show that ETPA performs better than ETPA-Greedy in terms of mean EC on all eight instances. In addition, ETPA provides statistically better results than ETPA-Greedy, as can be seen at the bottom of Table 5. To further evaluate its effectiveness, Fig. 5 presents the evolution of the mean EC of ETPA and ETPA-Greedy on the eight instances, showing that ETPA converges faster than ETPA-Greedy and maintains better performance during evolution. The reason why ETPA performs better than ETPA-Greedy is straightforward: the GA in ETPA is a well-known EA with good convergence on NP-hard problems.
Conclusion
This paper has presented a multi-UAV-assisted MEC system, where multiple UAVs serve MUs. A trajectory planning problem was formulated as an optimization problem with the aim of minimizing the system energy consumption. To solve the problem, we proposed an evolutionary trajectory planning algorithm consisting of four phases. In the first phase, a genetic algorithm with variable-length individuals was adopted for the deployment of HPs; it updates the number and locations of HPs using genetic operators designed for variable-length individuals. Next, redundant HPs were removed by the remove operator. Afterward, the association between UAVs and HPs was determined by the DEC algorithm. Finally, a GA was adopted to construct the trajectories of all UAVs with the aim of reducing their flight distances. The experimental results on eight instances with up to 200 MUs have shown that the proposed ETPA outperforms the compared variants in terms of minimizing the system energy consumption. In the future, we intend to develop low-complexity algorithms that account for the dynamic environments of real-world MEC systems.
"Computer Science"
] |
IL28B SNPs rs12979860 and rs8099917 Are Associated with Inflammatory Response in Argentine Chronic HCV Patients
Background: Hepatitis C virus (HCV) is a major cause of chronic liver disease, including cirrhosis and liver cancer. The aim of our study was to determine whether IL28B single nucleotide polymorphisms (SNPs) rs12979860 and rs8099917 can be considered a prognostic host factor in untreated chronic HCV patients. Methods: We set up a real-time allele-specific PCR amplification to determine the allele present at each polymorphic site, and statistically grouped and compared these results with clinical data. Results: We determined rs12979860 and rs8099917 genotype and allele frequencies in a single cohort of untreated chronically HCV-infected patients. We found significant associations between higher inflammatory activity, measured as ALT levels or METAVIR scores, and the rs12979860 CC (P = 0.0013 and P = 0.0033, respectively) and rs8099917 TT (P = 0.0005 and P = 0.0264, respectively) genotypes. Interestingly, considering both genotypes together, we also found an association with ALT levels (P = 0.0003; OR = 5.125) and METAVIR scores (P = 0.0038; OR = 5.179), suggesting an additive effect on liver inflammation in these patients. Conclusion: We show an association between hepatic inflammatory activity in a single Argentine cohort of untreated chronic HCV patients and SNPs located in the interferon lambda gene region. The studied polymorphisms, together with further innate and adaptive immune responses, clearly play a role in modulating HCV-infected patients' outcomes, contributing to hepatic inflammation and possible fibrosis/cirrhosis.
Introduction
Chronic hepatitis C is a liver disease caused by the hepatitis C virus (HCV), a blood-borne virus mostly transmitted through unsafe injection practices, but also through inadequate sterilization of medical equipment or unscreened blood and blood products. HCV can cause both self-limited and chronic infections, ranging in severity from a mild illness lasting a few weeks to a serious, lifelong illness.
Based on World Health Organization reports, 150 million people are infected with chronic hepatitis C (approximately 600,000 in Argentina), and around 500,000 people die each year from hepatitis C-related liver diseases [1]. Unfortunately, a vaccine for hepatitis C is not currently available, and early diagnosis of HCV infection is rare. People who go on to develop chronic HCV infection often remain undiagnosed until serious liver damage has developed. Although many direct-acting antivirals have been developed in recent years, these are not widely available; the most widespread standard treatment at the moment is interferon alpha plus ribavirin, and the success of such therapy depends on host and virus factors [2]. The degree of liver damage, measured by liver biopsies or through a variety of non-invasive tests, together with the determination of HCV genotype, is used to guide therapeutic decisions and management of the disease and to determine the most appropriate approach for each patient. In order to contribute to decisions related to the course of treatment, several groups working on Genome-Wide Association Studies identified single nucleotide polymorphisms (SNPs) associated with the outcome of antiviral therapy in chronic HCV patients [3] [4] [5] [6] [7]. Among them, the SNPs rs12979860 and rs8099917 showed the most significant statistical relevance. These SNPs, which were also associated with spontaneous virus clearance in acute infection [8] [9] [10] and appeared to modify the natural course of disease [11], are located close to the IL28B locus, containing a gene coding for interferon lambda 3 (IFN-λ3), which belongs to the type III interferon family [12]. However, it is unclear how these SNPs affect transcription or protein expression. Regarding IFN-λs, these are known to inhibit virus replication [13] [14], including HCV [15], and share with type I interferon a similar antiviral effect. The role of these SNPs in the host inflammatory response and in the evolution of chronic infection in untreated chronic HCV patients is not well understood. Recent studies yielded contradictory results, showing rs12979860 CC (or rs8099917 TT) association with more advanced fibrosis or cirrhosis [16] [17] and worse clinical outcomes [18], while others have reported rs12979860 TT (or rs8099917 GG) to be associated with more advanced fibrosis or cirrhosis [19] [20] [21], or even no association at all [22]. The aim of our study was to determine whether SNPs rs12979860 and rs8099917 can be considered a prognostic host factor in untreated chronically HCV-infected patients from an Argentine cohort, by analyzing host haplotypes involved in the modulation of patients' immune responses.
Study Design
The study was designed and performed (years 2015-2017) using samples stocked during routine medical practice. The study was approved by the Research and Ethics Committees at both involved centers and performed in accordance with the ethical standards adopted in the Declaration of Helsinki and its revisions. Informed consent was obtained from all donors.
Subjects
Samples from 150 HCV chronically infected patients were included. All individuals were aged ≥ 18 years, and patients were not undergoing any kind of HCV antiviral therapy. Exclusion criteria included alcohol intake greater than 20 g day⁻¹, history of organ transplantation, creatinine clearance < 50 mL min⁻¹, co-infection with hepatitis B virus or human immunodeficiency virus, and African American or Asian ethnicity. Patients were also excluded if they presented evidence of other liver disease, such as autoimmune hepatitis, primary biliary cholangitis, sclerosing cholangitis, Wilson's disease, or alpha-1-antitrypsin deficiency.
Genotype Determination
Genomic DNA was obtained from blood samples using the standard phenol:chloroform extraction method. Genotypes were determined by allele-specific amplification on a real-time PCR detection system (Mx3000P, Stratagene) using SYBR Green as the fluorescent DNA-binding dye, complementary primer sets in separate tubes, and Taq Platinum polymerase (Invitrogen). Primer sets yielded a single product of the correct size with their specific DNA templates, and no, or extremely retarded, amplification with the non-specific alleles.
Histological Parameters
Liver biopsies were obtained from patients simultaneously with peripheral blood samples, and before any treatment against HCV infection. Tissue was fixed in formaldehyde (10%) and embedded in paraffin. After Masson's trichrome staining, inflammatory activity was determined by microscopic analysis, attributing a METAVIR score to each patient based on histopathologic features [23]. Patients attributed METAVIR scores A2 or A3 were considered as having high inflammatory activity. This activity was also measured indirectly by alanine aminotransferase (ALT) determination in blood samples, considering as high those values over twice the normal value (men: 41 IU mL⁻¹; women: 31 IU mL⁻¹). Patients attributed METAVIR scores F3 and F4 were considered as presenting advanced fibrosis. For those patients without liver biopsies, an ultrasonic transient elastography (Fibroscan 502) study was performed, considering scores F3 and F4 as indicators of advanced fibrosis.
Statistics
Association between liver inflammatory activity and SNPs was studied with Fisher exact and Chi-square tests. Analysis was performed with GraphPad Prism 3.0 (GraphPad Software). P values were considered significant when lower than 0.05. Odds ratios (OR) were determined using the formula OR = (A×D)/(B×C), where A is the number of patients harboring the studied allele and B the number of patients not harboring it, both with high inflammatory activity, and C is the number of patients harboring the studied allele and D the number of patients not harboring it, both with low inflammatory activity.
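As an illustration, the OR formula and the significance tests above can be reproduced with scipy; the 2×2 counts below are made-up placeholders, not the study's data.

```python
from scipy.stats import fisher_exact, chi2_contingency

# 2x2 table in the layout used for the OR formula:
#                   high activity   low activity
# allele present          A               C
# allele absent           B               D
A, B, C, D = 28, 32, 14, 51   # illustrative counts only

odds_ratio = (A * D) / (B * C)                          # OR = (A*D)/(B*C)
or_fisher, p_fisher = fisher_exact([[A, C], [B, D]])    # matches odds_ratio
chi2, p_chi2, dof, _ = chi2_contingency([[A, C], [B, D]])

print(f"OR = {odds_ratio:.3f}, Fisher P = {p_fisher:.4f}, Chi2 P = {p_chi2:.4f}")
```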
Clinical Features of Patients
A total of 150 untreated chronic HCV-infected patients were included. Clinical features are summarized in Table 1. Sixty-two percent were males, with a mean age of 48.9 ± 9.8 years (range 19-76 years). The date of infection could be estimated for 37 patients (25%), with unsafe injections being the main infection route (34 patients, 23%). Viral genotype was established for 135 patients.
Most of them were infected with genotype 1 (n = 93, 62%); genotype 1a was found in 43 patients and genotype 1b in 30 patients. Viral load (n = 113) ranged from 1,978 IU mL⁻¹ to 30,800,000 IU mL⁻¹ (mean: 1,668,031.79 ± 2,657,526.68 IU mL⁻¹). Serum ALT levels were determined in 137 patients, with normal values obtained in 71 patients (47%). Hepatic biopsies were available for 91 patients; among them, 38 patients presented a METAVIR score of A2-A3, considered advanced inflammatory activity. Fibrosis level was determined in 136 patients, and an advanced level of fibrosis was detected in 49 of them (33%).
rs12979860 and rs8099917
For rs12979860 the most frequent allele presented base C at the polymorphic position (60%), the heterozygous CT genotype being the most frequent (45%).
SNPs rs12979860 and rs8099917 and Inflammatory Activity
The ALT levels were abnormal in 67% of HCV patients harboring the rs12979860 CC genotype, whereas this was observed in only 38% of non-CC patients (P = 0.0013; OR = 3.200; Figure 1(a)). This association was also found when the analysis of infected patients included only HCV genotype 1, where 75% of CC patients showed abnormal ALT levels (Figure 1(b)).
Figure 2. Association of rs8099917 and inflammatory activity. SNP rs8099917 is associated with higher ALT levels in TT patients (a), but not when comparing GG vs. non-GG patients (b). When using the METAVIR score as inflammation indicator, statistically significant associations were found in both comparisons, TT vs. non-TT (c) and GG vs. non-GG (d). Data were analyzed by Fisher exact test. n, P, and odds ratios (OR) are indicated in each panel. Asterisks denote significance level.
Given the associations between the individual genotypes and higher ALT levels, we analyzed the possible association of both polymorphisms together. Seventy-one percent of HCV patients harboring genotypes CC and TT (rs12979860 and rs8099917, respectively) showed high ALT levels, in contrast to 33% of non-CC/non-TT individuals (P = 0.0003; OR = 5.125; Figure 3(a)).
When using METAVIR scores for comparison, 68% of CC/TT patients showed
rs12979860 and rs8099917 and Fibrosis
As a high liver inflammatory activity can lead to fibrosis/cirrhosis, we analyzed the association of specific patient alleles with advanced fibrosis scores (F3 -F4).
The rs12979860 CC genotype was present in 33% of patients with early-stage fibrosis, compared to 45% of patients with advanced fibrosis, a difference that was not statistically significant. A similar result was obtained when analyzing rs8099917 genotypes: 34% of patients harboring the rs8099917 TT genotype presented early-stage fibrosis, compared to 48% of patients with advanced fibrosis (data not shown).
These results suggest that SNPs are not significantly associated with fibrosis.
Discussion
During chronic infection, HCV faces complex mechanisms of host innate and adaptive immunity. As previously mentioned, host genetics plays an important role in the outcome of antiviral therapy [3] [4] [5] [6] [7]. We found that C and T were the most represented alleles for rs12979860 and rs8099917, respectively.
Considering the possible genotypes, we found that CT was the most represented for rs12979860, and TT and TG for rs8099917. C/T and T/G alleles (rs12979860/rs8099917) were found together in approximately 85% of individuals, suggesting strong linkage and inheritance as a haplotype, as has been previously reported [3]. However, the allele frequency of rs8099917 differs between populations worldwide, so this linkage may vary between cohorts [24]. A study carried out in a Spanish cohort showed similar results for rs12979860 allele C frequency [8]. Our results are consistent with a previous report [7] in Argentine patients of European ancestry and constitute, to our knowledge, the first report of such a genetic analysis in a single Latin American cohort of untreated HCV-infected patients. Furthermore, we show for the first time an association between hepatic inflammatory activity in a single Argentine cohort of untreated chronically HCV-infected patients and single nucleotide polymorphisms located close to IFN-λ3. The association we describe here was neither dependent on HCV genotype nor confounded by viral load. The immune response to HCV infection is established and modulated by liver-infiltrated immune cells [25]. Hepatic inflammatory activity is a clinically useful tool to follow such responses, through the measurement of ALT levels as well as through the analysis of liver biopsies.
In the untreated chronically HCV-infected patients we found a statistically significant association between a higher inflammatory activity grade and the rs12979860 CC genotype. Previous studies have shown that this genotype was also associated with a better response to pegylated interferon-alpha and ribavirin treatment [3] [4] [5] [6] [7] [9] [12] and with spontaneous virus clearance [9]. In our group of untreated patients we also found a significant association with a higher inflammatory activity grade for the rs8099917 TT genotype. Further, considering both genotypes together (rs12979860 CC and rs8099917 TT), we found this association to be significant as well, suggesting an additive effect on liver inflammation in patients harboring both genotypes. As these polymorphisms are close to IFN-λ3, which has been involved in the modulation of antiviral responses [13] [26] [27] [28], our results, in agreement with a recent report concluding that IFN-λ3 rather than IFN-λ4 likely mediates haplotype-dependent hepatic inflammation and fibrosis [29], support the hypothesis that this interferon contributes to a stronger immune response to HCV during the acute infection phase, favoring spontaneous clearance. Moreover, they also suggest a role for this cytokine in chronic infection, inducing a favorable outcome of antiviral therapy. For those untreated patients who entered a chronic phase, the presence of these genotypes clearly favors an inflammatory liver state. While other groups have reported similar findings [11] [18], our study further confirms and extends the knowledge in the field through the analysis of a single cohort recruited in a specialized unit, where all parameters were evaluated by a single panel of experts, limiting the effects of sampling errors, and with simple clinical tools routinely used in public health facilities in low-resource countries. In some such patients, the rs12979860 CC and rs8099917 TT genotypes will contribute to a spontaneous or treatment-induced clearance of the virus. In other cases, particularly in those who develop chronic infection, the same genotypes will contribute to hepatic inflammation and possibly fibrosis/cirrhosis.
Funding
The study was funded by ANPCyT-Argentina, grant number BID-PICT-2013-3290. The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Figure 1. Association of rs12979860 and inflammatory activity. SNP rs12979860 is associated with higher ALT levels in CC compared to non-CC patients, independently of HCV genotype (a) or considering only HCV genotype 1 (b). A similar association is found when using the METAVIR score as inflammation indicator (c). Data were analyzed by Chi-square test. n, P, and odds ratios (OR) are indicated in each panel. Asterisks denote significance level.
Figure 3. Association of CC (rs12979860) and TT (rs8099917) genotypes and high inflammatory activity. Genotypes CC (rs12979860) and TT (rs8099917) are associated with high inflammatory activity as measured by ALT levels. Data were analyzed by Chi-square test (a) and Fisher exact test (b). n, P, and odds ratio (OR) are indicated in the panel. Asterisks denote significance level.
[...] influencing the immune response to HCV. The applicability of our findings to the current direct-acting antiviral therapies, for which there are no data available on long-term clinical effects, mainly in terms of chronic infection and inflammation, has to be demonstrated in future studies. Our study adds a piece of knowledge on the implications of genetic polymorphisms in the evolution of untreated chronic HCV infection. The studied polymorphisms, together with further innate and adaptive immune responses, clearly play a role in modulating the HCV-infected patient's outcome.
Table 1. Clinical features of patients.
Table 2. Genotype and allele representation in samples (a) and distribution of coincident alleles (b).
Hepatic inflammatory activity can lead, over time, to the appearance of fibrosis and ultimately to cirrhosis. When we analyzed the fibrotic liver stage in patients, unknown factors (e.g., infection duration) and unknown mechanisms contributing to fibrosis progression may have confounded the analysis. Further studies on chronically untreated HCV-infected patients with a known time from infection are required to elucidate whether the studied polymorphisms can contribute to fibrosis/cirrhosis through a prolonged inflammatory context. A full understanding of the implications of harboring specific rs12979860 and rs8099917 genotypes will require a considerably larger patient group size that would allow individual analyses considering other factors (such as alcohol consumption, gender, age, body mass index, etc.).
"Biology",
"Medicine"
] |
What do we Know about Contributing Factors for “Never Events” in Operating Rooms? A Machine Learning Analysis
A Surgical “Never Event” (NE) is a preventable error. Various factors contribute to the occurrence of wrong site surgery and retained foreign item, but little is known about their quantied risk in relation to surgery's characteristics. Our study uses machine learning to reveal factors and quantify their risk to improve patient safety and quality of care. Methods We used data from 9,234 observations on safety standards and 101 Root-Cause Analysis from actual NEs, and utilized three Random Forest supervised machine learning models. Using a standard 10-cross validation technique, we evaluated the model's metrics, and, through Gini impurity we measured the impact of factors thereof to occurrence of the two types of NEs. Results We identied 24 contributing factors in six surgical departments. Two had an impact of >900% in Urology, Orthopedics and General Surgery, six had an impact of 0–900% in Gynecology, Urology and Cardiology, and 17 had an impact of <0%. Factors' combination revealed 15-20 pairs with an increased probability in ve departments: Gynecology:875–1900%; Urology: 1,900:2,600%; Cardiology:833–1,500%; Orthopedics:1,825–4,225%; and General Surgery:2,720–13,600%. Five factors affected the occurrence of wrong site surgery (-60.96–503.92%) and ve of retained foreign body (-74.65–151.43%), three of them overlapping: two nurses (66.26–87.92%), Surgery length<1 hour (85.56–122.91%), Surgery length 1-2 hours (-60.96–85.56%). The use of machine learning has enabled us to quantify the potential impact of risk factors for wrong site surgeries and retained foreign items, in relation to surgery's characteristics, which in turn suggests tailoring the safety standards accordingly. adopted Gini [25] to estimate importance of features their combination our Gini impurity a by measuring of how often chosen Feature importance ranking calculated feature assumed
Abstract
Background A Surgical "Never Event" (NE) is a preventable error. Various factors contribute to the occurrence of wrong site surgery and retained foreign item, but little is known about their quanti ed risk in relation to surgery's characteristics. Our study uses machine learning to reveal factors and quantify their risk to improve patient safety and quality of care.
Methods
We used data from 9,234 observations on safety standards and 101 root-cause analyses from actual NEs, and utilized three Random Forest supervised machine learning models. Using a standard 10-fold cross-validation technique, we evaluated the models' metrics and, through Gini impurity, measured the impact of factors on the occurrence of the two types of NEs.
Results
We identified 24 contributing factors in six surgical departments. Two had an impact of >900% in Urology, Orthopedics, and General Surgery; six had an impact of 0-900% in Gynecology, Urology, and Cardiology; and 17 had an impact of <0%.
Conclusions
The use of machine learning has enabled us to quantify the potential impact of risk factors for wrong site surgeries and retained foreign items, in relation to surgery's characteristics, which in turn suggests tailoring the safety standards accordingly.
Trial registration number: MOH 032-2019
Background
Adverse medical events can lead to significant morbidity and mortality and increase healthcare expenditures. [1] A Never Event (NE) is an unacceptable adverse event, both preventable and unjustified, and should be reduced to zero through quality improvement. [2] Major NEs in perioperative care include incorrect surgery sites and foreign items retained in patients following surgery. [3][4] The human factors approach recognizes that human error is often the result of a combination of both individual surgeon factors and work system factors, [5] which makes human error the main contributing factor to NEs. [6] Human error includes surgeon distraction, [7] lack of situational awareness of the surgical team to possible error, and miscommunication among team members.
[8] Additionally, institutional factors, working conditions, such as increased workload and clinician pressure, create a work climate that is not conducive to meeting the standards required to maintain patient safety [9] and effective teamwork. [10] Currently, there are two essential international standards aiming to reduce NE occurrence: 1) the WHO Surgical Safety Checklist; [11] and 2) surgical counts of all items used during the surgery. [12] Yet, partial compliance, unstandardized implementation of these standards, [13] and other possible unknown factors keep the incidence of NEs unchanged. [14] In Israel, the incidence of retained foreign items during surgery is 3.2 in every 100,000 surgeries. [15] The incidence for wrong site procedure is unclear, but is generally estimated as 1 in every 100,000 surgeries.
This study adopts a machine learning (ML) approach [16] to identify currently unknown contributors to NE occurrence. Previous studies leveraging ML methods in healthcare have demonstrated the benefits of analyzing and revealing non-trivial insights from diverse data types when compared to traditional methods. [17] To the best of our knowledge, this is the first study to use ML methods to identify potential contributing factors to the occurrence of NEs in ORs.
Study Design
We utilized a supervised ML method called Random Forest (RF), [18][19] incorporating the popular Extra Trees classifier. [20] RF is an ensemble learning method that trains multiple "simple" decision tree models and merges them to achieve a more accurate and stable prediction.
The use of RF entails several elements needed for properly conducting the analysis in this study. First, RFs rank the importance of features in a natural way: the importance of features can be determined by examining to what extent the tree nodes using a feature reduce the impurity (i.e., the uncertainty in classification) across all "trees in the forest." Second, RFs are known to cope well with imbalanced datasets (as is the case in this study) and to avoid overfitting the data. Finally, RFs compared favorably with several other supervised ML algorithms we tested on our data, including popular deep neural networks and support vector machines (SVMs). It is worth mentioning that RFs have been used extensively in the medical field for clinical risk prediction, [21] among other applications.
Safety standards used in the OR (surgical safety checklists and surgical counts) were divided into safety verification at three distinct time periods (pre-procedure, sign-in, and time-out) [11] and addressed incorrect surgery site errors, which we define as Type A errors. Surgical counts were divided into three separate counts throughout the surgery to address retained foreign body errors, which we define as Type B errors: prior to skin incision; initiation of closure of fascia/cavity; and following skin closure. [22] In addition, we added general features, such as the name of the hospital, length of surgery, patient's gender and age, surgeon's specialty, and number of physicians and nurses present during surgery.
Data Collection and Annotation
Data were collected from 29 Israeli hospitals and consisted of two types of data entries: observations of 9,234 surgeries performed between January 2018 and February 2019, in which no NE occurred, and root cause analyses (RCA) of 101 NEs that occurred between January 2016 and February 2020 in the examined hospitals.
Observations
Initiated by the supervisory arm of the Israeli Ministry of Health, passive observations by medical students, physicians, nursing students, or RNs are routinely performed in ORs. Observers for this study underwent an eight-hour training that included simulations. In each OR, at least two observers passively observed randomly selected surgeries, and recorded and annotated the surgery process using a pre-defined set of features. Observations were then transferred to a central database and routinely assessed for variability and reliability. Overall, 9,234 observations were conducted. Each observation was translated into a 93-feature vector representing characteristics of the surgery (Appendix 1). To maintain reliability, entries with greater than 5% discordance among annotators in one OR were discarded (<1%).
Root Cause Analyses (RCA)
RCAs were performed in response to NEs that occurred between January 2016 and February 2020. Overall, we reported 101 NEs: 49 of Type A and 52 of Type B. The obtained RCAs were manually annotated by the authors using the same 93-feature representation used to characterize the observations. However, unlike the observations, RCAs were performed retrospectively and, thus, a significant portion of the features was missing and could not be obtained. Specifically, up to 40% of all other feature values were missing, a challenge we address further on.
Pre-Processing and Analysis Technique
As some features were non-binary (e.g., patient age, length of surgery), we first discretized them, resulting in 250 binary features. This and subsequent steps were performed using a designated Python 3 program implemented by the authors that uses the standard scikit-learn ML package (https://scikit-learn.org/stable).
Examination of the 40% missing feature values revealed that most were strongly dependent on the NE type. Namely, for Type A NEs, features assumed to be more related to NEs of Type B were not investigated, and vice versa. For example, for an NE in which the wrong hand was operated on, there was no indication as to whether the surgeon scanned the surgical cavity for retained surgical items before closure. To mitigate this artifact, we used the popular iterative data imputation approach, [23] where each missing value is predicted from the present features and available examples. Specifically, using the entire dataset, each missing value was estimated using a standard Decision-Tree Regressor.
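A minimal sketch of this imputation step with scikit-learn is shown below; the toy matrix and the tree depth are illustrative stand-ins, whereas the study used the full 250-feature data.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.tree import DecisionTreeRegressor

# X: (n_samples, n_features) binarized feature matrix, np.nan marks missing
X = np.array([[1.0, 0.0, np.nan],
              [0.0, 1.0, 1.0],
              [1.0, np.nan, 0.0]])   # toy stand-in for the 250 real features

# Each missing value is modeled as a regression on the other features and
# refined over several passes, as in the iterative imputation approach [23].
imputer = IterativeImputer(estimator=DecisionTreeRegressor(max_depth=5),
                           max_iter=10, random_state=0)
X_imputed = imputer.fit_transform(X)
```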
In addition, balancing steps were taken to cope with the high imbalance of the dataset. Specifically, with over 9,000 observations and only 101 NEs, we adopted a cost-sensitive training approach [24] whereby our model penalized prediction mistakes on the minority class (NEs) by an amount proportional to how under-represented it was (here, approximately 90 times under-represented).
We implemented three RF models using our data: Model 1 for distinguishing between observations and NEs; Model 2 for distinguishing between observations and Type A NEs; and Model 3 for distinguishing between observations and Type B NEs. We used a standard 10-fold cross-validation technique to evaluate the models' metrics and adopted the standard Gini impurity [25] to estimate the importance of features and their combinations in our models. Intuitively, Gini impurity captures the "noise" in a set by measuring how often a randomly chosen element from the set would be incorrectly labeled if it were randomly labeled according to the distribution of labels in the set. Feature importance ranking was conducted using the trained RF models, and we report the change in NE occurrence probability given the entire dataset. We considered each feature separately and calculated the probability of NE occurrence when that feature assumed the value True as compared to the value False.
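The modeling pipeline described above might look as follows; the synthetic data, the class-weight ratio of 90, and the number of trees are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Synthetic stand-in: 9,234 observations + 101 NEs, 250 binary-ish features
rng = np.random.default_rng(0)
X_imputed = rng.random((9335, 250))
y = np.zeros(9335, dtype=int)
y[:101] = 1                      # NE class, ~90x under-represented

# Cost-sensitive training: mistakes on the minority (NE) class cost ~90x more
model = ExtraTreesClassifier(n_estimators=500,
                             class_weight={0: 1, 1: 90},
                             random_state=0)

# Standard 10-fold cross-validation, scored with AUC (suited to imbalance)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(model, X_imputed, y, cv=cv, scoring="roc_auc")
print(f"AUC: {auc.mean():.2f} +/- {auc.std():.2f}")

# Gini-impurity-based feature importance from the fitted forest
model.fit(X_imputed, y)
top_features = np.argsort(model.feature_importances_)[::-1][:14]
```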
The study was approved by the University's and Ministry of Health Ethics Committee (MOH 032-2019).
Results
The majority of NEs (62.32%) occurred in six main departments: General Surgery, 19 (18.81%); Gynecology, 17 (16.83%); Orthopedics, 16 (15.84%); Cardiac and Cardiothoracic, 15 (14.85%); Ophthalmology, 8 (7.92%); and Urology, 7 (6.93%) (Table 1). Therefore, our analysis focused on the occurrence of NEs in these six departments. To evaluate our models, we adopted the Area Under the Curve (AUC) measure, which is especially suited for imbalanced data, as in this study, since it is not biased toward models that perform well on the majority class at the expense of the minority class.
[26] Our three RF models demonstrated good performance, exhibiting an Area Under the Curve (AUC) between 0.81 and 0.85. Generally, AUC scores between 0.8 and 0.9 are considered excellent [27]. AUC is interpreted as the probability that our model will rank a randomly chosen positive instance higher than a randomly chosen negative one.
[28] As such, our models can be considered relatively strong and accurate despite their limitations.
Feature Importance
Figure 1 presents the top contributing features to the occurrence of NEs (of both types combined) in the six departments, along with the associated probability change.
The top 14 contributing features varied significantly across departments, and no single feature set was consistently more informative across all operations in predicting NEs. For example, feature [C], Discrepancy in second count, varied significantly across departments (160% to 1,950%). Feature [B], Surgery is paused because of discrepancy in third count, appeared in four of the six departments, and the associated probability change varied dramatically as well, between 269% and 1,540%. Ten features consistently decreased the chance of an NE, including [F], Surgeon scans the cavity/fascia before closure during the second count, which affected five out of six departments and was consistent in its probability change between
Effects of Feature Combinations
In the following analysis (Figure 2), we examine the effects of paired features, i.e., features that occur together in the data. It is important to note that, when considering feature combinations, their occurrence is expected to be very low, especially in the NE class. As such, the estimated effects are likely to be very high, yet their confidence is significantly low.
Interestingly, in General Surgery, there were 14 feature combinations that caused a probability change of 13,600% (Figure 2A). In comparison, the single-feature analysis (Figure 1) revealed probability changes of 1,287% and 1,168%, surprisingly from two features that were not part of the 14 feature combinations identified here.
In Figure 2A (Gynecology), the effect of every feature combination is associated with a probability change of 1,000-2,000%. In the single-feature analysis (Table 2), the effect of two of the features separately was <900%, and the rest lagged behind with <150%. In Urology (Figure 2B), the results show dozens of pairs with an effect of 1,900-2,500%, while the effect of a single feature on error was <1,150%. In General Surgery (Figure 2E), the accumulated effect of two features together showed a dozen pairs with an effect of 1,900-4,200%, while the effect of a single feature on error was <1,950%, with the rest at even lower percentages.
Features Affecting Types A and B
Turning to Models 2 and 3, there is an overlap in three of the top five contributing features to Type A and Type B errors (Figures 3 and 4): 1) the presence of two nurses during the surgery predicts a greater occurrence of Type A (66%) and Type B (88%) errors; 2) an operation shorter than one hour had a greater occurrence of Type A (122%) and Type B (87%); and 3) when the operation lasted between one and two hours, both Types A and B were less frequent, decreasing by 60% and 74%, respectively. The surgical department most affected by the occurrence of Type A NEs was Ophthalmology, with a prevalence of 504%, while General Surgery was associated with a 63% decrease in Type A (Figure 3). For Type B, the two remaining features were staff driven: the feature "more than three physicians" was associated with an increased prevalence of Type B (151%), while "two physicians" was associated with a 52% decreased prevalence of Type B (Figure 4).
Discussion
Surgical errors are a serious public health problem, and uncovering their causes is challenging. [29] In this study, we aimed to uncover contributing factors to NEs by using ML methods to identify heretofore unknown contributors, since ML automatically looks for patterns not seen by classic methods. [18,30] Despite the widespread use of the surgical safety checklist and strict surgical counts, the prevalence of NEs has not decreased significantly since their widespread implementation. [31][32] The human factor, and not system error, has been identified as the main contributing factor to NEs. [31,33] For example, in one study using an analysis and classification system, 628 human factors were divided into four categories that influenced NEs: preconditions for action, unsafe actions, oversight and supervisory factors, and organizational influences.
[6] Additional studies have identi ed lack of communication and lack of empirical evidence as barriers to the implementation of the universal safety standards. [29,34] Some studies have suggested that counting alone is insu cient, and even when declared correct, there have been items left in the patient, [35][36] mostly in the abdomen and pelvis [ 35,37 This may explain our higher probability of Type B error in General Surgery and Urology, which involve those regions.
We further analyzed paired contributing factors representing the relative risk in the OR's complex work environment, where the graded risk increased compared to single-feature analysis. For example, in Orthopedics, a discrepancy in the count combined with a surgery length of 1-2 hours increased the chances of an NE, which can be explained by partial compliance with the standards. In shorter surgeries, the staff rushes and skips some phases of the checklists, [38] and the complex sets used challenge the counts. [31,39] We found that the occurrence of wrong site surgery increases in Ophthalmology during short surgeries and when two nurses are present, while it decreases in General Surgery. The increased risk could be due to the difficulty of performing a time out because the surgeons have antiseptic hands and cannot review charts, or perhaps because doing so is not made a priority. [40] The decrease in General Surgery could be explained by better implementation of the time out process in that specialty. [41,42] One of the main factors contributing to the occurrence of NEs is lack of communication among participating members in the surgery, [33] which may explain our findings that the number of staff had an increasing/decreasing effect on NE occurrence.
We recognize that the current study is limited by the amount, quality, and diversity of the data used. In the context of this work, our samples come from two distinct sources: prospective observations and retrospective investigations of NEs, where the latter consists of a small number of NEs compared to the relatively high number of analyzed observations. We believe that these limitations are inherent to the problem at hand, as performing prospective analyses of NEs is virtually impossible due to their infrequency, and the number of NEs is nominally small. To mitigate some of these concerns, we used grounded statistical techniques that allowed us to train an adequate model and estimate feature importance. Nevertheless, given the above, the feature impact should be interpreted carefully and validated in future studies.
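As an illustration of the kind of imbalance-aware training and feature-importance estimation described above, the sketch below combines class weighting with permutation importance; the model family, feature count, and data are stand-ins, not the study's actual configuration.

```python
# Minimal sketch: training on rare outcomes with class weighting and
# estimating feature importance. All shapes and settings are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 14))      # 14 binary OR features (toy data)
y = (rng.random(1000) < 0.02).astype(int)    # rare NE outcomes (~2% prevalence)

# class_weight="balanced" reweights classes inversely to frequency, so the
# rare NE class is not swamped by the majority class during training.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X, y)

# Permutation importance is a model-agnostic estimate of each feature's
# contribution; with so few positive cases it should be read cautiously.
imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
print(np.argsort(imp.importances_mean)[::-1][:5])  # indices of the top 5 features
```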
In future work, we plan to further expand our data pool with newly obtained observations and NEs as they accumulate. In another avenue, we are exploring the use of transfer learning from NEs recorded in other countries, which could better inform our model. This avenue could prove valuable in mitigating the imbalanced nature of our data, yet may introduce significant biases due to the variety of data sources.
Conclusion
Our results suggest that the existing "one size fits all" safety approach currently in place may benefit significantly from tailored adjustments that consider additional factors such as those identified in this work. These more specific guidelines may be used to adjust risk management programs to improve patient safety.
Declarations
Ethics approval and consent to participate:
Availability of data and materials: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests: To the best of our knowledge, the named authors have no competing interests, financial or otherwise, to disclose.
Figure legends: Effect of two features' combination on prediction by surgical departments. Figure 3: Features affecting wrong site surgery (Type A). Figure 4: Features affecting retained foreign item during surgery (Type B).
Supplementary Files
This is a list of supplementary files associated with this preprint: appendix1.docx
"Medicine",
"Computer Science"
] |
Rhythms of the collective brain: Metastable synchronization and cross-scale interactions in connected multitudes
Crowd behaviour challenges our fundamental understanding of social phenomena. Involving complex interactions between multiple organizational levels and spanning a variety of temporal and spatial scales of activity, its governing mechanisms defy conventional analysis. Using a data set comprising 1.5 million Twitter messages from the 15M movement in Spain as an example of multitudinous self-organization, we investigate the processes that underlie coordination across its spatial and temporal scales. We propose a generic description of the coordination dynamics of the system measuring phase-locking statistics at different frequencies using wavelet transforms, identifying 8 frequency bands of entrained oscillations between 15 geographical urban nodes. We then apply maximum entropy inference methods to describe Ising models capturing transient synchrony between geographical nodes in our data at each frequency band. The models show that 1) all frequency bands of the system are operating near critical points of their parameter space and 2) while fast frequencies present only a few metastable states displaying all-or-none synchronization, slow frequencies present a diversity of metastable states of partial synchronization. Furthermore, describing the state at each frequency band using the energy landscape of the corresponding Ising model, we compute transfer entropy to characterize cross-scale interactions between frequency bands, showing 1) a cascade of upward information flows in which each frequency band influences its contiguous slower bands and 2) downward information flows in which slow frequencies modulate distant fast frequencies.
Introduction
Coordinated activity is a powerful force in creating and maintaining social ties [1]. From communal dances in ancient human groups to civic festivals in the French Revolution or collective muscular manifestations in Nazi marches and rallies [1, p.136, p.148-149], visceral, emotional sensations of shared movement have been used to create communal identities and to shape political landscapes. Historically, forms of distributed communication and coordination have often come together with episodes of large-scale mobilization and social change, such as the widespread print-shop networks of radical reforming movements following the generalization of the printing press during the 16th-century German Reformation [2], or the postal networks of the Republic of Letters in the Age of Enlightenment a century later [3]. Today, amidst unprecedented development of communication technologies, new forms of coordination for large and scattered communities have been unleashed around the globe.
The rise of new digital communication tools and network technologies is accelerating fast bidirectional communication, generating new forms of collective communication and action. Digital communications increase the autonomy and influence of the social groups using them, facilitating forms of mass self-communication [4], collective intelligence using pools of social knowledge [5], or smart mobs exploiting newfound communication and computing capabilities via ubiquitous devices [6]. From protest movements including the Arab Spring or the Occupy movements to autonomous responses in the face of natural disasters, for example Hurricane Sandy or the Tōhoku earthquake, several examples highlight the increasing power of digitally connected social and political grassroots movements to shape events. Recognition of this growing influence has brought with it heightened scholarly interest in its explanation: how do such movements arise and self-organize, what mechanisms underlie their formation, and how are they able to constitute autonomous social and political subjects? [7] Recent advances have described specific elements of connected multitudes: the geographical diffusion of trends [8]; the interplay between exogenous and endogenous dynamics [9]; or the connection between social media and collective activity in physical spaces [10]. Nevertheless, many of the mechanisms so far explored are specific to a particular scale or level of description of social dynamics. General mechanisms offering explanatory insights across different levels remain poorly articulated. The same problem applies to qualitative analyses trying to capture general principles of connected multitudes, including perspectives stressing the individualistic logic pervading digital communication tools operating through sharing personalized content in social media [11], in sharp contrast
Results
We use a data set of 1,444,051 time-stamped tweets from 181,146 users, collected through the Twitter streaming API between 13 May 2011 and 31 May 2011 [20] using T-Hoarder [24]. Messages were captured over 17 days during the Spanish 15M social unrest events in 2011 and contained at least one of a set of 12 keywords or hashtags related to the protest (see reference [20] for a detailed description). We extracted geographical information from the location information of users (see SI Appendix), selecting the 15 urban areas with the largest number of messages. Using this information, we generated time-stamped series reflecting the number of tweets emitted from each city in intervals of 60 seconds.
Synchronization at multiple frequencies
One of the most prominent features of the 15M movement was its fast territorial development. Without any coordination centre or any formal organization, the movement was able to reproduce a network of camps across Spanish cities in a period of a few days. As this coordination between geographical nodes takes place at several temporal scales, we propose a generic description of these interactions based on the temporal coordination of oscillations at multiple frequencies. We analyse the coordination between populations of the main Spanish cities using Morlet wavelet filtering to extract the phase content θ_i(f,t) of the activity time series at city i at time t and frequency f, with a span of frequencies in the range [1.67·10⁻³ Hz, 9.26·10⁻⁵ Hz] (from 10 minutes to 3 hours) logarithmically distributed with intervals of 10^0.01. We use phase-locking statistics [25] to define phase-locking values between two cities i and j as:

$$\phi_{ij}(f,t) = A_{ij}(t)\,\left|\frac{1}{\delta}\int_{t-\delta/2}^{t+\delta/2} e^{\,i\,[\theta_i(f,\tau)-\theta_j(f,\tau)]}\,d\tau\right| \qquad (1)$$

where δ is the size of the window of temporal integration: δ = n_c/f, with n_c the number of cycles over which we analyse phase-locking. We use a value of n_c = 8 cycles, similar to the values typically used in neuroscience, ensuring that we are detecting sustained synchronization. A_ij(t) is a corrector factor removing spurious synchronization when the network is inactive (e.g. during nighttime, see SI Appendix).
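A minimal numpy sketch of the windowed phase-locking value in Equation (1) might look as follows; it assumes the phase series have already been extracted by Morlet wavelet filtering at frequency f, and it omits the corrector factor A_ij(t).

```python
# Minimal sketch of Equation (1): windowed phase-locking value between two
# phase series (radians); the inactivity corrector A_ij is omitted here.
import numpy as np

def plv(theta_i, theta_j, f, n_c=8, dt=60.0):
    """PLV over a moving window delta = n_c / f; dt is the series step (s)."""
    delta = int(round(n_c / f / dt))            # window length in samples
    z = np.exp(1j * (theta_i - theta_j))        # unit phasors of the phase difference
    kernel = np.ones(delta) / delta             # centred moving average over delta
    return np.abs(np.convolve(z, kernel, mode="same"))  # values in [0, 1]
```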
Statistical significance of phase-locking values is determined by comparing them to phase-locking values of surrogate time series obtained using the amplitude adjusted Fourier transform [26]. We use 200 surrogate time series to estimate a significance threshold for the values of φ_ij(f,t) for all values of f. The average phase-locking values of surrogate time series were used to compute a threshold φ_th(f), indicating a value higher than 99% of the surrogate data. Using this threshold, we define phase-locking links between two cities i and j as statistically salient values of φ_ij(f,t):

$$\Phi_{ij}(f,t) = \begin{cases} 1 & \text{if } \phi_{ij}(f,t) > \phi_{th}(f) \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$

As we document in reference [27], using phase-locking statistics we find widespread moments of significant synchrony at different instants, often corresponding with important moments of the 15M protests. For illustrative purposes, in Figure 1 we show the total number of phase-locking links S(f,t) = Σ_{i,j} Φ_ij(f,t) for a specific day of the protests. At faster frequencies (lower period) we observe short and less intense instants of synchrony, while at slower frequencies synchrony lasts for longer periods of time. Using wavelet pattern matching [28] over the temporal mean S(f) of S(f,t), after applying a linear detrending, we detect frequency peaks of synchronization in the system (see SI Appendix), identifying eight main frequency bands of synchronization f_k, k = 1, ..., 8, where larger k corresponds to larger timescales (i.e. slower frequencies).
Pairwise maximum entropy modelling of phase-locking statistics
In order to inspect how these phase-locked coalitions operate at each frequency band, we derive from our data statistical mechanics models of the system. With these models we can infer macroscopic properties from microscopic descriptions of the system. Using maximum entropy models we infer the probability distribution of possible states s of the network, corresponding to all the combinations of binary possibilities of each node being or not being phase-locked to other nodes in the network at a particular frequency of synchronization. For simplicity, we consider a node i to be phase-locked to a synchronized cluster, that is s_i = 1, when it has at least one synchronization link (i.e. Φ_ij = 1 for at least one value of j); otherwise the state of the node is set to s_i = −1. We extract a pairwise maximum entropy model described by the Boltzmann distribution of an Ising model. This is the least-structured model that is consistent with the mean activation rate and correlations of the nodes in the network. Pairwise correlation maximum entropy models have been successfully used to map the activity of networks of neurons [29], antibody sequences [30] and flocks of birds [31]. These models, instead of being postulated as approximations of real phenomena, can infer exact mappings capturing measured properties of a system (means and correlations in our case), making them good candidates for capturing the structures underlying social coordination.
The maximum entropy distribution consistent with a known average energy is the Boltzmann distribution P(s) = Z⁻¹ e^{−βE(s)}, where s is a state of the network, Z is the partition function, and β = 1/(k_B T), with k_B Boltzmann's constant and T the temperature. The energy of the model with pairwise interactions is defined as E(s) = −Σ_i h_i s_i − (1/2) Σ_{i<j} J_ij s_i s_j, where 'magnetic fields' h_i represent influences on the activation of individual nodes and 'exchange couplings' J_ij stand for the tendencies correlating the activity between nodes. Without loss of generality we can set the temperature T = 1. Considering a pairwise model, the resulting distribution of the maximum entropy model is:

$$P(\mathbf{s}) = \frac{1}{Z}\,\exp\Big(\sum_i h_i s_i + \frac{1}{2}\sum_{i<j} J_{ij}\, s_i s_j\Big) \qquad (3)$$

where the h_i and J_ij are adjusted to reproduce the measured mean and correlation values between nodes in the network.
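For a network of 15 nodes, the distribution in Equation (3) can be evaluated exactly by enumerating all 2^15 = 32,768 states. The sketch below does this, assuming the parameters h and J have already been inferred (here they are random stand-ins, not the paper's inferred values).

```python
# Minimal sketch: exact evaluation of the pairwise Ising distribution of
# Equation (3) by enumerating all states of a small network.
import numpy as np
from itertools import product

def boltzmann(h, J):
    """Return all states s in {-1,+1}^N, their probabilities P(s), and E(s)."""
    N = len(h)
    states = np.array(list(product([-1, 1], repeat=N)))          # (2^N, N)
    # E(s) = -sum_i h_i s_i - (1/2) sum_{i<j} J_ij s_i s_j
    E = -states @ h - 0.5 * np.einsum("ki,ij,kj->k", states, np.triu(J, 1), states)
    w = np.exp(-(E - E.min()))       # shift the exponent for numerical stability
    return states, w / w.sum(), E

# Random stand-in parameters:
rng = np.random.default_rng(0)
N = 15
h = rng.normal(0.0, 0.1, N)
J = rng.normal(0.0, 0.1, (N, N))
states, P, E = boltzmann(h, J)
```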
From the frequency bands f k extracted in the previous section we extract models of pairwise correlations at the corresponding frequencies. For each frequency band, we infer an Ising model P f k (s) solving the corresponding inverse Ising problem, using a coordinate descent algorithm (see Methods) for fitting the parameters h i and J i j that reproduce the means and correlations found in the series of states s for the description of phase-locking relations at each frequency.
The accuracy of the inferred models can be evaluated by testing how much of the correlation structure of the data is captured. One measure to evaluate this is the ratio of multi-information between model and real data [32]. In our case, our data limits us to computing the entropy of small sets of nodes (between 5 and 7, see SI Appendix). Limiting our entropy calculations to random sets of five to seven nodes, we can see in Figure S2 and Table S3 that our models are able to capture around 70% of the correlations in the data for subsets of the indicated sizes (see SI Appendix for a detailed description).
Once we have extracted a battery of models P_{f_k}(s), indicating the probability distributions of phase-locking configurations at different frequency bands, we explore the thermodynamic (macroscopic) properties associated with them. First, we observe that all the models are poised at critical points. One signature of criticality we find is that the probability distribution P_{f_k}(s) follows a Zipf's law (Figure 2.A), especially for slower values of f_k. Finding a scale-free distribution in our model is consistent with power laws appearing in the dynamics of the temporal series of tweet activity found in this data set [27] or in structural parameters in similar data sets [22]. Furthermore, the Ising models allow us to find further evidence of the critical behaviour of the model by exploring divergences in their heat capacity. By introducing a fictitious temperature value, changing the temperature parameter T (previously assumed to be equal to 1), we compute the heat capacity of the system as:

$$C(T) = T\,\frac{\partial H[P(\mathbf{s})]}{\partial T} \qquad (4)$$

where H[P(s)] is the Shannon entropy of the probability distribution of an Ising model. A divergence in the heat capacity of the system is an indicator of critical phenomena. As we observe in Figure 2.B, for all f_k the peak of the heat capacity is around the value T = 1, suggesting that the models are poised just at critical points. Inferring the Ising models to match subsets of the network nodes (see SI Appendix), we observe how the normalized peak in the heat capacity diverges with the system size (the specific, although representative, case of f_5 is shown in Figure 2.C), where C(T)/N grows with N at a linear rate in the range [0.012, 0.021] (see SI Appendix). Together with the Zipf distribution, the divergence of the heat capacity strongly suggests that social coordination phenomena in the 15M social network are operating in a state of criticality [32]. The fact that all frequency bands are operating near critical points does not mean that they are displaying the same behaviour. We can extract more information about the behaviour of the system at each frequency by analysing the presence of locally stable or metastable states in the system. Metastable states are defined as states whose energy is lower than that of any of their adjacent states, where adjacency is defined by single spin flips. This means that in a deterministic setting (i.e. a Hopfield network with T = 0) these points would act as attractors of the system. In our statistical model, metastable states are points at which the system tends to be poised, since their probability is higher than that of any of their adjacent states. Finding the metastable states of the models at each frequency, we observe how the number of metastable states increases for slower frequencies (Figure S4.B), as the model presents a higher number of negative (inhibitory) couplings J_ij (see Figure S4.A).
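The heat-capacity scan over a fictitious temperature can be sketched as follows, reusing the enumerated energies E(s) from the previous sketch; for a Boltzmann distribution (with k_B = 1), C(T) = T ∂H/∂T equals Var(E)/T².

```python
# Minimal sketch of Equation (4): heat capacity of the model at a fictitious
# temperature T, computed as Var_T(E) / T^2 by exact enumeration.
import numpy as np

def heat_capacity(E, T):
    w = np.exp(-(E - E.min()) / T)      # unnormalized Boltzmann weights at T
    p = w / w.sum()
    mean_E = (p * E).sum()
    return (p * (E - mean_E) ** 2).sum() / T**2

Ts = np.linspace(0.5, 2.0, 61)
C = [heat_capacity(E, T) for T in Ts]   # a peak near T = 1 is the signature
                                        # of criticality discussed above
```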
Moreover, if we count the number of nodes that are phase-locked (i.e. the number of nodes with s_i = 1) for each metastable state represented in Figure 3, we observe important distinctions among frequency bands. For faster values of f_k there are only a few metastable states: a state where no nodes are phase-locked (i.e. the system is completely desynchronized), and a few states where almost all nodes are phase-locked. Thus, at fast frequencies synchronization rapidly spreads from zero to all nodes in the network. On the other hand, for slower frequencies the number of metastable states grows and the number of phase-locked nodes per state decreases. This shows that slow frequency synchronization allows the creation of a variety of clusters of partial synchronization, allowing parts of the network to sustain a differentiated behaviour.
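Metastable states can be located directly from the enumerated energies: a state is metastable when every single-spin-flip neighbour has strictly higher energy. A minimal sketch, assuming states are enumerated in binary order as in the sketches above:

```python
# Minimal sketch: find metastable states, i.e. states whose energy is lower
# than that of every neighbour differing by a single spin flip.
import numpy as np

def metastable_mask(E, N):
    """E[k] is the energy of the k-th enumerated state; neighbours differ by one bit."""
    k = np.arange(len(E))
    mask = np.ones(len(E), dtype=bool)
    for b in range(N):                      # flip each of the N spins in turn
        mask &= E[k] < E[k ^ (1 << b)]      # strictly lower than the neighbour
    return mask

mask = metastable_mask(E, N)
n_locked = ((states[mask] + 1) // 2).sum(axis=1)   # phase-locked nodes per
print(len(n_locked), n_locked)                     # metastable state
```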
These results suggest that fast and slow synchronization frequencies in the network operate in complementary regimes (all operating near critical points), the former rapidly propagating information across the whole network and the latter sustaining a variety of configurations responding to specific situations. Systems at critical points present a wide range of dynamic scales of activity and maximal sensitivity to external fluctuations. These features may be crucial for large systems that are self-organized in a distributed fashion. The presence of these complementary modes of critical behaviour at different frequency bands suggests that the system might be operating in a state of self-organized criticality, in which frequency bands adaptively regulate each other in order to maintain a global critical behaviour.
Cross-scale interactions in synchronization dynamics
Modelling phase-locking statistics provides a characterization of the interactions within frequency bands of synchronization. Furthermore, differences in the metastable states at each frequency band suggest what kind of interactions take place between distinct temporal scales. Because our definition of phase-locking statistics is restricted to interactions within the same frequency, we cannot use the computed phase-locking statistics to directly model inter-scale phase-locking between different frequencies (e.g. 2:1 phase-locking). However, we can use the thermodynamic descriptions of the system provided by maximum entropy models to simplify the analysis of inter-scale relations in real data. Analysis of multiscale causal relations is typically a difficult task, and in our case we have to deal with a system of a high number of dimensions (15 · 8 = 120 dimensions). Nevertheless, the Ising models describe the stability of the configurations of the 15 nodes in the network at each frequency band with an energy value. Thus, an easier way to describe multiscale interactions is to observe how fluctuations in the energy at one level affect the energy of the system at other levels, reducing the dimensions we have to deal with to only the 8 frequencies of synchronization.
We characterize the information flow between frequency bands using transfer entropy [33] between the energy levels at each frequency, E_{f_k}(s). Transfer entropy captures the decrease of uncertainty in the state of a variable Y derived from the past state of another variable X:

$$TE_{X \to Y}(\tau) = \sum p(y_t, y_{t-\tau}, x_{t-\tau}) \,\log\frac{p(y_t \mid y_{t-\tau}, x_{t-\tau})}{p(y_t \mid y_{t-\tau})} \qquad (5)$$

where x_t denotes the state of X at time t and τ indicates the temporal distance used to capture interactions. In order to compute transfer entropy over energy values between timescales, we discretize the values of energy E_{f_k}(s) into a variable with 3 discrete bins, E*_{f_k}(s), using the Jenks-Caspall algorithm [34]. The value of 3 bins was selected to optimize the computation of joint probability density functions (see SI Appendix), although we tested values from 2 to 6 bins with similar results. Using transfer entropy we estimate the causal interactions between energetic states at each timescale by computing the values of TE between the discretized energies E*_{f_k} and E*_{f_l}, observing upward and downward flows of information. As we can see, upward flows decrease markedly with distance between scales. In contrast, downward flows increase slightly with distance between scales. These results show an interesting picture of cross-scale interactions. While in upward interactions the energy at each frequency band only influences neighbouring slower bands, in downward interactions slow frequency bands modulate distant faster bands. We also observe this in the schematic in Figure 5.A, where for simplicity only the largest values of F_up(f_k, d) and F_down(f_k, d) are displayed for each frequency band. These results suggest that there might be general rules for scaling up and scaling down social coordination dynamics in a nested structure of frequency bands. The mechanisms involved might resemble those found in neuroscience, where upward cascades have been found to take place in the form of avalanches propagating local synchrony, and downward cascades take the form of phase-amplitude modulation of local high-frequency oscillations by large-scale slow oscillations [16]. Future research is required to test the application of these rules to other social coordination phenomena and the specific mechanisms operating behind upward and downward cross-scale interactions.
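A minimal plug-in estimator of Equation (5) for two discretized energy series could look like the following; the series are assumed to be integer-coded into the 3 bins described above.

```python
# Minimal sketch of Equation (5): plug-in transfer entropy between two
# integer-coded series x, y (bins 0..bins-1), at lag tau.
import numpy as np

def transfer_entropy(x, y, tau=1, bins=3):
    """TE_{X->Y}(tau) from the joint histogram of (y_t, y_{t-tau}, x_{t-tau})."""
    yt, ypast, xpast = y[tau:], y[:-tau], x[:-tau]
    joint = np.zeros((bins, bins, bins))
    for a, b, c in zip(yt, ypast, xpast):
        joint[a, b, c] += 1
    joint /= joint.sum()
    p_yp_xp = joint.sum(axis=0)      # p(y_{t-tau}, x_{t-tau})
    p_yt_yp = joint.sum(axis=2)      # p(y_t, y_{t-tau})
    p_yp = joint.sum(axis=(0, 2))    # p(y_{t-tau})
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                if joint[a, b, c] > 0:
                    te += joint[a, b, c] * np.log2(
                        joint[a, b, c] * p_yp[b]
                        / (p_yt_yp[a, b] * p_yp_xp[b, c]))
    return te                        # in bits
```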
Discussion
It is appealing to think that general coordinative mechanisms may be suited to explain the behaviour of social systems at different scales. Here, using a large-scale social media data set, we have shown how the application of maximum entropy inference methods over phase-locking statistics at different frequencies offers the prospect of understanding collective phenomena at a deeper level. The presented results provide interesting insights about the self-organization of digitally connected multitudes. Our contribution shows that phase-locking mechanisms at different frequencies operate in a state of criticality, rapidly integrating the activity of the network at fast frequencies while building up an increasing diversity of distinct configurations at slower frequencies. Moreover, the asymmetry between upward and downward flows of information suggests how social systems operating through distributed transient synchronization may create a hierarchical structure of temporal timescales, in which hierarchy is not reflected in a centralized control but in the asymmetry of information flows between the coordinative structures at different frequencies of activity. This offers a tentative explanation of how a unified collective agency, such as the 15M movement in Spain, might emerge in a distributed manner from mechanisms of transient large-scale synchronization. Of particular interest would be to test the extent to which our findings about the structural and functional relations of social coordination apply to other self-organizing social systems, or their relation with mechanisms of cross-scale interactions known from large-scale systems neuroscience. A new generation of experimental findings based on statistical mechanics models may provide the opportunity to discover the mechanisms behind multitudinous social self-organization.
Data availability
The data employed in this study was kindly provided by the authors of reference [20].
Learning pairwise maximum entropy models from data
Ising models are inferred using an adapted version of the coordinate descent algorithm described in reference [35]. The coordinate descent algorithm works by iteratively adjusting the single weight h_i or J_ij that maximizes an approximation of the change in the empirical logarithmic loss between the observed data and the model, computed through the means and correlations present in the empirical data and the model. The code implementing the coordinate descent algorithm is available at https://github.com/MiguelAguilera/ising.
Data preprocessing
We use a data set of 1,444,051 tweets from 181,146 users, collected between 13 May 2011 and 31 May 2011. This data set was extracted from the Twitter streaming API, which provides information on the time and content of each tweet, as well as information on the sender, including location. Messages were captured when they contained one of the following hashtags or keywords (selected as some of the most relevant during the emergence of the 15M movement): #15M, 15-M, #democraciarealya, #tomalacalle, #Nolesvotes, #spanishrevolution, #acampadasol, #acampadabcn, #indignados, #notenemosmiedo, #nonosvamos, #yeswecamp. We filter messages in the data set using the location field in the description of the user that sent the message. Since the 15M was (at least during the first days) mainly an urban phenomenon, we analyse geographical interactions between the 15 cities with the most Twitter activity during 17 days of the protests. We find the 15 city names most repeated in the data set, and count a message as corresponding to a specific city when the city name appears in the location field. Since the location is a field in the user's profile, it does not necessarily correspond to the real location of the user at the moment the message was sent. We ran a test on geolocalized Twitter data from Spain, observing that, for a set of 20,000 random tweets, in 80.25% of cases the profile location corresponded with the actual geolocation of the user, giving the information in the user's location field a moderately high reliability.
Phase-locking statistics
Time series of activity at each city are generated by counting the number of messages from users located in the city in intervals of 60 seconds for a period comprising 17 days, starting at 2AM, May 14th 2011. Each time series is filtered using Morlet wavelets at different frequencies. For each city i and frequency f we extract the phase content θ_i(f,t) for each moment of time t, with a frequency span between [1.67·10⁻³ Hz, 9.26·10⁻⁵ Hz] (from 10 minutes to 3 hours) mapped into a logarithmic sequence with intervals of 10^0.01.
Phase-locking values are defined for each pair of cities i and j as defined in Equation 1. We introduce a corrector factor A i j (t) to remove spurious synchronization when the network is inactive. A i j (t) is zero when the mean activity of node i or node j for a moving window of 30 minutes is below a threshold of 0.25 times its mean activation, which generally happened during some periods at night.
From phase-locking values we extract phase-locking links, which are activated when the phase-locking value is higher than 99% of a set of 200 surrogate time series generated for purposes of statistical validation, as indicated in Equation 2. Surrogate time series are generated using the amplitude adjusted Fourier transform with the TISEAN software (available at http://www.mpipks-dresden.mpg.de/~tisean/Tisean_3.0.1/). Amplitude adjusted Fourier transform surrogates are time series that preserve the power spectrum and the distribution of values of the original series, but remove the temporal correlations present in the original signal.
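For reference, an AAFT surrogate can also be generated directly in numpy rather than with TISEAN; the following is a minimal sketch of the standard three-step procedure (it simplifies the handling of the Nyquist bin).

```python
# Minimal sketch of an amplitude adjusted Fourier transform (AAFT) surrogate.
import numpy as np

def aaft_surrogate(x, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = len(x)
    ranks = np.argsort(np.argsort(x))                # rank of each sample of x
    # 1) Gaussianize: reorder Gaussian noise to follow the ranks of x
    gauss = np.sort(rng.standard_normal(n))[ranks]
    # 2) Phase-randomize the Gaussianized series (preserves the power spectrum)
    X = np.fft.rfft(gauss)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0                                  # keep the mean component real
    g_surr = np.fft.irfft(np.abs(X) * np.exp(1j * phases), n)
    # 3) Rescale: give the surrogate the amplitude distribution of x
    return np.sort(x)[np.argsort(np.argsort(g_surr))]
```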
Detection of salient synchronization frequency bands
We localize frequency bands of synchronization by detecting peaks of salient phase-locking links in the logarithmic frequency space. We compute the mean number of synchronization links for each frequency as the temporal mean of phase-locking links at that frequency, S(f) = ⟨Σ_{i,j} Φ_ij(f,t)⟩_t (Figure S1.A). As S(f) increases with slower frequencies following an approximately log-linear trend, we approximate the trend by computing a least squares first order polynomial fit with respect to the logarithm of f and remove it from S(f), obtaining a detrended function S(f)*. In order to robustly detect peaks, we apply a two-dimensional wavelet transform of the detrended S(f) over the vector of logarithmic frequencies. Using 10 Ricker wavelets of widths from 1 to 10 steps in the selected logarithmic range of frequencies (i.e. a range of [1.67·10⁻³ Hz, 9.26·10⁻⁵ Hz] logarithmically distributed with intervals of 10^0.01), we compute the wavelet transform matrix and detect its ridge lines to find eight peaks of salient synchronization (code available at https://github.com/scipy/scipy/blob/v0.14.0/scipy/signal/_peak_finding.py#L410). As the position of the detected peaks varies slightly depending on the parameters employed, we adjust the position of each peak by climbing to the nearest local maximum if one is found within a distance of two steps. In Figure S1.B we show the result of the process and the 8 detected peaks. From these peaks we extract 8 frequencies f_k, with k = 1, ..., 8, indicating the position of the peaks in S(f) (Table S1).
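The ridge-line peak detection referenced above corresponds to SciPy's find_peaks_cwt; a minimal sketch of its use with Ricker wavelet widths from 1 to 10 (the series below is a placeholder for the detrended S(f)):

```python
# Minimal sketch: CWT-based peak detection over the detrended S(f).
import numpy as np
from scipy.signal import find_peaks_cwt

s_detrended = np.random.rand(300)   # placeholder for the detrended S(f) series
peaks = find_peaks_cwt(s_detrended, widths=np.arange(1, 11))  # Ricker widths 1..10
print(peaks)                        # indices of detected peaks along the f axis
```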
Number of samples required to compute probability distributions
When we compute multi-informations and transfer entropies from the data set, we face a compromise between the size of the probability distribution function of the system (corresponding to 2^N states) and the number of samples we employ to calculate it. In order to correctly compute these probability distributions, we need to ensure that the number of samples found in our data is sufficiently large to describe the frequency of occurrence of all possible states. Although the number of samples in our data is large, as data changes at different rates across frequencies, slower frequencies may present a smaller number of transitions between states than fast frequencies, therefore offering a reduced effective sample of visited states. In order to quantify the number of states visited at each frequency, we count the number of transitions between states s used to infer the Ising models at each frequency (Table S2). Knowing that number, we can estimate a threshold for how many states can have a probability distribution accurately estimated from our samples at different frequencies. We arbitrarily establish a requirement of the number of transitions being larger than 2^4 times the number of possible states of the objective probability distribution function. Although the exact value of the threshold is arbitrary, during the analysis we tried different thresholds to ensure the robustness of the results.
Multi-information measures for assessing the accuracy of maximum entropy models
Once we infer the maximum entropy models that correspond to the means and correlations found in the phase-locking data, it is important to characterize the accuracy of the model, that is, to what extent the statistical model generated maps the data we used in the inference. The accuracy of the model can be evaluated by asking how much of the correlative structure found in the data is captured. We can measure the overall strength of correlations in the network using multi-information, which is defined as the total reduction in entropy relative to an independent model:

$$I = H[P_1] - H[P_r]$$

where H[P_r] is the entropy of the distribution of the real system whose data we are analysing and H[P_1] is the entropy of an independent model. In our case, an independent model is the equivalent of adjusting an Ising model in which the couplings are zero, and thus its energy function is defined as E(s) = −Σ_i h_i s_i. Multi-information can likewise be used to compute the reduction of entropy of the distribution P_2 of the pairwise Ising model we inferred from data:

$$I_2 = H[P_1] - H[P_2]$$

The ratio between these two quantities gives the fraction of the correlations captured by the pairwise Ising model:

$$\frac{I_2}{I} = \frac{H[P_1] - H[P_2]}{H[P_1] - H[P_r]}$$

Unfortunately, the data available is not enough to reliably compute P_r. The probability distribution P_r has 2^15 possible states, while in our data the number of different states visited by the system is one or two orders of magnitude smaller, depending on the frequency. However, we can accurately compute subsets of the complete probability distribution, P_r(s'), with {s'} ⊂ {s}. For each frequency, we count the number of transitions between states found in the time series in our data, and contrast that number with the dimension of the subset of the probability function using a number n of nodes, i.e. 2^n. We use an arbitrary threshold requiring the number of states to be at least 2^4 times larger than the number of values of the probability distribution function. Different thresholds yield slightly different results, although they do not change the final results significantly. We find that for frequencies from f_4 to f_8 we can reliably compute subsets with up to 5 nodes. For frequencies f_2 and f_3 the number increases to 6, and for f_1 it is 7 nodes.
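Given the empirical, independent, and pairwise distributions over a small subset's 2^n states, the ratio I_2/I can be computed directly; a minimal sketch:

```python
# Minimal sketch: fraction of the data's correlations captured by the
# pairwise model, I_2 / I, from three distributions over the same states.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def multi_information_ratio(P_r, P_1, P_2):
    I = entropy(P_1) - entropy(P_r)    # total correlation in the data
    I2 = entropy(P_1) - entropy(P_2)   # correlation captured by the pairwise model
    return I2 / I
```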
In Figure S2 we can observe the distribution of the values of I_2/I for 100 random choices of subsets for each number of nodes. For most of the subsets, the values of I_2/I indicate that between 60% and 80% of the correlations are captured (Table S3). The limited availability of data, especially for slower frequencies, prevents us from computing the accuracy of the model for subsets with a larger number of nodes. Future analyses applied to larger data sets should test whether the accuracy of the model holds for capturing the correlations between larger subsets of nodes.
Parameters of the Ising models
Here we display the parameters h and J inferred for the Ising models at each frequency. As we observe in Figure S5, as we move from faster to slower frequencies, the amplitude of h and J increases, as does the percentage of negative couplings. We compute the ratio of negative couplings as the fraction of all couplings J_ij that have a negative sign. In Figure S3.A we can observe how the ratio of negative couplings increases with slower frequencies. It is known from spin glass theory that metastable states emerge when some of the couplings between variables are negative. In Figure S3 we can observe that there is a correlation between the ratio of negative couplings and the number of metastable states.
Table S3. Distributions of multi-information ratios: mean and standard deviation for each distribution in Figure S2.
Divergence of the heat capacity
At critical points, derivatives of thermodynamic quantities of the system, such as the entropy, may diverge. An example is the heat capacity, whose divergence is a sufficient condition for criticality (though not a necessary one). To test the divergence of the heat capacity of the system, we extract Ising models of different sizes for each frequency f_k. For sizes 5 to 15, we extract models by inferring the set of means and correlations from the first N nodes of the system in order of increasing number of tweets emitted. For each model, we compute the normalized heat capacity C(T)/N. In Figure S4.A we observe the divergence of the maximum value of the peaks for sizes N = 6, 9, 12, 15. As size increases, the peak is higher and closer to T = 1. In Figure S4.B we represent the linear trend of the peaks from size 5 to 15. Trends are computed using a least squares first order polynomial fit. We identify trends with slopes in the range [0.012, 0.021]. A linear trend in max[C(T)/N] corresponds to a quadratic increase in the peak of the heat capacity C(T) as N increases, suggesting a divergence of the heat capacity of the system.
Transfer entropy
Using the energy of the Ising models E_{f_k} at different frequencies, we compute transfer entropy by discretizing the energy functions into clustered variables E*_{f_k}. We apply natural breaks classification through the Jenks-Caspall algorithm (code available at https://github.com/domlysz/Jenks-Caspall.py), which minimizes the average deviation from each cluster's mean to determine the best arrangement of values into different clusters. Since computing transfer entropies requires computing joint probability functions of three variables, to meet the same criteria we used to compute multi-information, we use 3 clusters, ensuring that for all frequencies the number of samples of visited states is at least 2^4 times larger than the number of values in the probability distribution. After discretizing the energy functions, we compute the values of transfer entropy between the discretized energies at each pair of frequency bands, as defined in Equation 5.
"Computer Science",
"Physics"
] |
Kelvin and Rossby Wave Contributions to the Mechanisms of the Madden–Julian Oscillation
The Madden–Julian Oscillation (MJO) is a large-scale tropical weather system that generates heavy rainfall over the equatorial Indian and western Pacific Oceans on a 40–50 day cycle. Its circulation propagates eastward around the entire world and impacts tropical cyclone genesis, monsoon onset, and mid-latitude flooding. This study examines the mechanism of the MJO in the Lagrangian atmospheric model (LAM), which has been shown to simulate the MJO accurately, and which predicts that MJO circulations will intensify as oceans warm. The LAM MJO's first baroclinic circulation is projected onto a Kelvin wave, leaving a residual that closely resembles a Rossby wave. The contribution of each wave type to moisture and moist enthalpy budgets is assessed. While the vertical advection of moisture by the Kelvin wave accounts for most of the MJO's precipitation, this wave also exports a large amount of dry static energy, so that in total, it reduces the column integrated moist enthalpy during periods of heavy precipitation. In contrast, the Rossby wave's horizontal circulation builds up moisture prior to the most intense convection, and its surface wind perturbations enhance evaporation near the center of MJO convection. Surface fluxes associated with the Kelvin wave help to maintain its circulation outside of the MJO's convectively active region.
MJO Overview and Impacts
The Madden–Julian Oscillation (MJO) is a large-scale tropical weather disturbance that produces heavy rainfall over the equatorial Indian and western Pacific Oceans on a 45–50 day cycle [1-4]. The MJO's circulation, which includes low-level cyclonic gyres and an upper-level quadrupole [5], is strongly coupled with moist convection over the western Pacific warm pool and more weakly coupled with convection in the eastern Pacific and western hemisphere [6-8]. Although the MJO's convective envelope generally moves slowly eastward, it contains smaller-scale convective systems that propagate in a variety of directions, including eastward and westward in the equatorial west Pacific [9-11] and northward in Asian monsoon regions [12].
The MJO has important impacts on weather and climate all over the world. First, it modulates tropical cyclones over the Atlantic, Indian, and Pacific Oceans [13,14]. Second, it affects the timing of active and break periods in the Asian, Australian, and North American monsoons [15,16], as well as the frequency and intensity of monsoon disturbances [17]. Third, it is known to impact mid- and high-latitude weather, modulating atmospheric rivers that can cause extreme flooding [18], and interacting with the stratospheric polar vortex and North Atlantic Oscillation [19]. Finally, westerly wind bursts associated with the MJO can impact the development of El Niño events, including their timing and diversity [20,21]. In particular, extreme El Niño events (in 1982, 1997, and 2015) all started after a strong sequence of westerly wind bursts generated within the MJO [22].
Proposed MJO Mechanisms
Despite decades of study and numerous modeling attempts and theoretical interpretations, the mechanisms of the Madden-Julian Oscillation (MJO) are not fully understood. Many different MJO theories have been put forward to explain the MJO's instability and/or slow eastward propagation, and they do not agree on which physical processes are the most important [23][24][25]. The processes these theories emphasize include enhanced surface evaporation in the MJO's perturbation easterlies [26,27] or westerlies [28,29]; frictional surface convergence to the east of the MJO's convection [25,30,31]; perturbations to atmospheric radiation [29,[32][33][34][35]; momentum transport, moistening, and/or convective triggering due to smaller-scale disturbances [36][37][38]; and baroclinic instability [39] to name a few.
The various MJO theories also differ in the roles played by the two key dynamical components of the MJO's circulation: the Kelvin wave and the Rossby wave [40-42]. Early on, the MJO was essentially interpreted as a convectively coupled Kelvin wave, gaining energy from wind-induced surface heat exchange where MJO perturbation easterlies increased surface wind speed [26,43,44]. In later theories, the role of the Kelvin wave shifted to being a region of low pressure, where frictionally induced meridional moisture convergence helped convection move eastward [25,30], and/or enhanced surface fluxes where it contributed perturbation westerlies in regions of basic state westerly winds [7,29]. While there has generally been a consensus that the MJO's low-level Rossby gyres on the western edge of its precipitation envelope cause drying by pulling in off-equatorial air (e.g., [45,46]), there have been diverse perspectives on the role of the Rossby wave in initiating the MJO and/or causing eastward propagation, which include moistening where the Rossby wave contributes off-equatorial flow [21,47], advection from suppressed-phase MJO Rossby waves over the Indian Ocean [48], and eastward advection of the MJO's moisture perturbation by Rossby wave perturbation westerlies [49,50]. Moreover, in many MJO theories, the individual roles of Kelvin and Rossby waves are hard to discern, as both are included in the dynamics, and only the effects of their superposition are considered.
Modeling the MJO
For decades, the MJO has been a challenge to simulate, with models often having too little variance in the MJO wavenumber/frequency band and/or lacking sufficient eastward propagation [51][52][53]. While improvements in modeling the MJO have been obtained by either using embedded two-dimensional cloud-permitting models to represent the effects of convection [54,55] or increasing convective entrainment to make parameterized convection more sensitive to atmospheric moisture, the former approach is much more computationally intensive than traditional convective parameterizations, and the latter technique can lead to inaccuracies in modeling the atmosphere's basic state [56]. Moreover, despite recent advances in modeling the MJO, even cutting edge forecast models have substantial room for improvement in predicting rainfall on MJO time scales [57].
MJO Changes with Global Warming
There is growing evidence that the MJO is becoming more frequent and intense with time as the oceans warm. Slingo et al. [58] used zonally integrated equatorial zonal wind as a metric of MJO activity, and noted a substantial increase in the late 1970s, which seemed to be associated with warming in the Indian Ocean. Jones and Carvalho [59] examined changes in the MJO starting in 1958, and found positive trends in lower-and upper-level zonal wind anomalies and the number of summer and winter MJO events, some of which were statistically significant at the 95 percent confidence level. Moreover, many simulations suggest that the MJO will become more frequent and intense and will propagate more rapidly as the climate warms [55,[60][61][62][63][64][65][66]. Possible mechanisms for these changes include sharper vertical and horizontal gradients in basic state moisture, changes to dry and moist stratifications, and enhanced evaporation over warmer oceans [46,55,64,[67][68][69].
A changing MJO could lead to many significant climate impacts. First, its influence on tropical cyclones, Asian, Australian, and American monsoons and mid-and high-latitude weather could all increase with time. Second, the momentum transport associated with the MJO could also increase, leading to fundamental changes in the atmospheric circulation, such as equatorial superrotation [62]. Third, combined direct effects of changes to MJO wind perturbations, and indirect effects such as MJO-induced changes to tropical cyclonic disturbances, could alter the wind stress on equatorial oceans [70], possibly even causing structural changes to equatorial ocean SSTs [71].
Motivation
Because of the importance of the MJO for global weather and climate, and the growing evidence that it is intensifying as the oceans warm, it should be a high priority to warn society about the potential impacts of a strengthening MJO. Both future MJO changes and their impacts depend on the mechanism(s) of the MJO, which are still a matter of debate, and which are not adequately represented in many climate and forecasting models. Therefore, in this study, we use a novel Lagrangian atmospheric model (LAM) that has been tuned to simulate robust and realistic MJOs, and recently developed dynamical analysis techniques, to better understand the mechanisms of the MJO. In particular, we use the Kelvin/Rossby dynamical decomposition developed by [42] to isolate the contributions of Kelvin and Rossby waves to moisture and moist enthalpy budgets for the LAM MJO. Because the structure of the LAM MJO closely resembles that of the observed MJO [49], particularly its decomposition into Kelvin and Rossby components [42], we expect that many of the conclusions drawn about the mechanisms of the LAM MJO will also apply to the observed MJO. Moreover, we also hope this analysis will shed light on why Lagrangian models predict the MJO's circulation will intensify with ocean warming [64,66], whereas some other models do not predict such intensification [65].
This paper builds directly on the results of Haertel [42], who developed a method of decomposing MJO circulations into components associated with Kelvin and Rossby waves. That study also established that the Kelvin wave portion of the MJO's circulation is well represented by simple linear dynamics, and that it can be simulated as a linear response to a heat source. In contrast, the Rossby wave component of the circulation has important deviations from linear theory for a basic state of rest: gyres become centered much farther from the equator than simple linear theory predicts, and they move eastward instead of westward during the life cycle of the MJO. Haertel [42] did not consider how Kelvin and Rossby wave circulations feed back on the moisture and moist enthalpy budgets of the MJO, however, which is the focus of this paper.
This study is organized as follows. Section 2 describes the Lagrangian model and methods for creating a composite MJO, decomposing circulations into Kelvin and Rossby components, and computing moisture and moist enthalpy budgets. In Section 3, we compare the dynamical structure of the LAM MJO to that of the observed MJO, and then isolate contributions of Kelvin and Rossby waves to the moisture and moist enthalpy budgets. Section 4 discusses the results in light of other studies, and Section 5 provides the conclusions.
Lagrangian Atmospheric Model
The Lagrangian Atmospheric Model (LAM) simulates atmospheric circulations by predicting the motions of individual air parcels [72]. It includes a unique convective parameterization in which vertical positions of parcels are exchanged in convectively unstable regions [49,73]. The LAM has been shown to simulate MJOs with realistic horizontal structure, vertical structure, and convective life cycles [8,49,64,66,73,74]. In this study, we use a 4-year simulation with prescribed sea surface temperatures (SSTs) that are monthly averages for the years 1998-2009. The equivalent Eulerian horizontal resolution is approximately 3.75 degrees in longitude and 1.875 degrees in latitude, with average vertical spacing of parcels of about 29 hPa. Nineteen MJOs occur during this simulation. Haertel [42] previously used this simulation to show that the LAM reproduces the observed partitioning of MJO circulations between Kelvin and Rossby Waves, which is why it was selected as the primary data source for this study. However, in that study, fields were analyzed on pressure surfaces, whereas here, we use a sigma coordinate system for more precise calculations of column integrated moisture and moist enthalpy budgets.
Composite MJOs
In previous research, we found it useful to create an observational composite of MJO wind, temperature, and moisture perturbations in the vicinity of the MJO's convective envelope using only raw atmospheric sounding data [8]. This composite depicts MJO circulations for each of the developing, mature, and dissipating stages of the MJO's convective envelope, and it uses a coordinate system centered on the MJO's convective envelope, and not a particular location. This is important because the formation and dissipation locations for the convective envelope vary from one MJO to the next, and MJO circulations are intimately connected to convection. This approach also puts particular sounding locations in different positions (in the MJO-centered coordinate system) for different MJOs, which increases the spatial coverage of the observations. Soundings for the observational MJO composite were taken from the Integrated Global Radiosonde Array for the period 1996-2009 [75]. There were 44 MJOs that occurred over a 13-year period (3.4 MJOs per year).
In this study, we use the same basic approach to create an MJO composite for the LAM simulation. Here, we use the objective tracking algorithm developed by [66] for identifying MJOs. We analyze MJO perturbations to wind, temperature, and moisture on sigma surfaces with a 0.05 resolution for each of the developing, mature, and dissipating stages of the MJO in an MJO-centered coordinate system following Haertel et al. [8]. We plot data for the corresponding pressure surface (e.g., sigma = 0.2 corresponds to p = 200 hPa) for ease of comparison with observational data and other studies. Nineteen MJOs occur during the 4-year LAM simulation (4.75 MJOs per year).
Moisture and Moist Enthalpy Budgets
In order to understand the mechanisms of the LAM MJO, we examine how different components of the MJO's circulation bring moisture and heat to the center of convection. The column integrated moisture budget is as follows:

$$\frac{\partial [q]}{\partial t} = -\left[u\frac{\partial q}{\partial x}\right] - \left[v\frac{\partial q}{\partial y}\right] - \left[\omega\frac{\partial q}{\partial p}\right] + E - P \qquad (1)$$

where q is specific humidity, u is zonal velocity, v is meridional velocity, ω is pressure velocity, E is evaporation, P is precipitation, t is time, and [ ] denotes column integration with mass as the metric (i.e., dp/g). Now, let h = C_p T + Lq, where C_p is the specific heat at constant pressure for dry air and L is the latent heat of vaporization. Then, the column integrated moist enthalpy budget is as follows:

$$\frac{\partial [h]}{\partial t} = -\left[u\frac{\partial h}{\partial x}\right] - \left[v\frac{\partial h}{\partial y}\right] - \left[\omega\frac{\partial h}{\partial p}\right] + [\alpha\omega] + R + LE + S \qquad (2)$$

where T is temperature, α is specific volume, R is radiative heating, and S is the sensible heat flux. The advection terms, the compressional warming term [αω], and R on the right-hand side of (2) represent the column integrated advection of moist enthalpy, compressional warming, and radiative heating, respectively. Note that precipitation does not change the moist enthalpy (other than through unbalanced melting/freezing effects), but rather converts enthalpy from the moisture term to the temperature term. Equation (2) can be derived by combining (1) with the thermodynamic equation [76] without any approximations beyond the hydrostatic equation and neglecting melting/freezing effects. The compressional warming ([αω]) can be written as the vertical advection of geopotential (gz), so that it is the vertical advection of moist static energy that changes the column integrated moist enthalpy. In contrast, the horizontal advection terms involve moist enthalpy itself (see also [45]). This distinction is often not made in applications of moist static energy budgets to tropical convective systems, but here, for accuracy, we include only those terms provided by combining the column integrated moisture and thermodynamic equations.
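The column integrals in Equations (1) and (2) can be sketched as follows for profiles on pressure levels, using dp/g as the mass metric; the constants are standard approximate values, not necessarily those used in the LAM.

```python
# Minimal sketch: column integration with mass (dp/g) as the metric, and the
# column moist enthalpy [h] with h = Cp*T + L*q.
import numpy as np

G = 9.81      # gravitational acceleration, m s^-2
CP = 1004.0   # specific heat of dry air at constant pressure, J kg^-1 K^-1
LV = 2.5e6    # latent heat of vaporization, J kg^-1

def column_integral(field, p):
    """Trapezoidal integral of field dp/g; p in Pa, increasing downward."""
    return (0.5 * (field[1:] + field[:-1]) * np.diff(p)).sum() / G

def column_moist_enthalpy(T, q, p):
    """[h] in J m^-2 for temperature T (K) and specific humidity q (kg/kg)."""
    return column_integral(CP * T + LV * q, p)
```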
Kelvin/Rossby Decomposition
In order to isolate the contributions of Kelvin and Rossby waves to the moisture and moist enthalpy budgets of the MJO, we project its vertical structure onto an empirical deep convective mode (DCM), and then project the horizontal structure of the DCM onto an equatorial Kelvin wave and subtract out the Kelvin wave component to obtain the Rossby wave structure, following [42]. Let φ_dc(p) be the average, mass-balanced horizontal divergence within 10 degrees longitude and 5 degrees latitude of the LAM MJO's convective center during the mature stage (Figure 1). We normalize φ_dc:

$$\hat{\varphi}_{dc} = \varphi_{dc}\,/\,\langle \varphi_{dc}, \varphi_{dc} \rangle^{1/2}$$

where ⟨⟩ denotes an inner product applied over the vertical coordinate, and we then project MJO wind perturbations onto the normalized mode:

$$u_p = \langle u, \hat{\varphi}_{dc} \rangle\,\hat{\varphi}_{dc}, \qquad v_p = \langle v, \hat{\varphi}_{dc} \rangle\,\hat{\varphi}_{dc}$$

where u_p, v_p are the zonal and meridional components of the projected flow. We then compute the mean tropospheric temperature and 850-200 hPa wind shear for the projected flow, and apply the method described in the Appendix of [42] to these fields to separate Kelvin and Rossby wave components. Note that this only partitions the portion of the MJO's circulation that projects onto the deep convective mode into Rossby and Kelvin waves; however, as we show below, the deep convective circulation accounts for most of the moisture and moist enthalpy transport. Note also that the deep convective mode would likely project strongly onto 2-3 vertical normal modes of a dry tropical troposphere [77-79]. However, previous research has shown that in tropical deep convective systems, the partial cancelation of adiabatic cooling due to vertical motion by convective heating varies in such a way that these modes have roughly the same "moist" equivalent depth [11,79], and that a single mode such as that shown in Figure 1 can capture the gross flow structure of the MJO [42].
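A minimal sketch of this projection, assuming the mode and a wind profile are given on common pressure levels and using a simple trapezoidal inner product (the LAM's exact mass weighting may differ):

```python
# Minimal sketch: project a wind profile u(p) onto a normalized vertical mode.
import numpy as np

def inner(a, b, p):
    """Trapezoidal inner product <a, b> over the pressure levels p."""
    ab = a * b
    return (0.5 * (ab[1:] + ab[:-1]) * np.diff(p)).sum()

def project_onto_mode(u, phi_dc, p):
    """Component of u(p) lying along the deep convective mode phi_dc(p)."""
    phi_hat = phi_dc / np.sqrt(inner(phi_dc, phi_dc, p))   # normalize the mode
    return inner(u, phi_hat, p) * phi_hat                  # u_p(p)
```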
Composite MJO and Kelvin/Rossby Projection
As previously noted by [42], the LAM reproduces the observed life cycle of Kelvin and Rossby wave circulations in the MJO for the developing, mature, and dissipating stages of the convective envelope. In that study, the total flow was used to analyze the Kelvin and Rossby wave components. Here, we show that the same can be said for the projection of the flow onto the deep convective mode, whose Kelvin and Rossby components behave largely as in previous analyses [41,42]. However, at later times, the Rossby gyres are centered farther off the equator than linear theory predicts (Figure 4c-f), as was previously noted by [42].
The amplitudes of Kelvin wave wind and temperature perturbations are slightly weaker in the LAM than in nature (Figure 3), but Rossby wave perturbations are, on average, of a similar magnitude (Figure 4). The areal coverage of precipitation is slightly smaller in the LAM than in nature (Figures 2-4). However, the phasing of Kelvin and Rossby wave circulations, as well as their positioning relative to precipitation regions, are similar in the LAM DCM and in the observations for each of the developing, mature, and dissipating stages of the MJO convective envelope (Figures 3 and 4), which suggests that these circulations might play a similar role in the mechanisms of the simulated and observed MJOs.

[Figure captions (Figures 2-4): panels show the developing, mature, and dissipating stages of the convective envelope. For the LAM MJO, regions of perturbation rainfall less than −1 mm/day are shaded light gray, and areas with rainfall greater than 1 and 3 mm/day are shaded medium and dark gray; green contours indicate budget-derived rainfall rates of 1 and 3 mm/day. Observed MJO regions of rainfall less than −1 and more than 1 mm/day are shaded light and dark gray. Panels (b,d,f) are adapted from [42].]

Figure 5 shows the key terms in the moist enthalpy budget of the LAM MJO for each of the developing, mature, and dissipating stages. Here, we use units of mm/day for all variables to make it easy to compare changes in heat to those in moisture content, and to relate moisture tendencies to rainfall and evaporation. For each stage, advection (the black line) dominates the budget, followed by surface evaporation (the green line), which has roughly one-third of the amplitude of advection. These two terms are roughly out of phase throughout the convective life cycle. The deep convective mode's advection (dashed black line) is generally close to that of the total circulation near the MJO's convective center, except for the region to the east of the MJO during the dissipating stage (Figure 5). This region lies to the east of the Dateline; suffice it to say that the deep convective circulation transports most of the moist enthalpy over the Indian and western Pacific Oceans within the LAM MJO. Other terms that make non-negligible impacts include advection of moisture perturbations by the basic state flow (blue line) and radiation (red line). While the radiation term is roughly in phase with rainfall, so that it destabilizes the LAM MJO, it is much weaker than in other studies (e.g., [80]), probably owing to the idealized nature of the LAM's radiative scheme.

In Figure 6, we decompose advective tendencies of moist enthalpy into vertical advection of moisture (red line), vertical advection of heat (black line), total vertical advection (green line), and horizontal advection of moisture and heat (gold line). The vertical advection of moisture can explain most of the rainfall perturbation (blue line) in the MJO, and it has a strong positive impact on the moist enthalpy budget during the heaviest rain. However, it is accompanied by an even greater export of heat (black line), so that in total, vertical advection exports a small amount of moist enthalpy near the convective center (solid green line).
Horizontal advection (gold line) increases the column-integrated moist enthalpy prior to the MJO's heaviest convection and reduces moist enthalpy after heavy rainfall. To test whether our moist enthalpy budget balances, we compute each of the terms on the right side of (2) and compare their sum to the tendency in moist enthalpy computed as a time difference (Figure 7; compare blue and red lines). The two curves mostly follow each other for each of the developing, mature, and dissipating stages, with deviations generally being a few tenths of a mm/day or less (Figure 7). Considering that the largest individual terms on the right-hand side of (2) have amplitudes of several mm/day (e.g., Figure 6), and circulations are sampled at 1-day intervals, such small random imbalances in the budget are expected. Moreover, the budget excludes melting and freezing effects, which do not necessarily cancel within a column and could also contribute to the small discrepancies. Overall, Figure 7 supports the idea that there are no large systematic errors in our moist enthalpy budget.
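The closure test itself is simple arithmetic; the sketch below (our variable names and synthetic data, not the authors' code) shows the pattern of comparing a centered time difference of column moist enthalpy against the sum of the budget's right-hand-side terms:

```python
import numpy as np

# Budget-closure check in the spirit of Figure 7: compare the moist
# enthalpy tendency from a centered time difference against the sum of
# the budget's source terms. Arrays are (time, lon), sampled daily, in
# mm/day-equivalent units.
def budget_residual(h, rhs_terms, dt_days=1.0):
    """h: column moist enthalpy; rhs_terms: list of RHS arrays."""
    dh_dt = (h[2:] - h[:-2]) / (2.0 * dt_days)  # centered difference
    rhs = sum(rhs_terms)[1:-1]                  # trim to match dh_dt
    return dh_dt - rhs                          # small if the budget closes

# Synthetic demonstration: the residual should be of the order of the
# unexplained noise term, a few tenths of a mm/day here.
rng = np.random.default_rng(0)
h = np.cumsum(rng.normal(size=(30, 144)), axis=0)
terms = [np.gradient(h, axis=0), rng.normal(scale=0.1, size=h.shape)]
print(np.abs(budget_residual(h, terms)).mean())
```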
We also note that much of the tendency in moist enthalpy is attributable to horizontal advection (compare red and green lines in Figure 7). Over most of the domain, meridional advection contributes the vast majority of the change in moist enthalpy due to horizontal advection. However, there are a few exceptions. During the developing and mature stages, zonal advection causes most of the moist enthalpy increase due to horizontal advection on the eastern edge of the precipitation region, and during the mature and dissipating stages, zonal advection causes most of the moist enthalpy decrease due to horizontal advection on the western edge of the precipitation region (not shown). Horizontal advection is often substantially lower than the total tendency well to the east of the MJO (Figure 7), because vertical advection plays an important role in increasing moist enthalpy there (Figure 6).
Kelvin and Rossby Wave Contributions
We now consider the individual contributions of Kelvin and Rossby waves to the moisture and moist enthalpy budgets of the LAM MJO. Figure 8a shows the advective tendency of moisture due to the Kelvin wave circulation during the developing stage. Most of the MJO's rainfall perturbation can be explained by vertical advection of moisture associated with the Kelvin wave, which occurs where low-level zonal winds converge (Figure 8a). There is also some rainfall associated with vertical advection of moisture near the eastern edge of low-level westerly wind perturbations of the Rossby wave circulation (Figure 8b). The Rossby wave also creates a positive moisture tendency where there is flow away from the equator ahead of the MJO's convective center, which is largely due to horizontal advection. During the mature stage, the advective moistening due to vertical motion within the MJO's precipitation region increases for both the Rossby and Kelvin wave components (Figure 8c,d). Moistening ahead of the convective center where there is off-equatorial flow continues, and drying intensifies behind the convective center, where winds have an equatorward and westerly component. During the dissipating stage, moistening due to vertical motion remains strong for the Rossby wave but begins to diminish for the Kelvin wave. Drying associated with equatorward flow west of the MJO in the Rossby wave intensifies. Figure 9 partitions the advective moisture tendency into components associated with vertical advection [−ω∂q/∂p] (Figure 9a,c,e) and horizontal advection [−u∂q/∂x − v∂q/∂y] (Figure 9b,d,f). For all stages, much of the off-equatorial moistening ahead of the MJO precipitation region and drying behind it is associated with horizontal advection (Figure 9b,d,f), which is primarily contributed by the Rossby wave (Figure 8b,d,f). We now consider the spatial distribution of the advective moist enthalpy tendency in the LAM MJO (Figure 10). This field includes the first four terms on the right-hand side of Equation (2). For each stage, there is an off-equatorial increase in moist enthalpy to the east of the precipitation region associated with off-equatorial flow (Figure 10a–c). The increase is greatest in the developing stage (Figure 10a), is still strong in the mature stage (Figure 10b), and is somewhat weaker in the dissipating stage (Figure 10c). During all stages there is a decrease in moist enthalpy on the western side of the MJO's precipitation region, which is greater in the mature and dissipating stages. The reduction in moist enthalpy peaks on the northwest flank of the precipitation region in the developing stage, and on the southwest flank during the mature and dissipating stages. The general pattern of the moist enthalpy tendency in Figure 10 can be understood in terms of the differing impacts of vertical [−ω∂h/∂p + αω] and horizontal [−u∂h/∂x − v∂h/∂y] advection. Vertical advective moistening is accompanied by an export of heat that has a greater impact on the column-integrated moist enthalpy than the moistening (e.g., Figure 6), so that upward motion causes a net reduction in moist enthalpy.
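The vertical/horizontal partition plotted in Figure 9 follows from straightforward finite-difference operations; below is a minimal sketch of the moisture partition (the grid layout, variable names, and layer-weighted column integral are our assumptions, not the authors' code):

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

# Partition the column-integrated advective moisture tendency into
# vertical [-omega dq/dp] and horizontal [-u dq/dx - v dq/dy] parts.
# q (kg/kg), u, v (m/s), omega (Pa/s) on a (lev, lat, lon) grid;
# p is the 1-D pressure coordinate (Pa), assumed monotonic.
def advective_moisture_tendencies(q, u, v, omega, p, dx, dy):
    dqdp = np.gradient(q, p, axis=0)
    dqdx = np.gradient(q, dx, axis=2)
    dqdy = np.gradient(q, dy, axis=1)
    vert = -omega * dqdp
    horiz = -u * dqdx - v * dqdy
    # Mass-weighted column integral: (1/g) * sum over pressure layers.
    dp = np.gradient(p)
    col_vert = np.sum(vert * dp[:, None, None], axis=0) / G
    col_horiz = np.sum(horiz * dp[:, None, None], axis=0) / G
    return col_vert, col_horiz  # kg/m^2/s; multiply by 86400 for mm/day
```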
In contrast, moistening associated with horizontal advection is not counterbalanced in this way (i.e., horizontal advection of temperature has a minor impact on column-integrated moist enthalpy). Moist enthalpy increases ahead of the precipitation region, where the Rossby wave circulation creates a positive advective tendency (Figures 8–10). Moist enthalpy decreases on the western side of the precipitation region, where the Kelvin wave contributes upward motion (Figures 6, 8 and 9) and the Rossby wave contributes drying owing to horizontal advection (Figure 8). In Figure 11, we break down the individual contributions of Kelvin and Rossby waves to the moist enthalpy budget. During all stages, the Kelvin wave reduces moist enthalpy on the western side of the precipitation region (Figure 11a,c,e). During the developing and mature stages, it also increases moist enthalpy in a small area on the southeastern side of the precipitation region, where it contributes downward motion. During all stages, the Rossby wave increases moist enthalpy on the eastern side of the precipitation region and decreases moist enthalpy on the western side, with the greatest impact in off-equatorial regions (Figure 11b,d,f). Note that while both the Rossby and Kelvin waves reduce moist enthalpy on the western side of the precipitation region (Figure 11), the mechanisms are very different: the horizontal circulation of the Rossby wave advects dry air (Figure 8b,d,f), whereas upward motion associated with the Kelvin wave exports heat even as it brings in moisture (Figures 6 and 8a,c,e).
So far, we have examined how Kelvin and Rossby wave circulations contribute to the moisture and moist enthalpy budgets of the LAM MJO through advective tendencies of moisture and heat. They also change column-integrated moist enthalpy by altering surface fluxes. In Figure 12, we show how each wave type changes the surface wind speed when added to the basic state winds, which illustrates where surface fluxes are enhanced or reduced by that wave. During the developing stage, the Kelvin wave reduces the surface wind speed near the equator on the eastern edge of the precipitation region, where easterly wind perturbations overlap westerly basic state flow (Figure 12a). Over almost half of the world opposite the MJO, westerly Kelvin wave wind perturbations reduce the surface wind speed, because they occur in a region with basic state easterlies. Note that this happens in an area where the troposphere is cool owing to a circumnavigating Kelvin wave (Figure 3a), and that the resulting surface flux perturbation reinforces this wave. Rossby wave wind perturbations enhance the surface wind speed, and thereby increase surface fluxes of heat and moisture, near the center of the MJO precipitation region (Figure 12b), and also off the equator just east of the convection, and on the equator more than 90 degrees east of the MJO. This is because there is a narrow band of basic state westerlies that extends into the western Pacific near the equator in the LAM, which is surrounded by basic state easterlies off of the equator (e.g., [49]). During the mature stage, when the MJO precipitation region lies on the eastern edge of these basic state westerlies, the Kelvin wave circulation enhances the surface wind speed in a broad region to the east of the MJO (Figure 12c). This region of surface flux enhancement overlaps the warm-phase Kelvin wave extending eastward from the MJO (Figure 3c), and therefore reinforces this wave. Rossby wave wind perturbations continue to enhance the surface wind speed near the center of the MJO, and off of the equator to the east of the MJO (Figure 12d). In the dissipating stage, the Kelvin wave wind perturbation enhances surface wind speed almost halfway around the world to the east of the MJO (Figure 12e), underneath a broad warm-phase Kelvin wave (Figure 3e). At this time, the Rossby wave wind perturbation only enhances surface wind speed in a small area near the western edge of the precipitation region, and reduces surface wind speed in a much larger area (Figure 12f). We conclude that Kelvin wave surface wind perturbations reinforce circumnavigating Kelvin waves throughout the MJO life cycle, and Rossby wave wind perturbations enhance surface fluxes within and to the east of the precipitation region in the developing and mature stages of the MJO.
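The diagnostic in Figure 12 rests on a simple idea: a wave enhances bulk surface fluxes where it increases the total wind speed over the basic state, and suppresses them where it decreases it. A sketch of that comparison follows (the construction and names are ours, not the paper's code):

```python
import numpy as np

# A wave's effect on surface fluxes scales with how it changes total wind
# speed, |V_basic + V_wave| - |V_basic|, since bulk fluxes grow with speed.
def wind_speed_change(u_b, v_b, u_w, v_w):
    speed_basic = np.hypot(u_b, v_b)
    speed_total = np.hypot(u_b + u_w, v_b + v_w)
    return speed_total - speed_basic  # >0: fluxes enhanced; <0: reduced

# An easterly perturbation (-3 m/s) over basic westerlies (5 m/s) lowers
# the speed and reduces fluxes; over basic easterlies it enhances them.
print(wind_speed_change(5.0, 0.0, -3.0, 0.0))   # -3.0
print(wind_speed_change(-5.0, 0.0, -3.0, 0.0))  # +3.0
```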
Discussion
In this study, we isolate contributions of Kelvin and Rossby waves to the column-integrated moisture and moist enthalpy budgets of the Madden-Julian Oscillation (MJO) in the Lagrangian Atmospheric Model (LAM). The LAM MJO resembles the observed MJO in vertical structure, horizontal structure, and life cycle (e.g., Figures 1-3). Our analysis reveals differing roles for Kelvin and Rossby waves in the mechanisms of the MJO. While vertical motion associated with the Kelvin wave accounts for most of the MJO's rainfall perturbation, this circulation exports a large amount of heat, so that it has a net negative impact on column-integrated moist enthalpy (Figures 6, 8, 9 and 11). In contrast, the Rossby wave circulation accumulates less moisture, but much of the moisture it does bring in arrives through horizontal advection with little heat export, so it has a net positive impact on moist enthalpy in the developing and mature stages of the MJO (Figures 8, 9 and 11). The Kelvin wave circulation alters surface wind speed outside of the MJO's convectively active region in a manner that reinforces both positive- and negative-phase circumnavigating Kelvin waves (Figure 12). Rossby wave wind perturbations enhance surface fluxes at the center of the MJO's precipitation region, as well as ahead of the MJO, during the developing and mature stages (Figure 12).
Our analysis reveals several other aspects of the mechanism of the LAM MJO. In particular, neither radiation nor surface fluxes can account for the rapid build-up of moist enthalpy to the east of the LAM MJO convective center, nor the reduction in moist enthalpy to the west of the precipitation region, which are primarily achieved via horizontal advection by the Rossby wave (Figures 8-11). During the developing stage, the greatest enthalpy increase occurs in off-equatorial anticyclonic flow near the eastern edge of the precipitation region (Figure 11b), which wraps around negative temperature perturbations (Figure 4a). By the mature stage, the greatest enthalpy decrease occurs in off-equatorial cyclonic flow on the western edge of the precipitation region (Figure 11f), which wraps around positive temperature perturbations (Figure 4e). These flow features have a similar positioning relative to the precipitation region in the observed MJO (Figure 4b,f), which suggests that this portion of the MJO mechanism may be the same in nature and in the LAM. The Rossby gyres deviate from predictions of linear theory [41] in that they are centered 25-30 degrees off the equator, and they move slowly eastward along with the MJO convective center, which is presumably a consequence of the non-resting basic state [42,81,82]. Many other studies have noted the important role played by these Rossby gyres in the moisture and/or moist static energy budgets of the MJO (e.g., [45-47,80,82,83]).
The results in this paper also help to tie together two seemingly contradictory theories of the MJO. Some scientists suggest that perturbation easterlies enhance surface fluxes in basic state easterlies to maintain the MJO circulation (e.g., [26]). Others believe that perturbation westerlies in basic state westerlies enhance surface fluxes to destabilize the MJO (e.g., [28]). Here, we show that both theories are correct for different components of the MJO's circulation. Perturbation westerlies associated with the Rossby wave circulation enhance surface fluxes near the center of the MJO's precipitation region during the developing and mature phases of the convective envelope (Figure 12b,d). Perturbation easterlies associated with both Kelvin and Rossby wave circulations enhance surface fluxes to the east of the convective envelope, aiding in the build-up of moist enthalpy there (Figure 12b-f). Finally, both perturbation easterlies and westerlies help to maintain the different phases of the circumnavigating Kelvin wave (Figure 12a,c,e; see also [8]).
The results presented here also help to explain why the LAM predicts substantial strengthening of the MJO with global warming [42,64], whereas some other climate models do not [65]. Several of the key processes in the LAM MJO either directly or indirectly depend on the magnitude of surface fluxes of moisture, which could increase substantially with global warming owing to the nonlinear nature of the Clausius–Clapeyron equation (e.g., [64]). First, the LAM MJO is often triggered by the arrival of a cool-phase circumnavigating Kelvin wave (e.g., [8,42]). In coupled LAM simulations, the amplitude of this component of the MJO increases substantially with ocean warming [66], possibly due to its coupling with surface fluxes. Second, owing to stronger surface fluxes, the meridional gradient of moisture increases with ocean warming in the LAM, and this could cause a more rapid build-up of moisture on the eastern edge of the MJO owing to horizontal advection. Finally, surface fluxes of moisture associated with Rossby wave westerlies near the center of the MJO could increase with ocean warming. It is quite possible that in other climate models, radiation or frictional moisture convergence plays a larger role in the mechanism of the MJO, and these processes may not change as much with ocean warming as the key processes in the LAM MJO.
Taken as a whole, the results in this paper and its predecessors (Haertel et al. [8], Haertel [42,64,66]) point out two key deficiencies in theoretical interpretations of the MJO, which typically use a dynamical system linearized about a basic state of rest with high damping, yielding a Matsuno-Gill (MG)-type solution that is a steady-state response to a heat source. First, such theories fail to account for the circumnavigating Kelvin wave component of the MJO's circulation. Haertel [42] showed that the cool-phase Kelvin wave lying to the west of the MJO's precipitation region in the developing stage (Figure 3a,b) is generated by the suppressed phase of a preceding MJO, and that it propagates around the world. Here, we show that much of the precipitation in the developing MJO is due to this wave (Figures 6a, 8a and 9a). Note that this part of the Kelvin wave circulation is completely absent in an MG solution of a steady-state response to positive heating. Moreover, the results presented here suggest that in a warming climate, surface fluxes will enhance the circumnavigating component of the MJO's circulation (Figure 12), possibly leading to MJO intensification with time [66]. The second key deficiency of MJO theories that use a system of equations linearized about a basic state of rest is that they fail to properly model the Rossby wave component of the MJO's circulation. Not only does a simple linear dynamical system place the center of circulation of Rossby gyres too close to the equator, but it also causes the Rossby wave to propagate westward. Haertel [42] showed that the linear response to an MJO-like heating produces an elongated Rossby gyre growing westward from the heat source (e.g., Figure 6f,i from [42]) that differs substantially from the observed Rossby wave component of the MJO's circulation (Figure 4). While using high damping makes the Rossby gyres more compact, it misses the point that in nature, the gyres extend a long way off of the equator into regions with westerlies, and they move eastward alongside the convective envelope with time (Figure 4; note that the Rossby gyres essentially stay fixed in the MJO's frame of reference, which moves eastward at 5 m/s). The reason the MJO's precipitation envelope moves eastward along with the Rossby gyres is plain from the results presented in this study: Rossby wave circulations moisten ahead of the convective envelope and dry while they are part of it (Figures 7-11), and they also enhance surface fluxes at the center of the convective envelope (Figure 12). While recent theoretical interpretations of the MJO are making progress in that they include non-linear moisture advection [50], MJO theories would also benefit from using a non-resting basic state for the dynamics (i.e., height and wind terms), which would allow Rossby gyres to move eastward as they do in nature.
Another point worth mentioning is that, in contrast with several leading MJO theories, none of radiation, negative gross moist stability, or frictional moisture convergence appears to be of first-order importance to the LAM MJO. Radiation has a tiny impact on the moist enthalpy budget compared to the other terms (red line in Figure 5); upward motion reduces moist enthalpy in the MJO's convective envelope (green line in Figure 6); and moist enthalpy advection is dominated by the deep convective circulation (compare black and dashed lines in Figure 5). Apparently, the instability of the LAM MJO is largely driven by surface fluxes, both from the Rossby wave westerlies near the center of convection and from the fluxes that enhance the circumnavigating Kelvin wave (Figure 12), and the key factor for eastward propagation is not frictional moisture convergence, but instead, the eastward movement of the Rossby gyres (Figure 4a,c,e), which mimics the behavior of the observed Rossby gyres (Figure 4b,d,f).
Conclusions
In the Lagrangian Atmospheric Model, the Madden-Julian Oscillation is driven primarily by horizontal advection of moisture and wind-enhanced evaporation, with only a minor contribution from radiation. Horizontal advection of moisture by Rossby waves builds up moist enthalpy east of the convective center, and reduces moist enthalpy following the heaviest rainfall. Vertical motion associated with the Kelvin wave contributes most of the rainfall, but this circulation reduces moist enthalpy due to a strong export of heat. Westerly wind perturbations associated with the Rossby wave enhance surface fluxes in the center of the MJO, and both easterly and westerly wind perturbations in circumnavigating Kelvin waves alter surface fluxes in such a way that helps to maintain these waves. Because the LAM reproduces the observed life cycle of Kelvin and Rossby wave circulations in the MJO (Figures 1-3), there is a good chance that at least some of the key processes in the LAM MJO have a similar role in nature. This work also highlights two limitations of many theoretical interpretations of the MJO: (1) they fail to account for the circumnavigating Kelvin wave component of the MJO's circulation, which makes important contributions to MJO rainfall, and (2) using a basic state of rest distorts the low-level Rossby gyres on the western flank of the precipitation region, which are important for MJO propagation owing to horizontal moisture advection.

| 9,119.6 | 2022-08-23T00:00:00.000 | ["Environmental Science", "Physics"] |
Fast nanopore sequencing data analysis with SLOW5
Nanopore sequencing depends on the FAST5 file format, which does not allow efficient parallel analysis. Here we introduce SLOW5, an alternative format engineered for efficient parallelization and acceleration of nanopore data analysis. Using the example of DNA methylation profiling of a human genome, analysis runtime is reduced from more than two weeks to approximately 10.5 h on a typical high-performance computer. SLOW5 is approximately 25% smaller than FAST5 and delivers consistent improvements on different computer architectures.
Our initial benchmarking showed that: (1) increasing the number of CPU threads resulted in a relatively small reduction in the overall run time of a typical methylation calling job (Extended Data Fig. 1a); (2) this was due to inefficient data access (file reading) rather than inefficient data processing (Extended Data Fig. 1a-d); and (3) the underlying bottleneck was a limitation in the software library for reading HDF5 files, whereby parallel input/output (I/O) requests from multiple CPU threads are serialized, preventing efficient use of parallel CPU resources (Extended Data Fig. 1e and Supplementary Note 2).
Parallel computing enables scalable analysis of large datasets and is central to modern genomics. Unfortunately, our analysis shows that the FAST5 format suffers from an inherent inefficiency that makes the analysis of nanopore signal data prohibitively slow, even with access to advanced HPC systems (Fig. 1b). For example, with the maximum resource allocation available on Australia's National Computing Infrastructure (among the world's largest academic supercomputers; see Supplementary Table 2, HPC-Lustre), genome-wide DNA methylation profiling on a ~30× human genome dataset runs for more than 14 days. Moreover, given that the vast majority (>90%) of the overall run time is spent simply reading FAST5 files, the performance benefits of further software optimization would be small compared to the time taken for file reading.
To overcome the inherent limitations of the FAST5 format, we created SLOW5, a file format designed for efficient, scalable analysis of nanopore signal data (Fig. 1b). SLOW5 encodes all information found in FAST5 but is not dependent on the HDF5 library required to read FAST5 files. The human-readable version of SLOW5 format is a tab-separated values (TSV) file encoding metadata and time series signal data for one nanopore read per line, with global metadata stored in a file header (Table 1 and Supplementary Note 3). Parallel file access is facilitated by an accompanying binary index file that specifies the position of each read (in bytes) within the main SLOW5 file (Supplementary Note 3). SLOW5 can be encoded in human-readable ASCII format or a compact and efficient binary format, BLOW5, which is analogous to the seminal SAM/BAM format for storing sequence alignments (ref. 21). The binary format optionally supports compression with zlib and 'vbz' (Z-standard + StreamVByte) algorithms, thereby minimizing the storage footprint while permitting efficient parallel access (Methods).
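To illustrate the design (this is our simplified sketch, not the actual slow5lib/pyslow5 API), the snippet below builds a read-ID to (offset, length) index over a SLOW5 ASCII file and fetches a single record by seeking directly to its byte offset:

```python
# Simplified illustration of SLOW5's random-access design: an index maps
# each read ID to the byte offset and length of its record, so any record
# can be fetched without scanning the file.
def build_index(slow5_path):
    index, offset = {}, 0
    with open(slow5_path, "rb") as f:
        for line in f:
            if not line.startswith((b"@", b"#")):  # skip header rows
                read_id = line.split(b"\t", 1)[0]
                index[read_id] = (offset, len(line))
            offset += len(line)
    return index

def fetch_record(f, index, read_id):
    offset, length = index[read_id]
    f.seek(offset)                                  # jump straight to the record
    return f.read(length).rstrip(b"\n").split(b"\t")
```

With such an index, each worker can fetch disjoint sets of records independently, which is what makes parallel access to a single file practical.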
BLOW5 format is smaller than FAST5 format due to simpler space allocation and reduced metadata redundancy. Comparison of equivalent files with matched compression (FAST5-zlib versus BLOW5-zlib or FAST5-vbz versus BLOW5-vbz) revealed space savings that ranged from 18% to 69%, depending on the dataset (Supplementary Table 3). The largest savings were observed for datasets with short read lengths, and this effect was independent of compression type (Extended Data Fig. 2a,b). On a ~30× human genome dataset, BLOW5 was approximately 25% smaller (Fig. 1c), equating to a reduction of ~300 GB.
To determine the performance benefits of SLOW5, we first measured data access using a small human DNA sequencing dataset of ~500,000 reads (Supplementary Table 1) on two different HPC systems (HPC-HDD and HPC-Lustre; Supplementary Table 2). The rate of SLOW5 data access (reads per second) was faster than FAST5 across the board and increased with the use of additional CPU threads, whereas FAST5 access was largely unchanged (Fig. 1d). This trend, which reflects the capacity of SLOW5 to be efficiently accessed by multiple CPU threads in parallel, was observed for SLOW5, BLOW5 and compressed BLOW5 format, with the latter exhibiting the most efficient data access (Fig. 1d). As a result, we observed substantial improvements in data access rates when using many CPUs on both HPC systems. Using 48 CPU threads on the HPC-Lustre system, ~7 h were required to read this small dataset in FAST5 format, compared to just ~13 min in compressed BLOW5 (~32-fold improvement) (Fig. 1d).
This improvement in data access manifested in performance gains during DNA methylation profiling. When using SLOW5 input, the Nanopolish/f5c runtime was reduced in proportion to the number of CPUs available (Fig. 1e). This is indicative of efficient parallel computation and was not observed when using FAST5 (Fig. 1e). As a result, substantial improvements were observed when using many CPUs, with a maximum ~15-fold reduction in runtime with 48 CPUs on the HPC-Lustre system (Fig. 1e). The improvement is the result of efficient data access, with no difference observed in data processing among the different file formats (Extended Data Fig. 3a,b). Whereas data access was the major bottleneck during FAST5 analysis, it constituted a negligible fraction of the total run time during SLOW5 analysis (Extended Data Fig. 3c,d). Put simply, this means that overall performance is dictated by the efficiency of the program rather than the time taken to read the input data, thereby enabling optimization through further engineering. For example, using GPU acceleration available in f5c (ref. 20) with compressed BLOW5 input, we ran methylation profiling on a ~30× human genome in ~10.5 h with 48 threads (>30-fold improvement compared to standard analysis with FAST5) (Supplementary Table 2).
Although the SLOW5 format is designed for scalable analysis on HPC systems, we reasoned that improved data access would be beneficial on almost any computer. To test this, we benchmarked DNA methylation profiling, as above, on a range of architectures (Supplementary Table 2). In all cases, the time consumed by data access was reduced, leading to improvements in overall execution time (Fig. 1f). As expected, improvements were greatest on systems with larger numbers of CPUs, such as a cloud-based virtual machine on Amazon AWS (~7-fold improvement at 32 CPU threads). However, benefits were observed even on miniature devices for portable computing, such as an Nvidia Xavier embedded module (~60% improvement) (Fig. 1f). In summary, SLOW5 delivered performance improvements during methylation profiling on a diverse range of hardware.
To ensure that FAST5 to SLOW5 file conversion is not a barrier to SLOW5 adoption (given that ONT devices currently write data in FAST5 format), we implemented software (slow5tools) for efficient, parallelizable, loss-less conversion from FAST5 to SLOW5 (Methods). File conversion times are proportionally reduced with high CPU availability and are trivial compared to execution times for typical FAST5 analysis (Extended Data Fig. 4a,b). For example, conversion of a ~30× human genome dataset from FAST5 to compressed BLOW5 takes just ~3 h with 48 CPUs. We additionally implemented software for live FAST5 to SLOW5 file conversion during a sequencing run, using the internal computer on an ONT PromethION device (Extended Data Fig. 4c). This means that the user can obtain raw data in compressed BLOW5 format with effectively zero additional workflow hours required for file conversion.
The inefficiency of FAST5 data access creates delays and expenses, limiting the feasibility of ONT sequencing for many applications in research and clinical genomics. Arguably, these frictions also discourage the development of bioinformatics software that directly accesses nanopore signal data. This is in stark contrast to the simple, efficient and open-source SAM/BAM sequence alignment format, developed in 2009 (ref. 21), which was a key catalyst in the growth of genome informatics.
The SLOW5 format provides the framework for efficient, parallelizable analysis of nanopore signal data for any intended application. SLOW5 reading and writing is managed by efficient software application programming interfaces (APIs) for both the C (slow5lib) and Python (pyslow5) languages (Methods). This facilitates integration of SLOW5 into third-party software, including existing packages, by replacing the existing FAST5 API. Notably, just ~70 lines of code were required for adoption of SLOW5 by the third-party software Sigmap (ref. 22), compared to ~2,600 lines of code for FAST5 access within the same tool. This shows the simplicity of the SLOW5 API, which is fully open source and not dependent on the HDF5 library required to read FAST5. Along with the simple, intuitive structure of SLOW5 format, this will support active and open software development for nanopore data analysis.
Methods
Reading and writing SLOW5 files with slow5lib and pyslow5. Slow5lib (https://hasindu2008.github.io/slow5lib/) is implemented using the C programming language. To maximize portability, the slow5lib code follows the C99 standard with X/Open 7 POSIX 2008 extensions. Sequential access to SLOW5 ASCII files and SLOW5 binary files is performed using the getline() and fread() functions, respectively. For performing random disk accesses to SLOW5, the SLOW5 index is first loaded into a hash table in RAM. The read identifier serves as the hash table key. For a given read identifier, the file offset and the record length are obtained from this hash table, and the pread() system call is used to load the record into memory. Pread() allows multiple threads to perform I/O on the same file descriptor in parallel without any locking. Pyslow5 (https://hasindu2008.github.io/slow5lib/pyslow5_api/pyslow5.html) is a Python wrapper built on top of slow5lib (interfaced using Cython) to allow easy access to SLOW5 for Python programmers.

BLOW5 file compression. Currently, three separate compression/decompression schemes have been implemented in slow5lib, namely: (1) Z-Library (zlib, also referred to as gzip or DEFLATE), which is an established library that is available by default on almost all systems; (2) Zstandard (zstd), which is a recent, open-source compression algorithm developed by Facebook; and (3) StreamVByte (svb), which is a recent integer compression technique that uses Google's Group Varint approach (ref. 23). Zlib and zstd are used for compressing SLOW5 records (a record is the collection of all primary and auxiliary fields of a particular read), whereas svb is used for compressing the raw signal field alone. Our implementation supports first compressing the raw signal using svb and then compressing the SLOW5 record (now containing the svb-compressed raw signal) using zlib or zstd, at the user's discretion. Each read is compressed/decompressed independently of the others by using an individual compression stream for each read. Thus, multiple reads can be accessed and decompressed in parallel using multiple threads.
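The pread()-based access pattern described above can be illustrated in Python, whose os.pread likewise reads at an absolute offset without touching the shared file position; this is our sketch of the pattern, not slow5lib code:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Lock-free parallel record access (POSIX-only): os.pread reads at an
# absolute offset without moving the shared file position, so multiple
# threads can share one file descriptor with no locking.
def read_records(path, index, read_ids, threads=8):
    fd = os.open(path, os.O_RDONLY)
    try:
        def fetch(read_id):
            offset, length = index[read_id]      # from the SLOW5 index
            return os.pread(fd, length, offset)  # thread-safe positioned read
        with ThreadPoolExecutor(max_workers=threads) as pool:
            return list(pool.map(fetch, read_ids))
    finally:
        os.close(fd)
```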
The use of zstd on top of svb compression is equivalent to ONT's custom 'vbz' scheme (https://github.com/nanoporetech/vbz_compression), which uses these two open-source algorithms for FAST5 compression. For simplicity, we have adopted the 'vbz' terminology in this paper. However, we are careful to acknowledge the developers of the underlying algorithms, and slow5lib and slow5tools treat these as separate utilities. We also note that slow5lib was designed such that any other suitable compression scheme can be easily integrated if necessary, making it future-proof.

FAST5/SLOW5 conversion with slow5tools. Slow5tools (https://github.com/hasindu2008/slow5tools) is implemented on top of slow5lib using the C/C++ programming language and follows the ISO C++ 2011 standard. Both slow5lib and slow5tools support Unix systems (Linux and MacOS), or even Windows using the Windows Subsystem for Linux. They can be compiled using the GNU C/C++ compiler (gcc/g++), the LLVM C/C++ compiler (clang/clang++) or the Intel C/C++ compiler (icc/icpc). We have thoroughly tested both slow5lib and slow5tools on older systems (for example, Ubuntu 14) as well as modern systems (Ubuntu 20). We have also tested both slow5lib and slow5tools on Intel, AMD and ARM (both 32-bit and 64-bit) processors.
The fast5toslow5 (f2s) and slow5tofast5 (s2f) modules in slow5tools were implemented using a heavy multi-process approach (described in Supplementary Note 2) to circumvent the HDF5 multi-threading bottleneck, whereas other modules in slow5tools, such as view, merge and split, were implemented using lightweight POSIX threads.
SLOW5 benchmarking experiments. The benchmarking datasets described in
Supplementary Table 1 were generated by sequencing genomic DNA from the human NA12878 reference sample on an ONT PromethION device. Unsheared DNA libraries were prepared using the ONT LSK109 ligation library prep, and two flow cells were used to generate ~30× genome coverage. All benchmarking experiments were performed using multi-FAST5 files, as generated by MinKNOW (distribution v.20.06.9, core v.4.0.3 and configuration v.4.0.13). FAST5 files were originally generated with zlib compression. For benchmarking experiments where FAST5-vbz files were used, these were created using ONT's compress_fast5 tool (v.4.0.0), which is part of the ont_fast5_api (https://github.com/nanoporetech/ont_fast5_api).
Although slow5tools is compatible with single-FAST5 format, meaning these files can be easily converted to SLOW5 format, we did not consider single-FAST5 files in the benchmarking experiments described above. Data access to single-FAST5 format is slower than multi-FAST5 format because the many file-opening and file-closing operations are computationally expensive. Similarly, single-FAST5 files are larger than multi-FAST5 files due to greater metadata redundancy. We therefore chose not to consider single-FAST5 format here, because it would exaggerate the performance benefits of SLOW5. Given that single-FAST5 format is no longer supported by ONT, this is a reasonable omission.
To perform computational benchmarking experiments at realistic workloads, we integrated slow5lib into f5c v.0.2 (CPU version), which is a restructured version of Nanopolish that enables accurate measurement of the time taken by each individual component of a methylation calling job. FAST5 benchmarks were performed using the same version of f5c, which uses HDF5 (v.1.10.4) built with the threadsafe option enabled (see 'Data availability' and 'Code availability'). POSIX threads are used in f5c to perform multi-threaded access to FAST5 and SLOW5.
To obtain FASTQ files for methylation calling, Guppy 4.0.11 was used for base-calling under the dna_r9.4.1_450bps_hac_prom base-calling profile. To obtain the BAM file for methylation calling, the reads were mapped to the hg38 reference genome (with no alternate contigs) using minimap2 v.2.17-r941 (with the -x map-ont -a --secondary=no options) and sorted using SAMtools v.1.9.
Measurements and calculations were performed as follows. (1) The overall execution time (wall-clock time) and the CPU time (user mode + kernel mode) of the program were measured by running the program through the GNU time utility in Linux. (2) The CPU utilization percentage was computed as CPU utilization (%) = CPU time/(wall-clock time × number of CPU threads) × 100. (3) Note that this CPU utilization percentage is a normalized value based on the number of CPU threads that the program was executed with. (4) Execution time for individual components (I/O operations and data processing) was measured by inserting gettimeofday() function calls into appropriate locations in the software source code. To prevent the operating system disk cache from affecting the accuracy of I/O results, we cleared the disk cache (pagecache, dentries and inodes) each time before a program execution, except on the NCI cluster, where this was not permitted because we did not have root access. On NCI, we instead implemented a custom program that writes and reads back hundreds of gigabytes of data (several times the size of RAM) to the storage after each experiment, so that the cache is filled with these mock data. Although the effect of the hardware disk controller cache (8 GB) is negligible due to the large dataset size (>100 GB), we still executed a mock program run before each experiment. (5) 'Core-hours' is calculated as the product of the number of processing threads employed and the number of hours (wall-clock time) spent on the job. This metric is inspired by the metric 'man-hours' used in the labor industry and is used in the cloud computing domain to calculate data processing costs. In an ideally parallel program, this metric remains constant with the number of cores and threads. (6) The disk usage for different files was measured using the du command.
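For concreteness, the arithmetic in items (2) and (5) can be written as a couple of one-line helpers (a sketch with our names, not the benchmarking scripts themselves):

```python
# CPU utilization is normalized by thread count; core-hours is wall-clock
# hours multiplied by the number of threads employed.
def cpu_utilization_pct(cpu_time_s, wall_time_s, n_threads):
    return 100.0 * cpu_time_s / (wall_time_s * n_threads)

def core_hours(wall_time_s, n_threads):
    return (wall_time_s / 3600.0) * n_threads

# A perfectly parallel job keeps utilization near 100% and core-hours
# constant as threads increase; serialized I/O drags utilization down.
print(cpu_utilization_pct(cpu_time_s=160.0, wall_time_s=10.0, n_threads=16))  # 100.0
print(core_hours(wall_time_s=10.0, n_threads=16))                            # ~0.044
```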
Reporting Summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Fig. 1 | SLOW5 format enables efficient parallel analysis of nanopore signal data. a, Schematic diagram illustrating the typical life cycle of nanopore data. Raw current signal data are generated on an ONT sequencing device and written in FAST5 format. Raw data are base-called into sequence reads (FASTQ/FASTA format). Downstream analysis involving both base-called reads and raw signal data is used to identify genetic variants, epigenetic modifications (for example, 5mC) and other features. b, Schematic diagram illustrating the bottleneck in ONT signal data analysis. FAST5 file reading requires the HDF5 software library, which serializes file access requests by multiple CPU threads, preventing efficient parallel analysis. SLOW5 files are not dependent on the HDF5 library and are amenable to efficient parallel analysis. A more detailed mechanistic diagram is provided in Extended Data Fig. 1e. c, Bar chart shows the relative file sizes (bytes per base) of a typical human genome sequencing dataset in ASCII SLOW5 (purple), binary BLOW5 format with no compression (orange), zlib compression (red) and vbz compression (pink), compared to FAST5 format with zlib compression (blue) and vbz compression (teal). d, Dot plots show the rate of file access (reads per second) for the above file types, as a function of CPU threads used on two HPC systems: HPC-HDD (left) or HPC-Lustre (right). e, Dot plots show the rate of execution (reads per second) for DNA methylation calling for the same file types on HPC-HDD (left) and HPC-Lustre (right). For the instance of maximum CPU threads, bar charts show the time consumed by individual workflow components: FAST5/SLOW5 data access (pink), FASTA data access (teal), BAM data access (orange) and data processing (navy). f, Bar charts show the time consumed by data access (pink) and data processing (navy) during DNA methylation calling on a range of different computer systems. Full specifications are provided in Supplementary Table 2.

Extended Data Fig. 1 | Inefficient parallel access is a major bottleneck in analysis of FAST5 files. (a) Bar chart shows the time consumed by individual components of a Nanopolish DNA methylation calling job with signal data input in FAST5 format: FAST5 data access (pink), FASTA data access (teal), BAM data access (orange) and data processing (navy). To assess the impact of multi-threading, the analysis was run with various numbers of CPU threads on the HPC-HDD system (see Supplementary Table 2). The analysis was run on a downsampled human genome sequencing dataset of 500,000 reads (see Supplementary Table 1). (b) Dot plots show the rate of file access and processing (reads/second) during the DNA methylation calling job above, as a function of CPU threads used. (c,d) Bar charts show the proportional CPU utilisation (c) and total core-hours (d) during the DNA methylation calling jobs above. The definition of core-hours is provided in the Methods section. (e) The upper schematic illustrates the architecture of a job with multi-threaded synchronous file access (I/O). The lower schematic illustrates the bottleneck created by the HDF5 library that is required to read FAST5 files. The HDF5 library serialises I/O requests, making multi-threaded analysis highly inefficient and causing the observed decline in CPU utilisation with increasing numbers of CPU threads. (f) Schematic illustrates the architecture of a multi-processing approach that was implemented to circumvent this limitation in the HDF5 library. The multi-processing approach is viable but requires challenging software engineering and is not a generalisable, long-term solution.

Extended Data Fig. 2 | Impact of read length on file sizes for FAST5 vs BLOW5 files. (a,b) Dot plots show relative file sizes (bytes/base) of various datasets (see Supplementary Table 1) as a function of mean read length (shown on a log2 scale). File sizes are shown separately for FAST5-zlib vs BLOW5-zlib (a) and FAST5-vbz vs BLOW5-vbz (b) formats. File sizes are highly variable among different FAST5 files and largely stable among BLOW5 files. Libraries that have the shortest read lengths exhibit the largest space-savings, regardless of compression type.

Extended Data Fig. 3 | Performance metrics for DNA methylation profiling with FAST5/SLOW5 files. (a,b) Dot plots show the rate of data processing (reads/second) during DNA methylation calling with ASCII SLOW5 (purple), binary BLOW5 (orange), BLOW5-zlib (red) and FAST5-zlib (blue) files as a function of CPU threads. Analysis was performed on two HPC architectures: HPC-HDD (a) or HPC-Lustre (b; see Supplementary Table 2). (c,d) Dot plots show the ratio of data-processing time relative to total execution time for the jobs above. (e,f) Bar charts show the proportional CPU utilisation (e) and total core-hours (f) during DNA methylation calling with BLOW5-zlib on the HPC-HDD system. The definition of core-hours is provided in the Methods section.
Table 1 | Example of a SLOW5 ASCII file with a single read group
A SLOW5 file contains a header (rows with '@' and '#' prefixes) that stores metadata regarding the contents of the file and the ONT experiment(s) contained within, followed by data records (rows with no prefixes) for sequencing reads, with one read per line. SLOW5 format uses tabs ('\t') and newlines ('\n') as column and row delimiters, respectively. Complete format specifications are provided in Supplementary Note 3.

| 5,000.8 | 2022-01-03T00:00:00.000 | ["Biology", "Computer Science"] |
Theoretical studies on concerted versus two-step hydrogen atom transfer reactions by non-heme Mn(IV/III)=O complexes: how important is the oxo ligand basicity in the C–H activation step?
High-valent metal–oxo complexes have been extensively studied over the years due to their intriguing properties and their abundant catalytic potential. The majority of the catalytic reactions performed by these metal–oxo complexes involve a C–H activation step, and extensive efforts over the years have been undertaken to understand the mechanistic aspects of this step. The C–H activation by metal–oxo complexes proceeds via a hydrogen atom transfer reaction, and this could happen by multiple pathways: (i) via a proton transfer followed by an electron transfer (PT-ET), (ii) via an electron transfer followed by a proton transfer (ET-PT), or (iii) via a concerted proton-coupled electron transfer (PCET) mechanism. Identifying the right mechanism is a surging topic in this area, and here, using the [MnIIIH3buea(O)]2− (1) and [MnIVH3buea(O)]− (2) species (where H3buea = tris[(N′-tert-butylureaylato)-N-ethylene]aminato) and their C–H activation reactions with dihydroanthracene (DHA), we have explored the mechanism of hydrogen atom transfer reactions. The experimental kinetic data reported earlier (T. H. Parsell, M.-Y. Yang and A. S. Borovik, J. Am. Chem. Soc., 2009, 131, 2762) suggest that the mechanisms of 1 and 2 are drastically different. By computing the transition states and reaction energies, and by analyzing the wavefunctions of the reactants and transition states, we authenticate the proposal that MnIII=O undergoes a stepwise PT-ET mechanism whereas the MnIV=O species undergoes a concerted PCET mechanism. Both species pass through a [MnIII–OH] intermediate, and the stability of this species holds the key to the difference in reactivity. The electronic origin of the difference in reactivity is traced back to the strength and basicity of the Mn–oxo bond, and the computed results are in excellent agreement with the experimental results.
Introduction
Selective C–H bond oxidative cleavage of aromatic/aliphatic hydrocarbons is one of the important synthetic transformations in enzymatic and industrial processes [15,16]. Besides, some recent reports suggest that the basicity of oxo ligands plays a vital role in the rapid reactivity of manganese(IV/V)-oxo complexes involving C–H activation [17]. The primacy of the basicity of oxo ligands was further legitimated by Borovik et al. through the C–H bond cleavage ability of monomeric Mn(III/IV)-oxo units with a tetradentate tripodal ligand ([MnIII/IVH3buea(O)]2−/−) (H3buea = tris[(N′-tert-butylureaylato)-N-ethylene]aminato) having trigonal bipyramidal geometry. Their study clearly shows that the basicity of the oxo ligands affects the kinetics of C–H bond activation [18]. The KIE experiments reveal that the kH/kD values are more than twice as large for 2 as for 1 (2.6 vs. 6.8 for 1 and 2, respectively). Furthermore, the larger pKa measured for 1 compared to 2 led to the suggestion of an anionic intermediate for 1 and a radical-based intermediate for species 2. Based on the KIE experiments and the estimated activation barriers for the two species, different mechanistic routes have been proposed for the MnIII=O and MnIV=O species. The C–H bond activation by [MnIIIH3buea(O)]2− (1) was suggested to proceed through a two-step mechanism, proton transfer followed by electron transfer, whereas in [MnIVH3buea(O)]− (2) the reaction proceeds in a single step, i.e., via a proton-coupled electron transfer (PCET) step [20-24]. Further, the mechanistic studies suggest that 1 follows the anionic mechanism, attributed to the stronger basic character of its oxo group (larger pKa) compared to species 2. Although the experimental study reveals the differences in reactivity between 1 and 2, the precise reason for the preferred choice of mechanism by these oxidants is unclear, and this aspect is important for establishing a connection between the observed reactivity of these models and that of enzymes. In this regard, we have undertaken a theoretical study based on density functional methods to specifically address the following issues. (i) Between the MnIII=O and MnIV=O species, which one is a stronger oxidant and why? (ii) What are the exact mechanisms by which 1 and 2 activate C–H bonds (PT-ET, ET-PT or PCET)? (iii) What are the electronic reasons behind the preferred choice of mechanism?
Computational details
All the calculations were carried out using the Gaussian 09 suite of programs [25]. The geometry optimizations were performed with the B3LYP functional [26,27], which has a proven track record of predicting structures and energetics accurately for such metal-mediated catalytic reactions [28-30] (see ESI for a discussion of the chosen methodology). The LACVP basis set, comprising the LanL2DZ Los Alamos effective core potential for Mn [31-34] and the 6-31G* basis set [35] for the other atoms, was employed for geometry optimization, and the optimized geometries were then used to perform single-point energy calculations with a TZVP basis set on all atoms [36,37]. Solvation energies were computed using the PCM solvation model with acetonitrile as the solvent. Frequency calculations were performed on the optimized structures at the B3LYP level to verify that they are minima on the potential energy surface (PES) and to obtain free-energy corrections. The quoted DFT energies are the B3LYP/TZVP solvation energies incorporating free-energy corrections at the B3LYP/LACVP level, unless otherwise mentioned. The transition states were characterized by a single imaginary frequency corresponding to the reaction coordinate and were verified by animating the frequency using visualization software such as Molden [38,39]. The broken-symmetry approach available in Gaussian 09 was employed to aid smooth convergence in the case of radical intermediates [40-46].
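Because the quoted energies combine a B3LYP/TZVP single-point solvation energy with a B3LYP/LACVP free-energy correction, assembling relative energies in kJ mol−1 is simple bookkeeping. The sketch below illustrates it with the standard hartree-to-kJ mol−1 conversion; the helper names and example numbers are ours, not from the paper:

```python
HARTREE_TO_KJ_MOL = 2625.4996  # 1 hartree in kJ/mol

# Combine the TZVP single-point solvation energy with the LACVP-level
# free-energy correction, then report energies relative to a reference.
def free_energy(e_tzvp_solv, g_corr_lacvp):
    """Both inputs in hartree; returns the composite energy in hartree."""
    return e_tzvp_solv + g_corr_lacvp

def relative_kj_mol(g_species, g_reference):
    """Relative energy of a species vs. a reference, in kJ/mol."""
    return (g_species - g_reference) * HARTREE_TO_KJ_MOL

# Hypothetical example: a transition state 0.02 hartree above the
# reactant complex corresponds to a barrier of ~52.5 kJ/mol.
print(relative_kj_mol(-1500.480, -1500.500))
```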
Results and discussion
Mechanism of C–H activation by MnIII=O and MnIV=O species

Here we have investigated the mechanistic aspects of the C–H bond cleavage of dihydroanthracene (DHA) by monomeric MnIII/IV–oxo complexes of the tetradentate tripodal H3buea ligand, which has anionic nitrogen donors, using density functional methods. The proposed reaction mechanism involves two different pathways (anionic and radical), which are classified based on the type of intermediate formed (Scheme 1). In the anionic pathway, the monomeric oxomanganese species [MnIII(H3buea)=O]2− (1) abstracts a proton from DHA, leading to the formation of an anionic monohydroanthracene (MHA) intermediate (⁵1intA) and a [MnIII(H3buea)–OH]− species, which further undergoes rapid electron transfer leading to the formation of [MnII(H3buea)–OH]2− and an MHA radical intermediate (⁵1intR). A second hydrogen abstraction (ts2) from the MHA then leads to the formation of anthracene and the [MnII(H3buea)–OH]2− species. In the radical pathway, species 2 ([MnIV(H3buea)=O]−) undergoes a PCET mechanism leading to the formation of an MHA radical intermediate (⁴2intR) and a [MnIII(H3buea)–OH]− species. Subsequent hydrogen abstraction from the MHA radical intermediate leads to the formation of anthracene and the [MnIII(H3buea)–OH]− species. This adapted mechanism is consistent with the experimental observations [18]. We have computed all the possible mechanistic pathways for the C–H bond activation reaction of DHA with both oxidants 1 and 2. Our B3LYP/TZVP calculations clearly show that both oxidants possess high-spin ground states (⁵1 and ⁴2), with the excited S = 1 and S = 1/2 states lying 127.8 and 81.3 kJ mol−1 higher in energy for 1 and 2, respectively (Fig. 1 and 2). The MnIII/IV=O bond in 1 and 2 is stabilized predominantly by intramolecular H-bonds between the terminal oxo ligand and the H-bond donors of the ureate nitrogen atoms. The Mn–O bond length in species 2 (1.68 Å) is found to be shorter than that in species 1 (1.73 Å) (Fig. 3 and Table S1, ESI); this shorter and stronger Mn=O bond in 2 is also reflected in the computed bond order (Wiberg bond indices of 1.16 vs. 1.32 for 1 and 2, respectively) [54,55]. In addition, the Mulliken spin density in species 2 (ρMn = 2.6, ρO = 0.26; Fig. 4 and Table S2, ESI) is substantially more delocalized onto the oxo-oxygen than in species 1 (ρMn = 3.7, ρO = 0.13; Fig. 4 and Table S2, ESI). This implies that species 2 has larger Mn–O covalent bond character, which leads to a shorter Mn–O bond. We note that the spin density on the oxygen atoms for both species is significantly smaller than in the corresponding iron-oxo species [5], and thus these complexes can very well be denoted as metal-oxo complexes rather than oxy-radical-type species. Moreover, species 1 is found to have shorter Mn–Neq bonds (average Mn–Neq bond length = 2.13 Å) and longer Mn–Nax bonds (2.21 Å). Although the same trend is visible in 2, its Mn–Nax bonds are much longer and its Mn–Neq bonds much shorter than those of 1. A shorter and stronger Mn–O bond gives rise to longer Mn–Nax bonds in the high-valent MnIV=O species [56]. The larger basicity of species 1 compared to species 2 is also reflected in the computed NPA charges (−0.82 vs. −0.63 for 1 and 2, respectively). Due to this variation in basicity, the strength of the three weak N–H⋯O(Mn) hydrogen-bonding interactions also varies significantly between 1 and 2.
The three weak N-H⋯O(Mn) hydrogen bond lengths are equal (1.79 Å) in 1 while they differ drastically in 2 (see Fig. 3).On comparing with species 1, the weak N-H⋯O(Mn) bonds are elongated by 0.025-0.035Å in 2 while the N-H distances are proportionately shortened.The longer and weaker N-H⋯O(Mn) distances in 2 are also reflected in the O LP →BD * N-H donor-acceptor interactions in the NBO analysis.For 1 the stabilization energies are found to be in the range of 32.7-22.1 kJ mol −1 while for 2 they are in the range of 28.5-8.7 kJ mol −1 .These electronic and structural differences between 1 and 2 hold the key to the difference in the reactivity between these two species. 57ased on the computed reaction profile diagram (Fig. 1 and 2), it is clear that the C-H activation of DHA by these two oxidants occurs on the high spin surface.The high spin S = 2 surface in 1 is well separated from other spin states which suggests that the excited spin states are unlikely to participate in the reaction mechanism.The barrier for the proton transfer ( 5 1 ts1 ) from DHA is estimated to be 54.2 kJ mol −1 and the proton transfer leads to the formation of an anionic intermediate ( 5 1 intA ).This step is estimated to be endothermic (+52.4 kJ mol −1 ) in nature.In the next step this intermediate undergoes a rapid electron transfer from the anionic DHA to the metal complex and this leads to a radical intermediate ( 5 1 intR ).This radical intermediate is stabilized by 41.3 kJ mol −1 compared to the 5 1 intA intermediate suggesting that the electron transfer is feasible and the reaction proceeds further from the 5 1 intR species.For species 2 the initial hydrogen abstraction ( 4 2 ts1 ) from DHA requires a barrier height of 63.2 kJ mol −1 which is slightly higher than that computed for species 1.In contrast to the mechanistic steps described for 1, in 2 the formation of the radical intermediate 4 2 intR is thermodynamically more favored compared to the anionic intermediate 4 2 intA (energy margin of 162.8 kJ mol −1 ).Also the nature of the transition state advocates the direction of the course of reaction that the species go forth.Comparing just the Mn-O bond distance, it is clear that species 5 1 ts1 is reactant like while the 4 2 ts1 is product like (see Table S1 of ESI ‡).In case of 4 2 ts1 , significant spin densities are detected at the DHA moiety (group spin densities on the DHA ρ DHA = −0.25)suggesting a development of a partial radical character at the transition state (see Fig. 4).The computed energetics and the electronic structure is consistent with earlier theoretical reports on a Mn IV vO complexes. 58A similar radicaloid character for the transition state has also been reported for the non-heme Fe IV vO species. 59In 5 1 ts1 the DHA moiety is found to have negligible spin densities (ρ DHA = 0.01) but a large accumulated negative charge and this suggests a preference for an anionic type intermediate in this case.For species 1, although the 5 1 intA and 5 1 intA species are not separated by an energy barrier, these two species are related by one electron being in DHA or in the Mn III vO complex.The nature of the transition state clearly reveals that it is converging to an
The nature of the transition state clearly reveals that it converges to an anionic intermediate and does not possess any radical character. Although the anionic and radical intermediates are not separated by a barrier on the potential energy surface, the electron transfer from MHA⁻ to MHA• might have some barrier associated with the deformation of the MHA molecule; besides, the potential energy surfaces for such reactions are likely to be multidimensional in nature and thus warrant a valence-bond-theory-based approach [60] to further probe the nature of the transition state. As far as the other spin states of the metal-oxo complexes are concerned, those states are very high in energy compared to the ground state. Although their structure and bonding are not described here, a scenario similar to the ground state is also witnessed for the other excited spin states of both 1 and 2 (see ESI Table S1 and Fig. S1‡ for the optimized structures of the transition states). Put together, all these data clearly indicate that 1 and 2 proceed via different routes, with the former favoring an anionic intermediate while for the latter the reaction is routed through a radical intermediate.
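To make the two routes easier to compare side by side, the stationary-point energies quoted above can be tabulated directly. The following minimal Python sketch only collects the values given in the text (kJ mol⁻¹, separated reactants set to zero); the −41.7 kJ mol⁻¹ entry for the PCET step of 2 is the step energy quoted in the thermodynamic discussion below.

```python
# Relative energies (kJ/mol) of the stationary points quoted in the text,
# with the separated reactants of each pathway taken as the zero of energy.

# Species 1, [Mn(III)(H3buea)=O]2-: anionic (PT-ET) route on the S = 2 surface.
profile_1 = {
    "5_1 + DHA":          0.0,
    "5_1_ts1 (PT)":       54.2,         # proton-transfer barrier
    "5_1_intA (anionic)": 52.4,         # endothermic PT step
    "5_1_intR (radical)": 52.4 - 41.3,  # ET stabilizes intA by 41.3
}

# Species 2, [Mn(IV)(H3buea)=O]-: radical (PCET) route on the S = 3/2 surface.
profile_2 = {
    "4_2 + DHA":          0.0,
    "4_2_ts1 (PCET)":     63.2,         # hydrogen-abstraction barrier
    "4_2_intR (radical)": -41.7,        # exothermic PCET step (see below)
}

for name, profile in (("species 1", profile_1), ("species 2", profile_2)):
    print(f"--- {name} ---")
    for point, energy in profile.items():
        print(f"  {point:22s} {energy:+7.1f} kJ/mol")
```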
Electronic structure origin of the difference in reactivity
The energy decomposition analysis (EDA) was performed for 1 and 2 by treating H₃buea, Mn and O as separate fragments (H₃buea + Mn + O, see Table 1). The EDA results for 1 and 2 show that ΔE_int for Mn(IV)=O is 1.6 times larger than that of Mn(III)=O, suggesting that species 2 is very stable compared to species 1 and accounting for the fact that 1 is more reactive than 2. A significant contribution to the interaction energy for 2 arises from orbital stabilization, and this indicates a stronger Mn-L bond, particularly a stronger Mn-oxo bond, for 2 compared to 1. The trigonal-bipyramidal structures observed in 1 and 2 and their frontier orbitals are entirely different from those of octahedral complexes of high-valent manganese- or iron-oxo species. [61] Fig. 5 and 6 show the FMOs of ⁵1 and ⁴2 and their respective first transition states. In ⁵1, the four metal electrons occupy the π* and δ orbitals: the two π* orbitals (π*_xz and π*_yz) are degenerate, and the two δ orbitals (δ_xy and δ_x²−y²) are likewise degenerate, lying 0.35 eV higher in energy with respect to the π* orbitals. The empty σ_z² orbital is found to lie much higher, at 4.19 eV. In the transition state ⁵1_ts1, the degeneracy of the π* and δ orbitals is slightly perturbed and the δ-type orbitals are further destabilized. In the case of ⁴2, the three unpaired electrons occupy the π*_xz¹ π*_yz¹ δ_xy¹ δ⁰_x²−y² σ⁰_z² orbitals. Unlike in 1, here the degeneracy of the two π* orbitals (π*_xz and π*_yz) is lifted and the σ_z² orbital is significantly destabilized. In the first transition state ⁴2_ts1, one of the C-H bond electrons is found to be transferred to the δ_x²−y² orbital (vide infra), and this reduces the energy gap between the π*_xz and π*_yz orbitals (see Scheme 2). We note that an earlier report on an octahedral Mn(IV)=O species suggested such a transfer to the π*_xz orbital, while here our calculations indicate that the electron is transferred to the δ-type δ_x²−y² orbital. [59] This orbital has significant density on the oxygen, and the orbital is polarized to accept the electron (see Fig. 6). Such an electron transfer does not occur in ⁵1_ts1 for two reasons: (i) electron transfer to σ*_z² demands a very high energy, as this orbital is significantly destabilized; (ii) the other option, transferring the spin-down (β) electron of the σ_C-H bond to the singly occupied orbitals, would lift their degeneracy, and this again would demand significant energy (see Scheme 2). These two factors in fact favour electron transfer in 2, where the δ_x²−y² orbital is not too high-lying (see Scheme 2) and the other singly occupied orbitals are non-degenerate; thus the Mn(IV)=O readily accepts an electron from the DHA, leading to a transition state with Mn(III)-OH-like character.
Thermodynamic rationale for the difference in reactivity
To understand the origin of the difference in mechanism, and to probe the various possible pathways by which the hydrogen atom transfer can take place (PT-ET, ET-PT and PCET), a thermodynamic formation-energy cycle for Mn(III)=O and Mn(IV)=O has been constructed [62] and is shown in Fig. 7. A quick look at the thermodynamic energies suggests a favourable PCET for both species; however, if the kinetic results discussed above are assimilated, one can easily differentiate the subtle mechanistic change between the two species. The first transition state for the Mn(III)=O species is clearly a PT transition state, as discussed above, and thermodynamics suggests that this step is endothermic by 43.1 kJ mol⁻¹; the subsequent ET step more than compensates for this energy loss, leading to the final [Mn(II)-OH] product. Given the barrier height of 52.5 kJ mol⁻¹ for the first transition state, a metastable [Mn(III)-OH]⁺ species is certainly feasible (note that the characterized transition state has a strong Mn(III)-OH character, see below), and thus one can propose that a PT-ET mechanism is operational for this species. On the other hand, the ET-PT mechanism is unlikely, as the ET step is exceedingly endothermic compared to the transition-state barrier; thus the transition state is unlikely to converge to a metastable [Mn(II)=O] species. The spin-density plots of Mn(III)=O, ⁵1_ts1 and the high-spin state of [Mn(III)-OH] are shown in Fig. 4. As one can see, the transition state clearly resembles a [Mn(III)-OH]-type species (spin densities of 3.776 vs. 3.783), and this affirmatively suggests that the reaction proceeds through a PT-ET mechanism here, not through a PCET or ET-PT mechanism. Unlike for 1, for species 2 both the initial PT step (for a PT-ET mechanism) and the alternative initial ET step (for an ET-PT mechanism, see Fig. 7) are exceedingly endothermic and much higher than the barrier computed for ⁴2_ts1 (63.2 kJ mol⁻¹).
The spin-density plots for Mn(IV)=O, ⁴2_ts1 and Mn(III)-OH shown in Fig. 4 clearly suggest that the transition state here also resembles a [Mn(III)-OH]-type species (spin-density values of 2.60 vs. 3.40 in 2 and ⁴2_ts1, respectively) and not [Mn(IV)-OH]⁺, and this clearly directs the discussion towards a concerted PCET mechanism for species 2. This step is an energetically favourable process (−41.7 kJ mol⁻¹). The reason for the difference between 1 and 2 is related to the stability of the electronically preferred [Mn(III)-OH]-type species, independent of the starting oxidation state of the oxo species. As the Mn-O bond is stronger and less basic in Mn(IV)=O compared to Mn(III)=O, a protonated metastable intermediate of Mn(IV)=O is extremely unstable, and this leads to a switch in the mechanism. Thus the thermodynamic cycle constructed here essentially verifies the mechanistic insights discussed earlier and provides confidence in the established mechanisms for species 1 and 2.
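The bookkeeping behind the square scheme of Fig. 7 can be sketched as below. Only the PT step energy for species 1 (+43.1 kJ mol⁻¹) and the ⁵1_ts1 barrier are quoted above; the two ET entries in the example call are illustrative placeholders rather than computed values, and the feasibility test is a simplified rendering of the argument in the text.

```python
# Square-scheme sketch: the overall H-atom transfer free energy is path
# independent, dG(HAT) = dG(PT) + dG(ET|after PT) = dG(ET) + dG(PT|after ET).

def square_scheme(dG_PT, dG_ET_after_PT, dG_ET_first, barrier_ts1):
    dG_HAT = dG_PT + dG_ET_after_PT          # overall, path independent
    first_steps = {
        "PT-ET": dG_PT,                      # metastable [Mn-OH] formed first
        "ET-PT": dG_ET_first,                # reduced [Mn=O] formed first
        "PCET":  dG_HAT,                     # concerted, no intermediate
    }
    for route, dG_first in first_steps.items():
        feasible = dG_first <= barrier_ts1
        print(f"{route:6s} first step {dG_first:+7.1f} kJ/mol ->",
              "feasible" if feasible else "ruled out (lies above ts1)")

# Species 1: dG_PT = +43.1 kJ/mol is quoted; the ET values are placeholders.
square_scheme(dG_PT=43.1, dG_ET_after_PT=-60.0, dG_ET_first=150.0,
              barrier_ts1=52.5)
```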
Second HAT reaction
The second hydrogen abstraction for 1 takes place (⁵1_ts2) with a barrier of 21.3 kJ mol⁻¹ and leads to the formation of anthracene. For 2, the barrier for the second hydrogen abstraction (⁴2_ts2) is computed to be 43.8 kJ mol⁻¹. In accord with the first step, the second hydrogen atom transfer is also found to proceed via the same mechanism, i.e. species 1 follows a PT-ET mechanism while 2 follows a PCET path. This is again easily visible from the spin-density plots computed for the ⁵1_ts2 and ⁴2_ts2 species (see Fig. 4), where both species again resemble a [Mn(III)-OH] complex, as discussed for the first hydrogen atom transfer reaction.
Correlation to experiments
Comparing the barrier heights of the transition states ts1 and ts2, it is clear that for both species the first transition state is rate-limiting. The computed barrier heights of 54.2 (for 1) and 63.2 kJ mol⁻¹ (for 2) for the rate-determining step correlate well with the experimental kinetic data (75.3 and 79.5 kJ mol⁻¹ for species 1 and 2, respectively), although the absolute values are slightly underestimated. [18] Besides, the computed imaginary frequencies corresponding to the transition states ⁵1_ts1 and ⁵1_ts2 are 1280.9i and 1191.7i, respectively, while those for ⁴2_ts1 and ⁴2_ts2 are 2032i and 1668.2i, respectively. Larger imaginary frequencies are obtained for species 2 compared to species 1, and this suggests a larger tunneling contribution for 2 and a narrower, sharper reaction barrier than for 1. This is clearly envisaged from the larger kinetic isotope effect (KIE) value (6.8) obtained for species 2.
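Assuming the imaginary frequencies above are in cm⁻¹ (the units are not stated) and taking T = 298.15 K, the quoted barriers and frequencies can be converted into transition-state-theory rate constants and a crude Wigner tunneling correction. This is an illustrative back-of-the-envelope estimate, not part of the reported calculations:

```python
import math

R  = 8.314462618     # gas constant, J/(mol K)
kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J s
c  = 2.99792458e10   # speed of light, cm/s

def eyring_rate(dG_kJmol, T=298.15):
    """Transition-state-theory rate constant from an activation free energy."""
    return (kB * T / h) * math.exp(-dG_kJmol * 1e3 / (R * T))

def wigner_kappa(nu_cm, T=298.15):
    """Wigner tunneling correction from an imaginary frequency in cm^-1."""
    u = h * c * nu_cm / (kB * T)
    return 1.0 + u * u / 24.0

for label, dG, nu in (("species 1", 54.2, 1280.9), ("species 2", 63.2, 2032.0)):
    print(f"{label}: k(TST) = {eyring_rate(dG):.2e} s^-1, "
          f"Wigner kappa = {wigner_kappa(nu):.1f}")
```

The larger correction obtained for species 2 mirrors the larger KIE quoted above; note that the simple Wigner formula becomes only qualitative for corrections of this size.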
Conclusions
DFT calculations have been used to investigate the kinetic aspects of the C-H bond activation reactions of monomeric Mn(III/IV)-oxo units with tetradentate tripodal ligands. The initial H-abstraction on the high-spin surface is found to be the rate-determining step for both species 1 and 2. From a detailed electronic and structural analysis of the transition states, along with the computed reaction energies of the various species in the hydrogen atom transfer reaction, our calculations unequivocally suggest that the highly reactive 1 prefers a stepwise proton-transfer-followed-by-electron-transfer (PT-ET) mechanism, while the relatively less reactive 2 prefers a concerted proton-coupled electron transfer (PCET) path (see Scheme 3). This drastic difference in reactivity between 1 and 2 is mainly attributed to the strength of the Mn=O bond, its basicity, and the nature of the electron delocalization during the C-H bond activation. Interestingly, the computed transition states for the hydrogen atom transfer reactions of the Mn(III)=O and Mn(IV)=O species reveal the course of the reaction: a transition state structurally and electronically resembling [Mn(III)-OH] has been detected for both species. This leads to a PT-ET-type mechanism for 1 and a PCET mechanism for species 2. All our computed results are in excellent agreement with the experimental reports.
Although the presented results are likely to be general for Mn(III/IV)=O species with different ligand architectures, factors that directly affect the Mn-O bond covalency (such as H-bonding interactions) can lead to some differences in the predicted reactivity pattern.
To this end, here for the first time using DFT methods we have demonstrated that subtle mechanistic differences in hydrogen atom transfer reactions of such large metal-oxo complexes can be apprehended if an apt set of computational tools is employed (PES modelling, reaction-energy computation, MO/NBO analysis). This procedure will be extended to other examples in the future.
Fig. 1 Computed reaction profile of the C-H activation reaction involving DHA and species 1.
Fig. 2 Computed reaction profile of the C-H activation reaction involving DHA and species 2.
Fig. 5 Schematic MO diagrams of species 1 and its corresponding first transition state ⁵1_ts1. The energy differences given here are in eV.
Fig. 6 Schematic MO diagrams of species 2 and its corresponding first transition state ⁴2_ts1. The energy differences given here are in eV.
Fig. 7 Relative thermodynamic free energies between Mn(III/IV)=O and its hydroxo complexes.
Table 1 Summary of the EDA performed for species 1 and 2. The ΔE values are given in kJ mol⁻¹.
"Chemistry"
] |
Admissible Control for Non-Linear Singular Systems Subject to Time-Varying Delay and Actuator Saturation: An Interval Type-2 Fuzzy Approach
Applied in many fields, nonlinear systems involving delay and algebraic equations are referred to as singular systems. These systems remain challenging due to saturation constraints that affect actuators and harm their operation. Furthermore, the complexity of the problem increases when uncertainty simultaneously affects the system under consideration. To address this issue, this paper investigated a feasible control strategy for nonlinear singular systems with time-varying delay that are subject to uncertainty and actuator saturation. The IT-2 fuzzy model was adopted to describe the dynamics of the non-linear delayed systems, using lower and upper membership functions to deal with the uncertainty. Moreover, the polyhedron model was applied to characterize the saturation function. The goal of the control approach was to design a relevant IT-2 fuzzy state feedback controller with mismatched membership functions so that the closed-loop system is admissible. On the basis of an appropriate Lyapunov–Krasovskii functional, sufficient delay-dependent conditions were established, and an optimization problem was formulated in terms of linear matrix inequality constraints to optimize the attraction domain. Simulation examples are provided to verify the effectiveness of the proposed method.
Introduction
This section includes the literature review, notations, and acronyms used in the document, as well as an outline of the publication and its goals.
Literature Review
Singular systems, which are described by coupled algebraic and differential equations, are characterized by their different modes, namely finite dynamic modes, infinite nondynamic modes, and infinite dynamic modes. The infinite dynamic modes can destroy the stability and performance of the system. Thus, admissibility, which includes stability, regularity, and non-impulsiveness/causality, should be verified when dealing with this class of systems. As a consequence, the investigation of singular systems is both theoretically and practically important [1,2]. It is worth noting that time delays are common in many physical plants, and they can have a substantial negative impact on the performance and even the stability of practical systems [3][4][5][6][7]. Singular models and time-delay phenomena are general enough to enable some fundamental results. (ii) The delay property and actuator saturation for the IT-2 fuzzy singular system under consideration were simultaneously considered in this study. Moreover, compared with the results suggested in [36,41], a more realistic problem was investigated in this paper that cannot be solved by the methods in the previous references. (iii) A new Lyapunov-Krasovskii functional candidate was constructed, and the delay-range-dependent approach was adopted to derive an admissibilization criterion via LMI formulation. Furthermore, the domain of attraction of the origin can be estimated for the underlying system.
After outlining the introduction and the objectives of our study, the paper is organized as follows: Section 2 presents the model and assumptions, as well as a description of the problem under study. In Section 3, we present and discuss the main findings of the paper. Specifically, this section is dedicated to developing a new delay-dependent admissibility criterion using the IT-2 fuzzy model in (3) and selecting a suitable Lyapunov-Krasovskii functional. To further ensure the usability of this scheme, we developed an LMI criterion to establish that the closed-loop system is admissible and to optimize the attraction domain. To demonstrate the potential applications of the proposed scheme and validate its effectiveness, numerical simulations on mass-spring-damper and inverted pendulum systems are presented in Section 4. Lastly, we conclude with some remarks regarding the obtained results, as well as some suggestions for future research, in Section 5. Table 1 lists the notations and acronyms used in this study.
Table 1. List of notations and acronyms used in the paper.
R: set of the real numbers
x ∈ R^n: n-dimensional Euclidean space
X ∈ R^{n×m}: n × m real matrix
X > 0: real symmetric positive definite matrix X
‖X‖: norm of the matrix X
Xᵀ: transpose of the matrix X
sym(X): term that is induced by symmetry
r: number of if-then rules
LMI: linear matrix inequalities
IT-2: interval type-2 fuzzy model
TS: Takagi-Sugeno
Preliminaries and Problem Statement
The aim of this section is to introduce some preliminaries that facilitate the understanding of our proposal and state the problem that we are investigating.
IT-2 TS Fuzzy Model
Consider a class of non-linear singular systems that can be described by the following IT-2 TS fuzzy model:

Rule i: IF θ₁(x(t)) is M_i¹ and ⋯ and θ_s(x(t)) is M_iˢ, THEN Eẋ(t) = A_i x(t) + A_{di} x(t − d(t)) + B_i σ(u(t)),   (1)

where M_i^k is an IT-2 fuzzy set of rule i corresponding to the premise variable θ_k(x(t)), k = 1, 2, ⋯, s, s is the number of premise variables, and i ∈ S ≜ {1, 2, . . . , r}, with r the number of rules. x(t) ∈ R^n and σ(u(t)) ∈ R^m define, respectively, the state and saturated input vectors. Matrices A_i, A_{di} and B_i in model (1) are known and of appropriate dimensions. d(t) stands for the time-varying delay, and φ(t) defines the initial state for all t ∈ [−d₂, 0].
Assumptions and Resulting Model
A1: d(t) is a continuous function such that

0 ≤ d₁ ≤ d(t) ≤ d₂ and ḋ(t) ≤ d_r,

where d₁ represents the lower delay bound, d₂ stands for the upper delay bound, and d_r is the delay variation rate.
A2: The singular matrix E satisfies rank(E) = q < n.
A3: σ(u(t)) is the saturation that affects the actuator according to the following model:

σ(u_l(t)) = sign(u_l(t)) min{ū_l, |u_l(t)|},   l = 1, . . . , m.

Based on the IT-2 fuzzy approach, the firing strength of the ith rule is defined by the interval [µ_i(x(t)), µ̄_i(x(t))], where µ_i(x(t)) ≥ 0 and µ̄_i(x(t)) ≥ 0 are, respectively, the lower and upper membership functions, and ω_{M_i^k}(θ(x(t))) ≥ 0 and ω̄_{M_i^k}(θ(x(t))) ≥ 0 stand, respectively, for the lower and upper grades of membership. Therefore, the non-linear singular system can be described as

Eẋ(t) = Σ_{i=1}^{r} µ_i(x(t)) [A_i x(t) + A_{di} x(t − d(t)) + B_i σ(u(t))],   (3)

where µ_i(x(t)) denotes the grade of membership of the ith local system. Note that, by introducing weighting coefficient functions, we can represent any time-variant or time-invariant unmeasured parameters of the general non-linear system. Moreover, these functions are not necessarily known but exist and satisfy (4).
As a matter of convenience, µ i (x(t)) will be referred to as µ i in the sequel.
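As a minimal numerical sketch of assumption A3 and of the firing interval, assuming the product t-norm over the s premise variables (the t-norm is not spelled out above):

```python
import numpy as np

def sat(u, u_bar):
    """Component-wise actuator saturation of A3:
    sat(u_l) = sign(u_l) * min(|u_l|, u_bar_l)."""
    return np.sign(u) * np.minimum(np.abs(u), u_bar)

def firing_interval(lower_grades, upper_grades):
    """Firing interval [mu_i, mu_bar_i] of rule i as the product of the
    lower/upper membership grades over the premise variables."""
    return float(np.prod(lower_grades)), float(np.prod(upper_grades))

print(sat(np.array([3.5, -0.4]), np.array([2.0, 2.0])))  # -> [ 2.  -0.4]
print(firing_interval([0.4, 0.9], [0.6, 1.0]))           # -> (0.36, 0.6)
```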
IT-2 Fuzzy State Feedback Controller Design
Here, the following IT-2 fuzzy state-feedback controller structure was adopted to admissibilize the system under consideration:

u(t) = Σ_{j=1}^{r} m_j(x(t)) K_j x(t),   (5)

where K_j is the gain matrix to be designed and m_j(x(t)) is the normalized firing strength of the jth controller rule. Similarly, ϑ(x(t)) = [ϑ₁(x(t)), ϑ₂(x(t)), . . . , ϑ_s(x(t))] defines the premise vector, and N_j^{k_c} (k_c = 1, 2, ⋯, s) represents the type-2 fuzzy sets of the jth controller rule.
The firing interval for the jth controller rule is [ω_{N_j^{k_c}}(ϑ(x(t))), ω̄_{N_j^{k_c}}(ϑ(x(t)))], whose endpoints define, respectively, the lower and upper membership functions; ω_{N_j^{k_c}}(ϑ(x(t))) ≥ 0 and ω̄_{N_j^{k_c}}(ϑ(x(t))) ≥ 0 stand for the lower and upper grades of membership of ϑ(x(t)) in N_j^{k_c}, respectively. The global fuzzy controller is then inferred by blending the rules through the normalized firing strengths m_j(x(t)). Remark 1. Over the past several decades, type-1 T-S fuzzy systems have been extensively investigated. It is interesting to note that all of these studies are founded on the PDC approach, in which the controller and the plant share the same membership functions. Nevertheless, this assumption is not always valid, since membership functions may be uncertain in practice. As proposed in [36,41], we aim to address this issue using the interval-valued type-2 fuzzy controller (5).
From (5), the fuzzy-model-based actuator saturation control input is expressed as

σ(u(t)) = σ(Σ_{j=1}^{r} m_j(x(t)) K_j x(t)).   (7)

To deal with the saturation function, the following definition and lemmas are needed for further development.
Definition 1.
• For a positive scalar ρ, an ellipsoid set is defined as ε(EᵀPE, ρ) = {x ∈ R^n : xᵀEᵀPEx ≤ ρ}, where P defines a positive definite matrix.
• For a given matrix H ∈ R^{m×n}, a polyhedral set is given by L(H, ū) = {x ∈ R^n : |H_l x| ≤ ū_l, l = 1, . . . , m}, where H_l represents the lth row of H, and ū_l is a given positive scalar.
Lemma 1 ([40]). For given matrices K, H ∈ R^{m×n}, if x ∈ L(H, ū), then

σ(Kx) ∈ co{M_s Kx + M_s⁻ Hx, s = 1, . . . , ς},   ς = 2^m,

where x ∈ R^n is a vector and co stands for the convex hull. In addition, we have

σ(Kx) = Σ_{s=1}^{ς} δ_s (M_s K + M_s⁻ H) x,

where M_s is a diagonal matrix of appropriate dimension with elements either 1 or 0, M_s⁻ = I − M_s, and δ₁, . . . , δ_ς are positive scalars such that Σ_{s=1}^{ς} δ_s = 1.
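Lemma 1 can be made concrete by enumerating the 2^m diagonal matrices M_s and the corresponding vertex gains M_sK + M_s⁻H; the gains in the toy check below are hypothetical values, not taken from the paper:

```python
import itertools
import numpy as np

def hull_vertices(K, H):
    """Vertex gains (M_s K + M_s^- H) of Lemma 1, where M_s runs over all
    m x m diagonal 0/1 matrices and M_s^- = I - M_s (2^m vertices)."""
    m = K.shape[0]
    vertices = []
    for diag in itertools.product([0.0, 1.0], repeat=m):
        Ms = np.diag(diag)
        vertices.append(Ms @ K + (np.eye(m) - Ms) @ H)
    return vertices

# Toy check with m = 1, n = 2 (hypothetical gains): the vertices are H and K.
K = np.array([[1.5, -0.5]])
H = np.array([[0.3, -0.1]])
for V in hull_vertices(K, H):
    print(V)
```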
By combining (3) and (7), the following closed-loop system is obtained according to the above lemma:

Eẋ(t) = Σ_{i=1}^{r} Σ_{j=1}^{r} Σ_{s=1}^{ς} µ_i m_j δ_s [Â_{ijs} x(t) + A_{di} x(t − d(t))],   (8)

where Â_{ijs} = A_i + B_i(M_s K_j + M_s⁻ H_j).
Problem Statement
The main objective of this paper is to develop an IT-2 fuzzy controller that keeps the closed-loop system admissible in the presence of actuator saturation for non-linear singular systems expressed by the IT-2 fuzzy model defined in (3).
Admissibility Analysis
In this section, the admissibility of the closed-loop system (8) will be addressed using the following lemmas.
Lemma 2 ([42]). For a given vector x(t) : [u, v] → R^n whose derivative ẋ(t) is a piecewise continuous function on the interval [u, v], the following inequality holds for any given matrix Z > 0:

(v − u) ∫_u^v ẋ(s)ᵀ Z ẋ(s) ds ≥ (x(v) − x(u))ᵀ Z (x(v) − x(u)).
Lemma 3 ([43]). For any singular matrix E with rank(E) = q, decomposed as E = E_L E_Rᵀ, a full-row-rank matrix U and a full-column-rank matrix V can be found such that UE = 0 and EV = 0.
For any symmetric matrix P ∈ R^{n×n} and non-singular matrix X ∈ R^{(n−q)×(n−q)}, we define the non-singular matrix Π = PE + UᵀXVᵀ, so that EᵀΠ = ΠᵀE.
Theorem 1. Let ū_l be a positive scalar, and let d₁, d₂, and d_r be scalars satisfying assumption A1. If matrices P > 0, X, and Z > 0 exist and verify conditions (9)-(11), then the controller in (5) exists, and, for any compatible initial condition satisfying (11), the closed-loop system (8) is admissible within the set ε(EᵀPE, ρ).
Matrices U ∈ R^{(n−q)×n} and V ∈ R^{n×(n−q)} are as defined in Lemma 3.
Next, we prove the regularity and impulse-free property of system (8). From (9), we know that Ψ_{11ijs} < 0, which implies that sym(ΠᵀÂ_{ijs}) < 0. (16) For the matrix E, there exist two non-singular matrices M and N such that MEN = diag(I_q, 0). (17) Based on Lemma 3, we know that EᵀΠ = ΠᵀE. Pre- and post-multiplying ΠᵀE and (16) by Nᵀ and N, respectively, and using (17), we find that Π̄₁₂ = 0 and sym(Π̄₂₂ᵀ Â_{ijs,22}) < 0. This means that Â_{ijs,22} is non-singular; it can then be concluded, given the definition suggested in [1], that system (8) is regular and impulse-free.
Pre- and post-multiplying (10) by an appropriate block-diagonal matrix and using (18) yields the required inclusion. Thus, it can be verified that ε(EᵀPE, ρ) ⊂ ∩_{j=1}^{r} L(H_j, ū), and, using the fact that V̇(x(t)) ≤ 0, it is easy to verify that V(x(t)) ≤ V(x(0)). Thus, the constraint in (11) is verified for any compatible initial condition, and this completes the proof.
Fuzzy Controller Design
Our task here is to translate the conditions in Theorem 1 into LMI terms that can be solved with the existing solvers.
Remark 2.
For the purpose of maximizing the set of initial conditions in (11), the optimization problem (24) can be solved, where the variables w_i (i = 1, 2, . . . , 10) are introduced for the optimization procedure, and τ_c (c = 1, 2, 3) represents the weighting relative to the objective function.
The satisfaction of LMI b) and of the LMIs in c) yields the corresponding bounds. We know that ς_i = ρχ_i. If we minimize the criterion defined in (24), then the bounds on φ and φ̇ tend to be greater.
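Conditions of this type are checked numerically with semidefinite-programming solvers. The sketch below solves only a delay-free Lyapunov-type LMI for a hypothetical stable matrix using CVXPY; the paper's actual conditions (9)-(11) additionally involve the singular matrix E, the delay bounds, and the Lyapunov-Krasovskii terms, so this is a structural illustration only:

```python
import cvxpy as cp
import numpy as np

# Hypothetical closed-loop vertex matrix (not from the paper's examples).
A = np.array([[-2.0, 1.0],
              [ 0.5, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                     # P > 0
               A.T @ P + P @ A << -eps * np.eye(n)]      # Lyapunov inequality
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve(solver=cp.SCS)
print(prob.status, "\nP =\n", P.value)
```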
Numerical Applications
As part of this section, we present two examples illustrating the proposed control strategy.
Mass-Spring-Damper System
With the help of the example of a mass-spring-damper system shown in Figure 1, borrowed from [44], the efficiency and correctness of the proposed control scheme can be demonstrated. Define x₁(t), x₂(t), and x₃(t) as the displacement, velocity, and acceleration of the system, respectively, and let u(t) be the applied force. Newton's law can be used to describe the mechanical system as in (25), where m is the mass and the non-linear function in the model is defined as h(x₂(t)) = 1 + 0.13(x₂(t) + Δx₂(t))³. Assume that m = 1, x₁(t) ∈ [−1.5, 1.5], x₂(t) ∈ [−1.5, 1.5], and 0 ≤ Δx₂(t) ≤ 0.1. Given the uncertainty associated with the parameter Δx₂(t), it is evident that the IT-2 T-S fuzzy approach should be adopted to model the non-linear system (25). The lower and upper membership functions of the corresponding IT-2 TS fuzzy model are listed in Table 2.
Table 2. Lower and upper membership functions of the plant.
Lower membership functions: µ_1 = (3.375 + x₂(t)³)/7.471. Upper membership functions: µ̄₁ = (3.375 + (x₂(t) + 0.1)³)/7.471.
The weighting functions are chosen as α_i = sin²(x₂(t)) and ᾱ_i = 1 − α_i² for i = 1, 2. The interval-valued fuzzy system (1) is defined by the corresponding matrices A_i, A_{di}, and B_i. This example aims to design a fuzzy controller (5) that guarantees the admissibility of the closed-loop system. To accomplish this goal, Table 3 lists the lower and upper bounds of the membership functions that characterize the interval-valued fuzzy controller.
Table 3. Lower and upper membership functions of the controller.
Then, by solving the problem formulated in (24), the fuzzy control gains are obtained. The simulation results obtained with these gains are presented in Figure 2. In particular, Figure 2a depicts the state responses of the saturated closed loop for the initial condition φ(t) = [1.25, 1, −0.7]ᵀ, t ∈ [−0.25, 0]. Based on Figure 2c, it is evident that the closed-loop system is well controlled. Figure 2b illustrates the estimated domain of attraction for various initial conditions when actuator saturation is present and a time-varying delay exists.
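A rough picture of such a closed-loop simulation is given by the forward-Euler loop below, with a constant delay and a clipped (saturated) input. The linear surrogate plant and the gain are hypothetical stand-ins for the nonlinear model (25) and the gains obtained from (24):

```python
import numpy as np

dt, d, u_bar = 1e-3, 0.25, 2.0
A  = np.array([[0.0, 1.0], [-1.0, -0.5]])   # surrogate plant (illustrative)
Ad = np.array([[0.0, 0.0], [-0.1,  0.0]])   # delayed-state coupling
B  = np.array([[0.0], [1.0]])
K  = np.array([[-2.0, -1.5]])               # hypothetical gain

steps, hist = 8000, int(d / dt)
x = np.array([1.25, 1.0])
buf = [x.copy()] * (hist + 1)               # stores x(t - d)

for _ in range(steps):
    xd = buf.pop(0)
    u = np.clip(K @ x, -u_bar, u_bar)       # actuator saturation
    x = x + dt * (A @ x + Ad @ xd + (B @ u).ravel())
    buf.append(x.copy())

print("final state:", x)  # approaches the origin for an admissible gain
```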
Inverted Pendulum System
Our goal in this section is to illustrate the effectiveness of the proposed control scheme by comparing it with the relevant work proposed in [45] for an inverted pendulum system described by an interval type-2 fuzzy model without delay and saturation. The process is described by system (26), with non-linear functions f(x(t)) and h(x(t)) defined as

f(x(t)) = [(g − a m_p L x₂²(t) cos(x₁(t))) sin(x₁(t))] / {[4L/3 − a m_p L cos²(x₁(t))] x₁(t)},
h(x(t)) = −a cos(x₁(t)) / [4L/3 − a m_p L cos²(x₁(t))],

where x₁(t) represents the angle between the pendulum and the vertical, x₂(t) represents the angular velocity, and x₃(t) is the relative horizontal distance between the pendulum center and the cart. The force applied to the cart is given by u(t). The numerical values of the model are: 2L = 1 m is the length of the pendulum, g = 9.8 m/s² is the gravity acceleration, m_c ∈ [2, 3] kg is the mass of the cart, m_p ∈ [8, 16] kg is the mass of the pendulum, and a = 1/(m_c + m_p). For the sake of this study, the inverted pendulum was taken to operate in a domain fixed by x₁(t) ∈ [−5π/12, 5π/12] and x₂(t) ∈ [−5, 5]. Next, we describe the inverted pendulum system (26) in the form (27), where β = 0.1, and Table 4 defines the lower and upper membership functions used for this example.
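The nonlinearities reconstructed above can be coded directly; mid-interval masses are assumed here, since m_c and m_p are interval-valued:

```python
import numpy as np

g, L = 9.8, 0.5          # 2L = 1 m
m_c, m_p = 2.5, 12.0     # midpoints of [2, 3] and [8, 16] kg (assumption)
a = 1.0 / (m_c + m_p)

def f(x1, x2):
    """Sector-nonlinearity term multiplying x1 in the pendulum dynamics."""
    num = (g - a * m_p * L * x2**2 * np.cos(x1)) * np.sin(x1)
    den = (4.0 * L / 3.0 - a * m_p * L * np.cos(x1) ** 2) * x1
    return num / den

def h(x1):
    """Input-channel nonlinearity of the pendulum model."""
    return -a * np.cos(x1) / (4.0 * L / 3.0 - a * m_p * L * np.cos(x1) ** 2)

print(f(0.3, 1.0), h(0.3))  # sample point inside the operating domain
```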
To perform some simulations based on the parameters mentioned above, using the gains in (28) obtained by the proposed method and the controller in [45] with the gains in (29), different cases are considered. Case 1: delay does not affect the system. Here, we set β = 0, and both controllers are applied to the system under φ(t) = [−π/6, 3, 0.5]ᵀ. The evolution of the state signals is plotted in Figure 3.
Case 2: a delay affects the system. By using the gains in (28) and (29), respectively, for β = 0.1 and the above delay parameters, the simulation results are shown in Figure 4. Case 3: a delay and saturation affect the system. For this case, Figure 5 shows the states and saturated input signals of the pendulum system when the control law calculated using (28) is implemented for different delays and different initial conditions. Figure 3 indicates that the two implemented controllers are capable of guaranteeing convergence of the system's states when the latter is not affected by delay and saturation. In contrast, as can be seen from Figure 4, the controller proposed in [45] is not able to stabilize the system when the delay occurs. Upon examination of Figure 5, it is evident that the presented control law stabilizes the system despite the time-varying delay and the saturation of the inputs.
It should be noted that the simulation is conducted assuming measurement errors in m_p and m_c, which appear in the membership functions such that m_c = 2.5 + 0.5 sin(t) and m_p = 12 + 4 sin(t). This means that stability conditions based on a type-1 fuzzy system cannot be applied. In light of these results, the synthesized control law is effective in stabilizing the underlying system and remains robust despite input saturation and uncertainty.
Comparative Explanations
The suggested strategy in this article can effectively solve the problem of admissibilization for mass-spring-damper and inverted pendulum mechanical systems based on the IT-2 fuzzy singular model subject to time-varying delay and actuator saturation constraints. When compared to previous findings, the primary advantages of the suggested method are the following: (i) Compared to the existing findings in [22,24,46], the result developed in this paper is more realistic and general, since the IT-2 fuzzy model incorporates the system uncertainties. In addition, the premise membership functions of the fuzzy controller and the fuzzy system are not required to be the same. (ii) Though further results for interval-valued fuzzy singular systems have been published, such as [36,45], none of these results are applicable when the system under examination exhibits actuator saturation. (iii) For this class of systems, considering the effects of dynamic quantization and using a dynamic/static output feedback controller or an observer-based controller [16] can be a significant issue.
Conclusions
An attempt was made in this study to provide solutions to the main challenges that come up when dealing with non-linear singular systems, such as uncertainty, time-varying delay, and saturation. The proposed control scheme emphasizes the use of a state feedback controller based on an IT-2 fuzzy model that exploits both the lower and upper membership functions to adequately characterize uncertainties. By employing an appropriate Lyapunov-Krasovskii functional with convex optimization techniques, the controller existence was analyzed. The proposed control scheme was validated by numerical simulations considering mass-spring-damper and inverted pendulum systems. Research areas that need to be pursued in the near future include quantized output feedback stabilization problems for Markovian jump singular IT-2 fuzzy systems with sensor and actuator saturation.
"Engineering",
"Computer Science"
] |
Improved YOLOv4 Marine Target Detection Combined with CBAM
Marine target detection technology plays an important role in sea surface monitoring, sea area management, ship collision avoidance, and other fields. Traditional marine target detection algorithms cannot meet the requirements of accuracy and speed. This article exploits the advantages of deep learning in big-data feature learning to propose a YOLOv4 marine target detection method fused with a convolutional attention module. Marine target detection datasets were collected and produced, and marine targets were divided into ten categories, including speedboat, warship, passenger ship, cargo ship, sailboat, tugboat, and kayak. Aiming at the problem of insufficient detection accuracy of YOLOv4 on the self-built marine target dataset, a convolutional attention module is added to the YOLOv4 network to increase the weight of useful features while suppressing the weight of invalid features, thereby improving detection accuracy. The experimental results show that the improved YOLOv4 has higher detection accuracy than the original YOLOv4 and achieves better detection results for small targets, multiple targets, and overlapping targets. The detection speed meets real-time requirements, verifying the effectiveness of the improved algorithm.
Introduction
Ships are an important carrier of marine resource development and economic activities, so accurate monitoring of marine targets has become increasingly important. Carrying out automated research on marine target monitoring is of great significance for strengthening the management of sea areas, and whether illegal targets can be detected and located in time is the focus of marine target monitoring.
Among the traditional marine target detection algorithms, Fefilatyev et al. used a camera system mounted on a buoy to quickly detect ship targets and proposed a new type of ocean surveillance algorithm for deep-sea visualization [1]; in the context of ship detection, a new horizon detection scheme for complex sea areas was developed. Shi W. et al. [2] effectively suppressed the noise of the background image and detected ship targets by combining morphological filtering with multi-structural elements and improved median filtering; the influence of sea clutter on ship target detection was eliminated by using connected-domain calculation. Chen Z. et al. proposed a ship target detection algorithm for marine video surveillance [3], with the purpose of reducing the impact of clutter in the background and improving the accuracy of ship target detection. In the proposed detector, the main steps of background modeling, model training and updating, and foreground segmentation are all based on the Gaussian Mixture Model (GMM). This algorithm not only improves detection accuracy but also greatly reduces the probability of false alarms and the impact of dynamic scene changes. Although traditional methods have achieved good results, in the face of complex and changeable sea environments with a lot of noise interference, traditional detection algorithms suffer from low detection accuracy and poor robustness. Therefore, traditional methods have great limitations in practical applications.
In Figure 1, the input image is sent to the backbone network to complete feature extraction, then passes through SPP and PANet to complete the fusion of feature maps of different scales, and finally outputs feature maps of three scales to predict the bounding box, category, and confidence; the head of YOLOv4 is consistent with that of YOLOv3.
Feature Extraction Network
YOLOv4 uses a new backbone network, CSPDarknet-53, for feature extraction of the input data. CSPDarknet-53 is an improvement of Darknet-53. Darknet-53 is composed of 5 large residual modules, which contain 1, 2, 8, 8, and 4 small residual units, respectively. The residual module solves the problem of gradient disappearance caused by the continuous deepening of the network, greatly reduces network parameters, and makes it easier to train deeper convolutional neural networks. The network structure is shown in Figure 2.
In Figure 2, the Convolution layer consists of a convolutional layer, a batch normalization layer, and a Mish activation function. Cross Stage Partial is a newly added cross-stage local network, and the residual layer is a small residual unit. CSPDarknet-53 also removes the pooling layer and the fully connected layer, greatly reducing parameters and improving calculation speed. During training, the image is stretched and scaled to a size of 416 × 416 and then sent to the convolutional neural network. After 5 convolutions of 3 × 3/2 (convolution kernel size 3 × 3, stride 2), the size is reduced to 13 × 13, and the three scales 52 × 52, 26 × 26, and 13 × 13 are selected as the sizes of the output feature maps. Feature maps of different sizes are used to detect targets of different sizes: small feature maps detect large targets, and large output feature maps detect small targets.
The CSP module in CSPDarknet-53 solves the problem of increased calculation and slower network speed caused by the redundant gradient information generated as the network deepens. The structure of the CSP module is shown in Figure 3, and the structure of the small residual unit is shown in Figure 4. In Figure 3, the feature map of the base layer is divided into two parts, which are then merged through a cross-stage hierarchical structure. The CSP module can achieve a richer gradient combination, greatly reduce the amount of calculation, and improve the speed and accuracy of inference. In Figure 4, there are more shortcuts in the small residual unit than in the normal structure. The shortcut connection is equivalent to directly transferring the input features to the output for identity mapping, adding the input of the previous layer to the output of the current layer.
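The building blocks just described can be sketched in PyTorch. The channel layout below is a simplified illustration of the Conv-BN-Mish layer, the small residual unit, and the CSP split-and-merge idea, not the exact CSPDarknet-53 configuration:

```python
import torch
import torch.nn as nn

class ConvBnMish(nn.Module):
    """Convolution + batch normalization + Mish, as described for Figure 2."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.Mish()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ResUnit(nn.Module):
    """Small residual unit: the shortcut adds the input to the block output."""
    def __init__(self, c):
        super().__init__()
        self.block = nn.Sequential(ConvBnMish(c, c, 1), ConvBnMish(c, c, 3))

    def forward(self, x):
        return x + self.block(x)

class CSPBlock(nn.Module):
    """CSP idea: split the base feature map into two paths, process one path
    with residual units, and merge the paths across the stage."""
    def __init__(self, c, n_res=1):
        super().__init__()
        self.part1 = ConvBnMish(c, c // 2, 1)                 # bypass path
        self.part2 = nn.Sequential(ConvBnMish(c, c // 2, 1),
                                   *[ResUnit(c // 2) for _ in range(n_res)])
        self.merge = ConvBnMish(c, c, 1)

    def forward(self, x):
        return self.merge(torch.cat([self.part1(x), self.part2(x)], dim=1))

x = torch.randn(1, 64, 52, 52)
print(CSPBlock(64, n_res=2)(x).shape)   # torch.Size([1, 64, 52, 52])
```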
Feature Fusion Network
After the feature extraction network extracts the relevant features, the feature fusion network is required to fuse the extracted features to improve the detection ability of the model. The YOLOv4 feature fusion network includes PAN and SPP. The function of the SPP module is to free the input of the convolutional neural network from a fixed-size restriction; it can increase the receptive field and effectively separate important context features without reducing the running speed of the network. The SPP module is located after the feature extraction network CSPDarknet-53, and the SPP network structure is shown in Figure 5.
In Figure 5, the SPP network uses maximum pooling at four different scales to process the input feature maps. The pooling kernel sizes are 1 × 1, 5 × 5, 9 × 9, and 13 × 13, where 1 × 1 is equivalent to no processing; the four feature maps then undergo a concat operation. The maximum pooling adopts a padding operation with a stride of 1, so the size of the feature map does not change after the pooling layer.
After SPP, YOLOv4 uses PANet instead of the feature pyramid in YOLOv3 as the method of parameter aggregation. PANet adds a bottom-up path augmentation structure after the top-down feature pyramid, which contains two PAN structures, and the PAN structure is modified. The original PAN structure uses a shortcut connection to fuse the down-sampled feature map with the deep feature map, and the number of channels of the output feature map remains unchanged. The modified PAN uses the concat operation to connect the two input feature maps, merging their channel numbers. The top-down feature pyramid structure conveys strong semantic features, and the bottom-up path augmentation structure makes full use of shallow features to convey strong positioning features. PANet can make full use of shallow features and, for different detector levels, fuse features from different backbone layers to further improve feature extraction capabilities and detector performance.
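A minimal PyTorch sketch of the SPP block described above, with the 1 × 1 branch realized as the identity; stride-1 pooling with "same" padding leaves the spatial size unchanged, and the concat multiplies the channel count by four:

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """SPP block: parallel max pooling at kernel sizes 5, 9, 13 (stride 1,
    'same' padding) plus the identity (the 1x1 case), concatenated on channels."""
    def __init__(self, kernels=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernels
        )

    def forward(self, x):
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

x = torch.randn(1, 512, 13, 13)
print(SPP()(x).shape)   # torch.Size([1, 2048, 13, 13]); spatial size unchanged
```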
Predictive Network
YOLOv4 outputs three feature maps of different scales to predict the bounding box position information, corresponding category, and confidence of the target. YOLOv4 continues the basic idea of YOLOv3 bounding box prediction and adopts a prediction scheme based on prior boxes. YOLOv4 bounding box prediction is shown in Figure 6. In Figure 6, (c_x, c_y) are the coordinates of the upper-left corner of the grid cell where the target center point is located, (p_w, p_h) are the width and height of the prior box, (b_w, b_h) are the width and height of the actual prediction box, and (σ(t_x), σ(t_y)) is the offset value predicted by the convolutional neural network. The position information of the bounding box is calculated by Formulas (1)-(5):

σ(z) = 1/(1 + e^(−z)),   b_x = σ(t_x) + c_x,   b_y = σ(t_y) + c_y,   b_w = p_w e^(t_w),   b_h = p_h e^(t_h),

where t_w and t_h are also predicted by the convolutional network, and (b_x, b_y) are the coordinates of the center point of the actual prediction box. In the obtained feature map, the length and width of each grid cell are 1, so (c_x, c_y) = (1, 1) in Figure 6, and the sigmoid function is used to limit the predicted offset to between 0 and 1.
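Formulas (1)-(5) can be applied directly to decode one raw network output into a box; a small sketch in grid-cell units:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode one raw prediction into (bx, by, bw, bh) in grid-cell units,
    following the prior-box scheme of Formulas (1)-(5)."""
    bx = sigmoid(tx) + cx          # center offset, limited to (0, 1)
    by = sigmoid(ty) + cy
    bw = pw * np.exp(tw)           # size rescales the prior box
    bh = ph * np.exp(th)
    return bx, by, bw, bh

print(decode_box(0.2, -0.1, 0.3, 0.1, cx=5, cy=7, pw=1.8, ph=2.4))
```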
The loss function of YOLOv4 includes regression, confidence, and classification loss terms. Among them, the bounding box regression loss uses CIoU to replace the mean-square-error loss, which makes the boundary regression faster and more accurate [25]. By minimizing the loss function between the predicted box and the real box, the network is trained and the weights are constantly updated. The confidence and classification losses still use cross-entropy loss.
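A minimal scalar sketch of the CIoU regression loss referenced above, for boxes given as (cx, cy, w, h); the center-distance and aspect-ratio penalty terms follow the standard CIoU definition:

```python
import math

def ciou_loss(box_p, box_g):
    """CIoU loss between a predicted and a ground-truth box, each given as
    (cx, cy, w, h): 1 - IoU + center-distance term + aspect-ratio term."""
    px, py, pw, ph = box_p
    gx, gy, gw, gh = box_g

    # Intersection over union from corner coordinates.
    p1, p2 = (px - pw / 2, py - ph / 2), (px + pw / 2, py + ph / 2)
    g1, g2 = (gx - gw / 2, gy - gh / 2), (gx + gw / 2, gy + gh / 2)
    iw = max(0.0, min(p2[0], g2[0]) - max(p1[0], g1[0]))
    ih = max(0.0, min(p2[1], g2[1]) - max(p1[1], g1[1]))
    inter = iw * ih
    union = pw * ph + gw * gh - inter
    iou = inter / union

    # Squared center distance over the diagonal of the smallest enclosing box.
    cw = max(p2[0], g2[0]) - min(p1[0], g1[0])
    ch = max(p2[1], g2[1]) - min(p1[1], g1[1])
    rho2 = (px - gx) ** 2 + (py - gy) ** 2
    c2 = cw ** 2 + ch ** 2

    # Aspect-ratio consistency term.
    v = (4 / math.pi ** 2) * (math.atan(gw / gh) - math.atan(pw / ph)) ** 2
    alpha = v / (1 - iou + v)
    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss((0.5, 0.5, 1.0, 2.0), (0.6, 0.4, 1.2, 1.8)))
```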
CBAM-Based YOLOv4 Network Structure Improvement
The attention mechanism generates a mask through the neural network, and the values in the mask represent the attention weights of different locations. Common attention mechanisms mainly include the channel attention mechanism, the spatial attention mechanism, and the mixed-domain attention mechanism. The channel attention mechanism generates a mask over the channels of the input feature map, so that different channels have corresponding attention weights, achieving channel-level distinction; the spatial attention mechanism generates a mask over the spatial positions of the input feature map, so that different spatial regions have corresponding weights, realizing the distinction of spatial regions; the hybrid attention mechanism introduces the channel attention mechanism and the spatial attention mechanism at the same time. In this paper, the mixed-attention CBAM module is introduced to make the neural network pay more attention to target areas containing important information [26], suppress irrelevant information, and improve the overall accuracy of target detection.
CBAM is a high-efficiency, lightweight attention module that can be integrated into any convolutional neural network architecture and can be trained end-to-end with the basic network. The CBAM module structure is shown in Figure 7. In Figure 7, the CBAM module is divided into a channel attention module and a spatial attention module. First, the feature map is input into the channel attention module, which outputs the corresponding attention map; the input feature map is then multiplied by this attention map, the result passes through the spatial attention module, the same operation is performed, and the output feature map is finally obtained. The mathematical expression is as follows:

F′ = M_c(F) ⊗ F,   F″ = M_s(F′) ⊗ F′,

where ⊗ represents element-wise multiplication, F is the input feature map, M_c(F) is the channel attention map output by the channel attention module, M_s(F′) is the spatial attention map output by the spatial attention module, and F″ is the feature map output by the CBAM.
Channel Attention Module
Each channel of the feature map represents a feature detector. Therefore, channel attention is used to focus on what features are meaningful. The structure of the channel attention module is shown in Figure 8. In Figure 8, the input feature map F is first subjected to global maximum pooling and global average pooling over width and height, and the results are passed into a multi-layer perceptron (MLP) with shared weights. The MLP contains one hidden layer, which is equivalent to two fully connected layers. The two outputs of the MLP are added element-wise, and, finally, the channel attention map is obtained through the Sigmoid activation function. Its mathematical expression is

M_c(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F))) = σ(W₁(W₀(F^c_avg)) + W₁(W₀(F^c_max))),

where σ is the Sigmoid activation function, W₀ and W₁ are the weights of the MLP, W₀ ∈ R^{C/r×C}, W₁ ∈ R^{C×C/r}, and r is the dimensionality-reduction factor, with r = 16 in this paper.
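A PyTorch sketch of this channel attention module, with the shared two-layer MLP and r = 16:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention of CBAM: a shared MLP over the global average- and
    max-pooled descriptors, summed, then passed through a sigmoid."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // r),   # W0: C -> C/r
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),   # W1: C/r -> C
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))               # F_avg^c branch
        mx = self.mlp(x.amax(dim=(2, 3)))                # F_max^c branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)  # M_c(F)

x = torch.randn(2, 256, 52, 52)
print(ChannelAttention(256)(x).shape)   # torch.Size([2, 256, 1, 1])
```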
Spatial Attention Module
After the channel attention module, the spatial attention module is used to focus on where the meaningful features come from. The structure of the spatial attention module is shown in Figure 9. In Figure 8, the input feature map F is first subjected to global maximum pooling and global average pooling based on width and height, and then a multi-layer perceptron (MLP) with shared weights is passed in. The MLP contains a hidden layer, which is equivalent to two fully connected layers. The two outputs of the MLP are added pixel by pixel, finally, the channel attention map is obtained through the Sigmoid activation function. Its mathematical expression is: where σ is the Sigmoid activation function, W 0 and W 1 are the weights of MLP, W 0 ∈ R C/r×C , W 1 ∈ R C×C/r , r is the dimensionality reduction factor, and r = 16 in this paper.
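A matching sketch of the spatial attention computation, again a PyTorch illustration rather than the paper's own code:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Spatial attention: a 7x7 convolution over channel-wise max- and average-pooled maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = torch.mean(x, dim=1, keepdim=True)       # F_avg^s, shape (N, 1, H, W)
        mx, _ = torch.max(x, dim=1, keepdim=True)      # F_max^s, shape (N, 1, H, W)
        merged = torch.cat([avg, mx], dim=1)           # two-channel feature map
        return torch.sigmoid(self.conv(merged))        # M_s(F'), shape (N, 1, H, W)
```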
Improved YOLOv4 Algorithm
This paper adds a CBAM module to each of the three branches at the end of the YOLOv4 feature fusion network, aiming at the characteristics of marine targets: dense distribution, mutual occlusion, and many small targets. By integrating the CBAM module, weights are assigned to the channel and spatial features of the feature map, increasing the weights of useful features while suppressing the weights of invalid features; the network thus pays more attention to target regions containing important information, suppresses irrelevant information, and improves the overall accuracy of target detection. The improved network structure is shown in Figure 10.
In Figure 10, assuming that the input image size is 416 × 416 × 3 and taking the first CBAM module as an example, the input feature map size is 52 × 52 × 256. After global maximum pooling and global average pooling, two feature maps of size 1 × 1 × 256 are obtained and passed through a multi-layer perceptron with shared weights: the dimensionality is first reduced to 1 × 1 × 16 (the dimensionality reduction coefficient is 16) and then increased back to 1 × 1 × 256. The two feature maps are added, a channel attention map of size 1 × 1 × 256 is obtained through the Sigmoid activation function, and multiplying the input feature map by this attention map gives an output of size 52 × 52 × 256. Next, the spatial attention module is entered: channel-based global maximum pooling and global average pooling yield two feature maps of size 52 × 52 × 1, which are merged to obtain a feature map of size 52 × 52 × 2; a 7 × 7 convolution then reduces the number of channels to 1, and the Sigmoid activation function gives a spatial attention map of size 52 × 52 × 1. The input of the spatial attention module is multiplied by this map to obtain an output feature map of size 52 × 52 × 256. The output feature map of the CBAM module is thus the same size as its input feature map.
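Combining the two sketches above, a short illustration of the full CBAM pass reproduces the shape walkthrough just described for the 52 × 52 × 256 branch; it assumes the ChannelAttention and SpatialAttention classes defined in the earlier sketches.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """CBAM: channel attention followed by spatial attention; output size equals input size.

    Reuses the ChannelAttention and SpatialAttention modules from the sketches above.
    """

    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, r)
        self.sa = SpatialAttention(kernel_size=7)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.ca(x)   # F' = M_c(F) ⊗ F
        x = x * self.sa(x)   # F'' = M_s(F') ⊗ F'
        return x

# Shape walkthrough for the first branch described above: 52 x 52 x 256 in, same size out.
f = torch.randn(1, 256, 52, 52)
f2 = CBAM(256, r=16)(f)
assert f2.shape == f.shape
```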
Marine Target Data Set
The target detection data set in this article is 3000 images of marine targets collected from the Internet. The data format is JPG. The marine targets are divided into 10 categories, namely speedboat, warship, passenger ship, cargo ship, sailboat, tugboat, kayak, boat, fighter plane, and buoy.
LabelImg is used to label the obtained data. The labeled data is divided into the training set, validation set, and test set according to the ratio of 5:1:1. The format of the data set is produced according to the VOC data set. Part of the marine target data is shown in Figure 11.
In order to further increase the generalization ability of the model and the diversity of the samples, offline data augmentation is performed by mirroring, brightness adjustment, contrast adjustment, random cropping, etc.; the augmented data is then added to the training dataset to complete the data expansion, finally giving 10,000 pictures.
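A minimal sketch of this kind of offline augmentation using Pillow follows; the parameter ranges are illustrative assumptions, and in a real detection pipeline the bounding box annotations would have to be transformed together with the images, which is omitted here.

```python
import random
from PIL import Image, ImageOps, ImageEnhance

def augment(img: Image.Image) -> Image.Image:
    """Apply one random offline augmentation: mirror, brightness, contrast, or crop."""
    choice = random.choice(["mirror", "brightness", "contrast", "crop"])
    if choice == "mirror":
        return ImageOps.mirror(img)  # horizontal flip
    if choice == "brightness":
        return ImageEnhance.Brightness(img).enhance(random.uniform(0.7, 1.3))
    if choice == "contrast":
        return ImageEnhance.Contrast(img).enhance(random.uniform(0.7, 1.3))
    # Random crop keeping at least 80% of each side (assumed ratio).
    w, h = img.size
    cw, ch = int(w * random.uniform(0.8, 1.0)), int(h * random.uniform(0.8, 1.0))
    x0, y0 = random.randint(0, w - cw), random.randint(0, h - ch)
    return img.crop((x0, y0, x0 + cw, y0 + ch))
```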
Experimental Environment and Configuration
In order to further accelerate network training, this experiment introduces transfer learning: the model pre-trained on the COCO dataset is loaded, and the marine target dataset is then trained. The hardware environment and software versions used in the experiment are shown in Table 1. The parameters of the training network are shown in Table 2.
Among them, a learning rate decay of 0.1 means that the learning rate is reduced to one-tenth of its previous value after a certain number of iterations. In this experiment, the learning rate was changed at 16,000 iterations and at 18,000 iterations.
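This corresponds to a standard multi-step learning rate decay; the sketch below shows the idea in PyTorch with a placeholder model and a placeholder base learning rate (the actual values come from Table 2, which is not reproduced here).

```python
import torch

# Placeholder model and base learning rate; only the schedule itself matches the text.
model = torch.nn.Conv2d(3, 16, kernel_size=3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[16000, 18000], gamma=0.1  # x0.1 at 16,000 and 18,000 iterations
)

x = torch.randn(2, 3, 8, 8)
for iteration in range(20000):
    loss = model(x).mean()   # stand-in for the detection loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()         # stepped per iteration, since the milestones are iteration counts
```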
The convergence curve of the loss function obtained after training the network is shown in Figure 12.
It can be seen from Figure 12 that, owing to the loading of the pre-trained model, the loss value drops to a low value within a few iterations. The loss then maintains a downward trend until it finally converges: it falls to a relatively small value at about 4,000 iterations, reaches a relatively stable level at about 18,000 iterations, and finally drops to 1.0397. The overall training effect is satisfactory.
Marine Target Detection Performance Comparison
In order to verify the effectiveness of the improved YOLOv4 network, a comparative experiment was conducted between the original YOLOv4 model and the improved YOLOv4 model, with the training parameters of the two kept consistent. The commonly used target detection evaluation metric mAP is used to compare the models before and after the improvement.
In the target detection task, the intersection over union (IOU) determines whether a target is successfully detected: the IOU is the ratio of the intersection to the union of the model's prediction box and the ground truth box. For a certain type of target in the dataset, assuming the threshold is α, when the IOU of the prediction box and ground truth box is greater than α the model prediction is considered correct, and when it is less than α the prediction is considered incorrect. The confusion matrix is shown in Table 3 below.
In Table 3, TP represents the number of positive samples correctly predicted, FP represents the number of negative samples incorrectly predicted as positive, FN is the number of positive samples incorrectly predicted as negative, and TN represents the number of negative samples correctly predicted. The calculation formulas for the precision rate and recall are as follows: Precision = TP/(TP + FP), Recall = TP/(TP + FN). The AP value is usually used as the evaluation index of target detection performance. AP is the area under the P-R curve, in which recall is taken as the X-axis and precision as the Y-axis; it represents the accuracy of the model in a certain category. mAP represents the average accuracy over all categories and can measure the performance of the network model in all categories. mAP50 represents the mAP value computed where the IOU of the prediction box and ground truth box must be greater than 0.5, and mAP75 represents the mAP value where the IOU threshold is 0.75. The calculation formula of mAP is as follows: mAP = (1/N) Σ_(i=1)^N AP_i, where N represents the number of detected categories. FPS (frames per second) is used to evaluate the detection speed of the algorithm; it represents the number of frames that can be processed per second. Models have different processing speeds under different hardware configurations; therefore, this article uses the same hardware environment when comparing detection speed.
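The IOU test and the precision/recall formulas can be summarized in a short sketch; the boxes are given as corner coordinates, and the sample counts in the usage line are made up for illustration.

```python
def iou(box_a, box_b):
    """IOU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); Recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# A prediction counts as correct when IOU exceeds the threshold (0.5 for mAP50, 0.75 for mAP75).
assert iou((0, 0, 10, 10), (0, 0, 10, 10)) == 1.0
print(precision_recall(tp=80, fp=20, fn=10))  # (0.8, 0.888...), illustrative counts only
```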
The data of the test set is sent to the trained target detection models, and different thresholds are selected for experimental comparison. The model comparison before and after the improvement of YOLOv4 is shown in Table 4. It can be seen from Table 4 that both mAP50 and mAP75 of YOLOv4 combined with the CBAM algorithm are improved: mAP50 is increased by 2.02% and mAP75 by 1.85%. Because of the addition of three CBAM modules, the volume of the model becomes larger, from 256.2 MB to 262.4 MB, resulting in a slight decrease in the FPS value from 53.6 to 50.4, but the speed still meets the real-time requirement. Figure 13 shows the test results before and after the improvement.
In Figure 13, the first column shows the input pictures, the second column the YOLOv4 detection results, and the third column the improved YOLOv4 detection results. In the first row, YOLOv4 mis-detected the tug as a warship, while the improved YOLOv4 correctly detected it as a tug. The second, fifth, sixth, and seventh rows show that the improved YOLOv4 detects small targets more accurately than the original algorithm and detects more of them; in the fifth row, where the ship targets suffer more background interference, the improved YOLOv4 is more robust and detects more targets, and in the seventh row it successfully detects the small target of the occluded cargo ship. The third and fourth rows show that the improved YOLOv4 can detect more mutually occluding targets when the ship targets are dense and occlude each other. The last row shows that when there is interference in the background, the detection box of the original algorithm is shifted in position, while the detection box of the improved algorithm is more accurate. For dense targets, mutually occluding targets, and small targets, the improved YOLOv4 network detects effectively and reduces the missed detection rate; it also reduces the false detections produced by the original YOLOv4 network. According to the experimental results, the improved YOLOv4 combined with the CBAM target detection algorithm proposed in this paper is more effective, improves the accuracy of target detection, can basically meet the needs of marine target detection tasks, and has practical application value.
Conclusions
Marine target detection technology is of great significance in the fields of sea surface monitoring, sea area management, and ship collision avoidance. This paper focuses on the problem of insufficient detection accuracy of YOLOv4 on a self-built ship dataset. On the basis of YOLOv4, the CBAM attention module is added so that the neural network pays more attention to the target regions containing important information, suppresses irrelevant information, and improves detection accuracy. Experimental results show that the improved YOLOv4 model achieves higher accuracy in target detection tasks than the original YOLOv4: mAP50 is increased by 2.02%, mAP75 is increased by 1.85%, and the detection speed meets the real-time requirement, which verifies the effectiveness of the improved algorithm and provides a theoretical reference for further practical applications.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
YOLO v4
You Only Look Once, version 4 | 9,709 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Corrosive wear forecasting of steel elements on the basis of mathematical modeling methods
Extending the service life of metal materials and structures and increasing their resistance to corrosion destruction processes is an important scientific and technical problem. Solving it requires comprehensive scientific research into corrosion phenomena, together with practical anticorrosion measures directed at selecting new corrosion-resistant metal materials and methods of protecting them. The research reported here searches for a mathematical model that can predict corrosive wear in metal constructions with a certain accuracy, taking into account the design and the type of corrosion process.
Introduction
Damage assessment of bearing constructions and determination of their residual resource is an important task [1,2]. Corrosion destruction is one of the basic reasons for the decrease in durability of steel structures. Research in the field of increasing durability and improving the anticorrosive protection of building constructions is conducted in a complex manner and includes on-site investigations as well as experimental and theoretical developments.
When studying the development of corrosion processes it is necessary to analyse a large number of factors. Handling a considerable amount of information and determining the common factors of damage formation due to corrosion require the use of modern computing devices and corresponding mathematical models [3]. The development of mathematical models is impossible without a clear idea of the process mechanism, experimental data characterizing the influence of different factors on the process kinetics, and reliability control of the forecast methodology under natural conditions. This particularly concerns corrosion processes, which are multistage: the stages can proceed consecutively as well as in parallel under various combinations of internal and external factors. The chemical composition of the metal or alloy, the condition of the metal surface, the type and condition of protective coatings, the operation mode, and the characteristics of the hostile environment belong to such factors [4,[5][6][7][8].
Main Part
In the pilot study of corrosive wear of different metalwork it is necessary to analyse the nature of the corrosion damage as well as quantitative indices of wear under long-term operating conditions. In this paper, the installation for hydrochloric acid acceptance and pouring in a galvanic shop was chosen as the research object. The installation is a three-level metal framework consisting of 8 racks (a square pipe 200×7) connected by platform beams (channel No. 14) on which boards of corrugated steel are laid. On the upper platform (elevation +11.000) there were tanks with hydrochloric acid. All the constructions are made of S 235 steel (Russian standard VSt3kp2, GOST 380-71*). The service life of the installation is 42 years under a moderately aggressive environment [5]. During the on-site investigation the residual thickness of elements was measured: stay braces (4 pieces) and beams (4 pieces). Forty measurements at various section points were performed on each element, for a total of 320 measurements. The survey allowed the type and nature of the corrosion damage to be determined. Common uneven corrosion was inherent to all the elements; in particular, pitting and localized corrosion were found on certain sites. Hence, it is possible to state that the corrosion process caused damage of a chaotic character on the surface of the studied elements, and this damage can be described statistically.
To detect the distribution of corrosion depth over the element surfaces and to determine the construction wear value, a statistical analysis was carried out and distribution curves of the wear values were drawn (Fig. 1). Corrosion wear was determined as the difference between the initial thickness of an element and the residual thickness according to the measurements. After visual assessment of this distribution it is possible to assume a lognormal distribution, since its form and outline correspond well to a distribution of values according to Gauss's law. There are several methods of justifying assumptions about the distribution law of a random variable; Pearson's goodness-of-fit criterion, used in the case of large test samples, is the most widespread and was adopted in this paper. A non-negative random variable is known to be distributed lognormally if the decimal logarithm of this value is distributed according to the normal law. Values of logarithmic and normal random variables are formed as "accidental misstatements" of some "true value"; the latter eventually acts not as an average value but as a median. Taking the logarithms of the residual thickness, histograms were drawn for the obtained values (Figs. 2 and 3). The results showed that the corrosion losses of the steel elements of this installation (a beam, a stay brace) follow a lognormal distribution [3].
Let us construct mathematical distribution models of the corrosion wear values of steel elements using the example of the steel stay braces (Fig. 4). A lognormal distribution has its own regularities, with the use of which it is possible to determine, with a certain accuracy, the average wear of elements as well as the largest and smallest probable wear values of constructions under the given operating conditions. Let us consider the maximum corrosion damage depth with a probability of 0.95, which is quite acceptable for technical calculations. In the case of a lognormal distribution it is t_max = 10^(m + 1.65σ), where m is the mean value of the logarithms of the corrosion damage depth of the test samples and 1.65 is the 0.95 quantile of the standard normal distribution.
The mean is computed for the measured stay braces and for the measured beams as m = (Σ x_i)/N, where x_i are the logarithms of the corrosion damage depth of the constructions and N is the number of measurements (N = 320 for the stay braces, N = 160 for the beams). σ, the root mean squared deviation of the logarithms of the test samples, is defined as σ = sqrt(Σ (x_i − m)²/(N − 1)). Thus, the maximum depth of corrosion damage according to the statistical analysis of the lognormal distribution is, for the measured beams, t_max = 10^(m + 1.65σ) = 2818 μm = 2.82 mm. Comparing these data, the difference in the corrosive wear thickness of the beams and the stay braces shows the significant influence of the geometrical arrangement of elements in space. The smaller corrosion losses of the stay braces can also be explained by the successfully chosen section form, the closed square profile, since the corrosion process proceeds more intensively on open profiles (angles, channels, I-beams, etc.).
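The statistics above can be reproduced with a few lines of Python; the depth values in the example are hypothetical, and the (N − 1) denominator is the standard sample estimator assumed here.

```python
import math

def max_corrosion_depth(depths_um, z=1.65):
    """Upper-bound corrosion depth at ~0.95 probability under a lognormal model.

    depths_um: measured corrosion depths in micrometres (hypothetical values below).
    z: standard normal quantile for the chosen probability (1.65 for 0.95).
    """
    logs = [math.log10(d) for d in depths_um]
    n = len(logs)
    m = sum(logs) / n                                              # mean of the logarithms
    sigma = math.sqrt(sum((x - m) ** 2 for x in logs) / (n - 1))   # their standard deviation
    return 10 ** (m + z * sigma)                                   # t_max in micrometres

print(max_corrosion_depth([850, 1200, 990, 1430, 760]))  # illustrative measurements only
```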
As a result of the research it is established that the corrosion wear process of various elements is described quite accurately by an exponential function, by means of which it is possible to determine the wear value of metal elements of various constructive forms at a definite operation moment in strongly and moderately aggressive media, thus predicting their durability and setting optimal inter-repair periods and anticorrosive actions. The lognormal distribution law makes it possible to examine these processes and to predict them with high precision, which can be used in real inspections of constructions subjected to corrosion. The applicability of this mathematical model to the long-term forecasting of the corrosion process can be checked by indirect verification, i.e., by comparison of the model with the results of research on similar processes reviewed in the literature.
Fig. 2. Distribution type of corrosion wear values for stay braces.
Fig. 3. Distribution type of corrosion wear values for beams.
Fig. 4. A mathematical distribution model of corrosion wear of steel elementslognormal distribution. | 1,724.8 | 2016-01-01T00:00:00.000 | [
"Materials Science"
] |
Excited States of Gold(I) Compounds, Luminescence and Gold-Gold Bonding
It has long been established by Khan that the superoxide anion, O2⁻, generates singlet oxygen, O2 ¹Δg, during dismutation. Auranofin, gold-phosphine thiols, β-carotene, and metal-sulfur compounds can rapidly quench singlet O2. The quenching of the O2 ¹Δg state, which lies 7752 cm⁻¹ above the triplet ground state, may be due to the direct interaction of the singlet O2 with gold(I) or may require special ligands such as those containing sulfur coordinated to the metal. Thus we have been examining the excited state behavior of gold(I) species and the mechanisms for luminescence. Luminescence is observed under various conditions, with visible emission ranging from blue to red depending on the ligands coordinated to gold(I). Triplet state emission can be found from mononuclear three coordinate Au(I) species, including species which display this behavior in aqueous solution. A description is given of the luminescent three coordinate TPA (triazaphosphaadamantane) and TPPTS (triphenylphosphine-trisulfonate) complexes, the first examples of water soluble luminescent species of gold(I).
Introduction
Khan demonstrated that singlet oxygen is produced in the dismutation of superoxide, O2⁻, a product of dioxygen separation from transport enzymes. In healthy cells, superoxide build-up is prevented by the presence of copper- and selenium-containing enzymes.2 In diseased cells, reduced levels of these enzymes have been found. Formation of the singlet state of dioxygen can be damaging to tissue through attack by this species on conjugated olefins and polynuclear aromatics and from the resulting organic peroxide formation. In "oxidative bursts" of human polymorphonuclear leukocytes, both hydroxyl and oxychloro radicals can also result. These oxidative bursts are inhibited by Au(CN)2⁻ and by gold drugs in the presence of thiocyanate, which is converted into cyanide by the leukocytes.3 Thus it may be important to remove singlet dioxygen efficiently from cells.
Emission from singlet oxygen occurs at 7752 cm⁻¹, leaving the molecule in the triplet ground state. Quenching of the excited singlet state may occur by energy transfer to species having compatible vibrational or electronic states, or by interactions with nearby heavy atoms which have large spin-orbit coupling. For either mechanism to occur, relatively close contact must take place between the excited state oxygen and the quenching species. The solvent is the most reasonable quencher, and for water there are OH vibrational overtones with appropriate energies to accommodate the quenching. In the absence of other quenchers, singlet oxygen in water returns to the ground state with a rate constant around 10⁵ sec⁻¹. In non-aqueous media the rate decreases substantially, a value of 1.7×10³ sec⁻¹ having been determined in CCl4. The lifetime of singlet oxygen in cellular membranes and materials can be expected to lie between these limits in the absence of other quenchers.
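For readers converting between the units quoted here, a few lines of arithmetic relate the emission wavenumber to a wavelength and a rate constant to a lifetime (τ = 1/k); the numbers are those stated in the text.

```python
# Unit bookkeeping for the quantities quoted above.
wavenumber = 7752.0                  # cm^-1, singlet-oxygen emission energy
wavelength_nm = 1e7 / wavenumber     # lambda(nm) = 10^7 / wavenumber(cm^-1) ~ 1290 nm
k_ccl4 = 1.7e3                       # s^-1, decay rate constant determined in CCl4
lifetime_ms = 1e3 / k_ccl4           # tau = 1/k ~ 0.59 ms
print(f"{wavelength_nm:.0f} nm, {lifetime_ms:.2f} ms")
```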
In view of the observation by Corey and Khan that the gold drug Auranofin quenches the singlet state of oxygen in organic solvents, and their suggestion that this may be an important role for gold drugs, we became interested in investigating the possibility that energy transfer might be an important mechanism for the singlet-triplet interconversion. Khan has demonstrated that spin-orbit effects are real in that the lifetime of singlet oxygen in CCl4 is shortened progressively in the presence of PhYCH3 (Y = S, Se, Te): the heavier the chalcogenide, the faster the rate of conversion. However, with the halides PhX (X = F, Cl, Br, I), a similar quenching effect did not occur. Thus, while the heavy gold atom could function as the quencher by a spin-orbit mechanism, the heavy atom itself does not provide the quenching.
Emission from Gold(I) Excited States
We,6 and others,7 have observed that gold(I) complexes can display excited triplet state lifetimes in excess of 10 μsec. These long lifetimes demonstrate that spin-orbit effects alone are insufficient to explain the very rapid quenching of O2 ¹Δ by Auranofin. The lowest energy triplet state for Au(I) occurs about 15,000 cm⁻¹ above the ground state for the free ion, much too high an energy to mix effectively with the singlet delta state of dioxygen (Figure 1). Without a substantial reduction in the energy of this state, energy transfer appears unlikely as a mechanism for quenching the O2 ¹Δ state. However, if the D state is split by ligand field effects, as expected for an excited state that is long lived relative to nuclear motions, an energy match can occur. With ligand coordination by sulfur, one anticipates that the ligand field splitting could be 20,000 cm⁻¹ or more, certainly enough to lower the lowest component of the split D state to the region near 5000 cm⁻¹, allowing an energy match to occur between the excited state triplet of Au(I) and the singlet state of dioxygen. Thus we have been concerned with the excited state properties of Au(I) and the influence various ligands have on these electronic states.
Luminescence of Au(I) compounds has been known since 1970, and since 1992 for mononuclear Au(I) compounds in solution. In mononuclear species which have a trigonal ligand coordination, or in two coordinate complexes which have an Au(I) from another molecule in close proximity (< 3.3 Å), metal-centered excited states are observed. An example of a mononuclear trigonal complex which shows this luminescence is the tris(2-(diphenylphosphino)ethyl)amine complex, Au(NP3)PF6 (Figure 2), which has an AuP3 coordination and no Au-N bonding. With ligands coordinating through sulfur atoms, excitation can occur either on the ligands themselves or from ligand to metal charge transfer (LMCT) states. In the AuS coordinated species, the sulfur non-bonding electrons enable an n-π* excitation on the ligand to produce a ligand centered emission. The development of phosphine ligands which can impart water solubility to metal complexes for purposes of catalysis suggested to us that it might be possible to synthesize gold(I) complexes which are luminescent in aqueous solution. One such ligand is 1,3,5-triaza-7-phosphaadamantane, TPA (Figure 3). This ligand readily forms complexes with Au(I). In addition, the nitrogen atoms on the ligand are sufficiently basic that protonation occurs below a pH of about 4.5. Thus at low pH the ligand-protonated complexes occur, while at high pH unprotonated species can be isolated. The structure of one unusual product formed from Au(TPA)3 has already been reported. The [(TPA)3Au]Cl complex, which has not been crystallized to date, is strongly luminescent in the solid state and in aqueous solution above pH 4.5. Below pH 4, the complex disproportionates into the protonated tetrakis-complex and the bis-complex, neither of which is luminescent. A crystallographically characterized complex has been prepared by methylating a TPA nitrogen atom on each ligand; this complex is shown in Figure 4. By sulfonating the phenyl rings of triphenylphosphine it is also possible to prepare luminescent, water soluble Au(I) complexes. We have established that only up to three of these ligands can coordinate to the metal ion because of the ligand's steric bulk. The visible emission spectrum of the complex is shown in Figure 5, a spectrum which is also sensitive to changes in the solvent. Since the tetrakis-species cannot form, it is believed, and 31P NMR results support, that the tris-complex dissociates into the bis-complex and free ligand as the polarity of the solvent diminishes. Table 1 presents some of these data, including lifetimes.
Multiple State Emission
While studying the physical properties of the (TPA)AuX complexes,12 which crystallize as dimers, we observed that protonation of the TPA leads to a significant lengthening of the Au---Au distance. These complexes crystallize with a crossed "lollipop" [(TPA)AuX]2 geometry. With the (TPA)AuCl complex and its protonated analogue, [(H-TPA)AuCl]Cl, it has been possible to establish that the change in the Au---Au distance is paralleled by a shift of the visible emission to lower energies. This shift is consistent with molecular orbital calculations, which suggest that the HOMO-LUMO energy separation in the complex changes approximately linearly with the Au---Au distance over the range 3.1-3.5 Å. As the distance gets longer, the HOMO-LUMO gap increases. The lengthening of the Au---Au distance appears to be caused largely by the repulsive charge built up on the protonated ligands.
Other distances in the complexes are nearly identical in the protonated and unprotonated species. The emission clearly is metal-based. With X = Br or I, or with sulfur ligands, the electronic structure changes: LMCT is evident with these complexes. With the Br and I species, multiple state emission is observed, with low energy excitation giving rise to an emission that is at a higher energy than the metal-centered emission. With the (TPA)AuBr complex at 78 K, excitation at 320 nm produces a broad, metal-centered emission centering at 647 nm, while excitation at 340 nm leads to a structured emission centered around 475 nm (Figure 6). An energy level diagram generalizing this behavior is presented in Figure 7.
Sulphur Coordinated Complexes
Since compounds with sulfur coordination also display the LMCT emission, one might ask where the metal-centered emission occurs. One possibility is that it has moved even further to low energies, as suggested by the changes observed when Br is replaced by I. Alternatively, there could be a complete crossover between the LMCT and metal-centered states. With compounds which crystallize in a manner precluding any Au---Au interaction, the LMCT emission appears to occur at higher energies than with those compounds which show close (3.1 Å) Au---Au contacts. Emission from ligand-centered states has also been observed from complexes containing triphenylphosphine ligands, as shown in Figure 8. If the metal-centered, ligand field split 3D state has been lowered so that emission is shifted into the IR (beyond 700 nm), it is reasonable to believe that quenching of singlet oxygen occurs by energy transfer associated with a low lying triplet state on the metal. Efforts to test this hypothesis are currently underway. Figure 8. The luminescence spectrum of (Ph3P)AuSPh(o-OMe), a species with a P-Au-S coordination and no Au---Au interaction in the solid state. The luminescence appears to be a fluorescence centered on the triphenylphosphine ligand.
"Chemistry"
] |
Optimal Pore Size of Honeycomb Polylactic Acid Films for In Vitro Cartilage Formation by Synovial Mesenchymal Stem Cells
Background Tissue engineering of cartilage requires the selection of an appropriate artificial scaffold. Polylactic acid (PLA) honeycomb films are expected to be highly biodegradable and cell adhesive due to their high porosity. The purpose of this study was to determine the optimal pore size of honeycomb PLA films for in vitro cartilage formation using synovial mesenchymal stem cells (MSCs). Methods Suspensions of human synovial MSCs were plated on PLA films with different pore sizes (no pores, or with 5 μm or 20 μm pores) and then observed by scanning electron microscopy. The numbers of cells remaining in the film and passing through the film were quantified. One day after plating, the medium was switched to chondrogenic induction medium, and the films were time-lapse imaged and observed histologically. Results The 5 μm pore film showed MSCs with pseudopodia that extended between several pores, while the 20 μm pore film showed MSC bodies submerged into the pores. The number of adhered MSCs was significantly lower for the film without pores, while the number of MSCs that passed through the film was significantly higher for the 20 μm pore film. MSCs that were induced to form cartilage peeled off as a sheet from the poreless film after one day. MSCs formed thicker cartilage at two weeks when growing on the 5 μm pore films than on the 20 μm pore films. Conclusions Honeycomb PLA films with 5 μm pores were suitable for in vitro cartilage formation by synovial MSCs.
Background
Tissue engineering of cartilage requires the appropriate selection of cells [1]. Several cell candidates are currently available, including chondrocytes, induced pluripotent stem (iPS) cells, and mesenchymal stem cells (MSCs). The use of chondrocytes is invasive, as cell collection requires that normal cartilage be sacrificed [2]; however, iPS cells require more time and effort than other cell types for cartilage differentiation [3]. MSCs are therefore more useful, as their cell sources are easy to harvest, the cells proliferate well, and they can be induced to differentiate into cartilage. Synovial MSCs are particularly attractive as a cell source for tissue engineering of cartilage because of their high chondrogenic differentiation potential [4,5].
The appropriate selection of artificial material is also important for cartilage engineering [6] in addition to the selection of synovial MSCs. Several artificial scaffold materials are already in clinical use [7]. In the field of orthopedics, one of the most popular scaffold materials is polylactic acid (PLA) because of its biodegradability; however, cells can have difficulty adhering to it. For this reason, PLA is not yet in common use as a scaffold for cells used clinically for cartilage regeneration. Efforts made to overcome this adhesion problem have included spinning of PLA nanofibers and arranging PLA fibers in lattice patterns [8,9], but the types of PLA scaffolds that are most suitable for tissue formation have not yet been identified.
PLA can be prepared in the form of honeycomb-like sheets, and these are expected to show high biodegradability and improved cell adhesion due to their high porosity [10]. Porous films can be formed from water droplet templates by the breath figure method, and this method has attracted considerable interest because of its simplicity and wide applicability to a variety of materials. Recently, honeycomb films have been prepared by the breath figure technique [11,12]; however, the best pore size for efficient cell adhesion and cartilage formation has not been established. The purpose of the present study was to determine the optimal pore size of honeycomb PLA films for in vitro cartilage formation by synovial MSCs.
Preparation of Synovial MSCs.
All methods were carried out in accordance with relevant guidelines and regulations. All procedures performed in the study involving human participants were in accordance with the Declaration of Helsinki [13]. This study was approved by the Medical Research Ethics Committee of Tokyo Medical and Dental University (M2017-142), and informed consent was obtained from all study subjects.
Human synovium was harvested from the knees of patients with osteoarthritis who underwent total knee arthroplasty operations, and cell culture was performed according to the method established in our previous reports [4,14,15]. Briefly, the synovium was minced and digested at 37°C for 3 h in a solution of 3 mg/mL collagenase (Sigma Aldrich, MO, USA), and the digested cells were filtered through a 70 μm cell strainer (Greiner Bio-one GmbH, Kremsmuenster, Austria). The obtained nucleated cells were cultured in 150 cm² culture dishes (Nalge Nunc International, Thermo Fisher Scientific, MA, USA) in 18 mL alpha minimum essential medium (αMEM, Thermo Fisher Scientific) containing 10% fetal bovine serum (FBS, Thermo Fisher Scientific). The cells were treated with 0.25% trypsin-EDTA (Thermo Fisher Scientific) at 37°C for 5 min, harvested, and cryopreserved as passage 0. For cell culture, the frozen cells were slowly thawed, plated, and incubated for 4 days as passage 1. These passage 1 cells were then replated at 50 cells/cm², cultured for 14 days, and the resulting passage 2 cells were used for analyses.
Calcification was studied by plating 100 synovial MSCs in a 60 cm² dish and culturing for 14 days in culture medium to allow formation of cell colonies. The adherent cells were further cultured in a calcification induction medium consisting of α-MEM supplemented with 50 μg/mL ascorbic acid 2-phosphate, 10 nM dexamethasone, and 10 mM β-glycerophosphate (Sigma-Aldrich). After 21 days, calcification was assessed by alizarin red staining (Merck Millipore, MA, USA).
PLA Honeycomb Films.
Circular PLA films with a diameter of 6 mm and thickness of 5 μm were prepared as three types: a PLA film without any pores (0 μm), one with 5 μm pores, and one with 20 μm pores. Before cell plating, the films were immersed in 70% ethanol, washed with phosphate-buffered saline (PBS, Thermo Fisher Scientific) for hydrophilization, and incubated with FBS overnight at 4°C.
Scanning Electron Microscopy (SEM).
Synovial MSCs (1 × 10⁴ cells) were suspended in 500 μL αMEM with 10% FBS and plated on PLA films in a 24-well plate. After 2 h, the films were fixed in 2.5% glutaraldehyde (TAAB Laboratories and Equipment Ltd., Berks, England) for 2 h and washed overnight in 0.1 M PBS at 4°C. Each film specimen was then postfixed with 1% osmium tetroxide (TAAB Laboratories and Equipment Ltd.) for 2 h at 4°C and dehydrated in graded ethanol solutions (Fujifilm Wako Pure Chemical Corporation). After exchanging with 3-methyl butyl acetate (Fujifilm Wako Pure Chemical Corporation) and critical point drying, the specimen was coated with platinum and the surface was observed by SEM (S-4500, Hitachi Ltd., Tokyo, Japan) [18].
The cartilage thickness was measured by drawing a single line along the long axis of the cartilage, determining the midpoint of both ends of the cartilage on that line, and then drawing seven perpendicular lines within 500 μm on both sides of that midpoint. The midpoints of both ends of the cartilage were determined on each vertical line, and the minimum width through these points was determined. Finally, the average thickness of the cartilage and the coefficient of variance for the cartilage thickness at the 7 points were calculated using ImageJ.
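A small sketch of the thickness statistics (mean and coefficient of variance over the seven measurement points) follows; the seven values are hypothetical, and the coefficient of variance is taken as the standard deviation divided by the mean.

```python
import statistics

def thickness_stats(thicknesses_um):
    """Mean thickness and coefficient of variance over the seven measurement points."""
    mean = statistics.mean(thicknesses_um)
    cv = statistics.stdev(thicknesses_um) / mean  # coefficient of variance = SD / mean
    return mean, cv

# Hypothetical seven-point measurements (micrometres), for illustration only.
mean, cv = thickness_stats([640, 610, 655, 620, 635, 600, 650])
print(f"mean = {mean:.0f} um, CV = {cv:.2f}")
```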
2.10. Statistical Analysis. The Shapiro-Wilk test was used to confirm the normality of the data (P > 0.05). Analyses between two groups were performed with a paired Student's t test. Cell counts and time courses were statistically analyzed by two-way analysis of variance (ANOVA) with Tukey's multiple comparisons test using GraphPad Prism 8 software (GraphPad Software, CA, USA). All statistical analysis methods are described in the figure legends. Two-tailed P values of <0.05 were considered statistically significant.
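As an illustration of this analysis pipeline, the sketch below runs the normality check and the paired t test with SciPy on hypothetical values; the two-way ANOVA with Tukey's test was performed in GraphPad Prism and is not reproduced here (in Python it would typically use statsmodels).

```python
from scipy import stats

# Hypothetical paired thickness measurements from two films, for illustration only.
group_a = [630, 655, 612, 641, 628]
group_b = [458, 471, 449, 466, 452]

# Shapiro-Wilk normality check (P > 0.05 means normality is not rejected).
print(stats.shapiro(group_a).pvalue, stats.shapiro(group_b).pvalue)

# Paired Student's t test between the two groups.
t, p = stats.ttest_rel(group_a, group_b)
print(t, p)  # a two-tailed P < 0.05 is considered significant
```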
SEM Images of Honeycomb PLA Films after Plating MSCs.
Synovial MSCs showed multipotency for chondrogenesis, adipogenesis, and calcification (Figure 1). At 2 h after plating onto the film without pores (0 μm), MSCs showed extended pseudopodia and attachment to the film (Figure 2(a)). MSCs on the 5 μm pore film also showed pseudopodia that extended and adhered to several pores. MSCs on the 20 μm pore film showed submergence of the cell body into the pore and pseudopodia extending around the pore.
Surface Markers of Synovial MSCs Plated onto Honeycomb Films.
One day after seeding onto films with pore sizes of 0, 5, and 20 μm, the MSCs on each film expressed 100% of the positive MSC markers CD44, CD73, and CD90, and less than 5% of the negative MSC markers CD14, CD45, CD106, and CD146 (Figure 2(b)). The expression of the four different integrins in MSCs did not differ among the films, but the expression of integrin α6 was reduced compared with that in cells cultured on plastic dishes (Figure 2(c)).
MSC Numbers Remaining in and Passing through Films.
One day after seeding the MSCs (Figure 3(a)), fewer cells were observed in the film without pores than in the films with pores, and more cells were observed in the bottom of the dish containing the film with 20 μm pores than dishes containing the other film types (Figure 3(b)). The number of cells on the film with no pores was 770 ± 100 cells, which was significantly lower than the cell number of 1110 ± 60 for the 5 μm pore film or 1100 ± 120 cells for the 20 μm pore film (Figure 3(c)). The number of cells on the bottom was essentially 0 cells for both the film with no pores and the 5 μm pore film, which was significantly lower than 170 ± 110 cells noted for the 20 μm pore film. The number of cells not adhered to the film or the bottom of the dish was 2200 ± 100 cells for the film with no pores and was significantly higher than the 1900 ± 60 cells obtained with the 5 μm pore film. The value was also significantly higher than 1700 ± 130 cells obtained with the 20 μm pore film.
The Early Phase of Cartilage Formation.
During in vitro cartilage formation by MSCs, the cell sheet formed by the plated MSCs peeled off the film without pores at one day, assumed a round shape at three days, and was maintained as a cartilage mass at five days (Figure 4(a)). By contrast, the cell sheets formed by MSCs plated on the 5 and 20 μm pore films remained sheet-like and did not peel off the film. Quantitative evaluation showed that the area of the sheet of MSCs plated on the film without pores significantly decreased to 20 ± 10% at 1 day, while the area of the sheet of MSCs plated on the films with 5 and 20 μm pores was maintained at 100% for 5 days (Figure 4(b)).
Effect of Pore Size on Cartilage Formation.
Synovial MSCs were plated on the three types of PLA films, cultured in the chondrogenic induction medium for two weeks, and then observed histologically (Figure 5(a)). MSCs plated on all three PLA films produced extracellular matrix, as confirmed by red staining with safranin O, and they formed cartilage. The MSCs plated on the films without pores were spherical in shape, whereas the MSCs plated on the films with pores were sheet-like. The thickness of the sheet-like cartilage formed by MSCs plated on the 5 μm pore film (Figure 5(b)) was 630 ± 130 μm and was significantly greater than the 460 ± 100 μm thickness of the sheet-like cartilage formed by plating MSCs on the 20 μm pore film (Figure 5(c)). The coefficient of variance for the thickness of the sheet-like cartilage measured at seven different points was 0.05 ± 0.01 for MSCs seeded on the 5 μm pore film and was significantly lower than the 0.15 ± 0.11 measured for MSCs seeded on the 20 μm pore film. This suggests the formation of a more even cartilage by MSCs seeded on the 5 μm pore film than on the 20 μm pore film.
3.6. Cartilage Formation of MSCs Plated on 5 μm Pore PLA Films. Synovial MSCs plated on 5 μm pore PLA films and cultured in the chondrogenic induction medium for three weeks showed multilayered cells above the film and a small number of cells below the film ( Figure 6). A few layers of cells were observed above and below the film and a slight cartilage matrix was produced at one week, and the thickness of the cartilage increased at two weeks and further increased to 300-500 μm at 3 weeks. The thickness ratio of the cartilage above and below the film was approximately 3 : 1.
Discussion
Synovial MSCs seeded on PLA films without pores exhibited extended pseudopodia. Some reports have indicated that the pseudopodia perform the function of cell adhesion [18,19]; however, in the present study, the number of adherent cells on the film without pores was only about one third of the number on the PLA films with pores. This finding does not mean that the pseudopodia of synovial MSCs have no ability to adhere to films without pores; rather, it means that the pseudopodia of synovial MSCs are not sufficiently adhesive to withstand removal of the film from the dish, fixation of the cells with formalin, and microscopy observation. In other words, pores of a suitable size are useful to ensure efficient functioning of the pseudopodia. The expression of surface markers by the synovial MSCs that adhered to the film did not differ with the presence or size of the pores, but the expression of integrin α6 was reduced compared to cells cultured on plastic dishes. The PLA film we used is a thin sheet of less than 10 μm thickness and has softer physical properties than a plastic dish. The expression level of integrin α6 might be altered when MSCs adhere to soft PLA materials, since the expression levels of other integrins depend on mechanosensing of the rigidity of the materials by the adhering cells [20][21][22][23][24].
Chondrogenic induction was observed in MSCs plated on the PLA films without pores, but the cell sheet peeled off from the films after one day and formed a spherical cartilage mass after two weeks. The ascorbate-2-phosphate contained in the chondrogenic induction medium promotes the production of extracellular matrix and the aggregation of MSCs [25]. The observation of peeling of the cell sheets from the films during the process of cartilage formation indicates that the strength of aggregation of the MSCs is greater than the strength of adhesion of the MSCs to the PLA films without pores.
MSCs plated on 5 μm pore films formed the thickest cartilage among the three types of films used in this study. This is because the MSCs did not fall through the pores after plating; instead, they firmly adhered to the pores via their pseudopodia. This firm adhesion prevented the cells from peeling off after chondrogenic differentiation. The chondrogenic differentiation of MSCs using scaffolds is affected by the size of the scaffold pores [26,27]. In the current study, the film with the 5 μm pore size was the best of the three tested films for cartilage formation by synovial MSCs under the conditions used here. MSCs seeded on 20 μm pore films formed thinner and more uneven cartilage compared to MSCs seeded on 5 μm pore films. The number of cells that adhered to the film one day after seeding was the same, but the number of cells that passed through the film and fell to the bottom was higher for the 20 μm pore films.
Although the MSCs extended their pseudopodia and were caught in the pores, some of the cells passed through the 20 μm pores, probably because the MSC body is less than 20 μm in diameter [28] and therefore smaller than the pores. One possibility why the thickness and smoothness of the cartilage formed at 2 weeks differed between the films with 5 μm and 20 μm pores, even though the number of cells remaining in the film at 1 day was the same, might be that cells fell through the 20 μm pores from 1 day to 2 weeks after seeding, especially in the early period before the cartilage matrix was fully formed.
The pore size in honeycomb films can be adjusted in a range from several tens of nanometers to tens of micrometers by controlling the template water droplets by the breath figure method. The honeycomb films have the added advantage of forming uniformly sized pores when compared to conventional porous scaffolds with size-distributed pores [11]. The pore sizes reported for honeycomb films have been found suitable for proliferation and function of endothelial cells [29], myocytes [30], and hepatocytes [31]. Kawano et al. compared PLA honeycomb films with 1.6 μm, 3.2 μm, and 4.7 μm pores and reported that the number of adherent bone marrow MSCs increased as the pore size increased [32]. This finding concurred with our results and suggests that a suitable pore size for MSC adhesion to a PLA honeycomb film is approximately 5 μm.
One limitation of our study is that we did not conduct in vivo investigations of the effectiveness of implanting honeycomb PLA films plated with MSCs into cartilage defects. Therefore, we cannot say at this time whether cartilage defects will be repaired when transplanted with undifferentiated MSCs adhering to 5 μm pore honeycomb PLA films or whether the MSCs will require further ex vivo differentiation into cartilage sheets. We also do not know whether the PLA film will be absorbed after implantation, or when this will occur. Another point that requires clarification is whether a PLA film will be recognized as a foreign substance in the joint and cause inflammation.
We compared in vitro cartilage sheet formation by synovial MSCs using honeycomb PLA films with 0, 5, and 20 μm pores. MSCs on the 5 μm pore film showed pseudopodia that extended out to several pores. MSCs on the 20 μm pore film showed cell bodies submerged in the pores. MSCs plated on 5 μm pore films formed the thickest and most even cartilage layer among the three types of films. Honeycomb PLA films with 5 μm pores were therefore considered suitable for in vitro cartilage formation by synovial MSCs. | 4,195.8 | 2021-08-02T00:00:00.000 | [
"Medicine",
"Biology"
] |
A preference for educational philosophy and philosophical mindedness among Iranian faculty members at Semnan University of Medical Sciences
Background: The adequacy and efficiency of an educational system in academic settings depend on the teachers' philosophical mindedness and the ruling approach of educational philosophy. Therefore, the lack of knowledge about the philosophical foundations of education can adversely affect the educational system. The current study investigates the philosophical mindedness and educational philosophy of the faculty members of the Semnan University of Medical Sciences. Methods: This descriptive correlational study was conducted on full-time faculty members of Semnan University of Medical Sciences selected by the convenience sampling method in 2020. Zinn's Philosophy of Adult Education Inventory and Komeli's philosophical mindedness questionnaire were used to assess participants' educational philosophy and philosophical mindedness, respectively. Pearson's and Spearman's correlation coefficients and regression analysis were used for the inferential analysis. Results: Data collected from 62 faculty members were finally analyzed. It was found that most of them (56 faculty members, 95.2%) had an average philosophical mindedness, and behaviorism dominated their educational philosophy. The variable components of philosophical mindedness (i.e., comprehensiveness, penetration, and flexibility) were not significantly different between participating faculty members from different faculties (P > 0.05). The highest mean score of philosophical mindedness was related to comprehensiveness (47.54 ± 4.9), followed by penetration (43.40 ± 4) and finally flexibility (32.38 ± 3.7). Based on the results, philosophical mindedness and educational philosophy are significantly correlated. The regression coefficients revealed that, among the elements of philosophical mindedness, flexibility affected the prediction of the tendency towards liberalism and progressivism, whereas comprehensiveness and penetration affected the prediction of radicalism. Conclusion: The results indicated an average level of philosophical mindedness among the faculty members participating in this study. Therefore, courses should be held in their empowerment programs to strengthen the philosophical mindedness of the faculty members; such courses will also positively affect educational philosophy. In addition, courses in critical thinking are required, since this type of thinking goes beyond the ability to solve problems and gives a philosophical orientation to thinking.
programs. 1 As a result, the primary responsibility lies with the faculty members, whose views and teaching methods follow philosophies, and to some extent, the success/ failure of the educational programs depends on their thinking and vision. 2 Faculty members and educational managers generally use a philosophy known as educational philosophy which is of great importance. It can affect learners' personalities and thoughts. The educational philosophy adopted by the faculty members can therefore be a precise indicator of the adequacy and efficiency of an educational system. 3 The faculty members' perspectives, philosophical mindedness, and personal philosophy significantly affect their work, such that it transforms them from someone with mere teaching skills into an intelligent individual who is given the heavy responsibility of educating others. 1 Education is primarily concerned with fostering the power of thinking, which in turn requires specific tools, including a philosophical mind.
An individual's philosophical mind can be discerned through their way of thinking, their manner of dealing with problems and tendencies, and mental characteristics that can be observed in various aspects of their behavior. By providing ample learning opportunities and organizing educational materials, philosophically minded teachers create an environment conducive to learning and thereby provide a framework for the comprehensive development of students' thinking. 4 In addition to philosophical mindedness in educational activities, it is essential to select a philosophical foundation, since the ruling approach of educational philosophy determines the objectives, values, and worldview of an educated person. Educational systems that have not been built on the values of a particular educational philosophy have a minimal chance of success. After selecting an appropriate educational philosophy, planning becomes easier, and pointless plans without objectives and content are avoided. 5 The importance and necessity of this study arise from the fact that the current educational system at Iranian Universities of Medical Sciences requires new concepts to open up new horizons for knowledge development. Philosophical mindedness and educational philosophy have received insufficient attention due to the priority placed on quantitative education and the lack of opportunities for quality improvement.
Moreover, neglect and unawareness of the philosophical foundations of educational programs constitute one of the most significant factors that may adversely affect the educational system at many universities. Medical education centers do not currently adapt their teaching and training methods to a practical educational philosophy. It is therefore necessary to explore the main flow of prevailing ideas or philosophy to understand how the educational system is transforming, as well as its goals and policies. Accordingly, the current study was conducted to investigate the philosophical mindedness and educational philosophy of the faculty members of the Semnan University of Medical Sciences (SUMS). The research questions are:
1. To what extent do faculty members of the SUMS have a philosophical mindedness?
2. What is the status of educational philosophy among the faculty members of the SUMS?
3. Is the philosophical mindedness related to the educational philosophy of the faculty members of the SUMS?
4. Can philosophical mindedness predict educational philosophy among the faculty members of the SUMS?
Methods
This descriptive correlational study was conducted on full-time faculty members of the SUMS, selected by the convenience sampling method in 2020. A sample size of 62 participants was calculated based on a sample size formula and a study by Mahboobi et al. 6 Faculty members who agreed to participate in the study were asked to complete the questionnaires at their offices and return them within a week.
Faculty members' educational philosophy was measured with the 15-question Philosophy of Adult Education Inventory (PAEI). Each question has five options, corresponding to liberalism, behaviorism, progressivism, humanism, and radicalism. The scores of the 15 questions were summed separately for the five philosophies, yielding each faculty member's score in each philosophical school. The range of scores in each of the five schools was between 15 and 105. A score of 95-105 indicates a strong agreement with a philosophical school, a score of 55-65 indicates a unique philosophical school, and a score of 15-25 indicates a strong disagreement with a philosophical school. 7 Zinn reported the alpha of the overall test to be 0.75, 8 and Spurgeon and Moore reported the alpha of the test to be 0.94. 9 A study by Zandevanian et al 2 examined the face and criterion validities of the tool for use in Iran. Pearson's, Spearman's, and Kendall's tau-b correlations were calculated between the five subscales of the PAEI and the researcher-made form to examine the criterion validity; all the correlation coefficients were significant (P < 0.01). For examining the reliability, the Cronbach's alpha of the tool was calculated and reported as 0.92. 2 Philosophical mindedness was assessed using the 42-item philosophical mindedness questionnaire, developed according to the book entitled the Philosophical Mind in Educational Administration, written by Philip Smith 10 and prepared for use in Iran by Komeli Asl. 11 The questionnaire has a total of 42 items, where 15 inquire about the comprehensiveness dimension, 14 about the penetration dimension, and 13 about the flexibility dimension of philosophical mindedness. The answers are on a five-point scale (i.e., never, rarely, sometimes, almost always, and always). The scores of the 42 questions are added together to yield the final score. The minimum possible final score is 42, and the maximum is 210. A score between 42 and 84 indicates poor philosophical mindedness, a score between 84 and 126 indicates average philosophical mindedness, and a score higher than 126 indicates strong philosophical mindedness. The Cronbach's alpha coefficient was 0.753 for the comprehensiveness dimension, 0.721 for the penetration dimension, 0.788 for the flexibility dimension, and 0.788 for the entire philosophical mindedness questionnaire. 11
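As a concrete illustration of these two scoring schemes, the sketch below encodes them in JavaScript; the response encoding and data layout are assumptions for illustration, not the instruments' official scoring procedure.

// Minimal sketch of the two scoring schemes described above.
// Band thresholds follow the text; the data layout is illustrative.

const PM_BANDS = [
  { max: 84,  label: "poor" },     // 42-84
  { max: 126, label: "average" },  // 84-126
  { max: 210, label: "strong" },   // above 126
];

// responses: array of 42 integers in 1..5 (never=1 ... always=5)
function scorePhilosophicalMindedness(responses) {
  const total = responses.reduce((sum, r) => sum + r, 0); // 42..210
  const band = PM_BANDS.find(b => total <= b.max).label;
  return { total, band };
}

// paeiResponses: for each of the 15 questions, an object with one
// rating per philosophy, e.g. { liberalism: 6, behaviorism: 3, ... }
function scorePAEI(paeiResponses) {
  const schools = ["liberalism", "behaviorism", "progressivism", "humanism", "radicalism"];
  const totals = Object.fromEntries(schools.map(s => [s, 0]));
  for (const item of paeiResponses) {
    for (const s of schools) totals[s] += item[s]; // each school sums to 15..105
  }
  return totals;
}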
Statistical analysis
The extracted data were analyzed using SPSS for Windows version 11.5 (SPSS Inc., Chicago, IL, USA). Descriptive statistics were used for demographic information, and the results were reported as mean ± standard deviation and frequency (percent). The normal distribution of the data was assessed using the Kolmogorov-Smirnov and Shapiro-Wilk tests. Mean values were compared using the t test and ANOVA. Pearson's and Spearman's correlation coefficients and regression analysis were used for the inferential analysis. In all tests, P < 0.05 was considered statistically significant.
Results
Among the 62 participants in this study, 39 (62.9%) were females, 31 (50%) were from the faculty of medicine, 24 (38.7%) held a Ph.D. degree, and 24 (38.7%) were assistant professors. The participants' average work experience was 11.37 ± 6.55 years. The detailed demographic information of the participants is provided in Table 1. The quantitative data were normally distributed in all groups (P > 0.05).
Question 1: To what extent do the faculty members of the SUMS have a philosophical mindedness?
Among the faculty members participating in this study, 56 (95.2%) had average philosophical mindedness, two (3.2%) had strong philosophical mindedness, and only one (1.6%) had poor philosophical mindedness. Most of the faculty members in nursing and para-medicine (92.3%), rehabilitation (90.9%), medicine (100%), and dentistry (85.7%) had average philosophical mindedness (Figure 1).
As shown in Table 2, no significant differences were found between the faculty members of the different faculties in terms of their general philosophical mindedness and each of its dimensions (i.e., comprehensiveness, penetration, and flexibility) (P > 0.05). The highest mean philosophical mindedness score was related to comprehensiveness (47.54 ± 4.9), followed by penetration (43.40 ± 4), and the lowest score pertained to flexibility (32.38 ± 3.7). The mean total score of the faculty members' philosophical mindedness was 123.3 ± 10.
Question 2: What is the educational philosophy among faculty members of the SUMS?
Based on Table 2, there were no significant differences in educational philosophy tendencies between the faculty members of the different faculties (P > 0.05). Additionally, Figure 2 shows the educational philosophy tendencies within each faculty. In assessing the overall educational philosophy among the faculty members, the minimum mean score was related to liberalism, and the maximum mean score was related to behaviorism (Table 2).
Question 3: Is the philosophical mindedness related to the educational philosophy of the faculty members of the SUMS?
The Pearson correlation matrix in Table 3 shows a significant relationship between philosophical mindedness and educational philosophy. The correlation coefficients between philosophical mindedness and liberalism, behaviorism, and progressivism were -0.4, 0.51, and 0.38, respectively (P < 0.05).
Furthermore, Spearman's correlation test showed no statistically significant relationship between the level of education and the tendencies toward educational philosophies, or between the level of education and the three dimensions of philosophical mindedness (P > 0.05). In the same test, gender was significantly related only to the tendency toward radicalism, with a higher mean radicalism score for women than for men (P = 0.004).
Question 4: Can the variable of philosophical mindedness predict the educational philosophy of the faculty members of the SUMS?
The results of the multivariate regression analysis indicate that flexibility explained 0.27 of the variance of liberalism and 0.30 of the variance of progressivism (Table 4).
The estimated significant models, in the form of regression coefficients, are presented in Table 5. As shown, flexibility can predict liberalism and progressivism, and the t test for the significance of the regression coefficients is significant at the 0.05 level. The regression coefficients of flexibility were -0.27 for liberalism and -0.30 for progressivism. In addition, comprehensiveness and penetration can predict radicalism, with the t test for the significance of the regression coefficients also significant at the 0.05 level. In the first model, the coefficient of comprehensiveness for radicalism was -0.25; in the second model, the coefficients of comprehensiveness and penetration were -0.45 and 0.38, respectively.
Discussion
The results of the current study showed that most of the participating faculty members had average philosophical mindedness, and that the comprehensiveness and flexibility dimensions had the highest and lowest mean scores among the dimensions of philosophical mindedness, respectively.
Philosophical mindedness is the individual's capability and readiness to value, to make correct judgments, and to maintain the habit of creative thinking. 10 Philosophically-minded faculty members are guided by their philosophical minds in the teaching process and use creative and active teaching methods. Philosophical mindedness positively affects the stages of planning and preparation, the teaching process, evaluation, classroom management, and professional collaboration, and increases the quality of teaching. The quality of teaching is expected to be at the desired level in environments where the faculty members have good philosophical mindedness. 12 Faculty members with a high level of comprehensiveness in their philosophical mindedness can visualize the general educational context in their minds when teaching. When encountering a particular problem, they can understand its connection to related issues and provide various solutions. In addition, faculty members with this mental skill wish to assess their students' behaviors in different situations, conduct an in-depth study of their students' abnormal behaviors, discover the reasons for these behaviors, and attempt to modify them. 12 This study investigated the educational philosophies of the faculty members of Semnan University of Medical Sciences. The results showed no statistically significant difference in the tendency toward liberalism, behaviorism, progressivism, humanism, and radicalism, which is consistent with the findings of McKenzie's study. 13 In our study, the lowest mean value in the dimensions of educational philosophies was related to the tendency toward liberalism, and the highest mean was related to behaviorism. In a study by Zandevanian et al in Iran, who used the PAEI to measure the educational philosophy of the faculty members in Yazd, the highest tendency was observed toward behaviorism and progressivism, and the lowest toward liberalism. 2 In addition, most faculty members tended toward behaviorism in studies by Zandevanian et al, 2 McKenzie, 13 Moore and Spurgeon, 9 Williams, 14 Boone et al, 3 and Lehman. 15 In the study by Boone et al, 3 the least tendency among the faculty members was toward liberalism. The results of these studies are consistent with our study. The analysis of these results revealed that liberalism has several distinctive features that may have caused the lower tendency toward it. In liberalism, there is a profound gap between theory and practice, the emphasis is on conveying theoretical knowledge and content, and theoretical discussions are of higher value than practical and professional ones. Another criticism of liberalism concerns its elitist approach and efforts to form an elite, which contradict the common slogan of "education for all" in today's adult education. Liberalism theorists recommend education for a limited number of people and consider professional education sufficient for the public. Another point is that, in liberalism, the faculty member is an expert conveyor of knowledge, and the teaching mainly relies on lectures. 3,7 A criticism of this viewpoint may be that the superiority of theoretical knowledge is less acceptable in today's world, and medical sciences education seeks to eliminate the gap between theory and practice. In our study, the good philosophical mindedness of the faculty members not only confirms their creative thinking but also shows their ability to use lectures in conjunction with modern and active teaching methods.
The teaching methods used by these faculty members are often student-oriented, and the teacher is not the only presenter in the classroom. This approach shows that the participating faculty members were less inclined toward liberalism, which is a positive point in education.
In this study, the highest tendency was toward behaviorism. Behaviorism emphasizes the principles of positivism, objective study, and the relationship between the environment and the behavior under its control, and it was the dominant paradigm of the 20th century. Behaviorism is founded on the philosophy of objectivism. Its learning approaches emphasize central or critical skills and the outcome of learning. In this philosophy, the teacher's function is to shape the students' behavior. 16 Behaviorism focuses on the students' acquisition of specific skills and competencies through a structured sequence. In this approach, the ultimate goals are determined based on a written outline, the learning content and activities are sequenced according to these goals, and the results are evaluated against the predetermined goals. 16 The final goal of teaching-learning, especially in medical sciences education, is a change in behavior due to experience. In medical sciences education, the training proceeds step by step, and at every stage an evaluation is performed to assess whether the behavioral goals have been achieved. First, the students pass basic sciences courses and then specialized subjects. Practical learning activities are presented in clinical skills development centers that simulate healthcare centers. Students are admitted to real clinical centers if they obtain an appropriate score. In medical sciences education, the faculty member acts as a behavior engineer, 17 who conveys information to the students, controls their amount of learning at different stages, strengthens desirable activities in them, and manipulates and controls the frequency, duration, and intensity of the students' behaviors at all stages of learning. 2,14 In the current study, the tendency toward liberalism decreased as the flexibility of philosophical mindedness increased. Through flexibility, issues are investigated from different aspects, and appropriate responses are given according to the situation. 11 In liberalism, by contrast, the curriculum comprises theoretical and eternal facts (timeless virtues) found in theology, dialectic, mathematics, literature, and philosophy. This knowledge concerns constant subjects that are recognized only through reason and have remained unchanged over time. 7 With philosophical mindedness, a person does not confine himself to tangible things but uses the innovative and creative power of the mind to understand tangible things better. The philosophically-minded individual takes his mind off the real world and tries to explain and interpret the reasons and evidence for specific matters. Hence, promoting philosophical mindedness and its dimensions conflicts with liberalism. 4 The findings of this study showed that increasing the comprehensiveness and penetration dimensions of philosophical mindedness increased the tendency toward radicalism. People with greater penetration always ask questions about old and even new issues and thoughts and explore them to find their truth through reasoning. This perspective leads one to abandon worthless matters and to pursue change based on the facts. In this view, socio-economic and cultural issues play an essential role in effecting changes, since comprehensiveness and penetration allow issues to be investigated from different aspects, with responses emerging according to the situation. 4 Radicalism emphasizes the role of education and training as a means to create major social, political, economic, and cultural changes.
This philosophical approach aims to provide comprehensive education on social, cultural, and political principles and societal changes. 7 In the current study, increasing the flexibility of philosophical mindedness increased the tendency toward progressivism. Progressivism is influenced by pragmatism. Pragmatism views existence as changing and asks how humankind can experience and reconstruct it; accordingly, the primary goal of education and training in this school is the continuous reconstruction of experiences. In this school, scientific methods are used to solve social problems, and correct judgment is one of the crucial objectives of pragmatism. Using the students' interests, experiences, and findings to acquire new experiences is a fundamental principle of pragmatic epistemology. 18 This school is consistent with the flexibility dimension, i.e., patience and reflection in judgment, and avoids the psychological rigidity that arises in its absence. This philosophy seeks to support responsible participation in society to transfer practical knowledge and problem-solving skills to the learners. 5 The current study found no statistically significant relationships between the level of education and educational philosophy tendencies, or between the level of education and philosophical mindedness or its three dimensions. The absence of a relationship between education level and educational philosophies is consistent with a study by Buckingham. 19 In the current study, the tendency toward radicalism was greater in women than in men. Regarding the effect of gender on educational philosophy and philosophical mindedness, it should be noted that gender alone cannot be considered the determining factor. Our findings are consistent with the results of Wingenbach, 20 who believed that women have a greater tendency toward radicalism due to their focus on social change, empowerment, and equality.
In the current study, increased work experience was associated with a greater tendency toward humanism. Our results are consistent with a study by Farhadian et al, 21 who found that the mean humanism score of faculty members with more work experience was higher than that of those with less work experience. Our study suggests that work experience enriches the faculty members' understanding of humanism: over their years of teaching and interaction with students, they become more familiar with the students' characteristics and learn to use more effective teaching methods for adult students. In other words, as a result of experience, the faculty member becomes a facilitator in the teaching process, and the learner assumes responsibility for learning. 21
Conclusion
In general, the findings of this study showed that the faculty members of the Semnan University of Medical Sciences had average philosophical mindedness. They had the least tendency toward liberalism and the most tendency toward behaviorism. Good philosophical mindedness positively affected the stages of planning and preparation, teaching, evaluation, class management, and professional participation, and increased the quality of teaching.
With good philosophical mindedness, a person does not confine himself to tangible things and attempts to explain and interpret the reasons and evidence for specific things. Hence, the promotion of philosophical mindedness and its dimensions conflicts with liberalism.
The philosophical mind leaves aside what is worthless and attempts to make changes based on facts, which is consistent with radicalism. It uses scientific methods to solve social problems, which is consistent with pragmatism. This philosophical approach seeks to support responsible social participation to transfer practical knowledge and problem-solving skills to learners.
Finally, it is expected that where the faculty members have philosophical mindedness, the quality of teaching will be at the desired level.
Limitations and suggestions
The absence of philosophical mindedness and educational philosophy instruments developed in accordance with Iranian-Islamic philosophy meant that the measurement of the study objectives was not fully appropriate to the culture and traditions of this university. The researchers therefore recommend designing such instruments corresponding to Iranian-Islamic culture.
This study only investigated the philosophical mindedness and educational philosophies of faculty members of medical sciences who taught theory courses. The researchers found no report in the country with similar objectives. We therefore recommend conducting further studies with larger sample sizes in different settings, such as clinical courses.
The participants of this study had average philosophical mindedness. Therefore, it is suggested to hold courses to strengthen the philosophical mindedness of the faculty members in their empowerment programs; improvement in philosophical mindedness could in turn positively affect their educational philosophies. Critical thinking encompasses high-level intellectual and mental skills, such as judgment, questioning, reasoning, seeking truth, analysis, comprehensiveness of thought, objectivity (apparent enlightenment), neutrality, flexibility, and rational reflection. Accordingly, improving critical thinking skills increases abilities such as comprehensiveness, penetration, and flexibility, which are the same as the dimensions of philosophical mindedness. Therefore, it is suggested to hold critical thinking courses for students and faculty members in medical sciences. | 5,411 | 2022-09-11T00:00:00.000 | ["Education", "Philosophy", "Medicine"] |
5DMETEORA FRAMEWORK: MANAGEMENT AND WEB PUBLISHING OF CULTURAL HERITAGE DATA
Cultural Heritage (CH) management software represents virtual information in various ways, aiming either at usability and long-term preservation or at interactivity and immersiveness. A single web-based framework that couples the organization of geospatial, multimedia and relational data with 4D visualization, Virtual Reality (VR) and Augmented Reality (AR) is presented in this paper (https://meteora.topo.auth.gr/5dmeteora.php). It comprises the 5dMeteora platform and the Content Management System (CMS) for uploading, processing, publishing and updating its content. The 5dMeteora platform integrates a responsive 3D viewer of high-resolution models on the basis of 3DHOP (3D Heritage Online Platform) and the Nexus.js multi-resolution library. It offers data retrieval and interpretation mechanisms through navigation tools, clickable geometries in the 3D scene, named hotspots, and semantic organization of metadata. Its content and interactive services are differentiated based on the scientific specialty or the field of interest of the users. To achieve the sense of spatial presence, VR and AR viewports are designed to give a clearer understanding of the spatial bounds and context of 3D CH assets. The proposed CMS allows dynamic content management, automation of 3DHOP's operations regarding 3D data uploading and hotspot definition, real-time preview of the 3D scene, as well as extensibility at all levels (e.g., new data types). It is built upon a MySQL Database Management System and developed with PHP scripting, back-end JavaScript and Ajax controllers, as well as front-end web languages. The database maintains and manages the entities of every type of data supported by the platform, while encryption methods guarantee data confidentiality and integrity. The presented work is the first valid attempt at open-source software that automates the dissemination of 3D and 2D content for customized eXtended Reality (XR) experiences and reaches multiple levels of interactivity for different users (experts, non-experts). It can meet the needs of domain experts who own or manage multimodal heritage data.
INTRODUCTION
Photogrammetry and computer vision have advanced to the point where real-world applications can be achieved in a meaningful way. In the field of CH, they are an essential and cost-effective tool for documentation and long-term digital preservation. Once displayed online, geometrically accurate 3D representations increase the scientific value and public awareness of CH. Visualizations enriched with multimedia, metadata and textual information facilitate cross-disciplinary learning and engage the public. The potential landscape is further expanded with dynamic services for data retrieval, semantic enrichment and radical new forms of immersion offered by VR and AR. Beyond these front-end features, content storage, indexing and organization provide resilience for institutions, museums, professionals, researchers and stakeholders of the CH community. Their datasets are often large and diverse, including spatial 2D and 3D data, metadata, non-spatial and sensitive information, as well as different types of multimedia addressed to different users. Considering the difficulty, effort and code literacy needed to promote a multimodal repository online, they either host 3D models as stand-alone experiences limited to exploring a single artifact, or they adhere to the default set of capabilities provided by the software rather than customizing (if possible) its built-in functionalities to their project's requirements (Garcia et al., 2022). Thus, there is a high demand for an end-to-end solution that automates CH content management and publishing to improve the administrator experience.
To fill this gap, this paper presents an open-source web-based framework for the real-time creation of interactive visualizations of multi-modal CH data. The "5dMeteora" framework is the official name of its front-end interface as it currently hosts the 3D geometric documentation data of the UNESCO site of Meteora, Greece. It consists of three technological components: (i) a MySQL Database Management System (DBMS); (ii) a mid-level Content Management System (CMS); and (iii) the platform with integrated 3DHOP's 3D viewer, interactive search and retrieve tools, specialized information as well as VR and AR capabilities.
3DHOP is an open-source framework capable of managing extremely complex 3D meshes or point clouds (tens of millions of triangles/vertices), created by the Visual Computing Laboratory of ISTI-CNR (Potenziani et al., 2015). In the context of the platform, clickable geometries, namely "hotspots", placed on the surface of the 3D models, serve as 3D cartographic symbols that link information and multimedia with their spatial reference. The platform also offers data indexing and sorting in tables as well as advanced querying and filtering of search forms across different multimedia collections. Content can be personalized based on the field of interest, the target audience or the specialization of the end-user. Finally, the users can explore 3D CH assets that are not accessible or do not currently exist through a VR viewport and two types of AR sessions (location-based and relative positioning).
The administrators of the framework can preview, upload, update and maintain these front-end functionalities through the first-ever CMS dedicated to multi-resolution 3D models of the Nexus library. Nexus is a multiresolution visualization library supporting interactive rendering of very large surface models. The CMS abstracts 3DHOP's logic into a control panel for customization and preview of the models and hotspots of the 3D scene. Text, images, videos and pdf files are posted online after being uploaded, edited and correlated with the hotspot(s), the tags and the field of expertise they refer to in customizable fields. Therefore, the DBMS records are accessed spatially, by location, semantically, by attributes, and thematically. The CMS maintains a log registry of each session, diagnosis of structured errors and exception handling for administrator inputs.
Commercial web-visualization software
Multiple web-based frameworks offering various forms of data presentation, tools and interactions have arisen (Peinado-Santana et al., 2021). Each one entails different code literacy skills, technological pipelines, data sizes and 3D formats. The best-known proprietary ones for automatic content distribution are Sketchfab, echo3D, Vectary and p3d.in, offered at different pricing tiers. Sketchfab is a popular, easy-to-use solution for 3D model hosting and sharing, optimized for fast, on-demand loading and VR/AR inspection (Sketchfab Inc., 2012). Echo3D is a 3D asset management platform for developers that provides tools and cloud infrastructure to manage, update, and stream 3D content to real-time 3D/AR/VR and Metaverse applications and games (echo3D Inc., 2023). It offers a CMS and content delivery network (CDN) to build the back-end system of the 3D model viewer, which can then be integrated into React.js and WebGL for web exploitation, AR SDKs and game engines. Similarly, Vectary is a web publishing software with a dashboard for the creation, editing and sharing of 3D experiences without any programming skills (Vectary Inc., 2023). It supports both 2D and 3D file uploading, advanced settings for appearance tuning (lighting, materials, texture, effects) and ready-to-use interactions such as click and action-based events. Finally, p3d.in hosts and instantly visualizes online, or integrates into webAR, the products of 3D modelling software like Blender, 3DSMax, Rhino, etc. (p3d.in TM, 2010). Unlike the aforementioned platforms, it does not have an admin interface layout, but it speeds up the publishing procedure.
Since the presented software is not explicitly designed for the CH sector, the assets are often presented completely out of context. Moreover, high cost, limits on size, geometric complexity and formats, as well as the unavailability of code modifications and extensions for customizing or adding new features, are deterrent factors for heterogeneous and high-quality datasets. It must be noted that Sketchfab gives small institutions with minimal resources the potential to disseminate their own 3D digitised content. In 2020, it launched the program "Sketchfab for Museums and Cultural Heritage", authoring business accounts for museum professionals and cultural institutions including the Minneapolis Institute of Art, Chile's Museo Nacional de Historia Natural and the University of Dundee Museum Collections in the UK. The rest of the literature review emphasizes open-source and commercial software that incorporates automatic or semi-automatic procedures for online 3D visualization.
Non-commercial CH management approaches
ATON is an open-source framework that lies upon the Three.js library for creating cross-platform CH applications of high fidelity (Fanini et al., 2021). It comes with state-of-the-art modules and components for WebXR displays, querying, semantic annotations, real-time collaborative sessions and more. It is actively deployed by research projects in archaeological 3D reconstructions (Fazio et al., 2022; Demetrescu et al., 2023), museum collections (Gonizzi Barsanti et al., 2018) and gamification (Turco et al., 2019). Resurrect3D, a customizable platform comprising a 3D rendering engine with a toolset for domain experts and a front-end infrastructure, has been proposed as an alternative to Sketchfab (Romphf et al., 2021). Based on the Three.js library, it focuses on UI extensibility of its basic scene editing and interaction tools. Kompakkt also adheres to the design principles of Sketchfab, with the difference that it enables collaborative annotation of 3D models with multimedia (Kompakkt.de, 2018). Last but not least, Smithsonian Voyager emphasizes digital storytelling through a customizable set of authoring and quality control tools, including adding annotations, guided tours and articles (Smithsonian Digitization, 2021).
After a comparative analysis of eight institutional and eleven commercial repositories, Champion and Rahaman (2020) reveal common inadequacies, such as the lack of a specialized and integrated 3D model viewer with measurement tools, of more comprehensive and useful metadata, of the ability to link to archival records, and of the assignment of a unique DOI or ID to the hosted models. In fact, the design of security features matters significantly for domain experts in order to ascertain data copyright and integrity. From a deployment perspective, data ownership is mainly involved in the control dimension of a robust database architecture (Asswad and Marx Gómez, 2021). Nataska Statham reviews five online platforms (Google Arts & Culture, CyArk, 3DHOP, Sketchfab and game engines) regarding their applicability to scientific 3D visualizations for documentation, restoration, preservation and multi-disciplinary collaboration (Statham, 2019). Among these, 3DHOP stands out in terms of performance, interactivity and extensibility. Being dedicated to CH content, it efficiently visualizes photogrammetric 3D meshes, characterized by their geometric complexity, huge size, metric precision and detailed texture mapping. Besides trackball, visibility, lighting and camera controls, advanced functions for animations, measurements, sectioning and annotations are supported. However, it lacks a management system for dynamically setting up and updating 3D scene entities as well as authoring services.
Comparison and contributions
The proposed framework advances web publishing of multimodal CH data, addressing some of the issues that previous open-source and institutional approaches have not dealt with or do not offer combined. Firstly, it facilitates the creation of high-resolution 3D visualizations through a CMS with common functionalities similar to those of the proprietary software. To securely protect content ownership, encryption and hashing are applied to file names, credentials and proprietary data. In contrast to the cloud infrastructure of echo3D and ATON, files are stored on the administrator's local server. The 3D viewer of the 5dMeteora platform inherits the scientific tools of 3DHOP, including distance measurement, coordinate point picking and sectioning of the georeferenced models. An innovative feature for direct spatial and relational connection is the dynamic animation of 3DHOP's hotspots when selecting their relative records in the DataTables panel, and vice versa. Finally, a WebXR module is integrated for 3D asset inspection in a VR session mode or overlaid onto the real world through two location-based AR experiences. However, the developed CMS does not currently support customization of the VR and AR experiences.

METHODOLOGY
System Architecture
The 5dMeteora framework lies upon the open-source technologies of the LAMP stack (Linux, Apache, MySQL, PHP), known for its stability, security and flexibility. The programming toolkit comprises standard web development languages such as HTML, CSS, and JavaScript, along with pure PHP scripting and AJAX for handling server-side logic. The CMS is built on a RESTful API service that allows for a modular and flexible architecture. Its web interface provides a user-friendly and intuitive way for non-technical users to create, manage, publish, and edit content on the platform. It consists of the 3DHOP control panel and the media management tools (Figure 1).
The fields of the MySQL Database Management System are dynamically populated with the input values of the respective administrator of the CMS and, at the same time, are retrieved with SQL and PHP queries from the front-end interface for immediate visualization and publishing. Three main entity groups emerge: the "models" and "hotspots" of 3DHOP, the "users", and the "media". 3D rendering and visualization of the high-resolution textured models is delegated to 3DHOP. Its 3D viewer and toolset are embedded in the graphical user interface (GUI) of the CMS for preview purposes and in the front-end platform. The JavaScript libraries jQuery and Bootstrap are also integrated, for event handling and responsiveness respectively. Both the GUIs of the CMS and the platform have been designed to be mobile-friendly, with a responsive layout that adapts to different screen sizes. Finally, the VR and AR applications rely on the A-Frame web framework and the Three.js library (A-Frame, 2021). Because they run independently from the rest of the platform, they can be easily updated, deployed, and scaled to meet demand for specific cases and 3D overlays.
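As an illustration of this client-server flow, a CMS page might persist scene settings through an Ajax call to a PHP endpoint and then refresh the embedded preview. The sketch below assumes this; the endpoint path, field names, and refreshPreview helper are hypothetical stand-ins, not the framework's actual API.

// Hypothetical Ajax controller: persist a model's scene settings via a
// PHP endpoint, then refresh the embedded 3DHOP preview on success.
async function saveModelSettings(modelId, settings) {
  const body = new FormData();
  body.append("model_id", modelId);
  body.append("settings", JSON.stringify(settings)); // position, rotation, scale
  const response = await fetch("cms/api/models/update.php", { method: "POST", body });
  if (!response.ok) throw new Error(`Update failed: HTTP ${response.status}`);
  const result = await response.json(); // server echoes the stored record
  refreshPreview(result.model);
  return result;
}

// Stub standing in for the real viewport refresh (the actual framework
// re-initializes the embedded 3DHOP scene; details are not public).
function refreshPreview(model) {
  console.log("Preview refreshed for model:", model);
}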
Database Management System
The MySQL DBMS creates the logical structure that conceptualizes the framework and develops it into a sophisticated entity-relationship model consisting of 40 tables. A permission mechanism assigns roles and rights to registered users through the "user" entity, which handles the encrypted credentials and personal information per administrator (name, surname, mail, password) and the CMS usage rights per role. "Role" properties enable the determination of the permission levels per user (viewer tools, data retrieval). Beyond natural persons, the "role" entity is associated with the classification of data into thematic categories ("roles"), based on the specialty or field of interest of the platform's visitors. By default, visitors are characterized as "Tourists" and have access to multimedia of general content. Text, images and videos of specific scientific fields are available on the platform once visitors select the corresponding thematic category. Serving multiple specialties with in-depth information on specific scientific fields is implicitly related to the "user" entity due to the common logic of role retrieval (Figure 2). Just as each authorized user corresponds to a level of licensing, each multimedia file corresponds to a thematic category.
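A toy sketch of the role-based filtering logic implied here; the record shape, the general flag, and the default "Tourist" handling are assumptions for illustration, not the actual schema.

// Sketch: filter multimedia records by the visitor's selected thematic
// category ("role"). Visitors default to "Tourist", which only sees
// general-content media.
const DEFAULT_ROLE = "Tourist";

function visibleMedia(mediaRecords, role = DEFAULT_ROLE) {
  return mediaRecords.filter(m =>
    role === DEFAULT_ROLE ? m.general : (m.general || m.roles.includes(role))
  );
}

// Example usage with two hypothetical records:
const media = [
  { title: "Monastery overview", general: true,  roles: [] },
  { title: "Masonry survey",     general: false, roles: ["geospatial"] },
];
console.log(visibleMedia(media));                // general content only
console.log(visibleMedia(media, "geospatial"));  // general + geospatial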
Figure 2. Part of the Entity Relationship diagram with the tables and relations of the "user", "person" and "role" main entities.

In terms of its spatial dimension, the database defines the objects ("models" and "hotspots") and settings of 3DHOP's scene graph. The main entity of "hotspots" is the foreign key that handles records associated with the points of interest of the 3D models, including their position, appearance, geometrical shape, short description, keywords and associations with "models", "places" and "media". Figure 3 illustrates the part of the database Entity Relationship diagram related to the structure of tables and the properties of the correlations of the "hotspots" entity. The organization of a heterogeneous dataset often leads to data redundancy, slowed query performance and inconsistency, unless the principles of normalization are applied. Each media type, namely "text", "image" and "video", has separate tables for the "hotspot" and the "place" it refers to, the field of scientific interest and the "attributes" in the form of keywords. The data are further broken down into more logical units associated with the language and the individual correlation tables and associations. Since the tables are populated by the input data of the current admin of the CMS, rules, size restrictions and specific data types in the value fields are preset to avoid duplicate entries and incorrect inputs.
Back-end interface
The CMS is developed with pure PHP, JavaScript, and Ajax, without any code dependencies. It dynamically manages and updates the content of the platform and has the following capabilities:
• Codeless 3DHOP: Mapping of 3DHOP's scene graph entities and settings into a complete control panel with a viewport for dynamic preview.
• Extension of the "hotspots" tool: Creation of annotations in the form of clickable geometries on the surface of 3D models to be used as a spatial reference for related multimedia. Hotspots can be grouped into spatial or thematic ontologies, called "Places".
• Multimedia editor: Editing of text documents as well as the titles, captions or descriptions of image and video files, linked with related hotspots, keywords and scientific specialty.
• Error and exception handling for invalid admin inputs, as well as warnings in case vulnerability and security issues arise.
Figure 1. System architecture and main components
As already stated, encryption specialized for 3D files, hashing for sensitive data, and an error and exception handling mechanism have been developed for admins. If an exception is thrown within any operation, the database transaction automatically rolls back and the issue is reported to the technical support team. If execution is successful, the transaction is automatically committed. Figure 4 illustrates how the admin's input assigns a "role" to a user, from the available roles served by our database. The diagram is part of the procedure used to correlate the role with the user profile. It is worth mentioning that two hashing values are created as attributes of the option element for validating the integrity between the scripts and the actual data stored in the database. Query logging is also enabled to keep in memory a log of all queries that have been run for the current request. Furthermore, the association of each admin account with a level of rights or a degree of authorization is a secure method of distinguishing data (timestamp and name of the person that uploaded/edited/deleted) and reducing the risks of unauthorized access.
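A minimal sketch of the option-integrity idea described above, assuming SHA-256 over the option value and a data-integrity attribute written by the server; the real framework's hash inputs and attribute names are not specified in the text. The Web Crypto API used here is available in browsers in secure contexts.

// Recompute the hash of an <option>'s value and compare it with the
// server-rendered integrity attribute before accepting the form input.
async function sha256Hex(text) {
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(text));
  return [...new Uint8Array(digest)].map(b => b.toString(16).padStart(2, "0")).join("");
}

async function optionIsUntampered(optionEl) {
  const expected = optionEl.getAttribute("data-integrity"); // set server-side (assumed name)
  const actual = await sha256Hex(optionEl.value);
  return expected === actual;
}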
Front-end interface
The front-end interface of the developed framework integrates a 3D viewer of various high-resolution meshes of CH, intuitive tools for spatial and contextual data correlation, multiple media galleries for storytelling and engaging XR experiences. Before uploading, the 3D files are converted into the multiresolution format of the Nexus.js library that enables progressive and view-dependent loading. Regardless of the data size, loading is quick. The 3D model is divided into a series of increasingly detailed sub-models, with each sub-model representing the same geometry at a different level of detail. Its resolution is constantly optimized, depending on the distance of the camera. On the main page of the platform, the interactive and customizable table system that organizes and filters the attributes of each hotspot of the 3D models is defined by DataTables.js, a plug-in for the jQuery JavaScript library. It is linked directly with 3DHOP through asynchronous events and Ajax callbacks. It retrieves from the defined RDBMS records the information and metadata of each hotspot of the currently visible 3D model of the scene. In server-side processing mode, the DataTables.js plugin sends a request to the server with the parameters of the currently visualized model, search criteria, sorting, and pagination. The server then processes the data and sends back only the hotspot records required from the database, reducing the amount of data that needs to be sent over the network and improving performance. To retrieve the data required by end-user searches in the DataTables and Multimedia pages, complex SQL queries have been developed. In the case of dropdown lists and free-text inputs, only the keyword entities that match the query terms and are relevant to the information requests are retrieved from the RDBMS. The following search query relates the selected place with a hotspot for finding its metadata:
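The query itself did not survive extraction, so the sketch below pairs a standard server-side DataTables configuration with an illustrative reconstruction of the kind of SQL the endpoint might run; the endpoint URL and the table and column names (hotspots, places, place_id, model_id) are hypothetical, and jQuery with the DataTables plugin is assumed to be loaded on the page.

const currentModelId = 1; // id of the currently visualized model (illustrative)

$("#hotspotTable").DataTable({
  serverSide: true,   // delegate search, sorting and pagination to the server
  processing: true,
  ajax: {
    url: "api/hotspots.php",                      // hypothetical endpoint
    data: d => { d.model_id = currentModelId; },  // restrict to the visible model
  },
});

// Illustrative reconstruction of the place-to-hotspot metadata query:
const HOTSPOT_METADATA_SQL = `
  SELECT h.id, h.title, h.description, p.name AS place
  FROM hotspots AS h
  JOIN places   AS p ON p.id = h.place_id
  WHERE p.id = ? AND h.model_id = ?;
`;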
VR and AR displays
The web XR app is built upon A-Frame, which features both marker-based and location-based experiences as well as built-in tracking. The primitives of A-Frame map to the corresponding objects of the Three.js scene graph, including the loaders that support any 3D file format. In the case of the 5dMeteora platform, the 3D assets are converted into the glTF format, which is designed to be lightweight and efficient, making it ideal for use on the Web. After gaining access to the VR sensor data of the device through the WebXR API, the camera is transformed and the content of the VR session is rendered. To achieve marker-less AR, an object gets and stores the device orientation controls, i.e. the accelerometer and magnetic field sensors. Then, the 3D model and the device's camera are positioned at the given location (long, lat) or at a short distance (long, lat + 0.03) in front of the camera. Regarding technical requirements, a WebXR-compatible browser is needed to experience the XR content on the existing devices of end-users, without the need for special gear or hardware. The VR session is also suitable for head-mounted displays (HMDs) such as the Oculus Rift and the HTC Vive. For location-based AR tracking, the end-user's device must have a GPS and an IMU (Inertial Measurement Unit) sensor. The cross-platform applications are supported by Android or iOS mobile devices with technical features that meet the computational requirements of AR technology. Moreover, on a device with multiple cameras, the Chrome browser may have trouble locating the correct one. If the wrong camera is being activated, it is recommended to use Firefox.
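Since the platform's exact positioning code is not shown, the sketch below illustrates one plausible way to realize the placement just described: offset the device's GPS fix by the quoted 0.03 degrees, convert the degree offset to local metres with an equirectangular approximation, and move the A-Frame entity's underlying Three.js node. The entity id and axis conventions are assumptions.

const EARTH_RADIUS_M = 6371000;

// Equirectangular approximation of a small GPS offset in metres.
function gpsOffsetToMetres(origin, target) {
  const dLat = (target.lat - origin.lat) * Math.PI / 180;
  const dLon = (target.lon - origin.lon) * Math.PI / 180;
  const x = dLon * EARTH_RADIUS_M * Math.cos(origin.lat * Math.PI / 180); // east
  const z = -dLat * EARTH_RADIUS_M; // north mapped to -z (one common convention)
  return { x, z };
}

navigator.geolocation.getCurrentPosition(pos => {
  const origin = { lat: pos.coords.latitude, lon: pos.coords.longitude };
  const target = { lat: origin.lat + 0.03, lon: origin.lon }; // offset quoted in the text
  const { x, z } = gpsOffsetToMetres(origin, target);
  const el = document.querySelector("#chAsset"); // A-Frame entity (hypothetical id)
  el.object3D.position.set(x, 0, z);             // object3D is the entity's Three.js node
});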
Content Management System
The CMS has an easy-to-use and responsive Graphical User Interface (GUI) that comprises various 3D and 2D viewports for real-time data preview, along with buttons, lists and other types of inputs for mapping the described operations (Ioannidis et al., 2021). When the admin logs in and uploads a 3D file, the list of the models available for editing is updated (Figure 5). By selecting the desired 3D model, the options that determine its position, rotation and size in the 3D scene are activated for modification and preview. Multiple models can be displayed concurrently by setting their visibility.
The next 3DHOP scene manipulation involves hotspots and places. Hotspots are clickable 3D models or point clouds that can be placed anywhere on the 3D model. A title, a short description and a 3D file of the desired geometry are assigned to each new one (Figure 6). These geometries, of any dimension or shape, can be rendered with a solid color with an alpha channel and are associated with a single place, specific keywords, as well as text, images and videos. When a hotspot is drawn, its preview is displayed in the embedded 3D viewer at the given position and with the given rotation and scale. Thus, hotspots are a clear, concise and easily understood type of 3D cartographic symbolization. Places represent collections of hotspots that share the same geographic or thematic basis. Keywords are assigned to each hotspot and place, specifying their thematic category or content.
The registration and management of the related non-spatial data are delegated to the second tab of the CMS. Text documents, images in JPEG format and videos can be uploaded along with their title, description and alternative ("alt") text. They can be linked to one or more hotspots and to one or more of the following specialties or fields of interest: (i) theological-historical; (ii) geospatial; (iii) archaeological-architectural; and (iv) educational. 3D models and multimedia are accompanied by descriptive keywords that are displayed in dropdown lists with a free-text search field. In addition to ease of access, they contribute to efficient organization, digital identification and enhanced information interoperability.
Front-end Platform
The front-end comprises three pages: the Main Page, the Multimedia Pages with the documentation of each hotspot of the scene, and the XR application. Navigation from the Main Page to a Multimedia Page is accomplished in two ways: (i) either within a pop-up side panel that is depicted on the right side of the 3D viewer after the user enables the hotspot tool and zooms in on the target hotspot; or (ii) within the DataTables panel, located below the 3D viewer, which provides an overview of the relational data that accompanies each hotspot (Figure 7).
3D viewer:
The prominent feature of the Main Page is the 3D viewer of 3DHOP with its built-in toolset for direct interaction with the 3D scene. In addition to standard navigation tools (i.e., panning, zooming in/out and rotating), the viewer supports scientific utilities for georeferenced models. The calculation of point-to-point distances and 3D coordinates directly on the surface of the mesh allows for precise measurement of the dimensions of different parts of the asset and perception of its real-world size and scale. In addition, the hotspots tool activates the display of the geometries that highlight specific areas or points of interest. The selection of a geometry within the 3D scene leads to a smooth camera movement that frames the corresponding area and highlights the relative row of the DataTables. Then, a side window appears with a short description of the visualized area or point of interest, a short description of the place it belongs to, its attributes, as well as a preview of the multimedia material that clarifies it (Figure 8). If the end-users wish, they are redirected to the Multimedia Page for its full documentation.
DataTables.js panel:
The 3D viewer is semantically enriched and clarified by the DataTables panel. It improves data accessibility and indexing in HTML tables, with sorting, paging and filtering capabilities. When the visualization changes, the panel dynamically updates its content. In addition to free-text fields, end-users can narrow down the data based on certain criteria. Dropdown lists enable filtering by the name of 3D models and hotspots, the name of places and the attributes or keywords of each spatial entity. Finally, the button in the second-to-last column performs a smooth animation in the 3D viewer to bring the view to the position of the hotspot of the selected row, while the button in the last column redirects to the multimedia page with its complete multimedia documentation.
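A schematic of this two-way row-hotspot coupling could look like the following; focusHotspot() is a placeholder for the platform's actual camera-animation call, which is not spelled out in the text, and the row data layout is illustrative. Only standard DataTables API calls are used.

const table = $("#hotspotTable").DataTable();

// Clicking a row's "locate" button asks the viewer to frame that hotspot.
$("#hotspotTable tbody").on("click", "button.locate", function () {
  const row = table.row($(this).closest("tr"));
  focusHotspot(row.data().hotspotId);
});

// Called from the 3D scene when a hotspot is selected: highlight its row.
function highlightRowForHotspot(hotspotId) {
  table.rows().every(function () {
    $(this.node()).toggleClass("selected", this.data().hotspotId === hotspotId);
  });
}

// Placeholder for the viewer's smooth camera animation.
function focusHotspot(hotspotId) {
  console.log("Animating camera to hotspot", hotspotId);
}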
Multimedia pages:
The multimedia pages provide complete documentation regarding multiple aspects of the areas or points of interest to which they refer. The text documents, images and video files of each page are organized into separate tabs grouped by data type. The collections are of both generic and specialized content, involving CH-related thematic and scientific fields. Specifically, a dropdown list contains the following categories: (i) theological-historical; (ii) geospatial; (iii) archaeological-architectural; and (iv) educational. Each one gives access to multimedia files that derive from scientific research in the respective field. Besides thematic and scientific categorization, searching and filtering are integrated, either through a dropdown list with all the keywords or by entering keywords in a free-text field. The personalized knowledge access provides a holistic understanding of the significance, context, and value of the CH asset. At the same time, the interactive search and retrieve tools engage non-expert users and convey information in a clear and comprehensive manner.
Web XR applications
VR immerses the end-users in an interactive 3D environment, while AR provides a true sense of scale, depth and spatial awareness. They contribute to the creation of a more memorable and impactful experience for end-users who are interested in learning about the visualized CH asset. Two applications have been developed, serving two different use cases depending on the physical location of the end-users. The first AR application overlays 3D assets onto their real-world position and requires the users' physical presence at the specific geographical location. It enhances the CH site or asset with non-existent, supplementary or non-visible information. The 3D model is overlaid at the given pose and scaled to its real-world proportions (Figure 9).
The end-users can interact with it by zooming in, for a more detailed and effective investigation. The second application is helpful in case the end-users' location is not relevant to the information or experience being provided, or in situations where the actual physical context is not desirable (e.g., dangerous or inaccessible in the real world). At initialization, the 3D asset is displayed in VR, switching easily and quickly to superimposition through the camera at the current location. In VR mode, the asset can be explored in its entirety through a first-person perspective (1PP) viewpoint (Figure 10a). In AR mode, it is overlaid at a relative position and with a certain pose in the surrounding space in which the end-users are located, without the requirement of scanning any visual marker (Figure 10b).
CONCLUSIONS
The 5dMeteora framework enables non-expert administrators to store, enrich and, finally, visualize their data in the 3D viewer and the multimedia pages of the front-end platform. It offers CH experts the potential to configure interactive tools that connect spatial and non-spatial data and to develop XR experiences with 3D digital content. Its VR/AR features help a broader public to perceive and engage with CH content. The CMS of the 5dMeteora framework has increased efficiency in assigning roles to users and media as well as in previewing and finally uploading 3D models along with their associated media directly to 3DHOP. It automates the creation of hotspots and organizes the related information based on the target group's scope, scientific specialty or field of interest. Thus, data of heterogeneous origin, format and field of interest are easily handled by non-expert administrators and disseminated to a variety of target audiences. It can address the needs of the various stakeholders involved in CH management that undertake the role of the administrator, such as government agencies, CH foundations and associations, museums, libraries, universities and research institutions, scientists and professionals of the field, and private companies and consultants, including architectural firms, engineering companies, and conservation specialists. To our knowledge, at present there is no other ready-to-use CMS for the 3DHOP framework. Further work includes the support of additional data formats of media files and the optimization of spatial tracking accuracy, as well as occlusion handling in the AR applications. The potential of extending the CMS in order to customize, preview and publish the XR sessions is under investigation. | 6,969.2 | 2023-06-23T00:00:00.000 | ["Computer Science", "History", "Art"] |
Exploring semantic differences between the Indonesian prefixes PE- and PEN- using a vector space model
Indonesian has two prefixes, PE- and PEN-, that are similar in form and meaning, but are probably not allomorphs. In this study, we applied a distributional vector space model to clarify whether these prefixes have discriminable semantics. Comparisons of pairs of words within and across morphologically defined sets of words revealed that cosine similarities of pairs consisting of a word with PE- and a word with PEN- were reduced compared to pairs of only PE- words, or of only PEN- words. Furthermore, nouns with PE- were more similar to their base words than was the case for words with PEN-. The specialized use of PE- for words denoting agents, and the specialized use of PEN- for denoting instruments, was also visible in the semantic vector space. These differences in the semantics of PE- and PEN- thus provide further quantitative support for the independent status of PE- as opposed to PEN-.
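To make the within- versus across-prefix comparison concrete, the toy sketch below computes cosine similarities for a few word pairs; the three-dimensional vectors are invented placeholders, not the distributional vectors used in the study.

// Cosine similarity between word vectors, compared within and across
// the PE- and PEN- sets. Real distributional vectors would have
// hundreds of dimensions; these toy vectors only illustrate the logic.
function cosine(a, b) {
  const dot = a.reduce((s, v, i) => s + v * b[i], 0);
  const norm = v => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const peWords  = { pesapa:  [0.9, 0.2, 0.1], petani: [0.8, 0.3, 0.2] };
const penWords = { penyapa: [0.2, 0.9, 0.3], penari: [0.1, 0.8, 0.4] };

console.log("within PE-:  ", cosine(peWords.pesapa, peWords.petani));
console.log("within PEN-: ", cosine(penWords.penyapa, penWords.penari));
console.log("across:      ", cosine(peWords.pesapa, penWords.penyapa));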
Introduction
In Indonesian, there are two nominalisation prefixes, PE- and PEN-, which derive nouns with a range of similar meanings (agent, instrument, patient, location, causer) from verbs. Qualitative studies mainly describe PE- and PEN- as independent prefixes (Ramlan 2009; Sneddon et al. 2010), but there are also studies that take them to be allomorphs (Dardjowidjojo 1983; Kridalaksana 2007). It is unclear whether PE- is an allomorph of PEN- or is actually an independent formative (Denistia 2018).
The first prefix, PEN-, is described as having six phonologically-conditioned allomorphs which are in complementary distribution (Ramlan 2009; Sugerman 2016; Sukarno 2017). The N in PEN- is a mnemonic for the nasal assimilation that characterizes most of its allomorphs. For notational clarity, we write the prefixes in upper case and distinguish between their allomorphs using subscripts: PEN_peng-, PEN_pen-, PEN_pem-, PEN_peny-, PEN_penge-; and one non-nasalized allomorph, PEN_pe-. The second prefix, PE-, is clearly similar in form, and has been argued to be very similar in meaning to PEN_pe- (Nomoto 2006).¹ The reason that PE- is taken to be a different prefix is that nouns with PE- are derived from verbs with the prefix BER-, and nouns with PEN- are derived from verbs with MEN- (see, e.g., Benjamin 2009; Dardjowidjojo 1983; Ermanto 2016; Nomoto 2006, 2017; Putrayasa 2008; Ramlan 2009; Sneddon et al. 2010), through a process of affix substitution (e.g., petani "farmer" - bertani "to farm", and penari "dancer" - menari "to dance"). Similar to PEN-, MEN- also has six phonologically-conditioned allomorphs: MEN_meng-, MEN_men-, MEN_mem-, MEN_meny-, MEN_menge-, and MEN_me-.
Verbs with MEN- can be extended with the suffixes -i and -kan (Kroeger 2007; Sneddon et al. 2010; Sutanto 2002; Tomasowa 2007). These suffixes add a further argument: a beneficiary, a causer, or a location (e.g., tulis "to write" - menulisi "to write on something", menuliskan "to write for someone") (Arka et al. 2009; Ramli 2006). Verbs with BER- are found with -kan or -an to express possession and reciprocity (e.g., alamat "address" - beralamatkan "to have an address"; cium "to kiss" - berciuman "to kiss each other"). However, derived nouns with PE- and PEN- do not carry the -i, -kan, or -an suffixes, even though they may correspond to verbs with these suffixes (Nomoto 2006). For instance, pemilik, "owner", is paradigmatically related to memiliki "to own something", with the suffix -i. Importantly, the verb memilik does not exist.
The relation between form and meaning of PE- and PEN- is elucidated further by Chaer (2008), Benjamin (2009), and Sneddon et al. (2010), who reported that these prefixes are occasionally attested for the same base word with either the same or a different semantic role. For instance, PEN- as in penembak and PE- as in petembak are both derived from the base tembak, "to shoot", and denote "someone who shoots" and "shooter (athlete)", respectively. There are also cases in which, given the same base word, the derived form with PEN- expresses the agent and the derived form with PE- expresses the patient. For instance, PEN- as in penyapa and PE- as in pesapa are both derived from the base sapa, "to greet/address", and denote "a person who greets/addresser" and "a person who is greeted/addressee", respectively. Denistia and Baayen (2019) conducted a corpus-based analysis to investigate whether PE- is really an allomorph of PEN-. Their study also included a quantitative analysis of the paradigmatic relation between PEN- and PE- and their corresponding verbal prefixes MEN- and BER-. They argued that PE- and PEN- actually are two different prefixes, since these prefixes reveal different degrees of productivity and also show semantic specialization: PEN- is more productive in forming agents and instruments, whereas PE- primarily forms agents and, to some extent, patients, but not instruments. They also observed that the number of derived words with an allomorph of PEN- is correlated with the number of base words with the corresponding allomorph of MEN-. PE- and its base with BER- do not partake in this correlation; this is an exception to the quantitative paradigmatic relations characterizing the allomorphs of PEN- and MEN-.
In the present study, we used methods from Distributional Semantics Modelling (DSM; Landauer and Dumais 1997) to investigate potential further semantic differences between PE- and PEN-. In DSM, word meanings are quantified by looking at words' contexts, following the insight of Firth (1957: 11) that "You shall know a word by the company it keeps". DSM builds on the observations that 1) words that have similar meanings usually occur in similar contexts (Rubenstein and Goodenough 1965), and 2) words appearing in similar contexts tend to have similar meanings (Pantel 2005). To operationalize this, distributional information of words from large language corpora is brought together in high-dimensional vectors (Turney and Pantel 2010). Thanks to this vector representation, geometric methods that quantify vector similarity can be used to measure the semantic similarity between words of interest.
Methods from distributional semantics have proved useful both for natural language processing (e.g., Alfonseca et al. 2009 in information retrieval; McCarthy et al. 2007 in word sense disambiguation; Cheung and Penn 2013 in textual summarization) and for a range of psycholinguistic tasks, including semantic priming and similarity judgements (e.g., Lowe and McDonald 2000; Lund and Burgess 1996; McDonald and Brew 2004), and studies of morphological processing (Kuperman and Harald 2009; Lazaridou et al. 2013; Marelli and Baroni 2015). Semantic vector spaces also play a central role in a recent computational model of the mental lexicon (Baayen et al. 2019).
DSM was first applied to Indonesian morphology by Fam et al. (2017). They examined the paradigmatic relations for Indonesian derivational affixes (e.g., beli:dibeli, "to buy:to be bought"; makan:makanan, "to eat:food"), and used a vector space model to generate predictions for the meanings of unseen derived words. In the present study, we constructed a semantic vector space from a large Indonesian corpus. If PE- and PEN- words differ in meaning, they are expected to occur in systematically different contexts, and to be distributed differently in the semantic vector space.
The remainder of this paper is structured as follows. We first introduce the corpus used for this study and the databases that we derived from this corpus. In Section 3, we then describe how we constructed the semantic vector space, derived model-based similarity measures, and obtained human judgements on word similarities. We also present the analyses of the model-predicted similarity values, and a comparison of model predictions with human judgements. Finally, we discuss the results obtained and conclude the study in Section 4.
Materials
The main corpus used in this study was the Leipzig Corpora Collection (henceforth, LCC), available at http://corpora2.informatik.uni-leipzig.de/download.html. This corpus was compiled from different sources such as the web, newspapers, and Wikipedia pages dating from 2008 to 2012 (Goldhahn et al. 2012). It consists of 2,759,800 sentences, 50,794,093 word tokens, and 112,025 different word types. We obtained the morphological structure of the non-compound words using the MorphInd parser (Larasati et al. 2011) and checked the results manually against the online version of the Kamus Besar Bahasa Indonesia, a comprehensive dictionary of Indonesian (Alwi 2012). The precision of the parser was 0.98, with a recall of 0.8, in parsing all the PE- and PEN- words of the corpus. Overall, we obtained 560,633 Indonesian word types, 47,217,467 tokens, and 314,448 hapax legomena. We processed the data using the R programming language, version 3.4.3 (R Core Team 2017). The databases and the R scripts are available online at http://bit.ly/PePeNSemVector.
Indonesian lemmatized database
Using the morphological analyses provided by MorphInd, we lemmatized the LCC corpus. In a preliminary processing step preceding lemmatization, we lower-cased all words and excluded numbers, punctuation marks, and the 15 highest-frequency stop words.² During lemmatization, the bound morphemes (ku- "I", -ku "my", kau- "you", -mu "your", -nya "his/her/its"), prolexemes (e.g., non-, anti-, pra-, pasca-), particles (e.g., -lah and -pun to express emphasis, -kah to ask a question), and numeric affixes (e.g., se- "one", per- "per") were separated from their base word, as suggested by Sneddon et al. (2010). We also marked -nya, when its function is to emphasize a question word, by nya-WH (Pastika 2012). In addition, although MorphInd identifies antar as a prolexeme, we did not separate the prolexeme and the base into two tokens, as antar has a different meaning when it occurs as a simple word (e.g., antaragama "among religions" vs. antar "to pick up").
Hyphenated words were dealt with as a special case in the lemmatization process, since the hyphen can indicate various morphological word formation patterns such as full reduplication, partial reduplication, imitative reduplication, affixed reduplication, or compounding. Hyphens may also appear in proper names and when an affix is attached to a loan word (Sunendar 2016). The forms -Nya, -Ku, and -Mu (note the capital N, K, and M) were lemmatized to Tuhan "God" (e.g., kepada-Mu, kepada Tuhan "to God"). We did not parse reduplicated forms, as this word formation process is used to convey different meanings (e.g., plurality, intensification, or iteration; Chaer 2008; Dalrymple and Mofu 2012; Rafferty 2002; Sugerman 2016). Several examples illustrating the output of the lemmatization process are shown in Table 1.
"Thank you for always paying attention to me while in Korea, when I missed my mom you told me to call her, even you also invited me to meet your mother to attenuate my longing when I really miss my mother."
Modelling semantics
The distributional vector representations of the PE- and PEN- target words were extracted from the LCC corpus using word2vec (Mikolov et al. 2013) with the default parameter settings³ (see also Altszyler et al. 2017 for other methods). Cosine similarity was employed to measure the degree of semantic similarity of two lemmas. Let v and w be two n-dimensional vectors representing two lemmas. The cosine similarity of v and w is the cosine of the angle θ between v and w, and is equal to the inner product of the vectors after length normalization (see Equation (1)). Thus, the similarity judgement is based on the orientation, and not the magnitude, of the vectors.
Equation 1: Cosine similarity between two vectors:

$$\cos\theta = \frac{\mathbf{v}\cdot\mathbf{w}}{\lVert\mathbf{v}\rVert\,\lVert\mathbf{w}\rVert} = \frac{\sum_{i=1}^{n} v_i w_i}{\sqrt{\sum_{i=1}^{n} v_i^2}\,\sqrt{\sum_{i=1}^{n} w_i^2}}$$
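As an illustration, Equation (1) can be computed in a few lines of R, the language used for the analyses in this study. The function and toy vectors below are a minimal sketch; in the actual workflow, v and w would be rows of the word2vec embedding matrix (all object names here are hypothetical).

# Cosine similarity of two embedding vectors (Equation 1).
cosine_similarity <- function(v, w) {
  sum(v * w) / (sqrt(sum(v^2)) * sqrt(sum(w^2)))
}

# Toy vectors standing in for the word2vec rows of two lemmas.
v <- c(0.12, -0.53, 0.80, 0.04)
w <- c(0.10, -0.40, 0.75, 0.20)
cosine_similarity(v, w)  # depends on the orientation, not the magnitude, of the vectors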
Data sets
Using the cosine similarity, we constructed two datasets, henceforth the CosSim database and the PePeNCos database.⁴ The CosSim database contains the cosine similarity values for all possible combinations of pairs of words from the set of PE-, PEN-, BER-, and MEN- words. This database also includes the cosine values for PE-, PEN-, BER-, and MEN- words with their respective base words. For each of its 37,003,784 entries, the CosSim database provides the following information: Lemma 1; Lemma 2; the cosine similarity of Lemma 1 and Lemma 2; Prefix (the prefix which the lemma contains, either PE-, PEN-, BER-, or MEN-); Base word; the semantic role of the nominalization with PE- or PEN- (agent, instrument, causer, patient, or location); the derived-base cosine similarity, i.e., the cosine similarity of the derived word and its base word; and the word category of the base word. For agent nouns formed with PE-, we also specified whether the word refers to an athlete or a non-athlete. Example entries of this database are listed in Table 2.
The semantic roles assigned to the nominalizations with PE- and PEN- are based on manual annotation carried out by the first author, based on the words' occurrences in the corpus. For each type, at least one token was sampled from the corpus and checked against the Kamus Besar Bahasa Indonesia. Nominalizations that may express multiple semantic roles (cf. "opener" in English, pembuka in Indonesian) are linked with an "agent-instrument" semantic role. Manual inspection of all of the 579,695 PE- and PEN- word tokens in the corpus was not feasible. Thus, the manual annotation of semantic roles is necessarily incomplete.
The PePeNCos database is a subset of the CosSim database and contains 81 derived words with PE- and 910 derived words with PEN-. The database specifies the cosine similarity of the derived word and the corresponding base word, the word class of the base word, and the semantic role of the derived word. From this database, we excluded PE- and PEN- words that do not have a verbal base that co-occurs with the prefix MEN- or BER- (Dardjowidjojo 1983; Kridalaksana 2007; Nomoto 2017; Ramlan 2009; Sneddon et al. 2010). Table 3 presents some examples of entries in this database.
Semantic similarity rating
Eighty-three Indonesian native speakers were asked, by means of an online questionnaire, to rate pairs of words with respect to their similarity in meaning on a Likert scale (Likert 1932), following Miller and Charles (1991). Participants were first presented with a set of instructions that illustrated and exemplified the task. Subsequently, they were requested to judge the similarity between 48 noun base words and the corresponding derived words with PE- and PEN- on a scale from 0 (no similarity in meaning) to 4 (very similar in meaning). An "I don't know" option was provided in case participants did not recognize some low-frequency words; these responses were removed from our analyses. Participants were free to re-rate any pairs before submitting their final judgements.
Our word materials consisted of 24 PE- words and 24 PEN- words and their base words. Out of the set of 48 PE- and PEN- words, 47 have unique base words; two PEN- words share the same base word. Across prefixes, we controlled for the frequency of base and derived words, such that both prefix sets displayed a comparably wide range of cosine similarity values. The words were selected pseudorandomly, while ensuring that different base word frequencies (High and Low), different derived noun frequencies (High and Low), and different cosine values (see Figure 1) were present in the dataset. A word's frequency was classified as High or Low when it was present in the list of the top 20% or the bottom 20% most frequent words, respectively. This data set, which contains the human ratings as well as the cosine similarity values, is available in the supplementary materials.⁵ Example entries are listed in Table 4.
Analysis
In what follows, we first compare the semantic similarities within and between the sets of words with PE- and PEN- (Section 3.1). In Section 3.2, we address the semantic similarities of the base words of these prefixes. Following this, we address the different semantic roles that are realized by words with PE- and PEN-, again using the cosine similarity measure (Section 3.3). Section 3.4 investigates semantic similarity for base words and their prefixed derivatives, and Section 3.5 concludes by comparing the corpus-based semantic similarities with human ratings of semantic similarity. We complemented the LDA analysis (see Table 5) with a visualization using principal components analysis. Figure 2, left panel, shows the locations of PE- and PEN- words in the space spanned by the first two principal components. Independent-samples t-tests were conducted to compare the means of the PE- and PEN- vectors on each dimension. For the first dimension, the mean of PE- is −1.18, whereas that of PEN- is 0.11 (p < 0.0001). For the second dimension, the mean of PE- is −0.36, while that of PEN- is 0.03 (p = 0.03473). Further independent-samples t-tests on the first and second dimensions showed different means for PE-athlete (−1.9 and −1.03) and PE-non-athlete (−0.82 and −0.02; p = 0.026 for the first comparison and p = 0.001 for the second comparison).
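A minimal R sketch of this analysis, assuming vecs is the embedding matrix (one row per word) and prefix is a two-level factor ("PE", "PEN") aligned with its rows; both object names are hypothetical:

# Project the semantic vectors onto their first two principal components.
pca <- prcomp(vecs, center = TRUE)
scores <- pca$x[, 1:2]

# Visualize PE- (black) vs. PEN- (red) words in the PCA plane, as in Figure 2.
plot(scores, col = ifelse(prefix == "PE", "black", "red"),
     xlab = "PC1", ylab = "PC2")

# Independent-samples t-tests comparing the two prefixes on each dimension.
t.test(scores[, 1] ~ prefix)
t.test(scores[, 2] ~ prefix)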
Figure 3, left panel, presents boxplots summarizing the distributions of cosine similarities for three sets of word pairs: PE-/PEN- pairs (set 1), PEN-/PEN- pairs (set 2), and PE-/PE- pairs (set 3); see examples in Table 6. Although the distributions show considerable overlap, the differences in mean cosine similarity between the between-prefix pairs (PE-/PEN-) and the within-prefix pairs (either PEN-/PEN- or PE-/PE-) do reach significance.
A Kruskal-Wallis rank sum test confirmed the presence of at least one significant difference (χ²(2) = 2535.1, p < 0.0001; mean cosine similarities: 0.024 for set 1, 0.049 for set 2, and 0.07 for set 3). Post-hoc pairwise multiple comparisons using the Nemenyi test, with p-value adjustment using the Bonferroni correction, confirmed that the mean cosine similarity for the PE-/PEN- group is indeed significantly lower than that for the PEN-/PEN- and the PE-/PE- groups (p < 0.0001 for both comparisons). The between-prefix cosine similarities indicate that PE- and PEN- formations form relatively cohesive clusters within their own class in semantic space, and that these classes are not fully overlapping in semantic space. The mean cosine similarity for word pairs within the PEN- group, however, is not convincingly different from the cosine similarity of pairs within the PE- group (p = 0.049).
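A sketch of this test sequence in R, assuming pairs_df holds one row per word pair with a numeric column cos and a three-level factor set (PE/PEN, PEN/PEN, PE/PE); the data frame name and the choice of the PMCMRplus implementation of the Nemenyi test are assumptions, since the paper does not name the package it used:

# Omnibus test for a difference among the three sets of pairs.
kruskal.test(cos ~ set, data = pairs_df)

# Post-hoc all-pairs Nemenyi comparisons (one available implementation);
# the resulting p-values can additionally be Bonferroni-adjusted with p.adjust().
library(PMCMRplus)
kwAllPairsNemenyiTest(x = pairs_df$cos, g = pairs_df$set)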
Cosine similarity and paradigmatic relations
Since PE- and PEN- are paradigmatically related with the verbal prefixes BER- and MEN-, respectively, which occur in the nominalizations' base words (see Benjamin 2009), we asked whether verbs with MEN- and BER- show a similar trend as the corresponding nouns, such that within-prefix similarities (MEN-/MEN-; BER-/BER-) are greater than the between-prefix similarities (MEN-/BER-). For this comparison, we selected all verbs with MEN- and BER-, regardless of whether they correspond to PEN- and PE- or not. Table 7 shows how often the MEN-, BER-, PE-, and PEN- prefixes attach to monomorphemic base words, as well as the prevalence of verb-noun affix substitution pairs. Figure 3, right panel, presents boxplots summarizing the distributions of cosine similarities for BER-/MEN-, MEN-/MEN-, and BER-/BER- pairs. A Kruskal-Wallis rank sum test (χ²(2) = 34,699, p < 0.0001) and Bonferroni-corrected pairwise tests clarified that the mean for BER-/MEN- pairs (0.032) is significantly smaller than those for the within-prefix pairs (p < 0.0001 for both comparisons). In addition, the mean cosine similarity for word pairs within the BER- set (0.042) is significantly lower than the mean for pairs within the MEN- set (0.046; p < 0.0001). Although the differences for the base verbs are smaller than for the nominalizations, for both nouns and verbs the comparisons between prefixes yield somewhat lower mean similarities than those within prefixes. We can therefore conclude that the paradigmatic system of PE-/PEN- and BER-/MEN- shows coherence not only at the level of form, but also, to some extent, at the level of semantics.
Cosine similarity and semantic roles
We observed that within-prefix word pairs are more similar in their semantics than between-prefix pairs. Since Denistia and Baayen (2019) have shown that PE- can realize the patient semantic role, that PEN- can realize the instrument semantic role, and that both may realize the agent semantic role, the question arises whether the present semantic vectors are sufficiently sensitive to reflect these differences in the semantic roles that the different prefixes may realize. The most frequent semantic roles for each prefix, agent for PE- and agent and instrument for PEN-, were selected for further analysis. Observations of PE- realizing the patient role were too few to be included. PEN- words were further distinguished by whether they realized multiple semantic roles (both agent and instrument) depending on the context (Jalaluddin and Syah 2009). Of specific interest are five groups of word pairs: (1) PE- and PEN- words expressing agent, (2) PE- words expressing agent, (3) PEN- words expressing agent, (4) PEN- words expressing instrument, and (5) PEN- words expressing both agent and instrument. Figure 4, left panel, shows that the distribution of cosine similarities for PE-/PEN- pairs is shifted down compared to the distributions for the pairs of words with PE- and pairs of words with PEN-. A Kruskal-Wallis rank sum test (χ²(2) = 362.41, p < 0.0001) and Bonferroni-corrected pairwise tests clarified that the means for within-prefix agent pairs, PE- as agents (0.082) and PEN- as agents (0.044), are significantly higher than the mean for between-prefix agent pairs PEN-/PE- (0.033). Furthermore, the tests also clarified that agents with the less productive PE- prefix are significantly more similar to each other than those with the more productive PEN- prefix (p < 0.0001).
In our data, PEN- expresses agent, instrument, or sometimes both agent and instrument, and has a productivity index V1/N (Baayen 2009) of 0.00085 for agents, which is greater than the productivity index for instruments (0.00035) and that for the mixed cases (0.00001). Within the set of words with PEN- (see the right panel of Figure 4), we observe differences in mean cosine similarity between the mixed group and agents (lowest similarities) on the one hand, and the mixed group and instruments (highest similarities) on the other hand. The mixed group is positioned in between the two extreme groups, as expected. A Kruskal-Wallis rank sum test (χ²(2) = 6895.1, p < 0.0001) and Bonferroni-corrected pairwise tests clarified that the mean cosine similarity for PEN- words in the mixed set (0.091) was significantly different from the mean for words realizing only the agent (0.044) or only the instrument (0.161) role (p < 0.0001). Interestingly, the mean cosine similarity for PEN- agents is lower than that for PEN- instruments. In other words, the set of words with PEN- realizing instruments is internally more similar. This may be due to more consistent contextual collocations for instruments. For instance, instruments are often used with specific prepositions such as dengan "with" or with verbs such as menggunakan and memakai "to use something" in their context.
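The productivity index used above is simply the number of hapax legomena in a morphological category divided by that category's token count. A minimal sketch in R, with a hypothetical (made-up) frequency vector for one category:

# P = V1/N (Baayen 2009): hapax count over token count for a category.
productivity <- function(freqs) {
  V1 <- sum(freqs == 1)  # number of hapax legomena (types occurring once)
  N  <- sum(freqs)       # total token count of the category
  V1 / N
}

# Illustrative token frequencies for some PEN- agent nouns.
freqs <- c(penulis = 2301, pengarang = 154, penyapa = 1, penyimak = 1)
productivity(freqs)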
Returning to PE-, Chaer (2008) observed that PE- is the prefix of choice for agents that are athletes (e.g., petinju "boxer" and pecatur "chess player"). Accordingly, one might suspect that the higher cosine similarity observed for PE- as agent compared to PEN- as agent in Figure 4 is due to the specific use of PE- for athletes. In order to investigate this possibility, we split the set of PE- words expressing agents into two subsets, one subset (PE-athletes) comprising the athletes and the other (PE-non-athletes) the non-athletes.
As shown in Figure 5, cosine similarities within the PE-athletes set are quite high (mean 0.255) compared to both non-athletes realized with PE- and between-prefix comparisons with (non-athlete) nouns with PEN-. A Kruskal-Wallis rank sum test (χ²(3) = 525.99, p < 0.0001) and Bonferroni-corrected pairwise tests clarified that the mean cosine similarities of pairs within the PE-athletes set are significantly higher than those for the pairs of words in the other sets of agent nouns (p < 0.0001). When both PE-athletes and PE-non-athletes are merged into one set, the mean cosine similarity decreases to 0.049; see the left panel of Figure 4. Apparently, the high cosine similarities within the PE- agents group are due mainly to the subset of agent nouns that refer to athletes. As we can see in Figure 5, pairs of words are much less similar semantically when only one, or neither, refers to an athlete, irrespective of whether they are formed with PE- or PEN-. However, the small differences in the means of these three distributions do receive statistical support (all p < 0.0001).
Cosine similarity for base-derived pairs
As observed by Chaer (2008), PE- is used specifically to coin words for athletes; these account for 34% of the PE- types in our data. We therefore expected that base-derived word pairs with PE- have a greater mean cosine similarity compared to base-derived word pairs with PEN-.
The left panel of Figure 6 presents boxplots for the distributions of cosine similarities for word pairs consisting of a base word and the corresponding nominalization, once for PE- and once for PEN-. A Wilcoxon test (W = 44,626, p < 0.0001) clarified that the mean cosine similarity for PE-/BASE word pairs (0.315) is significantly higher than the mean cosine similarity for PEN-/BASE word pairs (0.211), as expected. Subsequent analyses that focused on the word category of the base word clarified that the overall pattern is driven entirely by pairs with nouns as base word (W = 2,488, p = 0.648 for verbs; W = 790, p = 0.1329 for adjectives; but W = 5,932, p < 0.0001 for nouns). The right panel of Figure 6 shows the distributions for base-derived pairs with noun bases. Since most formations with PE- denoting athletes have a nominal base, the larger cosine similarities for PE- are again driven primarily by this particular semantic field.
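A sketch of these comparisons in R, assuming pepen_df contains one row per base-derived pair with a numeric column cosBase, a factor Prefix (PE vs. PEN), and a factor BaseClass; all column names are hypothetical:

# Overall comparison of base-derived similarities for PE- vs. PEN-.
wilcox.test(cosBase ~ Prefix, data = pepen_df)

# The same test restricted to pairs whose base word is a noun.
wilcox.test(cosBase ~ Prefix, data = subset(pepen_df, BaseClass == "noun"))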
Modelling human judgement for base-derived pairs
To further validate the corpus-based semantic vectors and the cosine similarity measure, we carried out a rating task in which participants were requested to evaluate the semantic similarity between 48 nominal base words and their nominalizations with PE- and PEN-. Given the results reported in the previous section, we expected the ratings to be lower for the 24 pairs involving PEN- than for the 24 pairs involving PE-.
Figure 5: Boxplots for the cosine similarities for PE-, partitioned into nouns for athletes and nouns for non-athletes, and for agent nouns with PEN-.
Participants were asked to provide ratings on a five-point Likert scale (1-5) for each of the 48 derived/base pairs, and were requested to use the full scale. The set of items comprised two subsets of pairs, depending on whether the affix of the derived word is PE- or PEN- (factor Affix). We selected the items in such a way that there was no strong difference in mean cosine similarity between the PE- and PEN- groups (W = 401, p = 0.01937). For both the derived and the base word, we included their frequency of occurrence as covariates (FrequencyDerived, FrequencyBase).
Out of 83 participants, 13 never used more than three of the five options available on the rating scale (see Figure 7); these participants were removed prior to analysis. We used a GAMM (generalized additive mixed model; mgcv package, version 1.8-17; Wood 2006, 2011) to investigate statistically whether the cosine similarities and human judgements are correlated. Table 8 presents the summary of a model with a smooth for PE- and a difference smooth for PEN-. These curves are shown in the left and right panels of Figure 8. A thin plate regression spline was used to model the non-linear interaction of base frequency and derived frequency, and by-participant random intercepts were included as well. Random intercepts for item were not included because an analysis of concurvity indicated that item was too strongly confounded with the other item-bound predictors. Apparently, the way in which human ratings can be predicted from the cosine similarity is different for the two prefixes. As can be seen by comparing the left and centre panels of Figure 8, the effect of cosine similarity is limited to the first two-thirds of the range of its values; the effect levels off for the highest cosine similarity values. This indicates that a large part of the range of cosine similarities is indeed predictive of human intuitions about the semantic similarity between PE- and PEN- words and their base words. Furthermore, the upward slope of the regression curve in the predictive range of the cosine is steeper for PE- than for PEN-, suggesting a greater sensitivity of the cosine of the angle between two semantic vectors as a similarity measure for the prefix PE-. The difference curve in the right panel shows that we indeed have a significant difference: around a cosine similarity of 0, the predicted partial effect of PE- is significantly lower, and around a cosine similarity of 0.2, it is significantly higher.
Figure 7: Scatter plot matrix for ratings by cosine similarity for the 83 participants in the human similarity judgement experiment. Participants 3, 13, 14, 18, 38, 40, 43, 51, 57, 61, 64, 71, and 73 were removed from the model because of their too restricted use of the rating scale.
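A minimal mgcv sketch of a model with this structure, assuming a trial-level data frame ratings with hypothetical column names; AffixO is an ordered factor (PE before PEN), which makes mgcv fit a reference smooth for PE- plus a difference smooth for PEN-:

library(mgcv)
m <- gam(rating ~ AffixO +
           s(cos) +                           # reference smooth (PE-)
           s(cos, by = AffixO) +              # difference smooth (PEN-)
           s(logFreqBase, logFreqDerived) +   # thin plate spline for the frequency interaction
           s(Participant, bs = "re"),         # by-participant random intercepts
         data = ratings)
summary(m)
plot(m, pages = 1)  # partial effect curves as in Figure 8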
General discussion
Studies of Indonesian allomorphy have generally focused on words' internal structure. Denistia and Baayen (2019) is the first corpus-based study systematically investigating how complex words are used in written Indonesian. In the present study, we extended their investigation, using methods of distributional semantics to study whether the prefixes PE- and PEN-, which have been described as having similar form and meaning (Rajeg 2013; Sneddon et al. 2010), have their own quantitative semantic profiles; if so, this would provide further support for PE- and PEN- being separate affixes rather than allomorphs (Denistia and Baayen 2019; Nomoto 2017; Ramlan 2009; Sneddon et al. 2010). We used methods from distributional semantics to obtain semantic vectors (also known as word embeddings) for all words with PE- and PEN-, as well as for their base words and their paradigmatically related verbs with BER- and MEN-. In addition, we investigated whether the corpus-based cosine similarity measure was predictive of human similarity judgements.
There are subtle but statistically significant differences in the distributions of cosine similarities between PE- and PEN-. The finding that PE- words are less similar to PEN- words than to other PE- words, and likewise that PEN- words are less similar to PE- words than to other PEN- words, dovetails well with the hypothesis that PE- and PEN- are different prefixes, rather than allomorphs.
The semantic analyses using embeddings provide further support for paradigmatic consistency between PE-/PEN- and BER-/MEN- (Benjamin 2009; Dardjowidjojo 1983; Denistia and Baayen 2019; Ermanto 2016; Nomoto 2017; Putrayasa 2008; Ramlan 2009; Sneddon et al. 2010). Cosine similarities calculated between formations with PE- and formations with PEN- tend to be somewhat smaller than cosine similarities calculated for pairs of words with PE-, and likewise for pairs of words with PEN-. A similar pattern is found for the corresponding base words with BER- and MEN-. This difference is likely due to well-described differences in the semantic functions of these prefixes (Arka et al. 2009; Chaer 2008; Kroeger 2007; Putrayasa 2008; Sneddon et al. 2010; Sutanto 2002; Tomasowa 2007). MEN- typically renders a verb explicitly active, either transitive or intransitive, and can carry the suffixes -i and -kan. These suffixes express intensification or iteration (in addition to adding a further argument, either a beneficiary, a location, or a causer). BER-, by contrast, is described as a prefix which typically forms intransitive verbs and expresses reciprocals, reflexives, or possessives.
PE-and PEN-differ also in that nouns with PE-are more similar to their base word compared to nouns with PEN-.This finding was supported by a rating experiment, which also suggested that the semantic vectors are indeed predictive of intuitive human judgements of semantic similarity.
Finally, a closer investigation of the semantic roles realized by nominalizations with PE- and PEN- reveals that the mean cosine similarity for pairs of PE- words expressing agents is higher than the mean for pairs of PEN- words expressing agents. Furthermore, words with PEN- as instruments have a higher mean cosine similarity compared to pairs of words with PEN- that express agents.
We have seen that the semantic similarities of pairs of agents realized with PE- are slightly greater in the mean than the semantic similarities of pairs of agents realized with PEN- (see Figure 4). Furthermore, the semantic similarities of pairs of base and derived words are greater for PE- than for PEN- (Figure 6). These results are perhaps surprising, given that of the two prefixes it is PE- that is the least productive (Denistia and Baayen 2019). Typically, one would expect greater semantic transparency between base and derived word for more productive affixes.
The somewhat greater transparency of agents with PE-is likely to be due to the specific use of PE-to express athletes (e.g., petinju "boxer" and perenang "swimmer").The overall less productive prefix has found a small semantic niche in which it is strongly established.By way of comparison, irregular verbs in English, German, and Dutch have found a semantic niche comprising actions and positions involving the body (Baayen and Moscoso del Prado Martin 2005).Likewise in Dutch, the less productive suffix -te (compare -th in English) typically expresses measures (e.g., lengte, English length), whereas the more productive rival suffix -heid is also used for character traits and anaphoric reference (Baayen and Neijt 1997).
In summary, using distributional semantics as an analytical tool, we have been able to provide corpus-based evidence for subtle differences in the semantics of the Indonesian prefixes PE- and PEN-. The present results provide further support for PE- and PEN- being different prefixes, supplementing earlier studies pointing to differences in their phonological conditioning (Ramlan 2009; Sneddon et al. 2010), differences in their paradigmatic relations with the verbal prefixes of their base words (Nomoto 2017), and differences in their productivity (Denistia and Baayen 2019).
The semantic effects that we have documented in the present study are small. This is likely due not only to the enormous diversity in words' meanings, but also to the small size of the corpus from which we derived our embeddings. Whereas in natural language processing applications corpora of several billions of words are favoured, our corpus comprises only 47 million words. As a consequence, our vectors are noisy, especially for lower-frequency words. Further replication studies based on larger corpora will be essential for consolidating the present exploratory results. At the same time, our embeddings have turned out to be surprisingly useful. Several of our observations were anticipated in the qualitative literature, but it is difficult to evaluate the importance of such observations for the language system. Embeddings have allowed us to provide quantitative corpus-based support for several aspects of the semantics of Indonesian prefixal morphology, and thus provide novel external support and enhanced predictive precision for previous qualitative research.
Figure 1: Rank distribution of cosine similarities of words with PE- (left panel) and words with PEN- (right panel) with their respective base words, as used in the semantic similarity judgement task.
Figure 3: Boxplots for the distributions of cosine similarities. Left panel: cosine similarities between PE- and PEN-, within PEN-, and within PE- words; group means are significantly different only for the between-prefix comparison. Right panel: cosine similarities between MEN- and BER-, within MEN-, and within BER-; for these base words, all pairs of group means are significantly different.
Figure 2: PEN- words (red) and PE- words (black) in the plane spanned by the first two principal components of a PCA of the semantic vectors of these words. Left panel: PE- and PEN-. PE- is clustered more in the central-to-left part, whereas PEN- is more in the central-to-right part. Right panel: PE- (broken down by athlete and non-athlete) and PEN-. PE-athlete and PE-non-athlete are reasonably well separated.
Figure 4: Boxplots for the distributions of cosine similarities for cross-prefix pairs of words with PE- and PEN- expressing agents, as well as for within-prefix pairs expressing agents (left panel). The right panel compares the distributions of cosine similarities for words with PEN-, comparing pairs of words that can realize both agent and instrument with those realizing either agent or instrument. All pairs of group means are significantly different in both panels.
Figure 6: Boxplots for the distributions of cosine similarities for word pairs consisting of the base and the derived word (left panel) and the noun base and the derived word (right panel). Mean cosine similarity is higher for PE- than for PEN- in both comparisons.
Figure 8: Partial effects for cosine similarity as a predictor of human ratings for PE- (left panel) and PEN- (middle panel). Right panel: the difference curve which, when added to the curve for PEN-, yields the curve for PE-.
Table 1: Examples of the lemmatization.
Table 3: Examples of entries in the PePeNCos database.
Table 2: Examples of entries in the CosSim database.
Table 4: Examples of entries of the database with human similarity ratings. Part: participant.

Cosine similarity of PE- and PEN-
We made use of linear discriminant analysis (LDA) to clarify whether the PE- and PEN- words are separable in semantic space. The LDA reached 95% classification accuracy for 81 PE- (27 athlete, 54 non-athlete) and 910 PEN- words (all of which have a minimal token frequency of 5). As shown in Table 5 (left), the model assigned nearly half of the PE- words correctly. A second LDA was given the task of discriminating between PEN-, PE-athletes, and PE-non-athletes. Interestingly, as shown in the right subtable of Table 5, the nine PEN- words that were misclassified as PE- were assigned to the PE-non-athlete group. PEN- is never confused with PE-athlete. The athlete subset is clearly less confusable with PEN- than the non-athlete subset.
Table 5: Confusion tables for the model predictions between PE- and PEN- (left) using linear discriminant analysis, and for the predictions when PE- is split into athlete and non-athlete (right). Columns: observed; rows: predicted.
Table 6: Examples of entries for each prefix and semantic role set. BCL1: word class of the base of Lemma 1; BCL2: word class of the base of Lemma 2.
Table 7: Counts of tokens and types for MEN-, BER-, PEN-, and PE-. The noun-verb correspondence is calculated based on how often the same base word occurs with the prefixes of interest.
Table 8: GAMM fitted to the ratings elicited for pairs of PE- and PEN- nominalizations and their base words. | 8,180.2 | 2021-04-09T00:00:00.000 | [
"Linguistics"
] |
Multi-omics analysis of LAMB3 as a potential immunological and prognostic biomarker in pan-cancer
Laminin Subunit Beta 3 (LAMB3) encodes one of the three subunits of laminin. It plays an important role in cell proliferation, adhesion, and migration by regulating various target genes and signaling pathways. However, the role of LAMB3 in human pan-cancer immunology and prognosis is still poorly understood. The TCGA, GTEx, CCLE, and HPA databases were utilized for the analysis of LAMB3 mRNA and protein expression. The expression of LAMB3 in various immune and molecular subtypes of human cancer was examined using the TISIDB database. The prognostic significance of LAMB3 in various cancers and clinical subtypes was investigated using Kaplan-Meier and Cox regression analyses. The relationships between LAMB3 expression and immune cell infiltration, immune checkpoints, tumor mutational burden, microsatellite instability, and DNA methylation were examined using the TCGA database. Four lung cancer cell lines and clinical samples from eight lung cancer cases were collected to confirm LAMB3 mRNA expression in lung cancer. LAMB3 mRNA is differentially expressed in 17 cancers and has good diagnostic and prognostic value in 22 cancers. Cox regression and nomogram analyses show that LAMB3 is an independent risk factor in 15 cancers. LAMB3 is implicated in a variety of tumorigenesis- and immune-related signaling pathways, according to GO, KEGG, and GSEA results. LAMB3 expression was associated with TMB in 33 cancer types and MSI in 32 cancer types, while in lung cancer LAMB3 expression was strongly associated with immune cell infiltration and negatively correlated with all seven methylated CpG islands. Cellular experiments demonstrated that LAMB3 promotes the malignant behavior of tumor cells. Preliminary mechanistic exploration revealed its close association with PD-L1, CTLA4, the cell stemness gene CD133, and β-catenin-related signaling pathways. Based on these findings, LAMB3 could be a potential therapeutic target for immunotherapy and a marker of tumor prognosis. Our findings reveal an important role for LAMB3 in different cancer types.
Introduction
The global incidence and mortality of cancer are increasing year by year, seriously affecting public health and the quality of human life (Chen et al., 2021). Despite the efforts of scientists, there is still no absolute cure for cancer (Ribas and Wolchok, 2018). With the development and refinement of public cancer databases such as The Cancer Genome Atlas (TCGA) and the Cancer Cell Line Encyclopedia (CCLE), identifying diagnostic biomarkers and therapeutic approaches through large-scale integrative data analysis has become a new research strategy (Hu et al., 2021). This approach has made important contributions to the precise diagnosis and treatment of cancer.
Laminin plays an important role in regulating cell migration and signal transduction and is necessary for the formation and function of the basement membrane (Miner and Yurchenco, 2004). Tumor invasion and metastasis are significantly associated with the breakdown of the extracellular matrix and basement membrane by the tumor (Zhu et al., 2020). Laminin Subunit Beta 3 (LAMB3) encodes one of laminin's three subunits (Huang et al., 2019). Studies have shown that LAMB3 is associated with the prognosis of multiple cancers. Zhou found that LAMB3 single nucleotide polymorphisms (SNPs) could lead to cervical cancer (Zhou et al., 2010). Kwon found that LAMB3 may induce the development of gastric cancer through promoter demethylation (Kwon et al., 2011). Kinoshita found that high LAMB3 expression had a positive effect on the development of head and neck squamous cell carcinoma (HNSC) (Kinoshita et al., 2012). In addition, LAMB3 is involved in the invasion and metastasis of certain types of cancer (Sundqvist et al., 2020; Sui et al., 2022).
However, most research on the role of LAMB3 in tumors has been restricted to specific cancer types, and the role of LAMB3 in human pan-cancer prognosis and immunology has rarely been systematically analyzed. We used multiple databases, including TCGA, CCLE, Genotype-Tissue Expression (GTEx), the Human Protein Atlas (HPA), and cBioPortal, to analyze LAMB3 expression levels and prognosis in pan-cancer. We also examined the potential correlation between LAMB3 expression and immune checkpoint (ICP) genes, immune cell infiltration levels, microsatellite instability (MSI), and tumor mutational burden (TMB) in 33 cancers. In addition, to investigate the biological function of LAMB3 in tumors, we carried out an enrichment analysis of LAMB3 co-expressed genes. Finally, we explored differences in LAMB3 expression and prognosis in lung cancer to further verify the pan-cancer results. Our results show that LAMB3 is a prognostic marker and a risk factor in multiple cancers. LAMB3 plays an important role in tumor immunity by affecting tumor-infiltrating immune cells, TMB, and MSI. This study provides new insights into the role of LAMB3 in anti-cancer immunotherapy.
2 Methods and materials
2.1 Expression of the LAMB3 gene in pan-cancer
RNA sequencing data and clinical follow-up information for 33 tumor types and normal tissues were downloaded from the TCGA database (https://portal.gdc.cancer.gov/) and the GTEx database (https://gtexportal.org/home/). Data on tumor cell lines were downloaded from the CCLE database (https://portals.broadinstitute.org/ccle/). The BioGPS database (http://biogps.org) was used to analyze the expression of LAMB3 in different cancers and paired normal cell lines. R software v3.6.3 was used for statistical analysis, and the ggplot2 package [version 3.3.3] for visualization. The Mann-Whitney U test was used to compare the two groups, and p < 0.05 was considered statistically significant.
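A minimal sketch of this comparison for a single cancer type in R, assuming expr_df holds one row per sample with a numeric column LAMB3 (e.g., log2-transformed expression) and a factor group ("Tumor"/"Normal"); the object and column names are hypothetical:

library(ggplot2)

# Mann-Whitney U test (unpaired two-group comparison).
wilcox.test(LAMB3 ~ group, data = expr_df)

# ggplot2 boxplot of tumor vs. normal expression.
ggplot(expr_df, aes(x = group, y = LAMB3, fill = group)) +
  geom_boxplot() +
  labs(x = NULL, y = "LAMB3 expression (log2)")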
2.2 Expression of LAMB3 protein in pan-cancer
To assess differences in LAMB3 expression at the protein level, we downloaded immunohistochemical images of LAMB3 protein expression in normal tissues and their corresponding tumor tissues from the HPA database (http://www.proteinatlas.org/) for the five cancers with the highest protein expression and differential gene expression: THCA, LUAD/LUSC, HNSC, SKCM, and STAD.
2.3 Molecular and immune subtypes of LAMB3 in pan-cancer
The correlation between LAMB3 expression and the molecular or immune subtypes of pan-cancer was explored using the TISIDB database (http://cis.hku.hk/TISIDB/), which integrates multiple data types to assess interactions between tumors and the immune system.
2.4 Prognostic and diagnostic value of LAMB3 in pan-cancer
The receiver operating characteristic (ROC) curve was used to assess the diagnostic value of LAMB3 in pan-cancer; we only retained diseases with an AUC > 0.7. The relationship between LAMB3 expression and cancer prognosis was evaluated using Kaplan-Meier analysis of overall survival (OS), disease-specific survival (DSS), and progression-free interval (PFI). In addition, we further studied the relationship between LAMB3 expression and prognosis (OS, DSS, and PFI) in different clinical subgroups of lung cancer. Forest plots show the p-value, hazard ratio (HR), and 95% confidence interval. Statistical analysis and visualization were performed with R software v3.6.3: the survminer package [version 0.4.9] was used for visualization and the survival package [version 3.2-10] for the statistical analysis of survival data. Statistical testing used Cox regression; p < 0.05 was considered statistically significant.
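A sketch of this survival workflow in R, assuming a clinical data frame clin with columns OS.time, OS (0/1 event indicator), and LAMB3 expression, split into high/low groups at the median; the column names and the median cut-off are assumptions:

library(survival)
library(survminer)

clin$group <- ifelse(clin$LAMB3 > median(clin$LAMB3), "High", "Low")

# Kaplan-Meier curves with log-rank p-value.
fit <- survfit(Surv(OS.time, OS) ~ group, data = clin)
ggsurvplot(fit, data = clin, pval = TRUE, risk.table = TRUE)

# Cox regression for the HR and 95% CI shown in the forest plots.
summary(coxph(Surv(OS.time, OS) ~ group, data = clin))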
2.5 Univariate/multivariate analysis and nomogram construction
Cox regression analysis (univariate and multivariate) was used to examine the prognostic value of LAMB3 in pan-cancer and its clinical subtypes. Nomograms were constructed using the R package "rms".
2.6 Analysis of the correlation of LAMB3 expression with immune cells, ICP genes, TMB, and MSI
RNA-seq data for each cancer (level 3) and the corresponding clinical information were obtained from the TCGA database. For a reliable immune correlation assessment, we used immunedeconv, an R package that integrates six state-of-the-art algorithms: TIMER, xCell, MCP-counter, CIBERSORT, EPIC, and quanTIseq. The expression values of SIGLEC15, IDO1, CD274, HAVCR2, PDCD1, CTLA4, LAG3, and PDCD1LG2 were extracted to observe the expression of immune checkpoint-associated genes; these eight genes are the transcripts associated with immune checkpoints. TMB values are derived from "The Immune Landscape of Cancer" published by Thorsson et al. (2018); MSI values are derived from "Landscape of Microsatellite Instability Across 39 Cancer Types" published by Bonneville et al. (2017). R package: GSVA [version 1.34.0] (Hänzelmann et al., 2013). Immune infiltration algorithm: ssGSEA (built into the GSVA package). Correlation algorithms: MCP-counter and TIMER. p < 0.05 was considered statistically significant.
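A sketch of the ssGSEA scoring and correlation step in R, assuming expr is a genes x samples expression matrix, immune_sets a list of immune-cell marker gene sets, and tmb a per-sample TMB vector; all three object names are hypothetical:

library(GSVA)

# ssGSEA enrichment score for each immune cell type in each sample.
scores <- gsva(expr, immune_sets, method = "ssgsea")

# Spearman correlation of LAMB3 expression with, e.g., TMB across samples.
cor.test(expr["LAMB3", ], tmb, method = "spearman")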
2.7 Protein-protein interaction (PPI) network construction
The top 20 proteins predicted to interact with LAMB3 were retrieved from the STRING website. Cytoscape (version 3.9.0) was used to visualize the PPI network.
2.8 Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), and gene set enrichment analysis (GSEA)
GO, KEGG, and GSEA analyses were performed on the proteins or differential genes interacting with LAMB3 to characterize the biological and molecular functions of LAMB3 in different cancer types. All of the above analyses were performed and visualized using R (version 3.6.3), with the ggplot2 package for visualization and the clusterProfiler package [version 3.14.3] for enrichment analysis.
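A minimal clusterProfiler sketch of the enrichment step, assuming partners is a character vector of the 20 STRING interactor gene symbols (a hypothetical name):

library(clusterProfiler)
library(org.Hs.eg.db)

# GO enrichment over BP, CC, and MF for the interactor set.
ego <- enrichGO(gene = partners, OrgDb = org.Hs.eg.db,
                keyType = "SYMBOL", ont = "ALL", pvalueCutoff = 0.05)

# KEGG enrichment requires Entrez IDs.
entrez <- bitr(partners, fromType = "SYMBOL", toType = "ENTREZID",
               OrgDb = org.Hs.eg.db)
ekegg <- enrichKEGG(gene = entrez$ENTREZID, organism = "hsa")

dotplot(ego)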
2.9 Lung cancer cell lines and normal bronchial epithelial cells in culture
Four lung cancer cell lines (two lung squamous carcinoma and two lung adenocarcinoma: A549, H226, H1299, and SK-MES-1) and the normal bronchial epithelial cell line BEAS-2B were used. The cells were routinely cultured in DMEM high-glucose medium containing 10% FBS at 37°C and 5% CO2.
2.10 Clinical sample collection from patients with lung cancer
Lung cancer tissues and their paracancerous tissues from eight patients were collected at the First Affiliated Hospital of Kunming Medical University. All patients signed informed consent and this study was approved by the Ethics Committee of the First Affiliated Hospital of Kunming Medical University.
2.11 Genetic alterations of LAMB3 in lung cancer
Genetic alterations of LAMB3 in lung cancer were obtained using the cBioPortal database (https://www.cbioportal.org/).
2.12 Analysis of the DNA methylation status of the LAMB3 CpG islands
The DNA methylation status of LAMB3 CpG sites in the TCGA lung cancer dataset was analyzed using the DiseaseMeth database (http://bio-bigdata.hrbmu.edu.cn/diseasemeth/index.html). RNA-seq data in level 3 HTSeq-FPKM format and Illumina Human Methylation 450 data were downloaded from the lung cancer project of the TCGA database to visualize the relationship of LAMB3 with multiple CpG loci using ggplot2 [version 3.3.3].
2.13 Analysis of co-expressed genes of LAMB3 in lung cancer
We screened the top 50 co-expressed genes that were positively and negatively associated with LAMB3 expression in lung cancer. The thresholds were |log2 fold-change (FC)| > 2.0 and adjusted p < 0.05. The stat package was used for visual analysis.
2.14 Exploration of LAMB3 differentially expressed genes (DEGs) in lung cancer
We identified DEGs between the LAMB3 low-expression group (0%-50%) and high-expression group (50%-100%) in lung cancer using the DESeq2 package. The ggplot2 package was used for visualization, with thresholds of |log2 fold-change (FC)| > 1.0 and adjusted p < 0.05. We then performed GO and KEGG enrichment analyses of the DEGs with the clusterProfiler package. In addition, a PPI network of DEGs with log2 FC > 1.5 or log2 FC < −2 as the threshold was built using the STRING network, and hub genes were identified with CytoHubba's MCC algorithm in Cytoscape (version 3.7.2).
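A sketch of the DEG step with DESeq2, assuming counts is a raw count matrix (genes x samples) and coldata a sample table with a factor lamb3_group ("high"/"low") from the median split; the object names are hypothetical:

library(DESeq2)

dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData = coldata,
                              design = ~ lamb3_group)
dds <- DESeq(dds)
res <- results(dds, contrast = c("lamb3_group", "high", "low"))

# Thresholds used in the text: |log2 FC| > 1 and adjusted p < 0.05.
degs <- subset(as.data.frame(res), abs(log2FoldChange) > 1 & padj < 0.05)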
2.15 Risk factor models to predict survival
To comprehensively assess patient survival, we constructed a nomogram integrating distinct clinicopathological variables using the "rms" package, and ROC curves were used to assess the accuracy of the signature's predictions. Thirty-two hub genes interacted with LAMB3 in lung cancer; these 32 genes were subjected to univariate Cox regression analysis, yielding seven genes associated with prognosis in lung cancer. Least absolute shrinkage and selection operator (LASSO) regression analysis was then performed; LASSO regression improves the accuracy and interpretability of the model, and was used to build a risk factor model to predict survival.
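A sketch of the LASSO Cox step with glmnet, assuming x is a samples x genes matrix restricted to the prognosis-associated hub genes and clin carries the survival columns; the names are hypothetical, and recent glmnet versions accept a Surv object directly for family = "cox":

library(glmnet)
library(survival)

y <- Surv(clin$OS.time, clin$OS)
cvfit <- cv.glmnet(x, y, family = "cox", alpha = 1)  # alpha = 1: LASSO penalty

# Genes retained (non-zero coefficients) at the cross-validated lambda.
coef(cvfit, s = "lambda.min")

# Per-sample risk score for the survival model.
risk <- predict(cvfit, newx = x, s = "lambda.min", type = "link")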
2.16 Western blot (WB)
Cells or exosomes were lysed in RIPA buffer containing 1% PMSF, and the protein concentration was determined using the BCA method. Proteins were separated by 10% SDS-PAGE, transferred to PVDF membranes, and incubated with primary antibodies and HRP-coupled secondary antibodies. The membranes were developed with ECL luminescent solution, and grey-scale values were analyzed using ImageJ software.
2.17 Cellular in vitro experiments
A549 and H1299 cells were routinely cultured in DMEM high-glucose medium containing 10% FBS at 37°C and 5% CO2. Cells at 40%-60% confluence were transfected with LAMB3 siRNA for 48 h, and transfection efficiency was assessed using fluorescence microscopy and WB. The cells (A549, A549-siLAMB3, H1299, H1299-siLAMB3) were incubated in 96-well plates for 0 h, 24 h, 48 h, and 72 h, and viability was measured with the Cell Counting Kit-8 (CCK8) assay. For the wound-healing assay, cells (A549, A549-siLAMB3, H1299, H1299-siLAMB3) were seeded into Culture-Inserts, and scratches up to 500 μm wide were made. The scratches were observed and recorded under the microscope after culturing for 0, 6, 12, and 24 h, and the scratched area was quantified using ImageJ software. For the invasion assay, matrix gel was diluted 1:8, and 100 μL of the diluted gel was added to the upper chamber of a Transwell plate and incubated for 30 min at 37°C. A 200 μL cell suspension (5 × 10^4 cells) was added to the upper chamber, and 500 μL of complete medium containing 20% FBS was added to the lower chamber. The plates were incubated for 48 h. Cells remaining on the surface of the matrix gel were gently wiped off, and the Transwell membrane was fixed with 4% paraformaldehyde and stained with 0.1% crystal violet. Six random fields were counted under the light microscope.
3 Results
3.1 Differential expression of LAMB3 between tumor and normal tissue samples
Combined analysis of the GTEx and TCGA databases showed that LAMB3 mRNA levels were significantly higher in most cancers compared to their normal tissues in unpaired samples, including CESC, CHOL, ESCA, HNSC, LIHC, LUAD, LUSC, STAD, THCA, and UCEC. In contrast, LAMB3 mRNA expression was lower in BRCA, GBM, KICH, KIRC, KIRP, PCPG, and PRAD than in the corresponding normal tissues (Figure 1A). The same trends were observed in the paired samples (Figure 1B). We further investigated the expression of LAMB3 in different cancer cell lines and normal tissues using the BioGPS database, and found that LAMB3 is highly expressed in only a small number of normal cell strains, but in almost all cancer cell lines (Supplementary Table S1). The LAMB3 expression levels of the top 10 normal tissues and cancer cell lines are shown in Figure 1C.
In addition, we analyzed the IHC results from the HPA database and compared them with the gene expression data described above to assess LAMB3 expression at the protein level. In most cancers, LAMB3 protein expression showed an upward trend relative to normal tissues (Figures 1D,E). We screened the top 5 cancers (THCA, LUAD/LUSC, HNSC, SKCM, STAD) in pan-cancer with the same protein expression trend as the gene expression (Figures 1F-J). These results suggest that changes in LAMB3 expression in cancer tissues may be involved in cancer development.
3.2 Correlation between LAMB3 and molecular and immune subtypes of pan-cancer
The correlation between LAMB3 differential expression and pan-cancer molecular subtypes was explored in the TISIDB database. LAMB3 was found to be differentially expressed across molecular subtypes in 10 cancers: STAD, LUSC, LIHC, LGG, KIRP, HNSC, ESCA, BRCA, UCEC, and PCPG (Figures 2A-J). Molecular subtypes vary from cancer to cancer. In STAD, LAMB3 has the highest expression in the CIN subtype (Figure 2A). In LUSC, LAMB3 is most highly expressed in the basal subtype (Figure 2B). In PCPG, however, LAMB3 is most abundant in the cortical admixture molecular subtype (Figure 2J).
3.3 Diagnostic value of LAMB3 in pan-cancer
The ROC curve was used to evaluate the diagnostic value of LAMB3 in pan-cancer. The results showed that LAMB3 was extremely accurate (AUC > 0.9) in predicting 11 cancer types, including LAML (AUC values are given in Supplementary Figure S2). This indicates that LAMB3 is a potential predictive marker in pan-cancer.
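A minimal sketch of this ROC evaluation in R with the pROC package, assuming labels codes tumor (1) vs. normal (0) and lamb3_expr holds the matching expression values; both names are hypothetical:

library(pROC)

roc_obj <- roc(response = labels, predictor = lamb3_expr)
auc(roc_obj)   # only cancer types with AUC > 0.7 were retained
plot(roc_obj)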
3.5 Correlation of LAMB3 expression levels with immune cell infiltration, ICP genes, TMB, and MSI in pan-cancer
An emerging method of treating tumors is immunotherapy, and ICP genes have been shown to have a significant impact on immunotherapy and immune cell infiltration. We first investigated the relationship of LAMB3 expression with ICP genes in human cancers and the potential of LAMB3 in immunotherapy (Figure 6A). In TGCT, SKCM, PAAD, MESO, LUSC, and HNSC, LAMB3 expression was highly positively correlated with the eight ICP gene transcripts. Immunotherapy targeting ICP genes may therefore have a favorable therapeutic effect when LAMB3 expression is high. LAMB3 expression was negatively correlated with ICP genes in UVM, UCS, THCA, STAD, SARC, PRAD, PCPG, OV, LUAD, KIRP, KICH, GBM, ESCA, DLBC, CESC, BRCA, BLCA, and ACC, suggesting that in these cancers high LAMB3 expression may indicate poor outcomes from immunotherapeutics targeting ICP genes.
We next investigated the relationship between LAMB3 expression levels and TMB and MSI, which are both intrinsically linked to the sensitivity of immune checkpoint inhibitors. MSI showed a significant positive correlation with LAMB3 expression in several cancer types (Figure 6C).
Following that, we investigated the connection between the infiltration levels of 26 immune-related cell types and LAMB3 expression levels. Immune cell infiltration levels were significantly correlated with LAMB3 expression in most types of cancer (Figure 6D). LAMB3 expression was strongly positively correlated with the degree of immune cell infiltration in CESC, CHOL, COAD, DLBC, ESCA, HNSC, LGG, LIHC, LUAD, LUSC, PAAD, SKCM, STAD, TGCT, and UCEC, and highly negatively correlated in GBM, KIRP, KIRC, THCA, UVM, and PCPG. In other tumors, both positive and negative correlations with the level of immune cell infiltration were observed. As a result, we hypothesize that LAMB3 could be a novel immunotherapy target or a pan-cancer biomarker that could predict the response to immunotherapy.
PPI network, GO, and KEGG enrichment analysis of LAMB3-interacting proteins
Twenty proteins that interact with LAMB3 were identified using the STRING database (Figure 7A) and subjected to GO enrichment analysis (Figure 7B). The primary biological process (BP) terms comprised extracellular matrix organization, extracellular structure organization, and cell-substrate adhesion. The cellular component (CC) terms were mainly enriched in the basement membrane, laminin complex, and collagen-containing extracellular matrix. The molecular function (MF) terms mainly involved extracellular matrix structural constituent, integrin binding, and cell adhesion molecule binding (Figure 7C). KEGG pathway enrichment was mostly linked to ECM-receptor interaction, small cell lung cancer, focal adhesion, human papillomavirus infection, the PI3K-Akt signaling pathway, toxoplasmosis, amoebiasis, arrhythmogenic right ventricular cardiomyopathy, and hypertrophic cardiomyopathy (Figure 7D) (Supplementary Table S2).
In conclusion, our pan-cancer analysis revealed that LAMB3 is an independent risk factor for multiple cancers and is closely linked to their development and prognosis (Figure 7E) (Supplementary Table S3). In particular, LAMB3 expression levels showed a strong correlation with all aspects of lung cancer, which we therefore investigated further in this section. LAMB3 gene expression levels were higher in 8 lung cancer cases (Figure 7F) and in 4 lung cancer cell lines (lung squamous carcinoma/lung adenocarcinoma) (Figure 7G) than in paracarcinous tissue or normal lung epithelial cells, matching the trends predicted in pan-cancer. The associations between LAMB3 expression and clinicopathological characteristics (Supplementary Table S4) were summarised in the form of a forest diagram (Figure 8D). Patients with high LAMB3 expression had a worse prognosis when stratified by pathologic stage (Figure 8E), T stage (Figure 8F), gender (Figure 8G), age (Figure 8H), and number of pack-years smoked (Figure 8I). In addition, we developed a nomogram model with LAMB3 expression level, age, gender, T stage, and pathologic stage as parameters (Figure 8J); these factors were identified as highly significant prognostic predictors by multivariate Cox regression analysis. The nomogram showed clear clinical value in predicting the probability of survival at 1, 3, and 5 years in patients with lung cancer. We further used ROC curves to assess the accuracy of LAMB3 in predicting OS in lung cancer patients; as shown in Figure 8K, the AUC values at 1, 3, and 5 years were 0.576, 0.588, and 0.530, respectively, indicating only modest predictive accuracy for LAMB3 alone. We then analyzed the co-expressed genes of LAMB3 in lung cancer. Univariate Cox regression analysis yielded eight genes significantly associated with prognosis (Supplementary Table S6), and LASSO regression analysis indicated that these genes can be considered risk factors affecting the prognosis of lung cancer patients (Figures 8L-N).
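The univariate Cox screening step can be sketched as below, assuming the lifelines package and a hypothetical survival table; the gene names, survival times, and events are simulated, and the LASSO stage is omitted.

```python
# Minimal sketch: univariate Cox screening of candidate co-expressed genes.
# `df` is a simulated stand-in for an expression + overall-survival table.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
genes = ["LAMB3", "GENE_A", "GENE_B"]          # placeholder gene names
df = pd.DataFrame({g: rng.normal(size=200) for g in genes})
df["time"] = rng.exponential(36.0, 200)        # months, simulated
df["event"] = rng.integers(0, 2, 200)          # 1 = death observed

significant = []
for g in genes:
    cph = CoxPHFitter()
    cph.fit(df[[g, "time", "event"]], duration_col="time", event_col="event")
    if cph.summary.loc[g, "p"] < 0.05:
        significant.append(g)
print("Prognosis-associated genes:", significant)
```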
Correlation of LAMB3 expression levels with immune cell infiltration in lung cancer
We investigated the connection between the infiltration levels of 26 immune-related cell types and LAMB3 expression in lung cancer. LAMB3 gene expression was positively correlated with natural killer (NK) cells, NK CD56dim cells, neutrophils, Tcm, Th2 cells, Tgd, Treg, DC, and Th1 cells, and negatively correlated with aDC, B cells, CD8 T cells, cytotoxic cells, eosinophils, iDC, macrophages, mast cells, NK CD56bright cells, pDC, T cells, T helper cells, Tem, TFH, and Th17 cells (Figure 9A). We then examined the association between LAMB3 expression and T cells, B cells, macrophages, and NK cells. In group comparisons, the high-LAMB3-expression group had lower numbers of infiltrating T cells, B cells, and macrophages and a higher number of infiltrating NK cells (Figure 9B). The Spearman method was used to investigate the relationship between immune cell infiltration and LAMB3 expression: by the EPIC method, T cells, B cells, macrophages, and NK cells all correlated negatively with LAMB3 (Figure 9C). Using the ssGSEA immune infiltration algorithm, we likewise found that LAMB3 was negatively correlated with T cells, B cells, and macrophages, whereas NK cells showed a positive correlation with LAMB3, in line with the previous immune cell infiltration findings (Figure 9D).
The methylation status of the LAMB3 gene is closely associated with lung cancer
The DiseaseMeth tool was used to examine the relationship between DNA methylation levels and CpG islands in the LAMB3 gene in lung cancer patients.
Analysis of LAMB3 co-expression gene and functional enrichment in lung cancer
In a heat map, we explored the top 50 co-expressed genes that were positively or negatively associated with LAMB3 expression in lung cancer (Figures 11A, B). Following the established thresholds, 955 DEGs were collected, comprising 873 downregulated and 82 upregulated genes (Supplementary Table S5). We searched for hub genes by constructing a PPI network of the upregulated and downregulated genes (Figure 11C). GO and KEGG enrichment analysis of the DEGs indicated that the BP terms of the LAMB3 co-expressed differential genes primarily include formation of quadruple SL/U4/U5/U6 snRNP and mRNA trans-splicing, via spliceosome; the CC terms mainly involve the spliceosomal snRNP complex and the small nuclear ribonucleoprotein complex; and the MF terms primarily involve hormone activity and receptor-ligand activity. KEGG pathway enrichment was mainly associated with spliceosome, salivary secretion, and RNA transport (Figure 11D).
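The DEG-filtering step can be sketched as below; the cutoffs (|log2FC| > 1, adjusted p < 0.05) and the table are illustrative assumptions, not the manuscript's exact thresholds.

```python
# Minimal sketch: filtering DEGs from a differential-expression table.
# The table and thresholds are illustrative placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
deg_table = pd.DataFrame({
    "gene": [f"g{i}" for i in range(5000)],
    "log2FC": rng.normal(0, 1.2, 5000),
    "padj": rng.uniform(0, 1, 5000),
})

degs = deg_table[(deg_table["log2FC"].abs() > 1) & (deg_table["padj"] < 0.05)]
up, down = degs[degs["log2FC"] > 0], degs[degs["log2FC"] < 0]
print(f"{len(degs)} DEGs: {len(up)} up, {len(down)} down")
```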
Finally, we used GSEA enrichment analysis to investigate the significance of LAMB3-related DEGs in terms of gene ontology, oncogenic signatures, and immunologic signatures; the top ten entries of each are shown in the figures (Supplementary Tables S6-S8). Gene ontology was mainly related to GO_RNA_PROCESSING, GO_NUCLEOLUS, GO_RIBONUCLEOPROTEIN_COMPLEX, GO_RIBONUCLEOPROTEIN_COMPLEX_BIOGENESIS, and GO_RIBONUCLEOPROTEIN_COMPLEX_SUBUNIT_ORGANIZATION (Figure 11E). Oncogenic signatures were mainly related to ATF2_UP.V1_UP, BRCA1_DN.V1_UP, SNF5_DN.V1_DN, WNT_UP.V1_DN, and KRAS.LUNG.BREAST_UP.V1_UP (Figure 11F). Immunologic signatures were mainly related to CTRL_VS_POLYIC_0.5H_BMDC_DN, CTRL_VS_ANTI_IGM_STIM_BCELL_8H_DN, PERIMEDULLARY_CORTICAL_REGION_VS_WHOLE_MEDULLA_THYMUS_UP, and HCMV_INFL_VS_HCMV_INF_… (Figure 11G). These results show that the gene is closely related to gene ontology, oncogenic, and immunologic signatures, further indicating that LAMB3 can serve as a potential tumor biomarker and therapeutic target.
LAMB3 affects the biological function of lung cancer cells
Experiments showed that the protein expression level of LAMB3 in the lung cancer cell lines A549 and H1299 was significantly higher than in the normal lung epithelial cell line BEAS-2B (Figures 12A, C). Western blotting then confirmed that we successfully knocked down LAMB3 expression in both cell lines using siRNA (Figures 12B, D).
The effect of LAMB3 on the migration ability of the lung cancer cell lines A549, A549-siLAMB3, H1299, and H1299-siLAMB3 was analyzed in wound healing assays. Inhibition of LAMB3 expression significantly reduced the migratory ability of lung cancer cells (Figures 12E, H). We also examined the effect of LAMB3 knockdown on invasion: the invasive ability of both cell lines was similarly inhibited (Figures 12F, I). Finally, the proliferation ability of each cell line was significantly diminished after knockdown of LAMB3 (Figure 12G).
Finally, we performed a preliminary exploration of the mechanism based on the preceding bioinformatics results. Inhibition of LAMB3 expression significantly suppressed the expression of PD-L1 and CTLA4, whereas the change in PD-L2 expression was not statistically significant. Notably, both PD-L1 and CTLA4 are closely associated with T cells and tumor immunity. Meanwhile, inhibition of LAMB3 expression decreased the expression of the stemness-related gene CD133 and attenuated β-catenin-related signaling (Figures 12B, D).
Discussion
Cancer is a major threat to human health worldwide. Early detection and destruction of tumor cells are important conditions for improving the prognosis of cancer patients (Li et al., 2022). However, the most common cancer treatments, such as radiotherapy, chemotherapy, and surgical resection, have limited therapeutic effects, and the aggressiveness, metastasis, and drug resistance of tumors pose a similar challenge to cancer treatment (Li et al., 2017; Bahar et al., 2020). It has been demonstrated that LAMB3 plays a significant role in the progression of some types of cancer, and its function is closely associated with attachment, migration, and interaction with other components of the extracellular matrix (Huang et al., 2019). Abnormal promoter methylation and silencing of LAMB3 are strongly correlated with the stage, size, and aggressiveness of breast cancer tumors (Sathyanarayana et al., 2003; Sundqvist et al., 2020). LAMB3 has also been associated with diagnosis, prognosis, and the immune microenvironment in pancreatic cancer (Yang et al., 2020; Chen et al., 2022). Additionally, in prostate, thyroid, colorectal, and other cancers, LAMB3 may aid tumor growth (Reis et al., 2013; Jung et al., 2018; Zhu et al., 2020). As a result, LAMB3 has the potential to be a novel pan-cancer biomarker and therapeutic target and a player in tumor formation and progression. By examining the relationship between LAMB3 expression and prognosis, the immune microenvironment, and DNA methylation in cancer patients, we used a variety of bioinformatics approaches to investigate the potential role of LAMB3 in pan-cancer.
To begin, we examined LAMB3 gene and protein expression levels in normal and cancerous tissues using the GTEx, TCGA, BioGPS, and HPA databases. The findings demonstrated that the majority of cancer types exhibited a significant upward trend in LAMB3 expression, in line with trends other researchers have observed in cancers such as colorectal, thyroid, pancreatic, and head and neck squamous cell carcinomas (Wang et al., 2018; Liu et al., 2019; Zhu et al., 2020; Islam et al., 2021). The high expression of LAMB3 is also consistent with the expression trends we found in lung cancer cell lines and tissues from lung cancer patients.
Secondly, we investigated the possibility of LAMB3 functioning as a diagnostic biomarker for pan-cancer. The results indicate that LAMB3 has excellent diagnostic value in most cancers and is also an independent risk factor for most cancers. In the forest plot of LAMB3 and clinicopathological features, high LAMB3 expression in most cancers, including LUSC, LUAD, KIRP, KIRC, and HNSC, is associated with a poor prognosis; in a few cancers, such as BLCA, high LAMB3 expression instead indicates a better prognosis. The expression of LAMB3 in the molecular and immune subtypes of pan-cancer was then investigated to learn more about its potential mode of action, and most cancer types showed significantly distinct expression patterns of LAMB3 across immune and molecular subtypes. Together, these results indicate that LAMB3 is a potential pan-cancer prognostic biomarker, even though expression and prognosis differ in a small number of cancers. Tumor-infiltrating lymphocytes (TILs) and TMB are important factors in good immunotherapy outcomes and pan-cancer prognosis (Azimi et al., 2012; Steuer and Ramalingam, 2018; Fumet et al., 2020). Our study shows a strong correlation between LAMB3 and TILs: in the majority of cancers, including lung cancer, LAMB3 expression was negatively correlated with T cells, B cells, and macrophages and positively correlated with NK cells. It has been demonstrated that macrophages, as an essential component of the tumor microenvironment (TME), aid immune evasion and suppression (Gajewski et al., 2013), and NK cells with high expression can improve the capacity of DC cells to cross-present tumor antigens (Han et al., 2019). Furthermore, LAMB3 expression was correlated with marker genes of other cytokines known to be immunostimulatory or immunosuppressive, indicating that LAMB3 may play an immune role in lung cancer and other cancers. In addition to confirming that LAMB3 expression is closely linked to the biological processes of immune cells and immune-related molecules in the majority of cancers, our research provides additional insight into the broader suitability of LAMB3 for tumors. The close connection between LAMB3 and the TME in human cancers is also demonstrated by the correlation of LAMB3 with TMB and MSI. In summary, LAMB3 expression can influence the prognosis of cancer patients and is closely linked to tumor immunity, so immunotherapeutic agents may now have a new target in this gene. Finally, we examined the LAMB3 co-expression network. In terms of immunity, we discovered that LAMB3 and its co-expressed genes participate in antigen processing and presentation, B/T cell activation, and the regulation of immune responses; LAMB3 is equally closely related in terms of oncogenic signatures and gene ontology. Furthermore, at the cellular level, we found that reducing LAMB3 expression in the lung cancer cell lines A549 and H1299 inhibited the proliferation, migration, and invasive ability of tumor cells. Preliminary mechanistic exploration showed that LAMB3 can regulate PD-L1 and CTLA4 expression, echoing the above findings that LAMB3 regulates T cell and tumor immunity. CTLA-4 inhibits T cell activation by competing with CD28 receptors to bind B7 ligands on antigen-presenting cells (Stamper et al., 2001; Fife and Bluestone, 2008).
PD-L1 is a ligand of PD-1, and their binding can transmit inhibitory signals in T cells, thereby reducing T cell proliferation and tumor-killing activity (Francisco et al., 2010). Both CTLA-4 and PD-1 are common inhibitory checkpoints on
activated T cells and have been found to be among the most reliable targets for cancer treatment. The combination of CTLA-4 and PD-1 blockers has a synergistic effect on the activation of the anti-tumor immune response (Rotte, 2019). LAMB3 is likely to inhibit T cell infiltration through the above-mentioned targets, thus enabling "immune escape." Meanwhile, LAMB3 may also promote the malignant behavior of tumor cells by regulating tumor cell stemness and β-catenin-related signaling pathways. Taken together, these results suggest that LAMB3 has the potential to be a target for anti-cancer immunotherapy.
In summary, we have performed the first comprehensive and systematic analysis of LAMB3 and cross-validated it using different databases, patient samples, and tumor cell lines. LAMB3 was differentially expressed between most tumors and normal tissues, and its expression correlated with clinical prognosis, the immune microenvironment, and DNA methylation. Although this bioinformatics work draws on various databases, clinical samples, and tumor cell lines, the study has some limitations. First, different databases contain cancer cohorts that are not fully consistent with one another; a larger number of clinical samples and tumor cell lines would be required to better understand LAMB3's expression and function. Second, despite the investigation of the gene's potential signaling pathways and its prognostic and diagnostic value in pan-cancer, more in-depth in vitro and in vivo experiments to confirm these findings are lacking. Third, despite our findings that immune cell infiltration and associated immune targets in human cancers are closely linked to LAMB3 gene expression, we lack direct evidence that LAMB3 influences prognosis by participating in immune infiltration; the mechanism by which LAMB3 is involved in immune regulation is still unknown, and the effect of LAMB3 on tumor immunity varies with tumor type. The mechanisms of related immunotherapy require additional cellular and animal research. In conclusion, further investigation of LAMB3 in lung cancer and its significance in tumor immunity is warranted.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material.
Ethics statement
The studies involving human participants were reviewed and approved by the Institutional Research Ethics Committee of The First Affiliated Hospital of Kunming Medical University. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
"Biology"
] |
Evolution of Feeding Shapes Swimming Kinematics of Barnacle Naupliar Larvae: A Comparison between Trophic Modes
Synopsis A central goal in evolutionary biology is connecting morphological features with ecological functions. For marine invertebrate larvae, appendage movement determines locomotion, feeding, and predator avoidance ability. Barnacle larvae are morphologically diverse, and the morphology of non-feeding lecithotrophic nauplii are distinct from those that are planktotrophic. Lecithotrophic larvae have a more globular body shape and simplified appendages when compared with planktotrophs. However, little is known about whether and how such morphological changes affect kinematics, hydrodynamics, and ecological functions. Here, we compared the nauplii kinematics and hydrodynamics of a lecithotrophic Rhizocephalan species, Polyascus planus, against that of the planktotrophic nauplii of an intertidal barnacle, Tetraclita japonica. High-speed, micro-particle image velocimetry analysis showed that the Polyascus nauplii swam faster and had higher amplitude and more synchronous appendage beating than the Tetraclita nauplii. This fast swimming was accompanied by a faster attenuation of induced flow with distance, suggesting reduced predation risk. Tetraclita nauplii had more efficient per beat cycles with less backward displacement during the recovery stroke. This “anchoring effect” resulted from the anti-phase beating of appendages. This movement, together with a high-drag body form, likely helps direct the suction flow toward the ventral food capturing area. In sum, the tradeoff between swimming speed and predation risks may have been an important factor in the evolution of the observed larval forms.
Introduction
Nauplius is a homologous developmental stage shared by all crustaceans, and the free-living form of the nauplius has persisted in most lineages (Williams 1994b; but see Scholtz 2000). Despite being a conserved larval stage, the body forms of free-living nauplii are diverse (Dahms et al. 2006; Martin et al. 2014), and differences in swimming behaviors have been reported (Gauld 1959; Moyse 1984). It was posited that lability in naupliar phenotypes, especially that of behavior, allows diverse functions to evolve, which in turn contributed to the persistence of the nauplius during the adaptive radiation of crustaceans (Williams 1994a). However, few data are available on the relationship between naupliar morphology and kinematics, and on how phenotypic differences translate to functional performance by changing the nauplii's interactions with the surrounding fluid.
Swimming kinematics and/or hydrodynamics of nauplii have been previously described, mainly for copepods (Johnson et al. 2011; Borg et al. 2012; Kiørboe et al. 2014; Wadhwa et al. 2014; Lenz et al. 2015). And yet, the studied copepod nauplii represent only a fraction of known naupliar forms. A striking example of diversity in naupliar forms can be found among barnacle (Cirripedia) nauplii. They are easily distinguished from other crustacean nauplii by the presence of a pair of frontal horns, which are unique to barnacles (Høeg and Møller 2006). The presence of frontal horns, or the less streamlined overall naupliar forms of barnacles, was thought to be costly for locomotion but possibly beneficial for suspension feeding (Moyse 1984; Emlet and Strathmann 1985). A comparative study of barnacle naupliar forms supports this functional tradeoff: common planktotrophic nauplii have relatively longer frontal horns and tail spines than lecithotrophic nauplii that do not feed (Wong et al. 2018). However, without empirical data on how lecithotrophic nauplii perform, inference on such a morphology-function link still lacks mechanistic insight (Koehl 1996).
Planktotrophic barnacle nauplii are "current feeders." They are capable of cruising through water and generating feeding currents at the same time (Lochhead 1936). When locomotion is tightly linked with feeding, a compromise between the two functions is highly likely (Strathmann and Grünbaum 2006). For instance, an optimized mode of propulsion in nauplii is to paddle all three appendage pairs radially to push themselves forward; however, such movement would push food particles away from the body, compromising feeding. Another example of a tradeoff is that feeding currents spanning a larger area will entrain more food particles, and yet the associated fluid disturbance will pose a higher predation risk from rheotactic predators (Kiørboe et al. 2010). In sum, locomotory performance is constrained not only by the need to feed but also by the need to retain stealth for protection from predators. Lecithotrophic nauplii have evolved a few times within Cirripedia and can be found in all three superorders (Martin et al. 2014). Most of them are found in parasitic barnacles or are associated with larval adaptation to oligotrophic habitats. Rhizocephala, the superorder in which all barnacles are specialized parasites, have only lecithotrophic nauplii (Høeg 1995). We hypothesize that the swimming of lecithotrophic rhizocephalan nauplii, which are released from the constraint of feeding, will display kinematic characteristics that support the model of optimized nauplius swimming (Takagi 2015), and hydrodynamic signals that minimize predation risk.
Here, we compared the kinematics and hydrodynamics of the nauplii of the rhizocephalan species Polyascus planus, which are internal parasites of grapsid crabs, against those of the free-living intertidal barnacle Tetraclita japonica. We focused on performance related to three major sources of selection pressure: locomotion, predation risk, and feeding. We specifically compared the proficiency (normalized velocity) and efficiency (forward:backward displacement ratio) of swimming, the spatial attenuation of the flow signal to predators, and the flux of the suction current generated during the recovery stroke. We also compared the swimming kinematics that likely underlie these differences in performance.
Methods
Collection of nauplii
Adults of T. japonica were collected from the rocky intertidal in Clear Water Bay, Hong Kong (22°20′22″N, 114°16′E). After collection, egg masses were dissected from the mantle cavity of T. japonica and maintained in aerated filtered seawater (25 °C, 33 psu) until nauplii hatched. Host crabs of P. planus (Grapsus albolineatus and Pachygrapsus crassipes) with visible externa were hand caught from the rocky intertidal at Badouzi, NE Taiwan (25°08′50″N, 121°47′40″E), and reared until release of nauplii from the externa. All hatched nauplii were transferred to fresh filtered seawater (25 °C, 33 psu) and reared to stage II for video observations. Naupliar morphometric data were gathered through digital microscopy and are presented in Table 1.
Video acquisition
A custom-made glass cuvette (25 × 75 × 5 mm) was used as a recording chamber and held inside a dark room with temperature maintained at 25 °C. An external tank with a larger volume of water (400 mL) was used to buffer small temperature fluctuations. A high-speed camera (FastCam Mini UX100, Photron Ltd.) fitted with a bellows and a 60 mm focal length lens was used to video record swimming nauplii. Illumination was achieved with an array of white LEDs. Video acquisition was controlled with PFV software (Photron Ltd.) and recorded at a resolution of 1280 × 1024 pixels at 2000 frames s⁻¹. Microalgae (Isochrysis galbana) and neutrally buoyant micro-plastic beads (2.32 μm in diameter, Spherotech Inc.) were used as seeding particles to trace the fluid flow around T. japonica and P. planus nauplii, respectively. About 30 individuals were used in each video session. The nauplii were not tethered, so successful recording depended on nauplii passing the field of view on the right focal plane (see details in the Supplementary Methods). Videos were taken from both the dorsal/ventral (xy plane) and lateral (yz plane) views, but the majority of videos (60%) analyzed were from the xy plane due to difficulty in obtaining videos from the lateral view.
(Table 1 notes: values are mean ± SE (n = 5), except carapace height and flux, which were calculated from the lateral view (n = 2). "% in phase" compares the percentage of time pairs of appendages move in the same direction. Values with statistically significant differences between taxa are bolded (P < 0.05, permutational t-test with 9999 permutations). ant1, antennule; ant2, antenna; mand, mandible.)
Vector field calculation
Videos were imported into DaVis (version 8.2.1, LaVision GmbH) for flow field computation. Prior to cross-correlation calculation, masking of larvae was performed with three background-removal algorithms (smoothing, sliding maximum, and sliding minimum subtractions), followed by thresholding. A multi-pass algorithm with a decreasing size of interrogation windows (from 64 × 64 to 32 × 32 pixels for P. planus, and 96 × 96 to 64 × 64 pixels for T. japonica, both with 50% overlap) was used in the cross-correlation computation of instantaneous flow velocity vectors. The size of the interrogation window was chosen based on the density of seeding particles such that each window contained >15 particles. Vector post-processing was performed to remove outlier vectors before exporting the final velocity vectors V into grids of 80 × 64 cells (each cell representing 16 × 16 pixels, with (u, v) components representing velocity in the (x, y) directions) for further calculations. Vector field interpolation was performed for T. japonica to achieve the same density of vectors in the final vector fields for both species observed.
Locomotion: swimming velocity and kinematics
For swimming and kinematics analyses, about 40 frames were extracted from each video covering a complete beat cycle, sampled at approximately equidistant time points. Identification of beat cycles was first estimated from the videos by eye and later quantitatively determined from the angular positions of the swimming appendages. Displacement of the swimming nauplius was calculated from the distance between the centroids of three body landmarks on the larva between frames: the tips of the frontal horns and the tip of the dorsal thoracic spine (Fig. 1). The direction of displacement was determined from the sign of the dot product of the displacement vector and the vector from the centroid to the tail spine (dorsal thoracic spine; designated $\vec{CT}$ here). A negative dot product indicates displacement opposite to $\vec{CT}$ and was defined as forward displacement, and vice versa; a value of zero was defined as no displacement in the direction parallel to $\vec{CT}$. Cumulative displacement curves, i.e., the cumulative sums of the displacement of the naupliar body's centroid over time, were used to compare displacement patterns of the moving naupliar body. The Reynolds number was calculated as Re = UL/ν, with U the average swimming speed, L the larval length, and ν the kinematic viscosity of seawater at 25 °C, 33 psu. To compare the efficiency of propulsion per beat, we calculated the ratio of forward to backward displacement. The angle of each of the three pairs of naupliar appendages (antennule, ant1; antenna, ant2; mandible, mand) was defined as the angle between the vector from the centroid to the appendage tip, $\vec{CA}$, and $\vec{CT}$, calculated as $\theta = \cos^{-1}\big(\vec{CA}\cdot\vec{CT}\,/\,(|\vec{CA}|\,|\vec{CT}|)\big)$. Marking of body landmarks and appendage tips was performed in tpsDIG2 (version 2.30; Rohlf 2010). Angular positions of the appendages were digitized only for the right side. Swimming velocity and the angular velocity of the appendages were calculated by taking the time derivative of the larval centroid displacement and the angular displacement of the appendages, respectively. Two metrics were calculated to quantify differences in the beat timing of appendages: the angular separation between combinations of appendages, and the proportion of time that combinations of appendages moved in the same direction (beating or retracting). These metrics were compared with a permutational t-test run with 9999 permutations in R.
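A minimal numeric sketch of the displacement-direction rule and the appendage-angle formula described above, with hypothetical landmark coordinates rather than digitized data:

```python
# Minimal sketch: classify displacement direction from the sign of the dot
# product with the centroid-to-tail vector CT, and compute an appendage angle.
import numpy as np

landmarks_t0 = np.array([[0.0, 1.0], [1.0, 1.0], [0.5, -1.0]])  # horn, horn, tail tip
landmarks_t1 = landmarks_t0 + np.array([0.0, 0.05])             # next frame (hypothetical)

c0, c1 = landmarks_t0.mean(axis=0), landmarks_t1.mean(axis=0)
displacement = c1 - c0
ct = landmarks_t0[2] - c0            # centroid -> dorsal thoracic spine tip

sign = np.dot(displacement, ct)
direction = "forward" if sign < 0 else ("backward" if sign > 0 else "none")
print(direction, np.linalg.norm(displacement))

ca = np.array([0.9, 0.3]) - c0       # centroid -> hypothetical appendage tip
angle = np.degrees(np.arccos(np.dot(ca, ct) / (np.linalg.norm(ca) * np.linalg.norm(ct))))
print(f"appendage angle = {angle:.1f} deg")
```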
Locomotion: vortex circulation
Swimming nauplii produce vortices with their beating appendages as they propel themselves forward, and the vortex structure and strength are related to the amount of thrust produced. We quantified and compared the vortex circulation produced by the beating appendages of the nauplii directly from the flow field. Circulation $\Gamma$ was calculated as the surface integral of the vorticity $\omega$ over the area A bounded by the vortices,

$$\Gamma = \iint_{A} \omega \, dA .$$

$\omega$ was calculated in the DaVis software. A discrete approximation of the circulation was computed for each frame at time t as the sum of the vorticity at grid positions (x, y) multiplied by the area $\Delta A$ represented by each cell,

$$\Gamma(t) \approx \sum_{(x,y)} \omega(x, y, t)\, \Delta A .$$

Only vortices on the right side of the larva were used for the calculation, and the vortices were compared between species with a permutational t-test run with 9999 permutations in R.
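The discrete circulation approximation can be sketched as below; the vorticity field, vortex mask, and pixel pitch are synthetic stand-ins for the DaVis output, not measured values:

```python
# Minimal sketch: circulation as the sum of vorticity over grid cells
# bounding a vortex, multiplied by the cell area (synthetic values).
import numpy as np

pixel_pitch = 6.5e-6                             # m per pixel, hypothetical
cell_area = (16 * pixel_pitch) ** 2              # each cell is 16 x 16 pixels
omega = np.random.default_rng(4).normal(0, 5, (64, 80))  # vorticity, s^-1
vortex_mask = np.abs(omega) > 10                 # crude stand-in for a vortex boundary

gamma = np.sum(omega[vortex_mask]) * cell_area   # circulation, m^2 s^-1
print(f"circulation = {gamma:.3e} m^2/s")
```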
Predation risk: spatial attenuation of flow
Flow disturbance generated by nauplii is expected to decay over distance, and a faster spatial decay imposes less risk of being detected by a potential predator. Flow speed is a function of the distance r from the larva,

$$\|V\| \propto r^{n} .$$

To compare the risk of predation, represented as the magnitude of the hydrodynamic signal, we calculated n, the power of spatial attenuation, from the velocity field. The computation followed a method similar to that of Kiørboe et al. (2014): binning of flow speed was first performed with different speed thresholds U*. The spatial extent of the flow was then determined as the radius r of a circle of area equivalent to the area covered by the binned speed S(U*). The power n was estimated by a power-law fit, i.e., by regression analysis with ln(U*) and ln(r) as the y and x of the regression equation, respectively; n was then obtained from the slope of the fit. We compared the power of spatial attenuation of flow at the peak of the power stroke between the two species with a t-test.
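A minimal sketch of the attenuation fit on a synthetic radial flow decaying as $r^{-2}$, so the regression should recover a power near -2; the grid, thresholds, and prefactor are illustrative assumptions:

```python
# Minimal sketch: estimate the attenuation power n by regressing ln(U*) on
# ln(r), where r is the radius of a circle with the same area as the region
# where the speed exceeds the threshold U*. Synthetic field: |V| ~ r^-2.
import numpy as np

x, y = np.meshgrid(np.linspace(-5e-3, 5e-3, 200), np.linspace(-5e-3, 5e-3, 200))
r_grid = np.hypot(x, y) + 1e-5           # m, small offset avoids division by zero
speed = 1e-9 / r_grid**2                 # m/s, synthetic flow with n = -2

cell_area = (10e-3 / 200) ** 2
thresholds = np.logspace(-4, -3, 10)     # candidate U* values, m/s
radii = [np.sqrt(np.sum(speed >= u) * cell_area / np.pi) for u in thresholds]

n_fit, _ = np.polyfit(np.log(radii), np.log(thresholds), 1)
print(f"attenuation power n = {n_fit:.2f}")   # close to -2 for this field
```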
Feeding: flux
We calculated the flux toward the food-capturing region (the vicinity of the labrum) of a nauplius during the recovery stroke to compare the volume of feeding current generated by the nauplii. Polyascus nauplii do not feed and possess only a vestigial labrum; thus, their water flux represents a hypothetical equivalent of the Tetraclita nauplius's feeding current. In a three-dimensional (3-D) velocity field, the flux $\Phi$ can be calculated as the surface integral of the velocity vectors passing through a defined area A in the direction normal to the surface,

$$\Phi = \iint_{A} \vec{V} \cdot \hat{n} \, dA ,$$

where $\hat{n}$ is the unit normal vector and the dot product gives the magnitude of the velocity vector projected onto the normal direction (w denotes the velocity in the z direction). Since our PIV data are cross-sectional (2-D), we computed the "flux" passing through a line segment of length equal to the body width of the larva from the 2-D vector field (Fig. 1). A discrete approximation was computed by summing the magnitudes of the velocity vectors projected onto the normal direction, multiplied by the length dl represented by each velocity vector,

$$\Phi \approx \sum \big(\vec{V} \cdot \hat{n}\big)\, dl .$$

A similar computation was performed for velocity fields in lateral view, (v, w), for line segments of length equal to 1.5× the body height of a larva in the ventral direction (Fig. 1). Fluxes were calculated both in the earthbound frame of reference (defined as "absolute flux") and in the nauplius's frame of reference (defined as "relative flux," the absolute flux minus the naupliar body's velocity). In other words, relative flux estimates flow relative to the position of the nauplius's body, which is essential to determine whether flow carrying potential food particles is approaching or leaving the food-capturing area. Relative fluxes in both top and lateral views were compared between species with a t-test.
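A minimal sketch of the 2-D flux approximation described above; the normal direction, segment length, and velocity samples are hypothetical placeholders:

```python
# Minimal sketch: 2-D "flux" through a line segment as the sum of velocity
# components projected on the segment normal, weighted by the length dl.
import numpy as np

n_hat = np.array([0.0, -1.0])            # unit normal, pointing ventrally
segment_length = 450e-6                  # m, roughly one body width (hypothetical)
k = 20                                   # velocity vectors sampled along the segment
dl = segment_length / k

uv = np.random.default_rng(6).normal(0, 1e-3, (k, 2))   # (u, v) samples, m/s
flux = np.sum(uv @ n_hat) * dl                           # m^2/s in two dimensions
print(f"flux = {flux:.3e} m^2/s")
```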
Results
Swimming proficiency and efficiency
The non-feeding nauplii of Polyascus swam more than twice as fast as the feeding Tetraclita nauplii: ~29 body lengths s⁻¹ (7.7 ± 0.4 mm s⁻¹) compared with ~10 body lengths s⁻¹ (4.5 ± 0.5 mm s⁻¹) (Table 1). The higher swimming speed of Polyascus nauplii put these smaller nauplii (265.0 ± 7.4 μm carapace length) at a Reynolds number (ca. 2) similar to that of the larger (447.4 ± 16.8 μm carapace length) Tetraclita nauplii (Table 1). High-speed videos of swimming nauplii (Supplementary Video S1) showed that both the fast and slow swimmers suffered backward displacement during the recovery stroke. In fact, Tetraclita nauplii pushed themselves back less during the recovery stroke relative to their forward displacement during the power stroke, making them more efficient in terms of the forward:backward displacement ratio (Fig. 2A and Table 1).
Swimming kinematics
The swimming velocity difference is best explained by the large difference in beat frequency between the species. Polyascus nauplii beat their appendages at approximately three times the frequency of Tetraclita nauplii (Table 1), which translates into higher angular speeds in all pairs of appendages (Table 1). In addition, Polyascus nauplii beat their mandibles at larger amplitudes (Fig. 2A, Table 1, and Supplementary Videos S2 and S3). There was no significant difference in beat amplitude for the other two pairs of appendages (Table 1). For both species, the antennae beat with the largest amplitude and the antennules with the smallest. Within each species, there was no significant difference in angular speed between power and recovery strokes, except for the mandibles of Polyascus nauplii (Table 1).
Besides differences in frequency and amplitude, the two species had distinctive phase-shift patterns between pairs of appendages, summarized in Lissajous curves (Fig. 2B) and in the percentage of appendage pairs moving in the same direction (Table 1). Polyascus nauplii swam with a metachronal wave of power strokes that began with the mandibles and ended with the antennules, followed by a synchronous recovery stroke during which all pairs of appendages retracted with little separation (Fig. 2). In contrast, Tetraclita nauplii swam with only the mandibles and antennae beating in a similar metachronal power stroke, while their antennules moved away from the other two appendage pairs, as evident from the large angular separations at mid power stroke (Table 1). At mid recovery stroke, the antennules and mandibles began to move away from each other, enlarging the angular separation during the Tetraclita nauplii's recovery stroke. In sum, kinematic differences between species were more pronounced during the recovery stroke than during the power stroke.
Vortex circulation
Differences in kinematics were also reflected in differences in vorticity and circulation. Strokes of Polyascus nauplii created higher vorticity ($\omega$) than strokes of Tetraclita nauplii: $\omega$ at the end of the power stroke was −52.6 ± 5.3 s⁻¹ (SE) compared with −24.2 ± 0.6 s⁻¹, corresponding to the higher angular speed of the beat (Table 1). However, the vortex circulation of Polyascus nauplii had on average a 46% smaller spatial extent than that of Tetraclita nauplii (Fig. 3). Thus, when integrated over area, the circulation of the body vortices ($\Gamma$) was of similar magnitude between the two species at the end of the power stroke (Table 1 and Supplementary Fig. S3). However, the relative contribution of each limb to vorticity circulation differed qualitatively between the two species (Fig. 3 and Supplementary Videos S4 and S5). Body vortices created by the mandibles' beat in Tetraclita nauplii were at more posterior positions and had a smaller extent, corresponding to the smaller beat amplitude of the mandibles in Tetraclita than in Polyascus nauplii (Fig. 3A, F). In addition, the extent and magnitude of vorticity created by the mandibles' beat was considerably smaller than that created by the antennae in Tetraclita nauplii, corresponding to the large amplitude difference between these two pairs of appendages (Fig. 3A, B). In contrast, the mandibles and antennae of Polyascus nauplii created vortices of similar extent and magnitude during power strokes (Fig. 3G, H).
Spatial attenuation of fluid disturbance
Polyascus nauplii swam with a small area of influence (with flow ≥ 0.0005 m s⁻¹; Kiørboe et al. [2014]) at the end of the power stroke, around half that of Tetraclita nauplii (Table 1). The area of influence varied through the beat cycle, but the observed differences between species are robust (Supplementary Fig. S2).
This difference can be explained by the faster spatial flow attenuation observed for Polyascus nauplii (Fig. 4). At the peak of the power stroke, flow speed near the body of Polyascus nauplii was higher, but it attenuated sharply with distance, with an average power of −2.79 compared with −1.47 in Tetraclita nauplii (Fig. 4C). This sharp decline in flow speed limited the spatial extent of the fluid disturbance created, allowing the non-feeding Polyascus nauplii to swim more quietly.
Flux and feeding current
From the velocity fields, potential paths of fluid flow carrying food particles toward the nauplius body could be observed. During the power stroke, fluid was pushed toward the body of the nauplius from both the left and right sides toward its posterior end (Fig. 3). During the recovery stroke, fluid was pulled from the posterior end toward the body by the appendages, creating a suction feeding current toward the food-capturing region.
Relative flux, calculated from the flow relative to the moving body of the nauplius, shows that fluid did not flow toward the nauplius's body during the power stroke; instead, fluid followed the moving body of the nauplius, going forward due to viscosity (see Supplementary Fig. S6 for absolute flux and Supplementary Fig. S7 for relative flux; fluxes calculated from lateral views are in Supplementary Figs. S8 and S9). Because the velocity of the moving nauplius's body was about an order of magnitude larger than the fluid flow velocity created by the swimming stroke, the relative velocity of flow toward the body's proximity was dictated by the body velocity calculated from the centroids of the body landmarks (Supplementary Fig. S10). Thus, flux toward the body was achieved only during the recovery stroke, when the body velocity was reversed.
Relative fluxes were not significantly different between the two species (Table 1), and Polyascus nauplii could easily bring particles to the proximity of their body with a backward movement during the recovery stroke, even though they do not need to feed. Because the transport of particles cannot be followed in the Eulerian approach of PIV, we analyzed particle paths by tracking particles individually to investigate their fates. Tracing particles during the recovery stroke (Fig. 5 and Supplementary Video S6) revealed that the Tetraclita nauplius drew particles toward its food capture area under the labrum with good accuracy, i.e., the ends of the particle paths matched the capture region at the end of the recovery stroke. A suction current was also generated during the recovery stroke by Polyascus nauplii, but it was not directed toward the vestigial labrum.
Discussion
The observed planktotrophic and lecithotrophic barnacle nauplii differed in locomotory performance, generation of fluid signal, and manipulation of near-body fluid flow. The integrated process of feeding and swimming observed in Tetraclita nauplii led to compromises in swimming speed and predation avoidance. Polyascus nauplii, which are released from the need to feed, swim fast with rapid attenuation of fluid disturbance. This unique comparison of "swimmer versus feeder" reinforces the importance of hydrodynamics in shaping predation risk, and thus zooplankton evolution.
Linking swimming kinematics and hydrodynamic consequences of contrasting larval forms also helps improve our mechanistic understanding of how functional needs shape the evolution of naupliar morphology.
Optimal propulsion of lecithotrophic nauplius larvae
The swimming speed of the Polyascus nauplii is the fastest recorded for barnacles thus far, in terms of both body lengths and distance per unit time (compared with Walker [2004]). Given that the nauplii of both species had similar Reynolds numbers (ca. 2), inertial effects contributed little to the faster swimming of Polyascus nauplii. Two mechanisms likely contribute to this fast swimming of the non-feeding larvae: high beat frequency and a synchronized beat pattern. Although the circulation ($\Gamma$) was similar between the two species, Polyascus nauplii complete three times as many beat cycles per unit time and hence traverse a larger distance. Furthermore, the swimming of Polyascus nauplii resembled the "swimming-by-jumping" observed in copepod nauplii, in which a metachronal stroke is used (Borg et al. 2012). The metachronal stroke, featuring sequential power strokes and simultaneous recovery strokes of the appendages, has been identified as the most efficient swimming mechanism for multi-legged swimmers (Lenz et al. 2015; Takagi 2015). Other similarities, such as a higher frequency of appendage beat and a higher stroke amplitude of the mandibles (Borg et al. 2012), were also observed in Polyascus nauplii. These shared characteristics likely help increase swimming speed, promoting convergence toward the metachronal stroke among fast-swimming nauplii.
Tradeoffs between feeding and efficient swimming
In contrast to fast-swimming nauplii, planktonic crustacean nauplii that cruise slowly through the water do not share a single stroke pattern (Moyse 1984; Johnson et al. 2011; Borg et al. 2012). While the antennae are the main appendages for propulsion, the roles of the remaining two pairs of limbs vary (Gauld 1959; Walker et al. 1987; Anderson 1993; Williams 1994b). In Tetraclita nauplii, the antennules moved in anti-phase to the antennae and mandibles for a large proportion of the time, supporting the previous view that barnacle nauplii's antennules contribute little to propulsion (Walker et al. 1987).
In fact, this anti-phase beating of the antennules might serve to "anchor" the moving body of the nauplius during the recovery stroke. Our particle-tracking comparison suggested that successful capture of particles in planktotrophic barnacle nauplii depends on matching between the particles brought in by the suction current produced by the antennae and mandibles and the position of the feeding chamber at the end of the recovery stroke. Excessive backward displacement of the nauplius body in any direction could shift the focus of the suction flow, resulting in misdirected flow. Therefore, retarded backward displacement during the recovery stroke, i.e., the "anchoring effect," could be crucial in "guiding" the feeding current. The observed anti-phase beating ensured that the antennules were fully extended when the antennae reached their peak retraction speed. Together with the drag-increasing long frontal horns and tail spines, the spanning antennules could contribute to the anchoring effect in Tetraclita nauplii. However, such dampeners of backward displacement likely come at the cost of propulsion, as they add a burden to forward displacement during the power stroke.
The observed mechanism for reducing backward displacement differs from that suggested for copepod nauplii, which involves the movement of the mandibles (Borg et al. 2012). Tetraclita nauplii's mandibles beat with small amplitude; this limited radial motion is likely a result of their known direct role in pushing food particles toward the food-collecting region with their medially directed setae (Gauld 1959; Anderson 1993). Supporting this notion, the contrasting mandible beat patterns of the feeding and non-feeding nauplii corresponded to differences in circulation (Fig. 3). These observations highlight that feeding imposes functional constraints on kinematics, such that movement patterns favoring efficient propulsion do not coincide with those for effective particle capture. The resulting diversity of kinematics in turn helps shape the diversity of naupliar body forms (Wong et al. 2018).
Tradeoffs between feeding and predation risk
Good feeders are often associated with poor swimming (Strathmann and Grünbaum 2006). But the feeding process not only compromises swimming performance, it also puts the feeding nauplii at risk of predation due to the greater fluid signal generated. Fast swimmers characterized by a short power-stroke duration relative to the viscous time scale generate a fluid flow that attenuates quickly, as is well studied in adult copepods (Jiang and Kiørboe 2011). Nauplii of neither copepods nor barnacles can swim as hydrodynamically quietly as adult copepods. Nonetheless, a reduced fluid signal is evident in these fast-swimming crustacean nauplii. The power of spatial attenuation is similar in copepod and barnacle nauplii: $\sim r^{-3}$ for Polyascus and jumping copepod nauplii, and $\sim r^{-1.5}$ for Tetraclita and cruising copepod nauplii (Fig. 4). This observation again highlights how common limiting factors (biomechanical constraints of the naupliar body plan) and driving forces (selection pressure from predation and starvation risk) shape the hydrodynamics of larval locomotion.
Planktotrophy versus lecithotrophy
The better performance in locomotion and predation avoidance of lecithotrophic nauplii prompts us to revisit the question of why the loss of feeding is not more common (Strathmann 2018). One possible explanation is that lecithotrophy is costly in terms of parental investment in eggs; Polyascus and other rhizocephalan barnacles are parasites with plenty of nutrients at their disposal (Høeg 1995), removing the penalty of this investment. The other possibility is that planktotrophy confers benefits: nauplii can spend longer times dispersing and accumulate energy stores to increase the chance of post-settlement survival. Such long-distance dispersal ability, though disputed (Strathmann 2018), could be essential for population maintenance in sessile barnacles.
Our kinematic and hydrodynamic comparisons connect morphological differences among barnacle nauplii to their contrasting ecological needs. The globular-shaped lecithotrophic nauplii swam faster with metachronal limb beats and were hydrodynamically quiet. In contrast, the planktotrophic nauplii increased drag (through anti-phase limb beats and body extensions) to create an accurately directed feeding current. Thus, the functional trade-offs between feeding, locomotion, and predator avoidance impose kinematic and hydrodynamic constraints, which in turn help shape the evolution of larval form.
Author contributions
J.Y.W. collected the data and carried out the analysis. All authors conceived and designed the study, drafted the manuscript, and gave approval for publication.
"Biology",
"Environmental Science"
] |
The Application of Minimal Length in Klein-Gordon Equation with Hulthen Potential Using Asymptotic Iteration Method
The application of the minimal length formalism in the Klein-Gordon equation with the Hulthen potential was studied for the case where the scalar potential equals the vector potential. An approximate scheme was used to solve the Klein-Gordon equation within the minimal length formalism. The relativistic energies and wave functions of the Klein-Gordon equation were obtained using the Asymptotic Iteration Method, and the relativistic energies were calculated numerically with Matlab. The unnormalized wave functions were expressed in terms of hypergeometric functions. The results showed that the relativistic energy increases with increasing minimal length parameter, and the amplitude of the unnormalized wave function increases for larger minimal length parameters.
Introduction
The relativistic effect corrects nonrelativistic quantum mechanics when a strong potential field acts on the particle dynamics. The particle dynamics in the relativistic regime can be described by the Klein-Gordon equation, particularly for spin-zero particles. The Klein-Gordon equation couples two potentials: the vector potential V(r) and the scalar potential S(r). From these two potentials, the Klein-Gordon equation has two standard cases: the scalar potential equal to the vector potential (S(r) = V(r)) and the scalar potential equal to minus the vector potential (S(r) = −V(r)). These conditions appear in nuclear and high-energy physics problems [1-3]. Several researchers have investigated both conditions for particular vector potentials. The main issue in that research is how to reduce the Klein-Gordon equation to a Schrodinger-like equation, so that it can be solved with suitable methods, such as Supersymmetric Quantum Mechanics (SUSY) [4], Nikiforov-Uvarov [1, 5], and the Asymptotic Iteration Method [6, 7]. Various potentials have been used to describe the particle dynamics, such as the harmonic potential [8], Makarov potential [2], Hulthen potential [1, 5], Kratzer potential [6], and trigonometric Poschl-Teller potential [7].
The particle dynamics in quantum mechanics correspond to the position and momentum of the particle. As is well known, the commutation relation between the position and momentum operators is expressed through the Heisenberg uncertainty principle [9], given by

$$[\hat{X}, \hat{P}] = i\hbar \quad (1)$$

where $\hat{X}$ is the position operator, $\hat{P}$ is the momentum operator, $i$ is the imaginary unit, and $\hbar = h/2\pi$ with $h$ the Planck constant. The presence of quantum gravity in quantum mechanics has the consequence of a minimal observable distance on the order of the Planck length. Therefore, the Heisenberg uncertainty principle acquires an additional correction due to quantum gravity, known as the Generalized Uncertainty Principle (GUP) [10, 11], given by

$$[\hat{X}, \hat{P}] = i\hbar\,\big(1 + \beta P^{2}\big) \quad (2)$$

where $\beta$ is the minimal length parameter, with $0 \le \beta \le 1$, and $P$ is the magnitude of the momentum [11]. When the energy is much smaller than the Planck mass, $\beta$ goes to zero and we recover the Heisenberg uncertainty principle [12].
In 2009, Jana and Roy solved the Klein-Gordon equation in the presence of a minimal length for the case S(r) = V(r) using an algebraic approach [13]. The Klein-Gordon equation in the presence of a minimal length has also been solved for S(r) = V(r) using the Feynman approach [14]. In addition, the hypergeometric method has been used to solve the Klein-Gordon equation with a hyperbolic cotangent potential [15], and the Asymptotic Iteration Method with a trigonometric cotangent potential [16, 17].
In this paper, we solve the minimal length formalism in the radial part of the Klein-Gordon equation for the condition S(r) = V(r) with the Hulthen potential using an approximate solution. A similar approximate solution was used by Chabab et al. to solve the Bohr-Mottelson Hamiltonian in the presence of the minimal length formalism by introducing a new wave function [11]. The minimal length formalism reduces the Klein-Gordon equation to a second-order differential equation, and the relativistic energy equation and wave functions are obtained using the Asymptotic Iteration Method.
The study is organized as follows. In Section 2, the approximate form of the Klein-Gordon equation within the minimal length formalism is presented. The Hulthen potential is introduced in Section 3. We describe the Asymptotic Iteration Method in Section 4. The results and discussion are presented in Section 5. Finally, conclusions are given in Section 6.
Approximate Form of Klein-Gordon Equation within Minimal Length Formalism
The Generalized Uncertainty Principle, also called the minimal length formalism, deforms the commutation relation between the position and momentum operators of ordinary quantum mechanics. Following (2), the momentum operator can be rewritten as [9, 12]

$$\hat{P} = \hat{p}\,\big(1 + \beta p^{2}\big) \quad (3)$$

where $\hat{P}$ and $\hat{p}$ are the momentum operators at high and low energy, respectively, and $p$ denotes the magnitude of $\hat{p}$. The minimal length arises in string theory, black holes, quantum gravity, and noncommutative geometry, which yield new corrections to the Heisenberg uncertainty principle and imply a finite minimal uncertainty in position measurements, e.g., at the Planck scale [14].
Accordingly, it is natural to rescale the potential term in (6) so that the nonrelativistic energy is reproduced [18]. The new wave function used to obtain the approximate form of the Klein-Gordon equation is given in [11]; equation (7) is a modification of equation (19) in [11], proposed to eliminate the quadratic Laplacian factor so as to recover equation (20) in [11]. Substituting (7) into (6) yields (8), the approximate form of the minimal length formalism in the Klein-Gordon equation. The $\Delta^{3}$ term is eliminated because $\beta^{2}$ goes to zero, $\beta$ being very small. Here we use the fact that $\Delta$ is a scalar differential operator: if $\Delta$ operates on a scalar field at a point $(x, y, z)$, it produces another scalar field ($\Delta$ is also called the scalar Laplacian). Inserting (7) into (6) requires multiplying Laplacians, and since multiplication of scalar differential operators with constant coefficients is commutative, the $\Delta^{2}$ term can likewise be eliminated when (7) is inserted into (6). To obtain a simple solution of (8), a binomial expansion for small $\beta$ is used; then (8) becomes (9), where $\beta^{2}$ is again neglected. Applying the spherical Laplacian operator,

$$\Delta = \frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial}{\partial r}\right) + \frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial}{\partial\theta}\right) + \frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2}}{\partial\varphi^{2}},$$

to (9) and using the separable ansatz $\psi(r, \theta, \varphi) = R(r)\,\Theta(\theta)\,\Phi(\varphi)$, we obtain the polar part (11) and the radial part (12) of the Klein-Gordon equation in the presence of a minimal length, where the separation constant $\lambda$ corresponds to the angular momentum L. By applying $R(r) = \chi(r)/r$ and $\lambda = l(l+1)$ in (12), we obtain (13), the minimal length formalism of the Klein-Gordon equation with the Hulthen potential in the form of a one-dimensional Schrodinger-like equation.
Asymptotic Iteration Method
The Asymptotic Iteration Method is a method for solving second-order differential equations of the form [19-21]

$$y_n''(x) = \lambda_0(x)\,y_n'(x) + s_0(x)\,y_n(x), \qquad (14)$$

where $\lambda_0(x) \neq 0$ and $s_0(x)$ are the coefficients of the differential equation and n is a quantum number. To obtain the solution, we differentiate (14).
Differentiating (14) repeatedly yields the recursion relations

$$\lambda_k(x) = \lambda_{k-1}'(x) + s_{k-1}(x) + \lambda_0(x)\lambda_{k-1}(x), \qquad (15)$$
$$s_k(x) = s_{k-1}'(x) + s_0(x)\lambda_{k-1}(x). \qquad (16)$$

The eigenvalue is obtained from the quantization condition, which is given by

$$\delta_k(x) = \lambda_k(x)\,s_{k-1}(x) - \lambda_{k-1}(x)\,s_k(x) = 0. \qquad (17)$$

To obtain the wave function, (14) is reduced to the form of (18), a one-dimensional Schrodinger-like equation whose solution is expressed in hypergeometric terms, as in (19), where N is a normalization constant and $_2F_1$ is a hypergeometric function. The unnormalized wave functions of the Klein-Gordon equation are obtained by using (19)-(20) [19-21].
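To make the recursion relations (15)-(16) and the quantization condition (17) concrete, the following is a minimal SymPy sketch applying them to the one-dimensional harmonic oscillator, a standard AIM test case; the oscillator (not the Hulthen problem solved in this paper) is used purely for illustration. Under the substitution $\psi = e^{-x^2/2} f(x)$ one gets $\lambda_0 = 2x$ and $s_0 = 1 - E$, and the roots of $\delta_k = 0$ converge to the known energies $E_n = 2n + 1$.

```python
# Minimal AIM sketch on the 1-D harmonic oscillator: -psi'' + x^2 psi = E psi.
# Writing psi = exp(-x^2/2) f(x) gives f'' = 2x f' - (E - 1) f,
# i.e. lambda_0(x) = 2x and s_0(x) = 1 - E.
import sympy as sp

x, E = sp.symbols("x E")
lam_prev, s_prev = 2 * x, 1 - E  # lambda_0(x) and s_0(x)

for k in range(1, 6):
    # recursion relations (15)-(16)
    lam_k = sp.diff(lam_prev, x) + s_prev + (2 * x) * lam_prev
    s_k = sp.diff(s_prev, x) + (1 - E) * lam_prev
    # quantization condition (17): delta_k = lambda_k s_{k-1} - lambda_{k-1} s_k
    delta = sp.expand(lam_k * s_prev - lam_prev * s_k)
    print(k, sorted(sp.solve(delta.subs(x, 0), E)))  # roots -> 1, 3, 5, ...
    lam_prev, s_prev = lam_k, s_k
```

Each iteration adds one more exact eigenvalue to the root list, illustrating how the quantization condition delivers the spectrum order by order.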
Hulthen Potential
The Hulthen potential is one of the short-range potentials in physics. It has been used in particle physics, atomic physics, nuclear physics, solid-state physics, and chemical physics; Hulthen wave functions have been used in solid-state problems, and Hulthen-like wave functions have been applied to atomic problems [22]. The general Hulthen potential is given by [23]

$$V(r) = -V_0\,\frac{e^{-2\delta r}}{1 - e^{-2\delta r}}, \qquad (21)$$

where $\delta$ is a screening parameter and $V_0$ is the potential depth. The value of the screening parameter is 0.025 for low screening and 0.15 for high screening [22]. To obtain a simple solution, (21) is rewritten in hyperbolic form [23] as

$$V(r) = -\frac{V_0}{2}\left(\coth(\delta r) - 1\right). \qquad (22)$$

By setting $V_0 = 7$ and $\delta = 0.1$, the Hulthen potential is visualized in Figure 1.
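As a quick numerical sanity check of the two forms above (a sketch assuming the $e^{-2\delta r}$ convention reconstructed here, with the $V_0 = 7$, $\delta = 0.1$ values from Figure 1), the exponential and hyperbolic expressions agree identically, by the identity $e^{-2y}/(1 - e^{-2y}) = (\coth y - 1)/2$:

```python
# Numerical check that the exponential form (21) and the hyperbolic
# form (22) of the Hulthen potential coincide; values are illustrative.
import numpy as np

V0, delta = 7.0, 0.1
r = np.linspace(0.001, 0.05, 50)  # the r range shown in the figures

V_exp = -V0 * np.exp(-2 * delta * r) / (1 - np.exp(-2 * delta * r))
V_hyp = -(V0 / 2) * (1 / np.tanh(delta * r) - 1)

assert np.allclose(V_exp, V_hyp)
print(V_exp[:3])
```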
Figure 1 shows the Hulthen potential as a function of r, for r ranging approximately from 0 to 0.05 eV⁻¹ (natural units). The potential is strongly negative at very small values of r, while it tends toward a constant for larger values of r.
Results and Discussion
Equation (13) cannot be solved exactly unless we approximate the centrifugal term $1/r^2$. The approximation is given as [23]

$$\frac{1}{r^2} \approx \frac{\delta^2}{\sinh^2(\delta r)} \qquad (23)$$

for small values of $\delta r$, i.e., $\delta r \ll 1$. This approximation is visualized in Figure 2.
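A short numeric companion to Figure 2, checking the quality of the Pekeris-type approximation (23) over the plotted range (the $\delta = 0.1$ value and r range are those used in the figure):

```python
# Relative error of the approximation 1/r^2 ~ delta^2 / sinh^2(delta r)
# for small delta*r; expected to be of order (delta*r)^2 / 3 ~ 1e-5 here.
import numpy as np

delta = 0.1
r = np.linspace(0.001, 0.05, 50)

exact = 1.0 / r**2
approx = delta**2 / np.sinh(delta * r) ** 2

print(np.max(np.abs(exact - approx) / exact))  # ~1e-5 over this range
```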
Figure 2 shows the red line as the function $1/r^2$ and the light blue line as the function $\delta^2/\sinh^2(\delta r)$. The two lines overlap with each other, so the centrifugal term $1/r^2$ can be approximated by $\delta^2/\sinh^2(\delta r)$, as in [23]. To obtain the exact solution of (13), the approximate term (23) is inserted into (13) together with (22); then we get (24), the Klein-Gordon equation with minimal length for the Hulthen potential, which can be rewritten as (25)-(26). Equation (25) is a second-order differential equation that will be reduced to a hypergeometric-type differential equation: by letting $\coth(\delta r) = 1 - 2z$, we obtain (27)-(28); we then make a further substitution in (29), giving (30)-(31). By setting $\chi(z) = z^{\mu}(1 - z)^{\nu} f(z)$, (31) is reduced to an AIM-type differential equation (32) similar to (14).
By comparing (14) and (32), we obtain (33)-(34). To obtain the eigenvalue, we use (15)-(17) together with (33)-(34), which yields (35). By inserting (27)-(28) and (30) into (35), we obtain the relativistic energy equation of the minimal length formalism in the Klein-Gordon equation with the Hulthen potential, (36), where n is the quantum number. The relativistic energies were calculated numerically using Matlab software; the results are listed in Table 1.
Table 1 shows that the presence of the minimal length and the Hulthen potential in the Klein-Gordon equation influences the relativistic energy. The relativistic energy without the minimal length parameter ($\beta = 0$) is lower than with it, and the relativistic energy increases for larger minimal length parameter and larger quantum number n. The Hulthen potential contributes negatively to the relativistic energy. If we set $\beta = 0$, removing the minimal length parameter from the relativistic energy equation (36), it reduces to the relativistic energy equation of [5], in agreement with that work.
In [5] it was shown that the relativistic energy equation for the Klein-Gordon equation with the Hulthen potential, without the minimal length, depends on the square of the quantum number n.
To obtain the unnormalized wave function, we used (19)-(20), giving (40). Inserting (40) into (19) yields (41). Substituting (41) into $\chi(z) = z^{\mu}(1 - z)^{\nu} f(z)$ gives the unnormalized wave function (42). By applying $\coth(\delta r) = 1 - 2z$ in (42), we obtain the unnormalized wave function of the minimal length formalism in the Klein-Gordon equation, (43). By inspecting Figures 3 and 4, we can see that the minimal length parameter increases the amplitude of the unnormalized wave function.
Conclusion
The minimal length formalism in the Klein-Gordon equation has been investigated using an approximate solution. The minimal length Klein-Gordon equation for the Hulthen potential is solved using the Asymptotic Iteration Method, which yields the relativistic energy and the unnormalized wave functions. The results show that the relativistic energy values increase for larger minimal length parameter and larger quantum number n, and that the minimal length parameter increases the amplitude of the unnormalized wave functions.
Figure 2: The visualization of the approximation (23), with $\delta = 0.1$ and r from 0 to 0.05 eV⁻¹ (natural units).
"Physics"
] |
Drug resistance prediction for Mycobacterium tuberculosis with reference graphs
Tuberculosis is a global pandemic disease with a rising burden of antimicrobial resistance. As a result, the World Health Organization (WHO) has a goal of enabling universal access to drug susceptibility testing (DST). Given the slowness of and infrastructure requirements for phenotypic DST, whole-genome sequencing, followed by genotype-based prediction of DST, now provides a route to achieving this. Since a central component of genotypic DST is to detect the presence of any known resistance-causing mutations, a natural approach is to use a reference graph that allows encoding of known variation. We have developed DrPRG (Drug resistance Prediction with Reference Graphs) using the bacterial reference graph method Pandora. First, we outline the construction of a Mycobacterium tuberculosis drug resistance reference graph. The graph is built from a global dataset of isolates with varying drug susceptibility profiles, thus capturing common and rare resistance- and susceptibility-associated haplotypes. We benchmark DrPRG against the existing graph-based tool Mykrobe and the haplotype-based approach of TBProfiler using 44 709 and 138 publicly available Illumina and Nanopore samples with associated phenotypes. We find that DrPRG has significantly improved sensitivity and specificity for some drugs compared to these tools, with no significant decreases. It uses significantly less computational memory than both tools, and provides significantly faster runtimes, except when runtime is compared to Mykrobe with Nanopore data. We discover and discuss novel insights into resistance-conferring variation for M. tuberculosis – including deletion of genes katG and pncA – and suggest mutations that may warrant reclassification as associated with resistance.
INTRODUCTION
Human industrialization of antibiotic production and use over the last 100 years has led to a global rise in the prevalence of antibiotic-resistant bacterial strains. The phenomenon was even observed within patients in the first clinical trial of streptomycin as a drug for tuberculosis (TB) in 1948 [2], and indeed as every new drug class has been introduced, so has resistance followed. Resistance mechanisms are varied, and can be caused by point mutations at key loci (e.g. binding sites of drugs [3,4]), frameshifts rendering a gene non-functional [5], horizontal acquisition of new functionality via a new gene [6], or upregulation of efflux pumps to reduce the drug concentration within the cell [7].
Phenotypic and genotypic methods for detecting reduced susceptibility to drugs play complementary roles in clinical microbiology. Carefully defined phenotypic assays are used to give (semi)quantitative or binary measures of drug susceptibility; these have the benefit of being experimental, quantitative measurements, and can detect resistance caused by hitherto unknown mechanisms. Prediction of drug resistance from genomic data has different advantages. Detection of a single-nucleotide polymorphism (SNP) is arguably more consistent than a phenotypic assay, as it is not affected by whether the resistance it causes is near some threshold defining a resistant/susceptible boundary. Additionally, combining sequence datasets from different laboratories is more reliable than combining different phenotypic datasets, and using sequence data allows one to detect informative genetic changes (e.g. a tandem expansion of a single gene to form an array, thus increasing dosage). More subtly, defining the cut-off to separate resistant from susceptible is only simple when the minimum inhibitory concentration distribution is a simple bimodal distribution; in reality it is sometimes a convolution of multiple distributions caused by different mutations, and genetic data are sometimes needed to deconvolve the data and choose a threshold [8,9].
The key requirement for a genomic predictor is to have an encodable understanding of the genotype-to-phenotype map. Research has focused on clinically important pathogens, primarily Escherichia coli, Klebsiella pneumoniae, Salmonella enterica, Pseudomonas aeruginosa and Mycobacterium tuberculosis (MTB). The challenges differ across species; almost all bacterial species are extremely diverse, with non-trivial pan-genomes and considerable horizontal gene transfer causing transmission of resistance genes [10]. In these cases, species are so diverse that detection of chromosomal SNPs is heavily affected by reference bias [11]. Furthermore, there is an appreciable proportion of resistance that is not currently explainable through known SNPs or genes [12][13][14]. At the other extreme, MTB has almost no accessory genome, and no recombination or plasmids [15]. Resistance appears to be caused entirely by mutations, insertion/deletions (indels) and rare structural variants, and simple sets of rules ('if any of these mutations are present, or any of these genes inactivated, the sample is resistant') work well for most drugs [16]. MTB has an exceptionally slow growth rate, meaning that culture-based drug susceptibility testing (DST) is slow (2-4 weeks, depending on media), and therefore sequencing is faster [17]. As part of the end TB strategy, the World Health Organization (WHO) strives towards universal access to DST [18], defining target product profiles for molecular diagnostics [19,20] and publishing a catalogue of high-confidence resistance mutations intended to provide a basis for commercial diagnostics and future research [16]. There was a strong community-wide desire to integrate this catalogue into software for genotypic resistance prediction, although independent benchmarking confirmed that there was still need for improvement [12]. Hence, there is a continuing need to improve the understanding of the genetic basis of resistance and integrate it into software for genotypic DST.
In this paper we develop and evaluate a new software tool for genotypic DST for MTB, built on a generic framework that could be used for any bacteria. Several tools have been developed previously [21][22][23][24][25]. Of these, only Mykrobe and TBProfiler work on Illumina and Nanopore data, and both have been heavily evaluated previously [22,23,26,27] - so we benchmark against these. Mykrobe uses de Bruijn graphs to encode known resistance alleles and thereby achieves high accuracy even on indel calls with Nanopore data [27]. However, it is unable to detect novel alleles in known resistance genes, nor to detect gene truncation or deletion, which would be desirable. TBProfiler is based on mapping and variant calling (by default using Freebayes [28]) and detects gene deletions using Delly [29].
Impact Statement
Mycobacterium tuberculosis is the bacterium responsible for tuberculosis (TB). TB is one of the leading causes of death worldwide; before the coronavirus pandemic it was the leading cause of death from a single pathogen. Drug-resistant TB incidence has recently increased, making the detection of resistance even more vital. In this study, we develop a new software tool to predict drug resistance from whole-genome sequence data of the pathogen using new reference graph models to represent a reference genome. We evaluate it on M. tuberculosis against existing tools for resistance prediction and show improved performance. Using our method, we discover new resistance-associated variations and discuss reclassification of a selection of existing mutations. As such, this work contributes to TB drug resistance diagnostic efforts. In addition, the method could be applied to any bacterial species, so is of interest to anyone working on antimicrobial resistance.
Our new software, called DrPRG (Drug resistance Prediction with Reference Graphs), builds on newer pan-genome technology than Mykrobe [11], using an independent graph for each gene in the catalogue, which makes it easier to go back and forth between VCF and the graph. To build an index, it takes as input a catalogue of resistant variants (a simple four-column TSV file), a file specifying expert rules (e.g. any missense variant between codons X and Y in gene Z causes resistance to drug W) and a VCF of population variation in the genes of interest. This allows it to easily incorporate the current WHO-endorsed catalogue [16], which is conservative, and for the user to update the catalogue or rules with minimal effort. Finally, to provide resistance predictions, it takes a prebuilt index (an MTB one is currently provided) and sequencing reads (FASTQ).
We describe the DrPRG method, and to evaluate it, gather the largest MTB dataset of sequencing data with associated phenotype information and reveal novel insights into resistance-determining mutations for this species.
METHODS
DrPRG is a command-line software tool implemented in the Rust programming language. There are two main subcommands: build for building a reference graph and associated index files, and predict for producing genotypic resistance predictions from sequencing reads and an index (from build).
Constructing a resistance-specific reference graph and index
The build subcommand of DrPRG requires a variant call format (VCF) file of variants from which to build a reference graph, a catalogue of mutations that confer resistance or susceptibility for one or more drugs, and an annotation (GFF) and FASTA file of the reference genome.
To ensure the reference graph is not biased towards a particular lineage or susceptibility profile, we selected samples from a VCF of 15 211 global MTB samples [30]. We randomly chose 20 samples from each of lineages 1 through 4, as well as 20 samples from all other lineages combined. In addition, we included 17 clinical samples representing MTB global diversity (lineages 1-6) [31,32] to give a total of 117 samples. In the development phase of DrPRG we also found it necessary to add some common mutations not present in these 117 samples; as such, we added 48 mutations to the global VCF (these mutations were selected as they were the most common minor allele-causing mutations that were not in the reference graph and are listed in the archived repository - see Data Summary). We did not add all catalogue mutations as there is a saturation point for mutation addition to a reference graph, and beyond this point, performance begins to decay (see Sections S1 and S2, available in the online version of this article and [33]).
The build subcommand turns this VCF into a reference graph by extracting a consensus sequence for each gene and sample. We use just those genes that occur in the mutation catalogue and include 100 bases flanking the gene. A multiple sequence alignment is constructed for each gene from these consensus sequences with MAFFT (v7.505) [34,35] and then a reference graph is constructed from these alignments with make_prg (v0.4.0) [11]. The final reference graph is then indexed with Pandora [11].
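For orientation, the per-gene pipeline above reduces to three tool invocations. The sketch below is an assumption-laden illustration, not DrPRG's actual code: we assume MAFFT, make_prg and pandora are on PATH, and the make_prg/pandora subcommands and flags shown are indicative only (check each tool's --help before use).

```python
# Rough sketch of the per-gene index-construction pipeline (hypothetical
# orchestration; subcommand names and flags for make_prg/pandora are
# assumptions and should be verified against the tools' documentation).
import subprocess
from pathlib import Path

def build_gene_graph(gene_fasta: Path, workdir: Path) -> None:
    """Align per-sample consensus sequences, then build and index a PRG."""
    aln = workdir / f"{gene_fasta.stem}.aln.fa"
    # 1. multiple sequence alignment of the consensus sequences
    with open(aln, "w") as out:
        subprocess.run(["mafft", "--auto", str(gene_fasta)], stdout=out, check=True)
    # 2. reference graph from the alignment (make_prg v0.4.0; flags assumed)
    prefix = workdir / gene_fasta.stem
    subprocess.run(["make_prg", "from_msa", "-i", str(aln), "-o", str(prefix)], check=True)
    # 3. index the resulting PRG with pandora (invocation assumed)
    subprocess.run(["pandora", "index", f"{prefix}.prg.fa"], check=True)
```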
Genotypic resistance prediction
Genotypic resistance prediction of a sample is performed by the predict subcommand of DrPRG. It takes an index produced by the build command (see Constructing a resistance-specific reference graph and index) and sequencing reads - Illumina or Nanopore are accepted. To generate predictions, DrPRG discovers novel variants (Pandora), adds these to the reference graph (make_prg and MAFFT) and then genotypes the sample with respect to this updated graph (Pandora). The genotyped VCF is filtered such that we ignore any variant with fewer than three reads supporting it and require a minimum of 1 % read depth on each strand (Section S2). Next, each variant is compared to the catalogue. If an alternative allele has been called that corresponds with a catalogue variant, resistance ('R') is noted for the drug(s) associated with that mutation. If a variant in the VCF matches a catalogue mutation, but the genotype is null ('.'), we mark that mutation, and its associated drug(s), as failed ('F'). Where an alternative allele call does not match a mutation in the catalogue, we produce an unknown ('U') prediction for the drug(s) that have a known resistance-conferring mutation in the relevant gene.
DrPRG also has the capacity to detect minor alleles and call minor resistance ('r') or minor unknown ('u') in such cases. Minor alleles are called when a variant (that has passed the above filtering) is genotyped as being the susceptible (reference) allele, but there is also read depth on the resistant (alternate) allele above a given minor allele frequency parameter (--maf; default is 0.1 for Illumina data). Minor allele calling is turned off by default for Nanopore data, as we found it led to a drastic increase in the number of false-positive calls (this is also the case for Mykrobe and TBProfiler).
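The filtering and classification logic of the last two paragraphs can be summarized in a few lines. This is a minimal sketch, not DrPRG's data model: the record field names (depth, per-strand fractions, genotype) are hypothetical, and the per-gene mapping from 'unknown' variants to drugs is omitted for brevity.

```python
# Sketch of per-variant classification: 'R'/'F'/'U' plus minor-allele
# calls 'r'/'u'. Field names are hypothetical stand-ins for the filtered
# Pandora VCF records described in the text.
def classify(call: dict, catalogue: set, maf: float = 0.1):
    # depth / strand filters: >= 3 supporting reads, >= 1 % depth per strand
    if call["depth"] < 3 or min(call["fwd_frac"], call["rev_frac"]) < 0.01:
        return None
    if call["genotype"] == "null":
        return "F" if call["variant"] in catalogue else None
    if call["genotype"] == "alt":
        # catalogue hit -> resistant; otherwise unknown for the gene's drugs
        return "R" if call["variant"] in catalogue else "U"
    # genotyped as the susceptible allele: check for a minor resistant
    # allele above the --maf threshold
    if call["alt_frac"] >= maf:
        return "r" if call["variant"] in catalogue else "u"
    return None
```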
When building the index for DrPRG and making predictions, we also accept a file of 'expert rules' for calling variants of a certain class. A rule is associated with a gene, an optional position range, a variant type and the drug(s) that rule confers resistance to. Currently supported variant types are missense, nonsense, frameshift and gene absence.
The output of running predict is a VCF file of all variants in the graph and a JSON file of resistance predictions for each drug in the index, along with the mutation(s) supporting that prediction and a unique identifier to find that variant in the VCF file (see Section S3 for an example). The reference graph gene presence/absence (as determined by Pandora) is also listed in the JSON file.
Benchmark
We compare the performance of DrPRG against Mykrobe (v0.12.1) [26] and TBProfiler (v4.3.0) [22] for MTB drug resistance prediction. Mykrobe is effectively a predecessor of DrPRG; it uses genome graphs, in the form of de Bruijn graphs, to construct a graph of all mutations in a catalogue and then genotypes the reads against this graph. TBProfiler is a more traditional approach which aligns reads to a single reference genome and calls variants from that via aligned haplotype sequences.
A key part of such a benchmark is the catalogue of mutations, as this generally accounts for the majority of differences between tools [26]. As such, we use the same catalogue for all tools to ensure that any differences are method-related and not catalogue disparities. The catalogue we chose is the default one provided in Mykrobe [12]. It is a combination of the catalogue described in Hunt et al. [26] and the category 1 and 2 mutations and expert rules from the 2021 WHO catalogue [16]. This catalogue contains mutations for 14 drugs: isoniazid, rifampicin, ethambutol, pyrazinamide, levofloxacin, moxifloxacin, ofloxacin, amikacin, capreomycin, kanamycin, streptomycin, ethionamide, linezolid and delamanid.
We used Mykrobe and TBProfiler with default parameters, except for a parameter in each indicating the sequencing technology of the data as Illumina or Nanopore and the TBProfiler option to not trim data (as we do this in quality control).
We compare the prediction performance of each program using sensitivity and specificity. To calculate these metrics, we consider a true positive (TP) and true negative (TN) as a case where a program calls resistant and susceptible, respectively, and the phenotype agrees; a false positive (FP) as a resistant call by a program but a susceptible phenotype, with false negatives (FNs) being the inverse of FPs. We only present results for drugs in the catalogue and where at least 10 samples had phenotypic data available.
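These definitions reduce to a few lines of code; the following minimal sketch (with toy inputs) mirrors them exactly:

```python
# Sensitivity and specificity from (prediction, phenotype) pairs,
# where both values are 'R' (resistant) or 'S' (susceptible).
def sens_spec(pairs):
    tp = sum(1 for pred, pheno in pairs if pred == "R" and pheno == "R")
    fn = sum(1 for pred, pheno in pairs if pred == "S" and pheno == "R")
    tn = sum(1 for pred, pheno in pairs if pred == "S" and pheno == "S")
    fp = sum(1 for pred, pheno in pairs if pred == "R" and pheno == "S")
    return tp / (tp + fn), tn / (tn + fp)

print(sens_spec([("R", "R"), ("S", "S"), ("R", "S"), ("S", "R")]))  # (0.5, 0.5)
```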
To benchmark the runtime and memory usage of each tool, we used the Snakemake benchmark feature within our analysis pipeline [36].
Quality control
All downloaded Nanopore fastq files had adapters trimmed with porechop (v0.2.4; https://github.com/rrwick/Porechop), with the option to discard any reads with an adapter in the middle, and any reads with an average quality score below 7 were removed with nanoq (v0.9.0) [50]. Illumina reads were preprocessed with fastp (v0.23.2) [51] to remove adapter sequences, trim low-quality bases from the ends of the reads, and remove duplicate reads and reads shorter than 30 bp.
Sequencing reads were decontaminated as described by Hall et al. [27] and Walker et al. [16]. Briefly, sequenced reads were mapped to a database of common sputum contaminants and the MTB reference genome (H37Rv; accession NC_000962.3) [52], only keeping those reads where the best mapping was to H37Rv.
After quality control, we removed any sample with an average read depth <15, or where more than 5 % of the reads mapped to contaminants.
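The post-QC sample filter above amounts to a simple predicate; a minimal sketch (argument names are ours):

```python
# Sketch of the post-quality-control sample filter described above.
def passes_qc(mean_read_depth: float, contaminant_read_fraction: float) -> bool:
    """Keep a sample only if depth >= 15x and <= 5 % contaminant reads."""
    return mean_read_depth >= 15 and contaminant_read_fraction <= 0.05

assert passes_qc(30.0, 0.01) and not passes_qc(10.0, 0.01)
```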
Lineage information was extracted from the TBProfiler results (see Benchmark).
Statistical analysis
We used a Wilcoxon rank-sum paired data test from the Python library SciPy [53] to test for significant differences in runtime and memory usage between the three prediction tools.
The sensitivity and specificity confidence intervals were calculated with Wilson score intervals with a coverage probability of 95 %.
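A small sketch of both statistics follows; the Wilson score interval is computed directly from its closed form at 95 % coverage, and we assume the paired comparison described above maps to SciPy's signed-rank variant (scipy.stats.wilcoxon). All numeric inputs are toy values.

```python
# Wilson score interval and a paired Wilcoxon test (toy data throughout).
import math
from scipy.stats import wilcoxon

def wilson_interval(successes: int, n: int, z: float = 1.96):
    p = successes / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

print(wilson_interval(90, 100))  # e.g. a sensitivity of 90/100 -> ~(0.83, 0.94)
stat, p_value = wilcoxon([161, 170, 150], [307, 280, 310])  # paired runtimes
print(p_value)
```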
RESULTS
To benchmark DrPRG, Mykrobe and TBProfiler, we gathered an Illumina dataset of 45 702 MTB samples with a phenotype for at least 1 drug. After quality control (see Quality control), this number reduced to 44 709. In addition, we gathered 142 Nanopore samples, of which 138 passed quality control. In Fig. 1 we show all available drug phenotypes for those interested in the dataset, although our catalogue does not offer predictions for all drugs listed (see Benchmark). Lineage counts for all samples that passed quality control and have a single, major lineage call can be found in Table 1.
Sensitivity and specificity performance
We present the sensitivity and specificity results for Illumina data in Fig. 2 and Table S2 and the Nanopore data in Fig. 3 and Table S3.
When comparing DrPRG's performance to that of Mykrobe and TBProfiler, we look for instances where the confidence intervals do not overlap, indicating a significant difference. With Illumina data (Fig. 2 and Table S2), DrPRG achieves significantly greater sensitivity than Mykrobe for rifampicin, streptomycin, amikacin, capreomycin, kanamycin and ethionamide, with no significant difference for all other drugs. In terms of sensitivity, there was no significant difference between DrPRG and TBProfiler except for ethionamide, where DrPRG was significantly more sensitive [75.2 % (73.7-76.8) vs 71.5 % (69.8-73.1)]. For specificity, there was no significant difference between the tools except that DrPRG and Mykrobe were significantly better than TBProfiler for rifampicin [97.8 % (97.6-98.0) vs 97.2 % (97.0-97.4)]. There was no significant difference in sensitivity or specificity for any drug with Nanopore data.
In both figures, we show the minimal requirements from the WHO target product profiles for the sensitivity and specificity of genotypic drug susceptibility testing [19] as red dashed lines. Note, a sensitivity target is not specified by the WHO for ethambutol (EMB), capreomycin (CAP), kanamycin (KAN), streptomycin (STM), or ethionamide (ETO). For Illumina data, all tools' predictions for rifampicin, isoniazid, levofloxacin, moxifloxacin and amikacin are above the sensitivity minimal requirement target. TBProfiler also exceeds the target for pyrazinamide, which DrPRG misses by 0.2 %. No drug's sensitivity target was achieved with Nanopore data. For specificity, the tools are all very similar and either exceed or fall below the threshold together (see Fig. 2). The target of >98 % is only met by all tools with Illumina data for ofloxacin, amikacin, linezolid and delamanid. Mykrobe also exceeds the target for capreomycin. As such, amikacin is the only drug where both sensitivity and specificity performance exceed the minimal requirement of the WHO target product profiles. Only capreomycin and kanamycin specificity targets are exceeded (by all tools) with Nanopore data.
However, for Illumina data, we did find that likely sample swaps or phenotype instability [54] could lead to some drugs being on the threshold of the WHO target product profiles. If we excluded samples where all three tools make a FP call for the strong isoniazid and rifampicin resistance-conferring mutations katG S315T (n=152) and rpoB S450L (n=119) [16], respectively, all three tools would exceed the isoniazid specificity target of 98 % - thus meeting both sensitivity and specificity targets for isoniazid. In addition, DrPRG and Mykrobe would meet the rifampicin specificity target of 98 % - leading to both targets being met for rifampicin for these two tools. As previously reported [54,55], we also found considerable instability in the ethambutol result caused by embB mutations M306I (n=827) and M306V (n=519) being called for phenotypically susceptible samples (FP) by all three tools. Other frequent consensus FP calls included fabG1 c-15t, which is associated with ethionamide (n=441) and isoniazid (n=241) resistance, and rrs a1401g, which is associated with resistance to capreomycin (n=241), amikacin (n=70) and kanamycin (n=48). In addition, there were common false positives from gyrA mutations A90V and D94G, which are associated with resistance to the fluoroquinolones levofloxacin (n=108 and n=70, respectively), moxifloxacin (n=419 and n=349) and ofloxacin (n=19 and n=17), and are known to cause heteroresistance and minimum inhibitory concentrations (MICs) close to the critical concentration threshold [56][57][58].
Evaluation of potential additions to the WHO catalogue
False negatives are much harder to investigate because it is not known which mutation(s) were missed, as they are presumably not in the catalogue if all tools failed to make a call. However, looking through those FNs where DrPRG makes an 'unknown' resistance call, we note some potential mutations that may require reclassification or inclusion in the WHO catalogue. For delamanid FNs, we found five different nonsense mutations in the ddn gene in seven samples - W20* (n=2), W27* (n=1), Q58* (n=1), W88* (n=2) and W139* (n=1) - none of which occurred in susceptible samples. We also found 13 pyrazinamide FN cases with a nonstop (stop-loss) mutation in pncA; this mutation type was also seen in 2 susceptible samples. Another pncA mutation, T100P, was also observed in 10 pyrazinamide FN samples and no susceptible samples. T100P only appears once in the WHO catalogue data ('solo' in a resistant sample). As such, it was given a grading of uncertain significance. As our dataset includes those samples in the WHO catalogue dataset, this means an additional nine isolates have been found with this mutation, indicating that this may warrant an upgrade to 'associated with resistance'.
We found an interesting case of allele combinations, where nine ethambutol FN samples have the same two embA mutations, c-12a and c-11a, and embB mutation P397T; this combination is only seen in two susceptible samples. Interestingly, embB P397T and embA c-12a do not appear in the WHO catalogue, but have been described as causing resistance previously [59]. Three katG mutations were also detected in isoniazid FN cases. First, G279D occurs in eight missed resistance samples and no susceptible cases. This mutation is graded as 'uncertain significance' in the WHO catalogue and was seen solo in four resistant samples in that data. Singh et al. performed a structural analysis of the protein change caused by this mutation and found that it produced 'an undesirable effect on the functionality of the protein' [60]. Second, G699E occurs in eight FN samples and no susceptible cases, but has a WHO grading of 'uncertain significance' based on six resistant isolates; thus, we add two extra samples to that count. Third, N138H occurs in 14 FN samples and 1 susceptible sample. In seven of these cases, it co-occurs with ahpC mutations t-75g (n=2) and t-76a (n=5). This mutation only occurs in 3 resistant isolates in the WHO catalogue dataset, giving it an uncertain significance, but we add a further 11 cases. This mutation has been found to cause a high isoniazid MIC and to be associated with resistance [61,62].
Detection of large deletions
There are expert rules in the WHO catalogue that treat gene loss of function (any frameshift or nonsense mutation) in katG, ethA, gid and pncA as causing resistance for isoniazid, ethionamide, streptomycin and pyrazinamide, respectively [16]. Although examples of resistance caused by gene deletion are rare [63][64][65][66][67], with a dataset of this size (n=44 709), we can both evaluate these rules and compare the detection power of DrPRG and TBProfiler for identifying gene deletions (Mykrobe does not, although in principle it could). In total we found 206 samples where DrPRG and/or TBProfiler identified deletions of ethA, katG, or pncA. Although many of these isolates did not have phenotype information for the associated drug (n=100), the results are nevertheless striking (Fig. 4). Given the low false-positive rate of Pandora for gene absence detection [11], these no-phenotype samples provide insight into how often gene deletions are occurring in clinical samples.
Of the 34 isolates where katG was identified as being absent, and an isoniazid phenotype was available, all 34 were phenotypically resistant. DrPRG detected all 34 (100 % sensitivity) and TBProfiler identified 26 (76.5 % sensitivity). Deletions of pncA were detected in 56 isolates, of which 49 were phenotypically resistant. DrPRG detected 47 (95.9 % sensitivity) and TBProfiler detected 46 (93.9 % sensitivity). Lastly, ethA was found to be missing in 16 samples with an ethionamide phenotype, of which 10 were phenotypically resistant. Both DrPRG and TBProfiler correctly predicted all 10 (100 % sensitivity). No gid deletions were discovered. We note that the TP calls made by Mykrobe were due to it detecting large deletions that are present in the catalogue, which is understandable given that the whole gene is deleted.
We conclude that DrPRG is slightly more sensitive at detecting large deletions than TBProfiler (and Mykrobe) for katG, and equivalent for pncA and ethA. However, we note that the WHO expert rule, which predicts resistance for isolates missing specific genes, appears to be more accurate for katG (100 % of isolates missing the gene are resistant) than for pncA (87 % resistant) and ethA (62.5 % resistant).
Runtime and memory usage benchmark
The runtime and peak memory usage of each program were recorded for each sample and are presented in Fig. 5. DrPRG (median 161 s) was significantly faster than both TBProfiler (307 s; P≤0.0001) and Mykrobe (230 s; P≤0.0001) with Illumina data. For Nanopore data, DrPRG (250 s) was significantly faster than TBProfiler (290 s; P≤0.0001), but significantly slower than Mykrobe (213 s; P=0.0347). In terms of peak memory usage, DrPRG (Illumina median peak memory 58 MB; Nanopore 277 MB) used significantly less memory than Mykrobe (1538 MB; 1538 MB) and TBProfiler (1463 MB; 1990 MB) for both Illumina and Nanopore data (P≤0.0001 for all comparisons).
DISCUSSION
The dominant paradigm for analysing genetic variation relies on a central idea: all genomes in a species can be described as minor differences from a single reference genome. However, this approach can be problematic or inadequate for bacteria, where there can be significant sequence divergence within a species. Reference graphs are an emerging solution to the reference bias issues implicit in the 'single-reference' model [11,68,69]. Such a graph represents variation at multiple scales within a population - e.g. nucleotide and locus level. The graph structure used in Pandora is gene (or locus) oriented, and allows natural support for analysing SNP and indel mutations and the presence/absence of genes. In this work, we have presented a novel method for making drug resistance predictions with reference graphs. The method, DrPRG, requires only a reference genome and annotation, a catalogue of resistance-conferring mutations, a VCF of population variation from which to build a reference graph and (optionally) a set of rules for types of variants in specific genes that cause resistance. We apply DrPRG to the pathogen M. tuberculosis, for which there is a great deal of information on the genotype/phenotype relationship, and a great need to provide good tools that implement and augment current and forthcoming versions of the WHO catalogue. We illustrate the performance of DrPRG against two existing methods for drug resistance prediction - Mykrobe and TBProfiler.
We benchmarked the methods on a high-quality Illumina sequencing dataset with associated phenotype profiles for 44 709 MTB genomes; the largest known dataset to date [16]. All tools used the same catalogue and rules, and for most drugs there was no significant difference between the tools. However, DrPRG did have a significantly higher specificity than TBProfiler for rifampicin predictions, and sensitivity for ethionamide predictions. DrPRG's sensitivity was also significantly greater than Mykrobe's for rifampicin, streptomycin, amikacin, capreomycin, kanamycin and ethionamide. Evaluating detection of gene loss, we found that DrPRG was more sensitive to katG deletions than TBProfiler.
We also benchmarked using 138 Nanopore-sequenced MTB samples with phenotype information, but found no significant difference between the tools. This Nanopore dataset was quite small and therefore the confidence intervals were large for all drugs.
DrPRG also used significantly less memory than Mykrobe and TBProfiler on both Nanopore and Illumina data. In addition, the runtime of DrPRG was significantly faster than both tools for Illumina data and faster than TBProfiler for Nanopore data. While the absolute values for memory and runtime for all tools mean that they could all easily run on common computers found in the types of institutions likely to run them, the differences for the Nanopore data warrant noting. As Nanopore data can be generated 'in the field', computational resource usage is critical. For example, in a recent collaboration of ours with the National Tuberculosis programme in Madagascar [27], Nanopore sequencing and analysis are regularly performed on a laptop, meaning that memory usage is sometimes a limiting factor. DrPRG's median peak memory was 277 MB, meaning that it can comfortably be run on any laptop or other mobile computing device [70].
It is clear from the Illumina results that more work is needed to understand resistance-conferring mutations for delamanid and linezolid. However, we did find that nonsense mutations in the ddn gene appear likely to be resistance-conferring for delamanid, as has been noted previously [39,[71][72][73]. We also found a novel (likely) mechanism of resistance to pyrazinamide - a nonstop mutation in pncA. Phenotype instability in embB at codon 306 was also found to be the main driver in poor ethambutol specificity, as has been noted elsewhere [54,55], indicating the need to further investigate cofactors that may influence the phenotype when mutations at this codon are present.
Gene absence/deletion detection allowed us to confirm that the absence of katG - a mechanism that is rare in clinical samples [63][64][65][66]74] - is highly likely to confer resistance to isoniazid. Additionally, we found that the absence of pncA is likely to cause resistance to pyrazinamide, as has been noted previously [67]. One finding that requires further investigation is the variability in ethionamide phenotype when ethA is absent. We found that only 63 % of the samples with ethA missing and an ethionamide phenotype were resistant. Ang et al. have suggested that ethA deletion alone does not always cause resistance and there might be an alternative pathway via mshA [75].
Given the size of the Illumina dataset used in this work, the results provide a good marker of Illumina whole-genome sequencing's ability to replace traditional phenotyping methods. With the catalogue used in this study, DrPRG meets the WHO's target product profile for next-generation drug-susceptibility testing for both sensitivity and specificity for amikacin, and sensitivity only for rifampicin, isoniazid, levofloxacin and moxifloxacin. However, if we exclude cases where all tools call rpoB S450L or katG S315T for phenotypically susceptible samples (these are strong markers of resistance [16] and therefore we suspect sample swaps or phenotype error [76]), DrPRG also meets the specificity target product profile for rifampicin and isoniazid. For the other first-line drugs, ethambutol and pyrazinamide, ethambutol does not have a WHO target and DrPRG's pyrazinamide sensitivity is 0.2 % below the WHO target (although the confidence interval spans the target), while the pyrazinamide specificity target is missed by 0.8 %.
The primary limitation of the DrPRG method relates to minor allele calls. DrPRG uses Pandora for novel variant discovery, combining a graph of known population variants (which can be detected at low frequency) with de novo detection of other variants if present at above ~50 % frequency. Thus, it can miss minor allele calls if the allele is absent from its reference graph. While this issue did not impact on most drugs, it did account for the majority of cases where DrPRG missed pyrazinamide-resistant calls (in pncA), but the other tools correctly called resistance. Unlike most other genes, where there are a relatively small number of resistance-conferring mutations, or they are localized to a specific region (e.g. the rifampicin resistance-determining region in rpoB), resistance-conferring mutations are numerous - with most being rare - and distributed throughout pncA [16,77,78]. Adding all of these mutations leads to decreased performance of the reference graph (Section S3 and [33]), and so improving minor allele calling for pyrazinamide remains a challenge we need to revisit in the future.
One final limitation is the small number of Nanopore-sequenced MTB isolates with phenotypic information. Increased Nanopore sequencing over time will provide better resolution of the overall sensitivity and specificity values and improve the methodological nuances of calling variants from this emerging, and continually changing, sequencing technology.
In conclusion, DrPRG is a fast, memory-frugal software program for predicting drug resistance. We showed that on MTB it performs as well as, or better than, two other commonly used tools for resistance prediction. We also collected and curated the largest dataset of MTB Illumina-sequenced genomes with phenotype information and hope this will benefit future work to improve genotypic drug susceptibility testing for this species. While we applied DrPRG to MTB in this study, it is a framework that is agnostic to species. MTB is likely one of the bacterial species with the least to gain from reference graphs, given its relatively conserved (closed) pan-genome compared to other common species [79]. As such, we expect the benefits and performance of DrPRG to improve as the openness of the species' pan-genome increases [11], especially given its good performance on a reasonably closed pan-genome.
Fig. 1. Drug phenotype counts for Illumina (upper) and Nanopore (lower) datasets. Bars are stratified and coloured by whether the phenotype is resistant (R; orange) or susceptible (S; green). Note, the y-axis is log-scaled. PAS, para-aminosalicylic acid.
Fig. 2. Sensitivity (upper panel; y-axis) and specificity (lower panel; y-axis) of resistance predictions for different drugs (x-axis) from Illumina data. Error bars are coloured by prediction tool. The central horizontal line in each error bar is the sensitivity/specificity and the error bars represent the 95 % confidence interval. Note, the sensitivity panel's y-axis is logit-scaled. This scale is similar to a log scale close to zero and to one (100 %), and almost linear around 0.5 (50 %). The red dashed line in each panel represents the minimal standard WHO target product profile (TPP; where available) for next-generation drug susceptibility testing for sensitivity and specificity. INH, isoniazid; RIF, rifampicin; EMB, ethambutol; PZA, pyrazinamide; LFX, levofloxacin; MFX, moxifloxacin; OFX, ofloxacin; AMK, amikacin; CAP, capreomycin; KAN, kanamycin; STM, streptomycin; ETO, ethionamide; LZD, linezolid; DLM, delamanid.
Fig. 3. Sensitivity (upper panel; y-axis) and specificity (lower panel; y-axis) of resistance predictions for different drugs (x-axis) from Nanopore data. Error bars are coloured by prediction tool. The central horizontal line in each error bar is the sensitivity/specificity and the error bars represent the 95 % confidence interval. Note, the sensitivity panel's y-axis is logit-scaled. This scale is similar to a log scale close to zero and to one (100 %), and almost linear around 0.5 (50 %). The red dashed line in each panel represents the minimal standard WHO target product profile (TPP; where available) for next-generation drug susceptibility testing for sensitivity and specificity. INH, isoniazid; RIF, rifampicin; EMB, ethambutol; OFX, ofloxacin; AMK, amikacin; CAP, capreomycin; KAN, kanamycin; STM, streptomycin; ETO, ethionamide.
Fig. 4. Impact of gene deletion on resistance classification. The title of each subplot indicates the gene and the drug it affects. Bars are coloured by their classification and stratified by tool. Count (y-axis) indicates the number of gene deletions for that category. The NA bar (white with diagonal lines) indicates the number of samples with that gene deleted but no phenotype information for the respective drug. TP, true positive; FN, false negative; TN, true negative; FP, false positive; NA, no phenotype available.
Fig. 5. Benchmark of the maximum memory usage (left panels) and runtime (right panels) for Illumina (upper row) and Nanopore (lower row) data. Each point and violin is coloured by the tool, with each point representing a single sample. Statistical annotations are the result of a Wilcoxon rank-sum paired data test on each pair of tools. Dashed lines inside the violins represent the quartiles of the distribution. Note, the x-axis is log-scaled.
79. Park S-C, Lee K, Kim YO, S, Chun J. Large-scale genomics reveals the genetic characteristics of seven species and importance of phylogenetic distance for estimating pan-genome size. Front Microbiol 2019;10:834.
80. Zwyer M, Çavusoglu C, Ghielmetti G, Pacciarini ML, Scaltriti E, et al. A new nomenclature for the livestock-associated Mycobacterium tuberculosis complex based on phylogenomics. Open Res Europe 2021;1:100.
"Biology"
] |
Assessment of college students’ mental health status based on temporal perception and hybrid clustering algorithm under the impact of public health events
The dynamic landscape of public health occurrences presents a formidable challenge to the emotional well-being of college students, necessitating a precise appraisal of their mental health (MH) status. A pivotal metric in this realm is the Mental Health Assessment Index, a prevalent gauge utilized to ascertain an individual’s psychological well-being. However, prevailing indices predominantly stem from a physical vantage point, neglecting the intricate psychological dimensions. In pursuit of a judicious evaluation of college students’ mental health within the crucible of public health vicissitudes, we have pioneered an innovative metric, underscored by temporal perception, in concert with a hybrid clustering algorithm. This augmentation stands poised to enrich the extant psychological assessment index framework. Our approach hinges on the transmutation of temporal perception into a quantifiable measure, harmoniously interwoven with established evaluative metrics, thereby forging a novel composite evaluation metric. This composite metric serves as the fulcrum upon which we have conceived a pioneering clustering algorithm, seamlessly fusing the fireworks algorithm with K-means clustering. The strategic integration of the fireworks algorithm addresses a noteworthy vulnerability inherent to K-means—its susceptibility to converging onto local optima. Empirical validation of our paradigm attests to its efficacy. The proposed hybrid clustering algorithm aptly captures the dynamic nuances characterizing college students’ mental health trajectories. Across diverse assessment stages, our model consistently attains an accuracy threshold surpassing 90%, thus outshining existing evaluation techniques in both precision and simplicity. In summation, this innovative amalgamation presents a formidable stride toward an augmented understanding of college students’ mental well-being during times of fluctuating public health dynamics.
INTRODUCTION
Public health crises, exemplified by the COVID-19 pandemic, exert profound psychological ramifications; it is noteworthy that a majority of afflicted individuals evinced psychological distress. In other investigations, the technique of median clustering was utilized to cluster maternal MH care.
In time psychology, time perception refers to the continuous and sequential reaction of humans to time stimuli operating directly on their own senses. That is, humans can evaluate the perception of time duration and velocity without the use of a timer (Meck, 1996). The sense of time is comparable to the perception of other elements, including color, form, and temperature. It is the human species' natural perceptual instinct. Incorporating time perception into the psychological evaluation of college students can thereby enhance the evaluation index system (Coelho et al., 2004).
In the realm of scholarly exploration, it is notable that the preponderance of extant investigations employ supervised machine learning methodologies to cultivate models, harnessing meticulously annotated data samples for model acquisition. Nevertheless, within the domain of mental health (MH), a conspicuous lacuna is evident, primarily manifesting as a multitude of nascent MH conditions that confront a dearth of lucid characterization or exhibit an ephemeral and nebulous disposition. In response, this scholarly discourse introduces the domain of time perception from the annals of psychology, seamlessly integrating it into the construction of an evaluative framework. This novel paradigm serves as an avenue to address the inherent deficiencies, facilitating the augmentation of evaluation accuracy. Moreover, it assumes a pivotal role in elucidating the nuanced MH phases experienced by college students, thereby presenting a compass for informed psychological counsel and timely intervention.
Central to this approach is the translation of temporal perception into a quantifiable metric, ingeniously amalgamated with established evaluative metrics to engender a unified metric. This novel metric lays the foundation for our ensuing proposition: a pioneering clustering algorithm underpinned by the amalgamated metric. This innovative clustering algorithm, a confluence of the fireworks algorithm and K-means, is meticulously designed to surmount a notable challenge intrinsic to the K-means algorithm: its susceptibility to converging to local optima.
The fireworks algorithm is strategically integrated to address and surmount this challenge, imparting resilience and expanding the algorithm's potential to unearth global optima. By embracing this synergistic amalgamation, our endeavor is to cultivate a transformative methodology that transcends the prevailing limitations, proffering heightened precision in evaluation while fostering a deeper understanding of the intricate MH trajectories traversed by college students. Ultimately, this approach stands poised as an invaluable compass for psychological guidance and efficacious interventions, distinctly illuminating the pathway toward ameliorated mental well-being within the academic fraternity.
Evaluation classification algorithm
Cluster analysis is an essential component of data mining technology. By separating the data set into numerous classes, comparable samples are classified into one class based on the properties of the data, while different samples are sorted into separate classes, ensuring homogeneity within classes and heterogeneity between classes (Meck, 1996). Owing to the widespread application of data mining technology, scholars from around the world have incorporated cluster analysis into psychological prediction and evaluation, using various clustering algorithms to analyze psychological indicators in order to provide guidance for psychological intervention in advance (Coelho et al., 2004; Ariff, Bakar & Zamzuri, 2020).
K-means clustering is a partitioning-based technique with low time complexity, high clustering efficiency, and excellent clustering quality. Some academics in the industrial sector choose the K-means algorithm and the hierarchical clustering algorithm to classify temperature fluctuations. Some researchers developed self-organizing feature mapping (SOM) based on the K-means method (Wu, Zhao & Guo, 2020; Chang & Yang, 2009; Lee & Macqueen, 1980; Qin & Gui, 2022). They employed SOM to train a data set and applied K-means clustering to the output of the training set to generate superior clustering results, which made the model easier to visualize and interpret (Beauchaine & Beauchaine III, 2002).
The K-means algorithm also has downsides: it is sensitive to the initial centers and makes assumptions about the data distribution. Hierarchical clustering techniques typically have high time complexity, and popular algorithms such as ROCK and Chameleon do not support big data sets. SOM is a model-based clustering approach with shortcomings including high time complexity, inability to handle huge data sets, and clustering results that are sensitive to model parameters; its advantage lies in the fact that it can accurately describe the data (Li et al., 2021; Jamali & Ayatollahi, 2015; Tang, 2021). Following comprehensive examination of clustering algorithms, discriminant analysis, principal component analysis, and other techniques have been utilized to interpret clustering results. Using principal component analysis, a number of scholars have tackled the problem of indicators' high repetition. In addition, the results of the study indicate that discriminant analysis and principal component analysis have yielded fruitful interpretations of clustering results. The TwoStep algorithm is an enhanced hierarchical clustering algorithm that decreases the algorithm's time complexity, automatically determines the ideal cluster number, and scales well (Dong & Shen, 2022).
State assessment is an essential component of daily management. It is a sophisticated and abstract nonlinear issue that is affected by numerous variables and has a complex change rule. Due to the inability of the traditional state assessment model to fulfill the requirements of the present complicated quality assessment task, the application of intelligent algorithms to increase the efficiency and accuracy of quality assessment has become the focus of current research.
Some researchers presented a state assessment technique based on the adaptive BPNN model, which introduced an adaptive learning rate with momentum to optimize the model's network topology and ensure its stability. Some academics have proposed a research methodology for project evaluation based on the analytic hierarchy process (AHP) (Nelsen, Kayaalp & Page, 2021; Irawan, 2019). Integrating multi-person and multi-attribute aspects, qualitative analysis and quantitative computation are used to assess the quality of management. Some researchers have proposed a design scheme for an association mining-based foreign language evaluation model that combines the data analysis method with the quantitative language evaluation model, extracts the evaluated association rule features from the reconstructed phase space as the clustering center of information fusion, and realizes the optimal design of the evaluation model via adaptive regression analysis. Using the AHP, some researchers designed a method for providing a realistic assessment of the quality of network resources, created assessment objectives-centered material, and established a hierarchical structure model (Sorour et al., 2014; Yanfang & Yamin, 2021).
The Fireworks Algorithm (FWA) is a novel form of swarm optimization technique that simulates the search of neighborhood space during the explosion of fireworks. It is capable of balancing global and local search, and has the benefits of simple implementation, straightforward operation, and powerful search capabilities (Jiang et al., 2022; Li & Tan, 2019; Xue, 2020). The Fireworks Algorithm, inspired by the explosion of fireworks in the night sky, has found promising applications in optimization problems. This nature-inspired metaheuristic approach simulates the explosion of fireworks to enhance the search process within complex solution spaces. Through its utilization, the algorithm has exhibited remarkable efficacy in solving a variety of optimization problems, ranging from engineering design and parameter tuning to financial portfolio optimization (Karimov & Ozbayoglu, 2015; Rahmani et al., 2015).
On the other hand, the K-means algorithm, a fundamental clustering technique, has established its dominance in partitioning datasets into distinct groups. By iteratively assigning data points to clusters and recalculating cluster centroids, the algorithm minimizes the within-cluster variance, effectively grouping similar data points together. Widely used in data analysis and pattern recognition, the K-means algorithm has proven instrumental in applications such as image segmentation, customer segmentation, and anomaly detection.
Drawing parallels between the two, it becomes evident that while the Fireworks Algorithm excels in optimization tasks by efficiently exploring solution spaces, the K-means algorithm specializes in data clustering by identifying inherent patterns within datasets. Interestingly, their areas of application are not mutually exclusive. Recent research has begun to explore the synergy between these algorithms, leveraging the Fireworks Algorithm's global exploration capabilities to initialize K-means and enhance its convergence towards better clustering solutions. This fusion of methodologies showcases the dynamic nature of algorithmic development, where distinct techniques can complement each other to provide novel and improved solutions to complex problems.
In conclusion, the Fireworks Algorithm and the K-means algorithm, though operating in different domains, exhibit remarkable potential in their respective application areas. Their unique characteristics and strengths offer opportunities for cross-disciplinary utilization, opening doors to innovative hybrid approaches that harness the best of both worlds. As the field of optimization and clustering continues to evolve, these algorithms stand as notable pillars, shaping the landscape of intelligent problem-solving techniques.
METHOD
Setting of temporal perception
Translating temporal awareness into quantifiable metrics is a crucial step in enhancing the comprehension of a methodology, particularly in fields that involve time-dependent processes or data analysis. This process converts the inherent understanding of time-related factors into measurable and meaningful indicators. It includes: (1) Identifying key temporal factors, such as time intervals, durations, sequencing of events, frequency of occurrences, timestamps, and historical trends. For example, users analyzing website user behavior might define metrics such as "average time spent on page," "time between interactions," "conversion rate over time," or "rate of content consumption." (2) Temporal aggregation: users may need to aggregate the temporal data into meaningful intervals or time units for analysis, such as grouping data into hours, days, weeks, or months. (3) Validation and iteration: validate the translated metrics by comparing them with existing benchmarks, theories, or expectations; if necessary, refine the metrics and methods based on feedback and insights gained from the analysis.
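As a concrete illustration of points (1) and (2), the following is a minimal pandas sketch that computes two of the example metrics on a toy event log; the column names and values are hypothetical, not part of the original study.

```python
import pandas as pd

# Hypothetical event log: one row per page view with a timestamp and dwell time.
events = pd.DataFrame({
    "user_id":   [1, 1, 2, 2, 2],
    "timestamp": pd.to_datetime([
        "2023-05-01 09:00", "2023-05-01 09:05",
        "2023-05-02 14:00", "2023-05-02 14:20", "2023-05-03 08:00",
    ]),
    "seconds_on_page": [40, 120, 35, 300, 60],
})

# (1) Key temporal factor: time between consecutive interactions per user.
events = events.sort_values(["user_id", "timestamp"])
events["gap"] = events.groupby("user_id")["timestamp"].diff()

# (2) Temporal aggregation: average time-on-page per day.
daily = events.set_index("timestamp").resample("D")["seconds_on_page"].mean()

print(events[["user_id", "timestamp", "gap"]])
print(daily)  # (3) compare against benchmarks, then iterate on the metric set
```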
FWA-K-means
The K-means algorithm can be optimized by using the FWA, which balances global and local search: the solution obtained by the FWA is used as the initial clustering centers of the K-means algorithm, addressing the tendency of K-means to fall into local optima. This is done in an effort to address the issues of low accuracy, poor reliability, and low efficiency in mental state assessment models for college students.
The steps of the proposed hybrid clustering method are as follows: Step 1: Initialize parameters by entering the fireworks scale n, current iteration number t and maximum iteration number T.
Step 2: Calculate the fitness, explosion radius, and number of sparks generated by each firework.
In FWA, two parameters play a decisive role. The first is the explosion radius of firework i,

r_i = R · (f(x_i) − y_min + ε) / ( Σ_{j=1}^{n} (f(x_j) − y_min) + ε ),

where f is the fitness function, y_min is the best fitness in the current population, R is a constant bounding the maximum radius, and ε is a small constant that avoids division by zero. The second key parameter is the number of sparks generated by firework i,

s_i = k · (y_max − f(x_i) + ε) / ( Σ_{j=1}^{n} (y_max − f(x_j)) + ε ),

where k is a constant and y_max is the worst fitness in the current population. The spark count is then bounded: when s_i < 0.1W, s_i = [0.1W], and when s_i > 0.5W, s_i = [0.5W], where W is the total spark budget and [·] denotes rounding.

Step 3: Calculate the fitness of the sparks and of the Gaussian mutation sparks.

The formation of a spark in dimension j is

x̂_{ij} = x_{ij} + r_i · U(−1, 1),

where j is the dimension index of the spark, U(−1, 1) is a uniformly distributed random number, and the explosion radius r_i acts as the scaling parameter.
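The following NumPy sketch shows one plausible implementation of the radius, spark-count, and spark-formation rules above, assuming minimization of a stand-in sphere function; the values R = 25 and fireworks scale n = 6 echo the hyperparameters used later, while W and the 0.1/0.5 bounds follow the clipping rule stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = np.finfo(float).eps

def explosion_params(fitness, R=25.0, W=30, a=0.1, b=0.5):
    """Per-firework explosion radius and bounded spark count (minimization)."""
    y_min, y_max = fitness.min(), fitness.max()
    radius = R * (fitness - y_min + EPS) / (np.sum(fitness - y_min) + EPS)
    sparks = W * (y_max - fitness + EPS) / (np.sum(y_max - fitness) + EPS)
    return radius, np.clip(np.round(sparks), round(a * W), round(b * W)).astype(int)

def explode(firework, radius, n_sparks):
    """Each spark displaces the firework by radius-scaled U(-1, 1) noise."""
    noise = rng.uniform(-1.0, 1.0, size=(n_sparks, firework.size))
    return firework + radius * noise

fireworks = rng.uniform(-5, 5, size=(6, 2))   # scale n = 6, 2-D search space
fitness = np.sum(fireworks**2, axis=1)        # sphere function as a stand-in
radii, counts = explosion_params(fitness)
all_sparks = np.vstack([explode(fw, r, s)
                        for fw, r, s in zip(fireworks, radii, counts)])
```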
Step 4: Select n individuals from the fireworks, sparks, and Gauss variant sparks as the next generation fireworks.
The mutation operator is introduced in FWA to generate Gaussian sparks and increase the diversity of the population.
Step 6: Take the n selected fireworks individuals as the initial cluster center points, calculate the distance of each data object to the cluster centers, and assign each data object to the nearest cluster.
The distance between a sample x_i and a cluster center c_j is computed as the Euclidean distance

d(x_i, c_j) = ||x_i − c_j||,

and the update of the cluster center of category C_j is

c_j = (1/|C_j|) Σ_{x ∈ C_j} x.

The K-means clustering method repeatedly updates the divided categories as well as the cluster center points c_j until the termination condition is satisfied: either the objective function of the algorithm falls below a threshold, or the number of iterations reaches the specified maximum.

The loss function is the within-cluster sum of squared errors,

J = Σ_{j=1}^{k} Σ_{x ∈ C_j} ||x − c_j||².

Step 7: Update the clustering center of each category. The flow chart of the hybrid clustering algorithm is shown in Fig. 1.
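As a sketch of how Steps 6-7 can hand the FWA result to K-means, the snippet below seeds scikit-learn's KMeans with the fittest candidate center set; the random "candidates" stand in for the FWA loop above, and the data and k are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 4))  # stand-in for the quantified indicator scores

def kmeans_loss(centers, X):
    """Within-cluster sum of squared distances (the K-means objective J)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

# Suppose the FWA loop above evolved candidate center sets and kept the
# fittest one; here random candidates fake that step for illustration.
k = 10
candidates = [rng.uniform(data.min(), data.max(), size=(k, 4)) for _ in range(6)]
best = min(candidates, key=lambda c: kmeans_loss(c, data))

# Hand the FWA-optimized centers to K-means as its initialization (Step 6).
model = KMeans(n_clusters=k, init=best, n_init=1).fit(data)
print(model.inertia_)  # the loss J after the refinement iterations (Step 7)
```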
Establishment of evaluation system
The design of the intelligent assessment index system of college students' MH is the initial step in constructing the evaluation model of college students' MH. In time psychology, time perception refers to the continuous and sequential response of humans to time stimuli affecting their own senses. Thus, humans can assess the perceived duration and velocity of time without a timer. The perception of time is comparable to the perception of other attributes such as color, shape, and temperature; it is a natural perceptual instinct of the human species. By including time perception in the psychological evaluation of students, the evaluation index system can be improved.
Combining the current situation of college students' MH with the notion of time perception in psychology, Fig. 2 depicts the intelligent evaluation index system for college students' MH.
Further, the indices are quantified. To keep the scheme simple and universal, each indicator is divided into three levels, with scores ranging from 0 to 10: a score greater than or equal to 8 indicates normal, a score from 3 to 7 indicates average, and a score less than 3 indicates certain obstacles in this area.
For the final MH score, referring to previous literature, this article divides it into four grades: normal, mild anxiety, moderate anxiety, and severe anxiety.

To validate the effectiveness of the proposed hybrid clustering algorithm for college students' MH evaluation, six colleges were selected as the research objects. Because the data are difficult to obtain, 200 students were selected from each college, and the psychological associations of the colleges and universities first performed a preliminary psychological rating of the selected students to serve as the sample labels.

In order to verify the effectiveness of the proposed model, simulation experiments were carried out. The experimental environment is an Intel i9-10900K CPU, 64 GB of memory, and an RTX 3080 GPU. The hyperparameter settings are shown in Table 1.

Table 1: Hyperparameter settings.
Hyperparameter                 Value
Maximum number of iterations   100
Individual variation           6
Explosion radius               25
Fireworks scale                6
Clustering centers             10
Model comparison
In this research, the assessment results of the hybrid clustering model are compared with K-means, K-means++, and K-means+SOM to demonstrate its advantages. K-means++ is an improved initialization technique for the K-means clustering algorithm. The primary goal of K-means++ is to select initial cluster centroids in a way that enhances the convergence speed and quality of the final clustering results. The traditional K-means algorithm often suffers from sensitivity to the initial placement of centroids, which can lead to suboptimal clustering outcomes or slow convergence. K-means++ addresses this issue with a systematic centroid initialization: the first centroid is selected at random from the data points, and each subsequent centroid is chosen with a probability that increases with its distance from the existing centroids, effectively spreading out the initial centroids. This initialization helps K-means converge to a better solution and often requires fewer iterations than random initialization. K-means+SOM refers to a combination of the K-means clustering algorithm and the Self-Organizing Map (SOM), a type of artificial neural network used for data visualization and clustering. In this hybrid approach, the strengths of both K-means and SOM are leveraged to improve clustering results.
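For reference, here is a compact NumPy sketch of the k-means++ seeding rule just described: each new center is drawn with probability proportional to its squared distance from the nearest already-chosen center. The data are synthetic.

```python
import numpy as np

def kmeans_pp_init(X, k, rng=np.random.default_rng(0)):
    """k-means++ seeding: far points are more likely to become new centers."""
    centers = [X[rng.integers(len(X))]]          # first center: uniform random
    for _ in range(k - 1):
        d2 = np.min(((X[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        probs = d2 / d2.sum()                    # P(point) proportional to d^2
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)

X = np.random.default_rng(2).normal(size=(500, 2))
print(kmeans_pp_init(X, k=4))
```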
The comparison outcomes are depicted in Figs. 3 and 4. Figure 3 shows the evaluation accuracy achieved by the different models, giving a view of their respective strengths and weaknesses, while Fig. 4 reports the corresponding error rates.
The results reveal a clear hierarchy in performance: the conventional K-means algorithm has the lowest accuracy, followed closely by the enhanced K-means++. The integration of the SOM algorithm in K-means+SOM substantially improves clustering accuracy, and this improvement is accompanied by a significant reduction in the error rate, underscoring the efficacy of the hybrid approach.
The significance of the method detailed in this article is most evident in its impact on the evaluation of students' MH status. The increase in precision enables a more nuanced understanding of students' psychological well-being, and the reduced error rate implies higher confidence in the evaluation process, which in turn contributes to the overall improvement of the assessment of students' psychological health.
Moreover, this study also examines the modeling time required by the various models. To provide a more robust assessment, two universities were randomly selected for testing; the results are shown in Figs. 5 and 6, which illustrate the interplay between modeling time and algorithmic complexity.
Despite integrating the fireworks algorithm, the hybrid clustering algorithm introduced in this article remains efficient in its modeling time. In contrast to the other algorithms examined, the hybrid approach incorporates the fireworks algorithm without a significant increase in modeling time relative to the benchmark set by the K-means algorithm.

This demonstrates that advanced algorithmic techniques can be integrated without a notable trade-off in computational efficiency, which substantiates the algorithm's practical applicability and reinforces its potential for real-world implementation. The ability to improve clustering accuracy without imposing an undue burden on computational resources adds to the practicality and feasibility of the proposed approach. In essence, the analysis of modeling time in Figs. 5 and 6 complements the accuracy results: the hybrid clustering algorithm offers substantial accuracy gains together with commendable computational efficiency, solidifying its status as a promising contender for enhancing diverse evaluation processes.
DISCUSSION
Comparing the results against existing research highlights the validity of the hybrid clustering algorithm. The algorithm incorporates the FWA (Fireworks Algorithm), known for balancing global and local search, and its synergy with the K-means clustering technique yields a high-precision curriculum quality evaluation model, offering a more accurate and comprehensive approach to curriculum quality assessment.
In conclusion, the analysis of the comparison outcomes and the subsequent discussion underscore the significance of the proposed methodology. The findings highlight the limitations of traditional approaches and the potential for innovation through the fusion of algorithms, as shown by the integration of K-means+SOM and FWA. This study opens new avenues for enhancing evaluation accuracy and effectiveness, particularly for student psychological well-being and curriculum quality assessment.
Moreover, in the context of public health emergencies in Beijing, colleges and universities should, on the one hand, actively improve the economic support system, provide adequate psychological counseling services, and conduct rich psychological education activities to improve the MH of students from low-income families and prevent the formation of a strong inferiority complex. On the other hand, the MH of college students is strongly supported by their families: if parents adopt scientific and effective educational concepts, enhance parent-child contact, and provide sufficient spiritual and material support, it will be of tremendous assistance in maintaining students' MH. It is therefore advised that relevant departments expedite the creation and promulgation of relevant regulations to promote the MH of college students, in order to train comprehensive talents with high professional quality and healthy psychological quality for the country.
Figure 1: Flow chart of the hybrid clustering algorithm.
Figure 2: The evaluation index system for college students' MH.
Figure 5: Comparison of modeling time of different algorithms in University A.
Figure 6: Comparison of modeling time of different algorithms in University D. | 5,076.4 | 2023-09-27T00:00:00.000 | [
"Computer Science"
] |
Towards building a Robust Industry-scale Question Answering System
Industry-scale NLP systems necessitate two features. 1. Robustness: “zero-shot transfer learning” (ZSTL) performance has to be commendable and 2. Efficiency: systems have to train efficiently and respond instantaneously. In this paper, we introduce the development of a production model called GAAMA (Go Ahead Ask Me Anything), which possesses the above two characteristics. For robustness, it trains on the recently introduced Natural Questions (NQ) dataset. NQ poses additional challenges over older datasets like SQuAD: (a) QA systems need to read and comprehend an entire Wikipedia article rather than a small passage, and (b) NQ does not suffer from observation bias during construction, resulting in less lexical overlap between the question and the article. GAAMA consists of Attention-over-Attention, diversity among attention heads, hierarchical transfer learning, and synthetic data augmentation while being computationally inexpensive. Building on top of the powerful BERTQA model, GAAMA provides a ∼2.0% absolute boost in F1 over the industry-scale state-of-the-art (SOTA) system on NQ. Further, we show that GAAMA transfers zero-shot to unseen real life and important domains as it yields respectable performance on two benchmarks: the BioASQ and the newly introduced CovidQA datasets.
Introduction
A relatively new task in open domain question answering (QA) is machine reading comprehension (MRC), which aims to read and comprehend a given text and then answer questions based on it. Recent work on transfer learning from large pre-trained language models like BERT and XLNet has practically solved SQuAD (Rajpurkar et al., 2016; Rajpurkar et al., 2018), the most widely used MRC benchmark. This necessitates harder QA benchmarks for the field to advance. Additionally, SQuAD and other existing datasets like NarrativeQA (Kočiskỳ et al., 2018) and HotpotQA suffer from observation bias: annotators had read the passages before creating their questions.
In industry research, there is an urgent demand to build a usable MRC QA system that not only provides very good performance on academic benchmarks but also real life industry applications (Tang et al., 2020) in a ZSTL environment. In this paper, to build such a system, we first focus on Natural Questions (NQ) (Kwiatkowski et al., 2019): a MRC benchmark dataset over Wikipedia articles where questions (see Figure 1) were sampled from Google search logs. This key difference from past datasets eliminates annotator observation bias. Also, NQ requires systems to extract both a short (SA, one or more entities) and a long answer (LA, typically a paragraph that contains the short answer when both exist). The dataset shows human upper bounds of 76% and 87% on the short and long answer selection tasks respectively (for a "super-annotator" composed of 5 human annotators). The authors show that systems designed for past datasets perform poorly on NQ. We propose GAAMA that possesses several MRC technologies that are necessary to perform well on NQ and achieve significant boosts over another industry setting competitor system (Alberti et al., 2019a) pre-trained on a large language model (LM) and then over millions of synthetic examples. Specifically, GAAMA builds on top of a large pre-trained LM and focusses on two broad dimensions: 1. Improved Attention: With the reduction of observation bias in NQ, we find a distinct lack of lexical and grammatical alignment between answer contexts and the questions. For example, here is a question to identify the date of an event from the SQuAD 2.0 dataset: According to business journalist Kimberly Amadeo, when did the first signs of decline in real estate occur? This question can be aligned almost perfectly with the text in the answering Wikipedia paragraph in order to extract the year 2006: Business journalist Kimberly Amadeo reports: "The first signs of decline in residential real estate occurred in 2006." In contrast, as shown in Example 1 from Figure 1, a question from NQ to identify the date of marley's death requires parsing through a number of related sub clauses to extract the answer December 24, 1836 from the context.
This need for improved alignment leads us to explore two additional attention mechanisms.
• Attention-over-Attention (AoA) (Cui et al., 2017): on top of BERT's existing layer stack, we introduce a two-headed AoA layer which combines query-to-document and document-to-query attention.
• Attention Diversity (AD): Motivated by (Li et al., 2018), we explore a mechanism that maximizes diversity among BERT attention heads. Intuitively, we want different attention heads to capture information from different semantic subspaces, which BERT currently does not enforce.
Finally, we experiment with combining the two strategies, yielding a gain of ∼1.5% for both short and long answers.
2. Data Augmentation: Given the data hungry nature of BERT-based models, we explore three strategies for data augmentation (DA). Crowd-sourced DA introduces human annotated Q&A pairs from prior MRC datasets. Synthetic DA introduces large amounts of machine generated QA pairs, inspired by the prior successes of (Alberti et al., 2019a;Dong et al., 2019). Unlike previous work, which predominantly relied on computationally expensive beam search decoding, we apply fast and diversity-promoting nucleus sampling (Holtzman et al., 2019) to generate 4M questions from a transformer-based question generator (Sultan et al., 2020). Adversarial DA performs a novel sentence-order-shuffling to perturb the native NQ data so as to tackle the inherent positional bias in Wikipedia-based MRC as shown by (Min et al., 2019;Kwiatkowski et al., 2019).
We find that, contrary to previous industry research SOTA (Alberti et al., 2019a) on NQ, it is not necessary to perform large scale synthetic DA. Instead we achieve better results with a well aligned Pre-Training (PT, a gain of 1.3-1.6%).
Most QA applications in industry involve multiple domains, e.g., Amazon Kendra for Enterprise Search, Google Search, and IBM Watson Assistant for Customer Service. Hence, there exists a need to develop one robust QA system that works with ZSTL on a plethora of domains. Of course, one could further fine-tune the system on the new domain to achieve better performance. However, the process is rather expensive as it demands manual human annotation, which in real world applications is very scarce. Hence, we explore GAAMA's ZSTL effectiveness on two publicly available benchmark bio-medical datasets: BioASQ (Tsatsaronis et al., 2015) and the newly introduced CovidQA (Tang et al., 2020). The former is an annual shared task for QA over biomedical documents involving factoid questions. The latter is built on top of the CORD-19 corpus (Wang et al., 2020) consisting of questions asked by humans about the Covid-19 disease. The COVID-19 pandemic has caused an abundance of research to be published on a daily basis. Providing the capability to ask questions on research is vital for ensuring that important and recent information is not overlooked and is available to everyone. GAAMA consistently delivers competitive performance when compared to baselines either trained on the target domain or zero-shot transferred to the target.
Overall, our contributions can be summarized as follows: 1. We propose a novel system that investigates several improved attention and enhanced data augmentation strategies, 2. Outperforms the previous industry-scale QA system on NQ, 3. Provides ZSTL capabilities on two unseen domains and 4. Achieves competitive performance compared to the respective corresponding baselines.
Related Work
Most recent MRC systems either achieve SOTA by adding additional components on top of BERT such as syntax or perform attention fusion (Wang et al., 2018) without using BERT. However, we argue that additional attention mechanisms should be explored on top of BERT such as computing additional cross-attention between the question and the passage and maximizing the diversity among different attention heads in BERT. Our work is also generic enough to be applied on recently introduced transformer based language models such as ALBERT (Lan et al., 2019) and REFORMER (Kitaev et al., 2020).
Another common technique is DA (Zhang and Bansal, 2019) by artificially generating more questions to enhance the training data or in a MTL setup (Yatskar, 2018; Dhingra et al., 2018). (Alberti et al., 2019a; Alberti et al., 2019b) combine models of question generation with answer extraction and filter results to ensure round-trip consistency to get the SOTA on NQ. Contrary to this, we explore several strategies for DA that either involve diverse question generation from a dynamic nucleus (Holtzman et al., 2019) of the probability distribution over question tokens or shuffling the existing dataset to produce adversarial examples.
Recently, (Min et al., 2019) focus on "open" NQ, a modified version of the full NQ dataset for document retrieval QA that discards unanswerable questions. Contrary to that, we specifically focus on the full NQ dataset and believe there is room for improvement from a MRC research standpoint.
Model Architecture
In this section, we first describe BERT_QA, GAAMA's underlying QA model, and two additional attention layers on top of it. Figure 2 shows our overall model architecture with details explained below.
Given an input token sequence x = (x_1, . . . , x_T), BERT_QA adds three dense layers, each followed by a softmax, on top of BERT for answer extraction:

b = softmax(H W_b),  e = softmax(H W_e),  a = softmax(h_[CLS] W_a),

where b_t and e_t denote the probability of the t-th token in the sequence being the answer beginning and end, respectively. These three layers are trained during the finetuning stage. The NQ task requires not only a prediction for short answer beginning/end offsets, but also a (containing) longer span of text that provides the necessary context for that short answer. Inspired by prior work from (Alberti et al., 2019b), we only optimize for short answer spans and then identify the bounds of the containing HTML span as the long answer prediction. We use the hidden state of the [CLS] token to classify the answer type ∈ [short, long, yes, no, null], so a denotes the probability distribution over answer types. Our loss function is the averaged cross entropy on the two answer pointers and the answer type classifier:

L = −(1/3) [ 1(b)·log b + 1(e)·log e + 1(a)·log a ],

where 1(b) and 1(e) are one-hot vectors for the ground-truth beginning and end positions, and 1(a) for the ground-truth answer type. During decoding, the span from the argmax of b to the argmax of e is picked as the predicted short answer.
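For concreteness, here is a minimal PyTorch sketch of the answer-extraction head and averaged loss described above; the hidden size, sequence length, and gold labels are illustrative stand-ins, not the production configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanHead(nn.Module):
    """Begin/end pointers over tokens plus an answer-type classifier on [CLS]."""
    def __init__(self, hidden=1024, n_types=5):   # [short, long, yes, no, null]
        super().__init__()
        self.begin = nn.Linear(hidden, 1)
        self.end = nn.Linear(hidden, 1)
        self.answer_type = nn.Linear(hidden, n_types)

    def forward(self, H):                          # H: (batch, seq_len, hidden)
        b = self.begin(H).squeeze(-1)              # (batch, seq_len) begin logits
        e = self.end(H).squeeze(-1)                # (batch, seq_len) end logits
        a = self.answer_type(H[:, 0])              # [CLS] token -> type logits
        return b, e, a

def span_loss(b, e, a, gold_b, gold_e, gold_a):
    """Averaged cross entropy over the two pointers and the type classifier."""
    return (F.cross_entropy(b, gold_b) + F.cross_entropy(e, gold_e)
            + F.cross_entropy(a, gold_a)) / 3.0

H = torch.randn(2, 384, 1024)                      # stand-in for BERT hidden states
head = SpanHead()
b, e, a = head(H)
loss = span_loss(b, e, a, torch.tensor([5, 7]),
                 torch.tensor([9, 8]), torch.tensor([0, 4]))
```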
Attention Strategies
In this section, we outline our investigation of the attention mechanisms on top of the above BERT_QA model. Our main question: BERT already computes self-attention over the question and the passage in several layers; can we improve on top of that?
Attention-over-Attention (AoA)
Our first approach is AoA, originally designed (Cui et al., 2017) for cloze-style question answering, where a phrase in a short passage of text is removed to form a question. We seek to explore whether AoA helps in a more traditional MRC setting. Let Q be a sequence of question tokens [q_1, . . . , q_m], and C a sequence of context tokens [c_1, . . . , c_n]. AoA first computes an attention matrix

M = C Q^T,   (1)

where C ∈ R^{n×h}, Q ∈ R^{m×h}, and M ∈ R^{n×m}. In our case, the hidden dimension is h = 1024. Next, it separately performs on M a column-wise softmax α = softmax(M^T) and a row-wise softmax β = softmax(M). Each row i of matrix α represents the document-level attention regarding q_i (query-to-document attention), and each row j of matrix β represents the query-level attention regarding c_j (document-to-query attention). To combine the two attentions, β is first row-wise averaged:

β̄ = (1/n) Σ_{j=1}^{n} β_j.   (2)

The resulting vector can be viewed as the average importance of each q_i with respect to C. This token-to-sequence attention encoded in AoA is a key difference from BERT attention. β̄ is then used to weigh the document-level attention α.
The final attention vector s ∈ R^n represents the document-level attention weighted by the importance of query words:

s = α^T β̄.   (3)
Since the output of AoA is a vector of document length, to use it for answer start and end prediction we add a two-headed AoA layer into the BERT_QA model, and this layer is trained together with the answer extraction layer during the finetuning stage. Concretely, the combined question and context hidden representation H^L from BERT is first separated into H^Q and H^C, followed by two linear projections of H^Q and H^C respectively to H^Q_i and H^C_i,

H^Q_i = H^Q W^Q_i,  H^C_i = H^C W^C_i,  i ∈ {1, 2}.

Therefore, the AoA layer adds about 2.1 million parameters on top of BERT, which already has 340 million. Next, we feed H^C_1 and H^Q_1 into the AoA calculation specified in Equations (1)-(3) to get the attention vector s_1 for head 1. The same procedure is applied to H^Q_2 and H^C_2 to get s_2 for head 2. Lastly, s_1 and s_2 are combined with b and e respectively via two weighted sum operations for answer extraction.
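The following NumPy sketch traces the AoA computation of Equations (1)-(3) for a single head; the sequence lengths are arbitrary and the hidden states are random stand-ins for BERT outputs.

```python
import numpy as np

def softmax(x, axis):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def attention_over_attention(C, Q):
    """C: (n, h) context states, Q: (m, h) question states -> s: (n,)."""
    M = C @ Q.T                       # (n, m) token-pair affinity matrix, Eq. (1)
    alpha = softmax(M.T, axis=1)      # (m, n) query-to-document attention
    beta = softmax(M, axis=1)         # (n, m) document-to-query attention
    beta_bar = beta.mean(axis=0)      # (m,) average importance of each q_i, Eq. (2)
    return alpha.T @ beta_bar         # (n,) weighted document attention, Eq. (3)

rng = np.random.default_rng(0)
s = attention_over_attention(rng.normal(size=(50, 1024)),
                             rng.normal(size=(8, 1024)))
print(s.shape)  # (50,): one weight per context token
```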
Attention Diversity (AD) layer
It has been shown through ablation studies (Kovaleva et al., 2019;Michel et al., 2019) that removing BERT attention heads can achieve comparable or better performance on some tasks. Our objective is to find out if we can diversify the information captured and train a better BERT model by enforcing diversity among the attention heads.
In a Transformer model, (Li et al., 2018) examine a few methods to enforce such diversity and see an improvement on machine translation tasks. Contrary to that, we start with a pre-trained BERT model, take the attention output from scaled dot-product attention, and compute the cosine similarity between all pairs of heads:

D_{ij} = cos(A_i, A_j) = (A_i · A_j) / (||A_i|| ||A_j||),  i < j.

We then average D for the per-token similarity and add it as an additional loss term. For each token, there are 15 + 14 + ... + 1 total similarity calculations, 16 being the number of heads in BERT_QA. Figure 3 shows the modified structure of Multi-head Attention in the Transformer architecture. We apply this technique during finetuning on NQ and to the last layer of BERT only. It will be interesting to see how this additional training objective affects BERT pretraining, which we leave as future work.
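One plausible implementation of this diversity penalty is sketched below in PyTorch; the tensor layout and the einsum arrangement are assumptions for illustration, not the paper's exact code.

```python
import torch
import torch.nn.functional as F

def head_diversity_penalty(attn_out):
    """attn_out: (batch, heads, seq_len, dim) scaled dot-product outputs.
    Returns the mean pairwise cosine similarity between heads, per token."""
    batch, n_heads, seq_len, dim = attn_out.shape
    x = F.normalize(attn_out, dim=-1)                  # unit vectors per head/token
    # (batch, seq_len, heads, heads) pairwise cosine similarities
    sims = torch.einsum("bhtd,bgtd->bthg", x, x)
    iu = torch.triu_indices(n_heads, n_heads, offset=1)
    return sims[:, :, iu[0], iu[1]].mean()             # average over head pairs

attn = torch.randn(2, 16, 128, 64)
loss_penalty = head_diversity_penalty(attn)            # added to the task loss
```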
Model Training
Our models follow the now common approach of starting with the pre-trained BERT language model and then finetune over the NQ dataset with an additional QA sequence prediction layer as described in section 3.1. Note that unless we specify otherwise, we are referring to the pre-trained "large" version of BERT with Whole Word Masking (BERT W ). BERT W has the same model structure as the original BERT model, but masks whole words instead of word pieces for the Masked Language Model pre-training task and we empirically find this to be a better starting point for the NQ task.
Data Augmentation (DA)
Model performance in MRC has benefited from training with labeled examples from human annotated or synthetic data augmentation from similar tasks. This includes the prior SOTA on NQ by (Alberti et al., 2019a) where 4 million synthetically generated QA pairs are introduced. In this paper, we similarly adapt and evaluate three different approaches for data augmentation: Crowd-sourced, Synthetic, and Adversarial. Crowd-sourced DA: We leverage the previously released SQuAD 2.0 MRC dataset that obtained ∼130k crowd-sourced question-answer training pairs over Wikipedia paragraphs. Note that we present results using a "pre-training" (PT) strategy where we first train on the augmentation data and, finally, perform fine-tuning exclusively on the NQ domain. We also experimented with a multi-task-learning setup as in (Ruder et al., 2019; Xu et al., 2018), but omit those experimental results for brevity since PT consistently proved to be a better augmentation strategy. Synthetic DA: We also pre-train a model on 4M automatically generated QA examples. The generation works as follows: similar to (Dong et al., 2019), we first fine-tune a masked LM for question generation using SQuAD 1.1 training examples; we choose RoBERTa for its extended LM pretraining. Then a SQuAD MRC model trained on ten predefined question types (e.g. what, how, when, and how many, as opposed to full-length questions) is used to identify potential answer phrases in NQ training passages. Finally, we use diversity-promoting nucleus sampling (Holtzman et al., 2019) with a nucleus mass of .95 to sample questions from these passage-answer pairs, which has been shown to yield better QA training examples than standard beam search (Sultan et al., 2020). Adversarial DA: Sentence Order Shuffling (SOS): The SOS strategy shuffles the ordering of sentences within paragraphs from the NQ training set. The strategy is based on an observation in the preliminary BERT_QA model that predictions favored earlier rather than later text spans. As noted by (Kwiatkowski et al., 2019), this appears to reflect a natural bias in Wikipedia that earlier texts tend to be more informative for general questions (a default long answer classifier predicting the first paragraph gets a LA F1 of 27.8%). Hence, our perturbation of the sentence ordering is similar in spirit to the types of perturbations introduced by (Zhou et al., 2019) for SQuAD 2.0 based on observed biases in the dataset.
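A minimal sketch of the SOS idea follows; it uses a naive regex sentence splitter as a stand-in, and in the real pipeline the answer span offsets would also need to be remapped after shuffling.

```python
import random
import re

def shuffle_sentences(paragraph, rng=random.Random(13)):
    """Perturb positional bias by shuffling sentence order within a paragraph."""
    # Naive sentence splitter; a real pipeline would use a proper tokenizer.
    sentences = re.split(r"(?<=[.!?])\s+", paragraph.strip())
    rng.shuffle(sentences)
    return " ".join(sentences)

p = ("The cap was first introduced for the 1994 season. "
     "It was initially $34.6 million. A new CBA was formulated in 2011.")
print(shuffle_sentences(p))
```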
Datasets
Source Domain We choose NQ as our source dataset. It provides 307,373 training queries, 7,830 development queries, and 7,842 test queries (with the test set only being accessible through a public leaderboard submission). For each question, crowd sourced annotators also provide start and end offsets for short answer spans within the Wikipedia article, if available, as well as long answer spans (which is generally the most immediate HTML paragraph, table, or list span containing the short answer), if available. The dataset also forces models to make an attempt at "knowing what they don't know" (Rajpurkar et al., 2018) by requiring a confidence score with each prediction. For evaluation, we report the offset-based F1 overlap score. For additional details on the data and evaluation see (Kwiatkowski et al., 2019). Target Domain To test GAAMA's ZSTL transfer capability, we choose two academic benchmark datasets on a related domain: Bio-medical. The first one uses a subset of the questions and annotations from task 8b of the BioASQ competition (Tsatsaronis et al., 2015). Specifically, we extract 1,266 factoid biomedical questions for which exact answers can be extracted from one of the PubMED abstracts marked as relevant by the annotators. We report the Factoid Mean Reciprocal Rank (MRR) as the evaluation metric. Secondly, we choose the very recent CovidQA (Tang et al., 2020) benchmark to illustrate GAAMA's performance on a globally important transfer learning dataset. This is a QA dataset specifically designed for COVID-19 and manually annotated from knowledge gathered from Kaggle's COVID-19 Open Research Dataset Challenge. It is the first publicly available QA resource on the pandemic intended as a stopgap measure for guiding research until more substantial evaluation resources become available. It consists of 124 question-article pairs (v0.1) and hence does not have sufficient examples for supervised machine learning. CovidQA evaluates the zero-shot transfer capabilities of existing models on topics specifically related to COVID-19. One difference of CovidQA from the other QA datasets we evaluate is that it requires systems to predict the correct sentence that answers the question. Hence we intuitively report the P@1, R@3, and MRR based on the official evaluation metric.
Competitors
We compare GAAMA against three strong competitors from industry research: 1) a hybrid of a decomposable attention model for Natural Language Inference (Parikh et al., 2016) and DrQA, a retrieve-and-rank QA model, which obtains commendable results on SQuAD.
2) The NQ baseline system (Alberti et al., 2019b) and 3) the current industry SOTA on NQ (Alberti et al., 2019a), which utilizes 4 million synthetic examples as pre-training. Architecturally, the latter is similar to us, but we propose more technical novelty in terms of both improved attention and data augmentation. We note there is very recent academic work (Zheng et al., 2020), which we omit as GAAMA outperforms them on short answers and, more importantly, we compare against large-scale industry SOTA for the scope of this paper. Since their work is more academic, their model can afford to trade computational expense for accuracy, as it involves computing graph attentions that are typically more difficult to run in parallel for whole-graph propagation (Veličković et al., 2018).

Results

Attention Strategies: Both the AoA and AD strategies provide a meaningful (0.7-0.9%) improvement over a baseline BERT_W model as shown in Table 1. Note that our baseline BERT_W already achieves a stronger baseline than the previously published SOTA by (Alberti et al., 2019a) by relying on the stronger whole-word-masking pre-training mechanism for the underlying BERT model. Combining both attention strategies with the SQuAD 2 PT yields the best single model performance, though the improvements are primarily on LA performance rather than SA. Exploring why only LA improves is left as part of our future work once we start with even larger, better pre-trained models.

Data Augmentation: As seen in Table 2, using a (well aligned) crowd-sourced dataset (SQuAD 2) for pre-training proves to be quite effective. It provides the largest data augmentation gain in SA F1, ∼1.6%, as well as a ∼1% gain in LA F1. Employing 4 million synthetic question-answer pairs also provides similar gains in SA F1 and an even better gain (∼2.3%) in LA F1. From an efficiency perspective, however, SQuAD 2 PT only introduces 130K additional examples to the training process, whereas synthetic data augmentation requires training over 4M additional examples (on top of the training required for the data generator). We also find that it was unhelpful to combine SQuAD 2 PT with 4M synthetic examples for improving single model performance; so we evaluate our best performing model architectures only using the SQuAD 2 PT strategy.

Table 2: Performance of various Data Augmentation strategies. SQuAD helps short but synthetic helps long answers.
Model                                                SA F1   LA F1
DecAtt + DrQA (Parikh et al., 2016)                  31.4    54.8
BERT_L w/ SQuAD 1.1 PT (Alberti et al., 2019b)       52.7    64.7
BERT_L w/ 4M Synthetic (Alberti et al., 2019a)       55.1    65.9
GAAMA: BERT_W + AoA + AD + SQuAD 2 PT (this work)    57.0    68.6
ZSTL Experiments
We create a random train (75%) and test (25%) split of the BioASQ 8b annotated questions in order to assess the performance of GAAMA with and without training. Comparison with more heavily fine-tuned prior art (Yoon et al., 2019) is left as future work and beyond the scope of this paper, as they focus on fine-tuned large language models, e.g., BioBERT (Lee et al., 2020), with more extensive vocabularies. Note again that our work focuses on minimizing these steps for new target domains. Hence, since our objective is not to keep retraining GAAMA for every new domain, we refrain from changing the underlying pre-trained LM. We observe that GAAMA's ZSTL configuration performs competitively (0.56 lower on MRR) on BioASQ, showing that there is hope of transferring models zero-shot to entirely unseen domains.
On CovidQA, we predict the sentence that contains our predicted answers. Table 6 shows the results. GAAMA performs quite competitively with a BioBERT baseline and outperforms it on all three metrics. This reinforces the point that it is not always necessary to start with a domain-specific LM. We note that GAAMA achieves better P@1, slightly lower R@3, and the same MRR, and hence it still competes strongly with a system trained on an empirically much better performing pre-trained LM than BERT: T5 (Raffel et al., 2019). We also note that both the T5 and BioBERT baselines are trained specifically to do sentence classification, whereas GAAMA performs reading comprehension to extract answer spans and we predict the sentence that contains the spans; no new task-specific training is involved in this process. Table 6: ZSTL performance of GAAMA vs. the prior work on the CovidQA dataset.
Efficiency
Inference: Inference efficiency is a crucial requirement of industry-scale systems. We investigate the inference times of both base and large models; while large models are ideal for academic benchmarks, the faster inference times of base models can be worth the reduction in accuracy in industrial settings. Measurements are carried out using a random sample of examples from the NQ dev set with an Nvidia Tesla P100 GPU and 8 threads from an Intel Xeon E5-2690 16-core CPU. In order to decrease inference time, we simulate passage retrieval to send the model the most relevant passage by selecting the first correct top-level candidate if there is one and the first (incorrect) top-level candidate if there is not. We find in Table 4 that switching from base to large yields an 8.3% absolute increase in F1 in exchange for 1.3x to 2.8x increases in inference time. When running the model on a GPU these result in manageable 95th percentile inference times of less than a second, whereas on the CPU the 95th percentile times are multiple seconds. We conclude that either of these models could be deployed in production environments on GPU only. In future work we intend to explore network pruning or knowledge distillation techniques for potential speedups with the large model. Training: Efficient training is also an important component of industry-scale systems. To this end we consider both the number of model parameters and the amount of PT data. Our AoA implementation adds less than 1% to BERT_W's parameters, and AD does not add any, as it is implemented in the loss. Similarly, by using a well-aligned PT dataset (SQuAD 2.0) we are able to rival the performance of the much larger 4M synthetically generated corpus (Alberti et al., 2019a) with only 130K examples, as seen in Table 2.
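To make the latency measurement procedure concrete, here is a minimal sketch of computing a 95th-percentile single-example inference time; `fake_predict` is a hypothetical stand-in for a model forward pass over a retrieved passage.

```python
import time
import numpy as np

def p95_latency(predict, examples):
    """Wall-clock 95th-percentile latency over single-example predictions."""
    times = []
    for ex in examples:
        start = time.perf_counter()
        predict(ex)
        times.append(time.perf_counter() - start)
    return np.percentile(times, 95)

# Stand-in for a model forward pass over a retrieved passage.
fake_predict = lambda ex: sum(range(10_000))
print(f"p95: {p95_latency(fake_predict, range(100)) * 1000:.2f} ms")
```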
Analysis of GAAMA's Components

Table 3 shows the ablation study of GAAMA's components. Note that our best model's performance on short answers (57.2) almost matches single human performance. When doing manual error analysis on a sample of the NQ dev set, we do observe patterns suggesting that each of GAAMA's components brings different strengths beyond just the best final combination (BERT_W + AoA + AD + SQuAD 2 PT); e.g., the Wikipedia article for Salary Cap contains multiple sentences related to the query "when did the nfl adopt a salary cap": The new Collective Bargaining Agreement (CBA) formulated in 2011 had an initial salary cap of $120 million...The cap was first introduced for the 1994 season and was initially $34.6 million. Both the cap and...
The latter sentence contains the correct answer, 1994, since the question asks when the salary cap was initially adopted. The SOS-augmented model correctly makes this prediction, whereas our SQuAD 2 augmented models predict 2011 from the earlier sentence. There are also cases where the correct answer span appears in the middle or later part of a paragraph and, though our SQuAD 2 augmented models predict the spans correctly, they assign a lower score (relative to the optimal threshold) than the SOS-augmented model. The position bias, therefore, appears to hurt the performance of the system in situations where the location of the answer span within the paragraph is not a useful signal of correctness. On average, of course, the BERT_W + SQuAD 2 PT + AoA + AD configuration performs the best, and manual error analysis indicates some ability to better attend to supporting evidence when it is further from the correct answer span. For example, the correct answer in example 1 from Figure 1 is December 24, 1836, which the AoA + AD model correctly identifies despite the question's and context's lack of lexical and grammatical alignment, while the base BERT_W models fail at extracting the date (instead predicting a span more closely associated with the keywords in the query, such as seven years earlier).
Conclusion
Although large pre-trained language models have shown super-human performance on benchmark datasets like SQuAD, we show that there is plenty of room for improvement on top of BERT_QA. Specifically, we outline prior strategies that do not work on a real benchmark consisting of "natural questions," showing the difficulty of the dataset and the need for better algorithms. We introduce GAAMA, outline several strategies broadly classified under attention and data augmentation, and show how effective they can be in attaining competitive performance on NQ compared to other industry baselines. We also demonstrate GAAMA's out-of-the-box zero-shot transfer on two unseen datasets and show promising performance. Our future work will involve adding larger pre-trained language models like T5 and also exploring multi-lingual QA.
"Computer Science"
] |
Simulation of Natural Convection Heat Transfer Enhancement by Nanoparticles in an Open-Enclosure Using Lattice Boltzmann Method
A numerical analysis is performed to investigate the laminar, free convection flow in an open enclosure using the Lattice Boltzmann Method (LBM) in the presence of Carbon nanotube and Cu nanoparticles. The problem is studied for different volume fractions of nanoparticles and aspect ratios of the cavity at various Rayleigh numbers. The volume fraction of nanoparticles added to water (the base fluid) is lower than 1% so as to form dilute suspensions. The study presents a numerical treatment based on the LBM to model convection heat transfer of Carbon nanotube based nanofluids. Results show that adding a low amount of Carbon nanotubes to the base fluid leads to a significant enhancement of the convection rate. Results also show that adding nanoparticles to the base fluid enhances the rate of natural convection in a cavity. A comparison between Carbon nanotube and Cu nanoparticles shows that Carbon nanotube nanoparticles perform better in enhancing the convection rate than Cu nanoparticles.
Introduction
The enhancement of fluid heat transfer is an interesting topic for different kinds of industrial and engineering applications. A well-known way to enhance the rate of convection heat transfer of conventional fluids such as water, oil and ethylene glycol, which have low thermal conductivity [1] [2], is adding nano-scale conductive particles. The added particles can be metals [3], non-metals [4] or Carbon nanotubes [5] [6]. Because of their high thermal conductivity, these particles can systematically improve the conductivity of the suspensions. Nowadays, nanoparticle-laden fluids are known as nanofluids; Choi [7] first gave this name to such fluid suspensions. He presented the enhancement of convection heat transfer by adding nanoparticles to fluids. In recent years, many studies have been conducted to study the heat transfer of nanofluids numerically and experimentally [7]-[12]. Khanafer et al. [8] presented heat transfer enhancement by adding nanoparticles to a fluid in two-dimensional enclosures in the natural convection regime for different Grashof numbers. They presented an increase of the average Nusselt number of about 7.5%, 12%, 15.5% and 20%, respectively, for volume fractions of Cu nanoparticles added to water of 4%, 6%, 8% and 10% at Gr = 10^4.
Gherasim et al. [9] conducted an experimental study and demonstrated the possibility of heat transfer enhancement for coolants with suspended Al2O3 nanoparticles dispersed in water inside a radial flow cooling device. Saleh et al. [10] studied convection heat transfer in a nanofluid-filled trapezoidal enclosure; they investigated the effect of different volume fractions of Al2O3 and Cu nanoparticles on the heat transfer enhancement of water as the base fluid, reporting an enhancement of natural convection heat transfer of about 5%-10% for Al2O3, while for Cu nanoparticles under the same conditions the values are about 3.9%, 7.9%, 11.8%, 15.7% and 19.5%. In 1991, Iijima [13] discovered Carbon nanotubes, an allotrope of Carbon made of long-chained molecules of Carbon with Carbon atoms arranged in a hexagonal complex to form a tubular structure. In the last decade, Carbon nanotubes have attracted many researchers, generally because of their special physical properties, such as their mechanical and thermal behavior. From this point of view, Carbon nanotubes have extraordinary thermal properties, such as a thermal conductivity about twice as high as that of diamond [14] and thermal stability up to 2800 °C in vacuum. The higher thermal conductivity of Carbon nanotubes relative to other nanoparticles means that nanofluids containing cylindrical Carbon nanotubes are expected to have better heat transfer properties than nanofluids with spherical nanoparticles [15] [16]. Gavili et al. [17] simulated mixed convection of nanofluids containing Carbon nanotubes in a two-sided lid-driven, differentially heated square cavity. Natural convection in a square cavity and its fluid flow is a classical benchmark in heat transfer problems. Open cavities are a kind of 2D cavity with one open side. These cavities exhibit special physics in both the flow and temperature fields at the open side because the flow leaves the domain through this side. Many studies have analyzed buoyant flows and their heat transfer in open cavities. Javam et al. [18] investigated the stability of stratified natural convection flow in open cavities. Mohammad et al. [19] presented natural convection in open-ended cavities and slots and analyzed the effect of the aspect ratio of the cavity on the heat transfer rate. They also presented a good procedure for simulating open boundaries in the Lattice Boltzmann Method; their technique demonstrated the ability of the LBM to simulate natural convection in open cavities. The progress of the Lattice Boltzmann Method (LBM) as a numerical technique to simulate heat transfer and fluid flow has been evident in the last decade [20]-[23]. The LBM has well-known advantages such as easy implementation, the possibility of parallel coding, and the capability of simulating complex geometries and fluid dynamics problems such as melting, fuel cells, porous media, nanofluids, etc. The convection heat transfer of different nanoparticle-based nanofluids in a cavity has been addressed in recent years [24]-[27]. Fattahi et al. [24] applied the Lattice Boltzmann Method to study natural convection heat transfer of Al2O3 and Cu nanofluids in a cavity. Their results show that increasing the solid volume fraction leads to heat transfer enhancement at any Rayleigh number and that heat transfer increases with increasing Rayleigh number for a particular volume fraction. Nemati et al. [25] applied the LBM to investigate the effect of Cu, Al2O3 and CuO nanoparticles on mixed convection in a lid-driven cavity. Their results show that adding nanoparticles increases the rate of mixed convection heat transfer of the base fluid for all tested Reynolds numbers. Abu-Nada and Chamkha [26] studied mixed convection heat transfer of a Water-Al2O3 nanofluid in an inclined cavity. Heat transfer enhancement due to the increase of nanoparticle volume fraction at different Richardson and Grashof numbers was presented in their results: the average Nusselt number on the hot wall is enhanced by about 3.3%, 8.4%, 11.9% and 17.2%, respectively, for increasing volume fractions of 2%, 5%, 7% and higher when the inclination angle of the cavity is equal to zero, and the corresponding values are 3%, 8.5%, 12% and 17% for Ri = 2. The investigation of heat transfer enhancement by adding Carbon nanotubes to the base fluid in an open-ended cavity is the main aim of the present study. A comparison between one type of Carbon nanotube, i.e., single-wall Carbon nanotubes (SWCNT), and Cu nanoparticles is also one of the major tasks of the study. In this simulation, the effective conductivity and viscosity were calculated based on new theoretical models. The model presented by Masoumi et al. [28] is applied for the effective viscosity of the Cu and Carbon nanotube nanofluids. To simulate the thermal conductivity of the Carbon nanotube nanofluid, a theoretical model presented by Sabbaghzadeh and Ebrahimi
[29] is used. On the other hand, the study applies the Patel model [30] to evaluate the effective thermal conductivity of the Cu nanofluid. To the best of the authors' knowledge, the effect of carbon nanotubes on the flow and thermal fields of natural convection has not been understood heretofore. Therefore, in this study the numerical method used is the LBM with a coupled double-population approach for the flow and temperature fields. The effect of the volume fraction of Cu and Carbon nanotube particles on the average Nusselt number, streamlines and temperature contours is investigated for various Rayleigh numbers. Also, the aspect ratio of the cavity is studied as an important geometric parameter of 2D cavities under different conditions to exhibit the role of this parameter in both the heat transfer rate and the fluid flow of the base fluid and the nanofluid.
Lattice Boltzmann Method for Flow and Thermal Fields
The LBM utilizes two distribution functions, one for the flow field and one for the temperature field. It models the movement of fluid particles to define the macroscopic parameters of the flow. The distribution functions are obtained by solving the lattice Boltzmann equation (LBE), which is a special form of the kinetic Boltzmann equation.
The basic form of the lattice Boltzmann equation with an external force, using the BGK approximation, can be written as follows for the flow and temperature fields [20]:

f_α(x + e_α Δt, t + Δt) = f_α(x, t) + (Δt/τ_m) [f_α^eq(x, t) − f_α(x, t)] + Δt · e_α · F,   (1)
g_α(x + e_α Δt, t + Δt) = g_α(x, t) + (Δt/τ_t) [g_α^eq(x, t) − g_α(x, t)],   (2)

where τ_m and τ_t are the dimensionless collision-relaxation times for the flow and temperature fields, respectively. They are defined as follows [31]:

τ_m = 3ν + 0.5,    τ_t = 3α + 0.5,

with ν the kinematic viscosity and α the thermal diffusivity in lattice units. By considering the D2Q9 model shown in Figure 1 as the lattice scheme for both flow and temperature fields, the equilibrium distribution functions for the flow field (f_α^eq) and the temperature field (g_α^eq) are calculated as follows in the different α directions:

f_α^eq = w_α ρ [1 + 3(e_α·u)/c² + 4.5(e_α·u)²/c⁴ − 1.5 u²/c²],
g_α^eq = w_α T [1 + 3(e_α·u)/c²].

For this model, the values of e_α and w_α for the various α directions are, respectively,

e_0 = (0, 0); e_α = c (cos[(α−1)π/2], sin[(α−1)π/2]) for α = 1-4; e_α = √2 c (cos[(2α−9)π/4], sin[(2α−9)π/4]) for α = 5-8;
w_0 = 4/9; w_α = 1/9 for α = 1-4; w_α = 1/36 for α = 5-8.

In order to incorporate the external force into the collision part of the lattice Boltzmann model (Equation (1)), radiation heat transfer and viscous dissipation are neglected in the numerical simulation. To capture buoyancy effects in the flow field, the Boussinesq approximation is applied; thus the external force in Equation (1) is taken, in the vertical direction, as

F = 3 w_α g_y β (T − T_m),

where g_y is the gravitational acceleration, β the thermal expansion coefficient and T_m the mean temperature. Finally, the macroscopic variables can be calculated as follows: flow density ρ = Σ_α f_α, momentum ρu = Σ_α e_α f_α, and temperature T = Σ_α g_α.
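The following NumPy sketch illustrates the D2Q9 equilibrium functions and one BGK collision-streaming step under the assumptions above; the grid size and relaxation times are placeholders, streaming is periodic, and boundary conditions and the buoyancy force are omitted for brevity.

```python
import numpy as np

# D2Q9 lattice: discrete velocities e_a and weights w_a.
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def f_eq(rho, u):
    """Flow-field equilibrium distribution on a (ny, nx) grid, c = 1."""
    eu = np.einsum("ad,yxd->ayx", E, u)              # e_a . u at every node
    u2 = np.einsum("yxd,yxd->yx", u, u)
    return W[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*u2)

def g_eq(T, u):
    """Temperature-field equilibrium (passive-scalar form)."""
    eu = np.einsum("ad,yxd->ayx", E, u)
    return W[:, None, None] * T * (1 + 3*eu)

ny, nx = 64, 64
rho, T = np.ones((ny, nx)), np.zeros((ny, nx))
u = np.zeros((ny, nx, 2))
f, g = f_eq(rho, u), g_eq(T, u)

tau_m, tau_t = 0.6, 0.6                              # relaxation times (assumed)
f += (f_eq(rho, u) - f) / tau_m                      # BGK collision, one step
for a in range(9):                                   # streaming along e_a
    f[a] = np.roll(f[a], shift=(E[a, 1], E[a, 0]), axis=(0, 1))
```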
Nanofluids Modeling
In the present study, the nanofluid is assumed to behave as a single-phase fluid. The thermal diffusivity of the nanofluid is

α_nf = k_nf / (ρ c_p)_nf.

The density, heat capacitance and thermal expansion of the nanofluid can be defined as [32]:

ρ_nf = (1 − φ) ρ_f + φ ρ_s,
(ρ c_p)_nf = (1 − φ)(ρ c_p)_f + φ (ρ c_p)_s,
(ρ β)_nf = (1 − φ)(ρ β)_f + φ (ρ β)_s.

The effect of nanoparticles on the viscosity of the nanofluid is introduced through the so-called apparent viscosity, μ_app [28].
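A small helper implementing these single-phase mixture rules is sketched below; the water and Cu property values are illustrative reference values, not taken from the paper's Table 1.

```python
def nanofluid_properties(phi, base, particle):
    """Single-phase mixture rules for density, heat capacity and expansion.
    `base` and `particle` are dicts with rho, cp, beta (SI units assumed)."""
    rho = (1 - phi) * base["rho"] + phi * particle["rho"]
    rho_cp = ((1 - phi) * base["rho"] * base["cp"]
              + phi * particle["rho"] * particle["cp"])
    rho_beta = ((1 - phi) * base["rho"] * base["beta"]
                + phi * particle["rho"] * particle["beta"])
    return {"rho": rho, "cp": rho_cp / rho, "beta": rho_beta / rho}

water = {"rho": 997.1, "cp": 4179.0, "beta": 2.1e-4}
cu = {"rho": 8933.0, "cp": 385.0, "beta": 1.67e-5}   # illustrative Cu values
print(nanofluid_properties(0.01, water, cu))         # phi = 1% volume fraction
```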
Main Problem
In the present study, the effects of Carbon nanotube and Cu nanoparticles on natural convection heat transfer in a 2D deep open cavity are investigated using the Lattice Boltzmann Method based on the double-population approach and the bounce-back method.
The horizontal walls of the cavity are assumed to be insulated, while the left wall is maintained at a uniform temperature (T_h) higher than the open-end side temperature (T_c). The considered physical system is presented in Figure 2. The effect of Carbon nanotube/Cu nanoparticles on both fluid flow and heat transfer is investigated; the thermophysical properties of the base fluid and the nanoparticles at the reference temperature are tabulated in Table 1.
The dimensionless quantities are obtained by scaling the coordinates with the cavity height and defining θ = (T − T_c)/(T_h − T_c). The boundary conditions on the solid walls are of the following forms:

u = v = 0, θ = 1, for x = 0, 0 ≤ y ≤ 1 (heated west wall);
u = v = 0, ∂θ/∂y = 0, for 0 ≤ x ≤ W, y = 0 (insulated bottom wall);
u = v = 0, ∂θ/∂y = 0, for 0 ≤ x ≤ W, y = 1 (insulated top wall).

On the east open side (x = W, 0 ≤ y ≤ 1), open boundary conditions are imposed: zero gradients are applied to the velocity components, and θ = 0 is prescribed for flow entering the cavity. The boundary conditions in the LBM can be implemented through the distribution functions. For both the flow and thermal fields, the distribution functions leaving the domain are known from the streaming process; the unknown distribution functions are those directed into the domain.
Results and Discussion
The presence of nanotube particles with high thermal conductivity in suspensions has different effects on the temperature and flow fields. The effect of added nanotubes and Cu nanoparticles on the flow and temperature fields of water (the base fluid) is presented in Figure 3 for the open cavity with A = 1 at different Rayleigh numbers. In the present study, to ensure that the nanofluid behaves as a single-phase fluid, the volume fraction of added nanotubes is kept below 1% to form a dilute suspension; it is therefore expected that the overall shape of the contours is the same for the pure fluid and for the nanofluid with added nanotube particles. Also, the maximum value of the stream-function in the flow field is presented in this figure for water, Water-Cu and Water-Carbon nanotube nanofluids for the different case studies.
The presented values of the maximum stream-function show that adding a 1% volume fraction of Carbon nanotube nanoparticles increases the maximum stream-function by about 57%, 59% and 75%, respectively, for Ra = 10^3, 10^4 and 10^5. The overall view of this figure shows that increasing the cavity's aspect ratio reduces the flow strength. The distance between the hot and cold sides of an enclosure is one of the most important parameters of the natural convection regime; increasing this distance shifts the regime toward a conduction-dominated manner in the problem domain.
Conclusions
The effect of Carbon nanotube and Cu nanoparticles on natural convection heat transfer in an open-end enclosure was studied numerically. The problem was investigated for different aspect ratios of the cavity (1 ≤ A ≤ 4) and volume fractions of Carbon nanotube and Cu nanoparticles (0 ≤ φ ≤ 1%) with the Rayleigh number varying from 10^3 to 10^5. The most important results achieved in this study are as follows:
- Adding a low volume fraction of Carbon nanotubes to the base fluid leads to a significant enhancement of convection heat transfer.
- The heat transfer rate is closely related to the thermal conductivity of the suspension; therefore, using nanoparticles with higher thermal conductivity leads to a greater heat transfer enhancement of the base fluid.
- A comparison between Carbon nanotube and Cu nanoparticles shows that Carbon nanotube nanoparticles perform better in enhancing the convection rate.
- The aspect ratio of the cavity plays an important role in natural convection heat transfer; increasing this parameter reduces heat transfer in a 2D deep open cavity.
- The Rayleigh number loses its influence on natural convection at high aspect ratios.

Figure 1: Discrete velocity vectors for the D2Q9 model of LBM.
Figure 2: Schematic geometry of the problem.
Figure 5: Streamlines and temperature contours of the base fluid, Carbon nanotube and Cu-nanofluid (φ = 1%).
"Physics"
] |
Optimized Bone Regeneration in Calvarial Bone Defect Based on Biodegradation-Tailoring Dual-shell Biphasic Bioactive Ceramic Microspheres
Bioceramic particulates capable of filling bone defects have gained considerable interest over the last decade. Herein, dual-shell bioceramic microspheres (CaP@CaSi@CaP, CaSi@CaP@CaSi) with an adjustable beta-tricalcium phosphate (CaP) and beta-calcium silicate (CaSi) distribution were fabricated using a co-concentric capillary system, enabling bone repair via a tailorable biodegradation process. The in vitro results showed that the optimal concentration (a 1/16 dilution of 200 mg/ml) of extracts of the dual-shell microspheres could promote bone marrow mesenchymal stem cell (BMSC) proliferation and enhance ALP activity and Alizarin Red staining. The in vivo bone repair and microsphere biodegradation in calvarial bone defects were compared using micro-computed tomography and histological evaluations. The results indicated that the pure CaP microspheres were minimally resorbed at 18 weeks post-operatively and new bone tissue was limited; the dual-shell microspheres, however, were appreciably biodegraded with time, preferentially from CaSi to CaP in the specific layers. The CaSi@CaP@CaSi group showed a significantly higher ability to promote bone regeneration than the CaP@CaSi@CaP group. This study indicates that biphasic microspheres with an adjustable composition distribution are promising for tailoring material degradation and bone regeneration rates, and that such a versatile design strategy could be used to fabricate advanced biomaterials with tailorable biological performance for bone reconstruction.
Results
Characterization of bioceramic microspheres. Figure 1A illustrates the formation of the bioceramic microspheres through the tri-nozzle system, together with a schematic illustration of their core-shell structure. The optical images revealed a monodisperse microsphere morphology. SEM images (Fig. 1B-D) confirmed the dual-shell feature of CaP@CaSi@CaP and CaSi@CaP@CaSi, and the area-selected EDX spectra confirmed the corresponding chemical composition in the different layers, consistent with the CaP or CaSi component in the ceramic slurries used during preparation of the microspheres. Moreover, high-magnification SEM images showed that the as-sintered core and shell components of the microspheres exhibited low-densification porous structures.
Ionic concentrations of extracts. The Ca, P, and Si concentrations in the microsphere extracts are shown in Table 1. The CaP microspheres released an appreciable amount of P ions (18.42 ± 2.76 ppm) into the cell culture medium, compared with 4.70 ± 0.31 ppm in the primary DMEM. The CaSi@CaP@CaSi extract had a significantly higher Ca concentration (38.06 ± 3.54 ppm), nearly 2-fold that of the CaP@CaSi@CaP extract (p < 0.05). It is particularly worth noting that the dual-shell microsphere extracts had a comparatively high Si concentration (>100 ppm), while the Si concentration in the pure CaP extract was barely detectable (<2 ppm).
Proliferation of BMSCs cultured in conditioned media. The CCK8 assay was performed to compare the proliferation of rBMSCs (rat bone marrow-derived mesenchymal stem cells) in the diluted ionic extracts of the microspheres (Fig. 2). Different dilutions of the extracts had different impacts on rBMSC proliferation. Among the three groups of extracts, CaSi@CaP@CaSi caused the earliest and strongest suppression of cell proliferation at a high extract concentration (1/4), while CaP showed the weakest (Fig. 2A). After 7 days of culture, only at the 1/16 dilution did rBMSCs cultured in the extracts of all three groups show a higher OD value than rBMSCs cultured without any microsphere extract (Fig. 2B). Therefore, the 1/16 dilution, with its similarly higher proliferation, was chosen as the optimal concentration for the subsequent studies requiring cultivation times of up to 14 days.

ALP activity of rBMSCs exposed to conditioned media. The ALP activity of rBMSCs cultured in the ionic extracts at a dilution of 1/16, and in medium alone, was examined. ALP staining showed that the dual-shell microsphere extracts produced more intense staining than the CaP extract and medium alone at 7 d (Fig. 3A). Quantitative analysis showed that ALP activity increased over time; the CaSi@CaP@CaSi and CaP@CaSi@CaP groups induced higher ALP activity than the CaP group and the control after 14 days of culture (p < 0.05; Fig. 3B).
Alizarin Red staining and quantitative analysis. Alizarin Red staining was performed to show nodule formation and calcium deposition. The stained area and intensity of the dual-shell microsphere groups were obviously larger and stronger than those of the CaP group (Fig. 4A). Quantitative analysis of the Alizarin Red staining showed that the optical density of the dual-shell microsphere groups was significantly higher than that of the control group (p < 0.05), but there was no significant difference between the two dual-shell microsphere groups (p > 0.05).
μCT analysis. Figure 5A shows the 2D and 3D μCT-reconstructed images of the bone defects. The critical-sized bone defect without any filling material (Blank group) remained a non-healing cavity at 18 weeks postoperatively. The CaP microspheres appeared as uniformly high-density images over the whole repair stage (18 weeks), accompanied by limited biodegradation. The dual-shell microspheres, however, appeared as alternating high- and low-density images, and the microspheres degraded with time. The CaSi phase was shown to be preferentially biodegraded, whether in the internal shell layer of the CaP@CaSi@CaP microspheres or in the core and external shell layer of the CaSi@CaP@CaSi microspheres. On the other hand, all new bone started extending from the peripheral host bone at 6 weeks. Newly formed bone also spread onto the surface of the filled microspheres in all but the Blank group. At 12 and 18 weeks, the amount of new bone increased and continued to gather towards the center from the outer ring of the defect. New bone tissue infiltrated the intervals between the microspheres and became interconnected. According to the quantitative analyses of BV/TV, Tb.N and RV/TV based on the 3D μCT scans (Fig. 5B-D), the amount of newly formed bone clearly increased in all microsphere groups from 6 to 18 weeks; at 6 weeks, the BV/TV and Tb.N observed in all the groups showed no significant difference (p > 0.05). At 12 and 18 weeks, the BV/TV and Tb.N were both highest in the CaSi@CaP@CaSi group, with the CaP@CaSi@CaP group second highest, while the BV/TV in the pure CaP group increased slowly. The RV/TV for the defects filled with CaP (>25%) was significantly higher than for the other groups at 18 weeks (p < 0.01), and that of the CaSi@CaP@CaSi group was significantly lower than that of the CaP@CaSi@CaP group (p < 0.05) at both 12 and 18 weeks after surgery.
Histological observation. Figures 6 and 7 show the HE-stained histological images of the bone specimens. At 6 weeks after surgery, no obvious inflammation was observed in any of the groups (Figs 6 and 7). In the pure CaP group, numerous multinucleate cells appeared around the microsphere surfaces, and the material was resorbed very slowly at 12 weeks; the newly formed bone was mainly present in the peripheral region of the implants at 12 and 18 weeks (Figs 6Bii,Cii and 7G,H,M,N, respectively). In contrast, the degradation of the dual-shell microspheres was greater, and a thin layer of newly formed bone was observed around the surface of all such microspheres. Multinucleate cells were observed directly on the surface of the CaP@CaSi@CaP microspheres, but were not found on the outer surface of CaSi@CaP@CaSi at 6 weeks (Fig. 7C-F). More vessels were also found near the surfaces of CaP and CaP@CaSi@CaP than near CaSi@CaP@CaSi at 6 weeks. By 18 weeks, vessels were observed in all the groups without obvious differences. New bone tissue invaded most of the microspheres from the edge toward the inner core, as shown in the rectangular frames in Fig. 7, and active osteoid tissue (labelled with triangles) lined the new bone (NB) at 12 and 18 weeks (Figs 6Biii,Biv,Ciii,Civ and 7I,K,O,Q, respectively). However, only a very limited amount of the pure CaP material degraded, and less newly formed bone tissue was present (Fig. 7M,N). Furthermore, less residual material and more mature lamellar bone were observed in the CaSi@CaP@CaSi group than in the CaP@CaSi@CaP group after 18 weeks (Figs 6Ciii,Civ and 7O,P,Q,R, respectively).
Discussion
Ideal bone substitutes are expected to have both osteoconductive and osteostimulative properties and to show a degradation rate matched to new bone formation over the long term 25,26. Bioceramic microspheres (>500 μm in diameter) have been developed to treat bone defects, and such granules can be implanted into defects of various shapes for ease of use 27. The interconnected pores in a microsphere system can benefit drug delivery, osteoblastic cell migration and new bone tissue ingrowth 28,29. However, macropore enlargement in sparingly dissolvable diopside and HA microsphere systems is very slow, which inevitably affects new bone tissue ingrowth 28,30. Thus, we here demonstrate an innovative approach that combines slowly degradable β-TCP with rapidly degradable β-CaSi through a dual-shell hierarchical structure instead of a homogeneous hybrid.
CaP and CaSi could be easily integrated into the dual-shell microspheres, which produced different biodegradation rates with time. The variation in biodegradation derived from the composition distribution in specific layers contributed to the bioactive ion release and surface bioactivity, and thus produced tunable osteostimulation in the calvarial bone defects. In fact, some previous studies have confirmed that the biodegradation rate of CaSi porous scaffolds and granules is too high, which is disadvantageous for mature bone formation in bone defects 17,18. Taking into account the thickness of the thin-wall bone tissue in skull defects, a pure CaSi microsphere group was not included in this study.
Previous in vitro studies have demonstrated that the degradation rate of β-CaSi ceramics is appreciably higher than that of β-TCP 20,31. This is mainly because the solubility product constant of β-CaSi (2.5 × 10^-8) is much higher than that of β-TCP (2.0 × 10^-29), which suggests that the faster dissolution of β-CaSi ceramics is mainly determined by the chemical composition of the material. However, in vivo degradation studies of β-CaSi porous scaffolds have shown that they biodegrade quickly, before new bone tissue can remodel, resulting in incomplete healing 7. Previous studies have also shown that high concentrations of ion dissolution products from bioceramics can deteriorate cell viability 19.
In this in vitro study, it was found that the 1/16 dilution of the extracts (200 mg/ml) of all bioceramic microspheres was optimal for cell proliferation, whereas the 1/4 dilution of the CaSi@CaP@CaSi extract showed an inhibitory effect, possibly attributable to over-high Si concentrations in the conditioned cell culture medium. According to the internal diameters of the tri-nozzle system, the volume fractions of the dual-shell microspheres from the inner core to the external shell were 6.4%, 27.9%, and 65.7%, respectively. Theoretically, the mass fractions of CaP and CaSi in the CaP@CaSi@CaP and CaSi@CaP@CaSi microspheres can then be calculated as 73.99%/26.01% and 29.86%/70.14%, respectively. Thus, the CaSi phase in the external shell of CaSi@CaP@CaSi could readily dissolve into the cell culture medium, whereas silicon release from the internal CaSi shell of CaP@CaSi@CaP would be retarded by the external CaP shell, even though that shell was shown to have a porous structure 32.
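As a cross-check, these fractions follow directly from the nozzle diameters given in the Methods, treating each layer of the droplet as a concentric sphere. A minimal sketch in Python; the phase densities used for the mass fractions are representative handbook values, not figures reported in the paper:

```python
# Shell volume fractions from the tri-nozzle diameters (volume ~ d^3)
d_core, d_mid, d_out = 0.8, 1.4, 2.0  # mm, internal nozzle diameters

v_core = d_core**3 / d_out**3
v_mid = (d_mid**3 - d_core**3) / d_out**3
v_out = (d_out**3 - d_mid**3) / d_out**3
print(f"volume fractions: {v_core:.1%}, {v_mid:.1%}, {v_out:.1%}")
# -> 6.4%, 27.9%, 65.7%, matching the reported values

# Mass fractions require the phase densities; the values below are
# representative handbook figures (g/cm^3), not taken from the paper.
rho_cap, rho_casi = 3.07, 2.92

# CaP@CaSi@CaP: CaP in the core and external shell, CaSi in the middle shell
m_cap = (v_core + v_out) * rho_cap
m_casi = v_mid * rho_casi
print(f"CaP@CaSi@CaP mass fractions: CaP {m_cap/(m_cap+m_casi):.1%}, "
      f"CaSi {m_casi/(m_cap+m_casi):.1%}")
# -> ~73%/27%, close to the reported 73.99%/26.01%
```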
Usually, ALP is expressed during extracellular-matrix maturation and has been widely used as a marker of osteoblast differentiation 33, while Alizarin Red staining reports the late mineralization phase of mature osteoblasts 34. The current study indicates that both CaP@CaSi@CaP and CaSi@CaP@CaSi conditioned cell culture media (1/16 dilution) are more beneficial for stimulating ALP expression and calcium nodule mineralization of BMSCs than pure CaP and the control. This may be attributed to the cooperative effects of Ca and Si ions. Ca has been proven a potent effector of MSC proliferation and differentiation 35, and an elevation in Ca can enhance osteoblastic differentiation 36. The importance of Si in stimulating osteogenic differentiation in vitro has also been confirmed by previous studies 37,38. An underlying mechanism via activation of the ERK signaling pathway has been described by Wang et al. 39 and Zhang et al. 40, while Han et al. 41 suggested that the WNT and SHH signaling pathways are also involved in the effect of silicate ions on the proliferation and osteogenic differentiation of BMSCs. These pathways may together enhance osteogenic cell proliferation, mineralization, and bone-related gene expression [39][40][41].
Our in vivo evaluation showed that the BV/TV and Tb.N values of the CaP@CaSi@CaP and CaSi@CaP@CaSi groups measured by μCT quantitative analysis were remarkably higher than those of the pure CaP group after 12 weeks of implantation. At the early stage of 6 weeks, no significant difference was observed among defects filled with the three groups of microspheres and the unfilled defects. The spontaneous healing capacity of surgically produced rabbit cranial defects should be taken into account here 42. Although the spherical structure was designed for a higher contact surface area and a shape fit to any irregular defect, the innate healing capacity originating from the defect margin might be impeded at early stages by the packed material, which had a broad contact area with the defect edges 43. Nevertheless, at later stages the microspheres guided bone tissue onto their surfaces and supported substantial bone regeneration, while the unfilled group tended to collapse (Fig. 6A-C). On further increasing the implantation time to 12-18 weeks, the BV/TV values were significantly higher for CaP@CaSi@CaP (16.47% and 23.43%) and CaSi@CaP@CaSi (19.66% and 29.65%) than for the CaP group (10.5% and 15.43%) at 12 and 18 weeks, respectively (Fig. 5). Meanwhile, the CaP group degraded significantly more slowly than the groups containing β-CaSi. On one hand, the prolonged presence of residual material might limit the space available for bone and vascular tissue to grow 44,45. On the other hand, this superiority in bone regeneration might be ascribed to the chemical composition of the materials in the presence of CaSi. In the CaP@CaSi@CaP group, the porous nature of the CaP shell enabled the release of bioactive Si ions through the external shell layer. In the CaSi@CaP@CaSi group, the rapid biodegradation of the external CaSi shell layer may induce an increase in silicon ion concentration in situ. Silicon is found to be essential for normal bone growth and development 46. Wang et al. found that introducing 50 and 80 wt% β-CaSi into β-TCP could dramatically enhance the amount of newly formed bone in the long term, up to 26 weeks 17, and similar results were observed previously 21,24. However, our results go beyond these studies in that the new bone tissue could not only migrate into the macropores of the microsphere scaffolds, but also readily invade the dual-shell microspheres from the edge to the inner core along with the preferential biodegradation of the CaSi phase in vivo (Fig. 7I,J,Q,R).
Another interesting aspect of the present study is the comparison of the osteogenic efficiency of the different CaP/CaSi composition distributions in the dual-shell microspheres. The μCT analysis showed no significant difference in Tb.N at 12 and 18 weeks (Fig. 5C), whereas the BV/TV and RV/TV data of the two dual-shell microsphere groups differed significantly at 18 weeks. The CaSi@CaP@CaSi group displayed a higher BV/TV value than the other group (29.8% vs 23.5%; Fig. 5B). The quantitative analysis also revealed a distinct difference in material biodegradation (RV/TV) between CaP@CaSi@CaP and CaSi@CaP@CaSi (Fig. 5D): from 12 to 18 weeks, the RV/TV in the CaSi@CaP@CaSi group decreased remarkably, while only a mild decrease was determined for the CaP@CaSi@CaP group. On the other hand, the newly formed bone in the CaSi@CaP@CaSi group was found not only around the microspheres but also in the inner core layer at 12 and 18 weeks after implantation (Fig. 6Biv,Civ). The central bone was connected to the peripheral bone or existed as bony islets separated from the defect margin, which we consider to have been stimulated by the Si ions released from the CaSi core layer. In contrast, the newly formed bone in the CaP@CaSi@CaP group mainly encircled the entire microsphere body. This phenomenon may be associated with the biochemical properties of CaP and β-CaSi and their specific locations in the microspheres. From the chemical point of view, the solubility of β-CaSi is much higher than that of β-TCP in a physiological environment, indicating a likely faster bio-dissolution of β-CaSi in vivo 7. In a closely packed bioceramic microsphere system, the primary porosity is very low (~30%), and the interconnected pore size is directly associated with the diameter of the microspheres. Along with the biodegradation of the microspheres, the interconnected pore size increases, facilitating new tissue ingrowth. Therefore, in the CaP@CaSi@CaP group, the slow degradation of CaP in the core and external shell layers in vivo may allow only comparatively slow new bone tissue ingrowth. The CaSi@CaP@CaSi group, in contrast, showed appreciable bioresorption at both early and long-term stages (Figs 5, 6 and 7), so that a clear spatiotemporal evolution from material to new bone tissue took place in the gaps between microspheres, while the slowly biodegrading internal CaP shell was retained to support new bone remodeling.
To summarize, a schematic diagram can be used to illustrate the different bone regeneration patterns arising from the different interconnected-pore evolution of the three groups of bioceramic microspheres in calvarial defects (Fig. 9). Initially, all bioceramic microsphere arrays were closely packed in the calvarial bone defects (Fig. 9Ai,Bi,Ci). The external surfaces then exhibited different biodegradation rates, which resulted in different enlargement of the macroporous structures between microspheres, while the internal microporous structures in the different component layers also evolved differently. The biodegradation of the biphasic bioceramic microspheres and the accompanying bioactive ion release would readily produce interconnected pore architectures large enough for cell migration and nutrient infiltration. Based on these differences in composition distribution and biodegradation characteristics, it is reasonable to assume that the pure CaP microspheres underwent very slow biodegradation with time; for the CaP@CaSi@CaP and CaSi@CaP@CaSi microspheres, by contrast, the biodissolution of the CaSi layer in the external or internal shell, together with calcium and silicon release through the porous external CaP shell, is thought to be favorable for promoting new bone regeneration and ingrowth. With prolonged implantation, the external CaSi shell layer of the CaSi@CaP@CaSi microspheres degraded completely, and the sparingly dissolvable internal CaP shell supported new bone remodeling and maturation (Fig. 9Ciii). The CaSi core gradually biodegraded and was replaced by new bone tissue; in contrast, new bone tissue ingrowth in the CaP@CaSi@CaP group was retarded by the slow-biodegradation nature of its external CaP shell layer.
The TRAP staining results showed that TRAP-positive cells appeared at the interface between all the microspheres and the new bone tissue at 6-18 weeks, indicating that cell-mediated resorption is involved. The material was gradually resorbed and replaced by new bone tissue. TRAP-positive multinucleate cells have been observed on the surfaces of both β-TCP and β-CaSi in other studies 7,47,48. Since the type 5 isoenzyme of acid phosphatase is a lysosomal enzyme found in osteoclasts, TRAP staining was used in this study to show osteoclast activity 49. At 6 weeks, TRAP-positive cells and multinucleate cells were observed at the interface between material and newly formed bone in the CaP and CaP@CaSi@CaP groups by TRAP and HE staining, respectively (Figs 7B,D and 8A,B), suggesting that cell-mediated CaP degradation is involved. It has also been observed previously that the degradation of CaP bioceramics can occur through both solution-mediated dissolution and the activity of osteoclastic cells 50. Nevertheless, in the CaSi@CaP@CaSi group, neither multinucleate cells in HE staining nor TRAP-positive cells were found in the junctional zone between the external shell layer and the newly formed bone (Figs 7F and 8C). TRAP-positive cells were instead observed in the interior of CaSi@CaP@CaSi, in regions more likely to correspond to the CaP layer. These results may indicate a predominantly dissolution-mediated, fast degradation of the CaSi shell of CaSi@CaP@CaSi. Another study similarly found that multinucleate cells were rarely observed on rapidly degraded particles in vivo 20. In vitro studies have also demonstrated that soluble Si can inhibit osteoclast phenotypic gene expression, osteoclast formation, and bone resorption 50,51.
Conclusions
The in vitro cell responses and in vivo osteogenic efficacy of the dual-shell CaSi/CaP microspheres were comprehensively assessed. It was demonstrated that appropriately diluted ion extracts enhance ALP activity and nodule formation, indicating their potent ability to mediate osteogenic differentiation of rBMSCs. When implanted in calvarial defects, the dual-shell microspheres displayed appreciable bone ingrowth compared to β-TCP. The specific distribution of CaP and CaSi in the dual-shell structure resulted in different regeneration patterns with time. The single layer of CaSi@CaP@CaSi microsphere arrays in such thin-wall bone defects showed superior performance in balancing biodegradation and bone regeneration through full utilization of the physicochemical and biological properties of β-TCP and β-CaSi. These results suggest that the gradient distribution of differently biodegradable components in such dual-shell microspheres readily tailors the biodegradation of the core or internal shell layer and avoids structural collapse of the scaffold, helping each component exert its effect relatively independently.
Materials and Methods
Preparation and characterization of dual-shell microspheres. The β-TCP and β-CaSi powders were prepared by the conventional chemical precipitation method 24. To obtain dual-shell microspheres, the β-TCP and β-CaSi powders were each added into sodium alginate hydrogel (15 wt%) with constant stirring. The two slurries were fed through the co-concentric tri-nozzle (diameters: Ø2.0 mm, Ø1.4 mm, Ø0.8 mm) via separate micro-tubes, producing dual-shell droplets, and the granules were then collected in 0.5 M Ca(NO3)2 solution. By depositing different components in the core or shell layers, two kinds of dual-shell granules, CaP@CaSi@CaP and CaSi@CaP@CaSi, were created. The microspheres were washed with deionized water three times. After drying at 80 °C, the particles were finally sintered at 1150 °C for 3 h. The pure CaP microspheres were prepared by extruding the β-TCP slurry under otherwise identical conditions. The fracture surfaces and chemical composition of the microspheres were characterized using scanning electron microscopy (SEM, SIRION-100, FEI, USA) with energy-dispersive X-ray spectroscopy (EDX).

Cell proliferation assay. All animal procedures, including the in vivo animal study, were performed in accordance with the ARRIVE guidelines 52 and the regulations of laboratory animal use of Zhejiang University (no. 866, Yuhangtang Road, Hangzhou, P.R. China). The study protocol was reviewed and approved by the Animal Care and Experiment Committee of Zhejiang University (ZJU20160455). Rat bone marrow-derived mesenchymal stem cells (rBMSCs) were obtained from the femora of 3-4-week-old Sprague-Dawley rats. To determine the proper concentration of the ionic extracts for the subsequent studies, a series of diluted extracts with 1/4, 1/8, 1/16, 1/32, and 1/64 concentrations was prepared by diluting the original extracts with serum-free DMEM. The BMSCs were seeded in 96-well plates at 5 × 10^3 cells/well. After 24 h, the culture medium was replaced by the various concentrations of microsphere extracts supplemented with 10% fetal bovine serum (Gibco), and the cells were cultured for 1, 4 and 7 d, respectively. Cell viability was evaluated using the Cell Counting Kit-8 (CCK-8; Dojindo Molecular Technologies, Tokyo, Japan) according to the manufacturer's instructions. At each time point, the culture medium was replaced with CCK-8 mixture solution containing 10% by volume of CCK-8 solution. After incubation for 2 h at 37 °C, 100 μl of the reaction solution was transferred to a new 96-well plate, and the optical density was measured at 450 nm using a multifunctional microplate reader (SpectraMax M5, Molecular Devices, USA). All experiments were performed in triplicate, and the results were expressed as the ratio of the optical density (OD) of the experimental groups to that of the control group.
Preparation of biomaterial extracts.
Alkaline phosphatase (ALP) staining and activity assay. To investigate the early differentiation of BMSCs stimulated by the ionic extracts, BMSCs were seeded in 6-well plates at a density of 1 × 10^5 cells/well and cultured in DMEM with the 1/16 concentration of microsphere extracts described above. The cell layers at days 7 and 14 were rinsed with phosphate-buffered saline (PBS) and fixed with 4% paraformaldehyde for 15 min at 4 °C. The fixed cells were immersed in BCIP/NBT working solution (Beyotime, Jiangsu, China) for 30 min at room temperature, washed with PBS, and then observed. ALP activity at days 7 and 14 was measured according to the manufacturer's instructions (Wako, Japan). Briefly, the cell culture supernatant was collected and centrifuged at 15,000 rpm for 5 min at 4 °C, and 20 μl of supernatant was added to 100 μl of working assay solution. After incubation at 37 °C for 15 min, the reaction was stopped by adding 80 μl of stop solution to each well, and the absorbance at 405 nm was immediately measured with a microplate reader (SpectraMax M5, Molecular Devices, USA). The total protein content of the same samples was determined using a BCA Protein Assay Kit (Takara). The relative ALP activity was expressed by normalizing the amount of nitrophenol released to the total amount of cellular protein.
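As a small illustration of this normalization, a hypothetical helper is sketched below; the calibration slope and example numbers are invented and would have to come from a p-nitrophenol standard curve measured for the actual assay:

```python
def relative_alp_activity(od405, nmol_per_od, total_protein_ug):
    """Normalize ALP activity to cellular protein content.

    od405: background-corrected absorbance of the reaction well.
    nmol_per_od: slope of a p-nitrophenol calibration line (nmol per OD
        unit) -- an assumed, assay-specific value.
    total_protein_ug: protein content of the same sample from the BCA assay.
    Returns activity as nmol p-nitrophenol released per microgram protein.
    """
    nitrophenol_nmol = od405 * nmol_per_od
    return nitrophenol_nmol / total_protein_ug

# Example with made-up numbers: OD 0.42, 150 nmol/OD, 85 ug protein
print(relative_alp_activity(0.42, 150.0, 85.0))  # ~0.74 nmol/ug
```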
Alizarin Red staining and quantitative analysis. To identify mineralization, Alizarin Red staining was performed on day 14 after the BMSCs were cultured in DMEM with the 1/16 concentration of microsphere extracts described above. The cells were fixed in 4% paraformaldehyde for 15 min and then washed with deionized water, and photographs were taken under a light microscope (Zeiss AX10; Germany). For quantitative analysis, the staining was dissolved in 10% cetylpyridinium chloride (Sigma) in 10 mM sodium phosphate (Aladdin), and the ODs were measured at 540 nm using a microplate reader (SpectraMax M5, Molecular Devices; USA).
Implantation of microspheres in calvarial bone defects of rabbits.
Before animal surgery, the microspheres were sterilized by autoclaving. Fifteen male New Zealand white rabbits weighing 2.8-3.0 kg were used in this study. Under general anesthesia by intravenous injection of pentobarbital sodium (30 mg/kg, Sigma), the rabbits were placed in the prone position. After shaving and disinfection of the operation area, a longitudinal incision was made along the midline of the scalp. The full-thickness skin was flapped and the periosteum was bluntly dissected to expose the cranium surface. Four separate circular defects were created in the cranium using an 8-mm diameter trephine bur, without damaging the dura mater, under constant irrigation with 0.9% physiologic saline. The four circular defects were randomly filled with the three groups of microspheres or left unfilled as the Control. The operative areas were sutured in layers, and penicillin (400,000 U/d) was then administered by intramuscular injection for 3 d. The rabbits were euthanized at 6, 12, and 18 weeks (n = 5) after surgery. The crania containing the materials were harvested and fixed for 48 h in 4% paraformaldehyde for further study.
Micro-computed tomographic analysis. Three-dimensional micro-computed tomography (μCT) (Y. Cheetah, YXLON, Germany) was used to assess bone formation in the defects, at a voltage of 80 kV, a current of 62.5 μA, and a projection number of 720. 3D images were then reconstructed using the software VGStudio Max 2.2. The amount of bone regeneration was evaluated by analyzing the bone volume fraction of the total defect volume (BV/TV), the trabecular number (Tb.N), and the residual material volume fraction of the total defect volume (RV/TV).
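For illustration, a toy computation of two of these metrics from segmented μCT volumes is sketched below; the boolean masks and grid are invented for the example, and Tb.N is omitted because it requires a dedicated trabecular analysis:

```python
import numpy as np

def uct_fractions(bone_mask, material_mask, defect_mask):
    """Toy BV/TV and RV/TV from segmented uCT volumes.

    Each argument is a boolean numpy array on the same voxel grid:
    bone_mask marks new bone, material_mask marks residual microsphere
    material, and defect_mask marks the defect region of interest (TV).
    """
    tv = defect_mask.sum()
    bv_tv = (bone_mask & defect_mask).sum() / tv
    rv_tv = (material_mask & defect_mask).sum() / tv
    return bv_tv, rv_tv

# Example on a random toy volume
rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64))
defect = np.ones_like(vol, dtype=bool)
bone, material = vol > 0.8, vol < 0.1
print(uct_fractions(bone, material, defect))  # ~ (0.2, 0.1)
```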
Histological analysis.
Following μCT analysis, all specimens were immersed in EDTA decalcifying solution for 3 weeks, with fresh decalcifying solution exchanged every 3 d. The specimens were then dehydrated in a graded series of alcohol solutions and finally embedded in paraffin. A series of 5 μm thick sections was cut perpendicular to the horizontal plane of the circular defect and stained with hematoxylin and eosin (H&E).
TRAP staining. Serial sections were stained for tartrate-resistant acid phosphatase (TRAP) with a commercial TRAP kit (Sigma, St Louis, USA). The sections were incubated for 60 min at 37 °C in an incubation solution made up of Fast Garnet GBC base solution, sodium nitrite solution, naphthol AS-BI phosphoric acid, acetate solution, tartrate solution, and deionized water.
"Materials Science",
"Biology",
"Engineering"
] |
Magnetic field expulsion in optically driven YBa2Cu3O6.48
Coherent optical driving in quantum solids is emerging as a research frontier, with many reports of interesting non-equilibrium quantum phases1–4 and transient photo-induced functional phenomena such as ferroelectricity5,6, magnetism7–10 and superconductivity11–14. In high-temperature cuprate superconductors, coherent driving of certain phonon modes has resulted in a transient state with superconducting-like optical properties, observed far above their transition temperature Tc and throughout the pseudogap phase15–18. However, questions remain on the microscopic nature of this transient state and how to distinguish it from a non-superconducting state with enhanced carrier mobility. For example, it is not known whether cuprates driven in this fashion exhibit Meissner diamagnetism. Here we examine the time-dependent magnetic field surrounding an optically driven YBa2Cu3O6.48 crystal by measuring Faraday rotation in a magneto-optic material placed in the vicinity of the sample. For a constant applied magnetic field and under the same driving conditions that result in superconducting-like optical properties15–18, a transient diamagnetic response was observed. This response is comparable in size with that expected in an equilibrium type II superconductor of similar shape and size with a volume susceptibility χv of order −0.3. This value is incompatible with a photo-induced increase in mobility without superconductivity. Rather, it underscores the notion of a pseudogap phase in which incipient superconducting correlations are enhanced or synchronized by the drive.
A number of recent experiments have made use of ultrashort pulses to dynamically reduce or enhance signatures of superconductivity. For example, irradiation with visible or ultraviolet pulses has been used to study the disruption and recovery of the superconducting state [21][22][23][24][25]. In figure 1(a-c) we summarize the results of one such experiment 21, where a YBa2Cu3O6.5 (Tc = 52 K) thin film was kept at a temperature T = 10 K < Tc and photo-excited with near-infrared pulses polarized in the ab-plane. The dynamics of the optical conductivity was tracked by probing the transient optical properties of the sample with a time-delayed THz pulse. Figure 1b displays the imaginary part of the optical conductivity σ2(ω) measured before (red line) and after photo-excitation at the peak of the response (blue line). After photo-excitation, the 1/ω divergence of σ2(ω), characteristic of the superconducting state, was strongly reduced, reflecting the disruption of equilibrium superconductivity. Figure 1c shows the time evolution of lim_{ω→0} ωσ2(ω), a quantity that in equilibrium is proportional to the superfluid density. This measurement displays a prompt reduction of the superfluid density within the first few picoseconds, followed by a slower relaxation back to the superconducting state.
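For readers unfamiliar with this diagnostic, the proportionality between lim_{ω→0} ωσ2(ω) and the superfluid density follows from the standard London (two-fluid) response of a superconductor; a minimal statement, with n_s the superfluid density:

```latex
\sigma(\omega) \;=\; \frac{n_s e^2}{m}\left[\pi\,\delta(\omega) \;+\; \frac{i}{\omega}\right]
\qquad\Longrightarrow\qquad
\lim_{\omega \to 0}\,\omega\,\sigma_2(\omega) \;=\; \frac{n_s e^2}{m}.
```

Hence the 1/ω divergence of σ2(ω) and the zero-frequency limit of ωσ2(ω) both track the condensate density, which is why figure 1c is read as a superfluid-density transient.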
In a series of more recent experiments, mid infrared optical pulses were used to drive YBa2Cu3O6+x along the insulating c-axis direction, coupling to apical oxygen vibrations, to coherently modulate the electronic properties.For this type of excitation, heating is limited 26 and new types of coherence can be activated.A transient state with superconducting-like optical properties [17][18][19] was observed up to temperatures far in excess of equilibrium Tc and throughout the equilibrium pseudogap phase.
Representative results are summarized in figure 1(d-f). Single crystals of YBa2Cu3O6.48 were held at a temperature T = 100 K ≃ 2Tc and irradiated with ~500 fs long pulses, polarized along the crystal c-axis and centered at a 15 µm wavelength, resonant with infrared-active modes that modulate the apical oxygen position 27. The terahertz-frequency σ2(ω) spectra measured before (red line) and after (blue line) photo-excitation, at a pump-probe delay of ≈ 1 ps, are shown in figure 1e; these data are reproduced from Ref. 19. After excitation, σ2(ω) acquired the same ~1/ω behavior observed along the c-axis in the equilibrium low-temperature superconducting phase 19, with a peak value comparable to that measured at equilibrium below Tc along the c-axis at this doping. These results, which have been taken as suggestive of photo-induced superconducting correlations, were observed transiently over time windows that range from 1 ps to 5 ps, depending on the duration of the drive 20. For the experimental conditions explored here, the time-dependent superfluid density (figure 1f) shows that the superconducting-like response is enhanced and reaches a maximum value within ~1 ps, relaxing back to equilibrium on a similar time scale, compatible with the decay of the driven phonon 6.
In this paper, we explore whether the similarities in optical conductivity between the transient state and the low-temperature equilibrium superconductor extend to the magnetic properties. In analogy with "field-cooled" Meissner diamagnetism, we search for an ultrafast magnetic field expulsion when the material is excited in a static applied magnetic field. Under these conditions, non-superconducting high-mobility carriers would not modify the magnetic field surrounding the sample, as high conductivity only opposes changes in magnetic flux. A photo-induced state with superconducting correlations, on the other hand, would expel the magnetic field from its interior because of a change in its magnetic susceptibility 28.
To perform these measurements, we adapted existing techniques of magneto-optical imaging 29 and magneto-optic sampling 30 to enable ultrafast magnetic field measurements in a GaP (100) magneto-optic detection crystal. Through the Faraday effect, the polarization rotation of a probe laser pulse yielded the value of the time- and space-dependent magnetic field with a sensitivity better than 1 µT.
We first validated the reliability of this technique by measuring the equilibrium superconducting transition in YBa2Cu3O7. The experimental configuration is shown in figure 2a. The sample was a ~150 nm-thick film of YBa2Cu3O7 grown on Al2O3, out of which a 400 µm diameter half-disc shape with a well-defined edge was created using optical lithography. The YBa2Cu3O7 film was kept at a temperature T = 30 K < Tc with a spatially homogeneous ~2 mT magnetic field applied perpendicular to the plane of the film (vertical direction in the figure) and generated using a Helmholtz coil pair. A ~75 µm thick, (100)-oriented GaP crystal was used as the magneto-optic detector and was placed directly on top of the sample. A linearly polarized 800 nm, ultrashort probe pulse was focused to a spot size of ~50 µm on the GaP crystal, impinging at near-normal incidence. The Faraday effect induced a polarization rotation on the beam reflected from the second GaP/vacuum interface, providing a measurement of the vertical component of the local magnetic field, averaged over the probed volume, which due to the reflection geometry was traversed twice. In this experiment the polarity of the external magnetic field Bext was periodically cycled, and signals acquired with the +Bext applied field were subtracted from those acquired with -Bext. In this way, only contributions to the polarization rotation induced by the local magnetic field were measured (see Supplementary Sections S2, S3, and S4 for more details on the experimental setup).
Below Tc, the superconductor expels magnetic fields from its interior, and the changes in the vertical component of the B field can be estimated using a magnetostatic calculation (see Supplementary Section S6 for more details). The results of this calculation are shown in the color plot overlaid on the experimental geometry. The magnetic field outside the sample is expected to be reduced above the center and enhanced near the edge as magnetic flux is expelled from the sample (blue and red regions in figure 2a, respectively). These changes were probed by scanning the probe beam in the horizontal direction and measuring the sensed magnetic field as a function of distance from the edge.
The results of this measurement are displayed in figure 2b. As predicted, we observed a reduction of the measured magnetic field above the sample (blue shaded area) and a corresponding enhancement near the edge (red shaded area). The different amplitudes of the effect at these two locations are determined by the geometry of the experiment. As shown in the color plot in figure 2a, the changes in the local magnetic field decay more steeply along the vertical direction near the edge than above the center of the sample. Because the magneto-optic detector averaged the magnetic field along the vertical direction, a smaller-amplitude signal was detected near the edge, where the field decays more steeply (see Supplementary Section S6).
A second test of the time-resolved magnetometry was performed under the conditions of figure 1(a-c). We tracked the dynamics of the magnetic field expulsion when superconductivity was destroyed with an ultraviolet (400 nm) pulse in a YBa2Cu3O7 thin film. The geometry of the experiment, shown in figure 3a, was the same as that used in the equilibrium measurements, with the only addition of the pump pulse, which struck the sample from the bottom. Note that the thin YBa2Cu3O7 film was completely opaque to 400 nm radiation, and the beam was shaped as a half-gaussian to match the half-disc shape defined on the sample. This geometry ensured that the magneto-optic detector never interacted directly with the optical pump (for more experimental details see Sections S2 and S3 of the Supplementary Material).
The pump-induced changes in the local magnetic field were measured as a function of pump-probe time delay at two different positions, near the edge and above the sample.
The results of these measurements are displayed in figure 3b. As superconductivity is disrupted (see figure 1(a-c)), the magnetic field penetrates back into the sample within a few picoseconds, causing a small decrease in the magnetic field near the edge of the sample (red symbols) and a large increase above it (blue symbols). These changes were measured to persist for several picoseconds. Further details about these measurements are given in Supplementary Section S7.
We next turn to the core observations reported in this paper: the measurement of magnetic field expulsion in YBa2Cu3O6.48 after excitation with 15 μm mid-infrared pulses.
For the case of photo-induced superconductivity studied here, we expect to observe pump-induced magnetic field changes in the opposite direction compared to the ones shown in figure 3b, that is, a positive change near the edge and a negative change above the photoexcited area.
A YBa2Cu3O6.48 single crystal with an exposed ac-surface was held at a series of temperatures T > Tc and irradiated using ~1 ps long pulses centered at ~15 μm wavelength, with a peak field strength of ~2.5 MV/cm. A homogeneous magnetic field Bext, tuned to amplitudes between 0 and 12.5 mT, was applied perpendicular to the sample surface, along the ab planes of YBa2Cu3O6.48. These magnetic field values were lower than the documented Bc1 for optimally doped YBa2Cu3O6+x 31, and comparable to the one measured in this sample.
Local pump-induced changes to this magnetic field were measured using the same ultrafast magnetometry technique discussed above. Figure 4a illustrates the measurement configuration. Because the thickness of the single-crystalline ac-oriented sample (~500 µm) is significantly greater than the penetration depth of the mid-infrared light (~1 µm), both pump and probe were made to impinge onto the sample/detector from the same side. The detector was placed near the edge of the photoexcited region and was completely shielded from the mid-infrared pump by two 30 µm-thick z-cut Al2O3 crystals, placed above the detector and on its side (see Supplementary Section S2). Importantly, the Al2O3 crystals also created a sharp edge for the photo-excited region, a prerequisite for maximizing the changes in the magnetic field in its vicinity. Note also that in this experiment the field polarity was periodically cycled, and the pump pulses reached the sample at half the repetition rate of the probe pulses, yielding double-differential pump-probe measurements. This ensured that only effects resulting from the combination of mid-infrared excitation and the applied magnetic field were detected. Furthermore, the cut of the GaP crystal was chosen to have zero electro-optic coefficient, to avoid confounding contributions to the probe polarization rotation (see Supplementary Section S5 for more details).
Figure 4b displays pump-induced changes in the local magnetic field measured ~50 µm from the edge created by the mid-infrared mask. The measurements were performed as a function of pump-probe time delay at two different base temperatures, 100 K (red symbols) and 300 K (orange symbols). Upon photo-excitation, a prompt increase of the magnetic field was observed, peaking at a value of ~10 µT (Bapp/1000) at 100 K and of ~3 µT (Bapp/3000) at 300 K. This transient magnetic field expulsion persisted for ~1 ps, a duration comparable to the lifetime of the superconducting-like conductivity spectra of figure 1f.
A first evaluation of these data is consistent with a photo-induced Meissner-like response.
For the experimental geometry of figure 4a, we first compare the measured changes in magnetic field to those expected for a quasi-static change in magnetic susceptibility. As will be discussed below, this assumption is not entirely justified, although it provides a good estimate of the size of the effect. Assuming that the changes in the photoexcited region are homogeneous, and in the fictitious situation of a slow change in the susceptibility of the material, the raw magnetic field changes yield a static induced susceptibility χcalc of order -0.3, as shown in figure 4c. We emphasize that this value is many orders of magnitude greater than what would be observed if the material were made a perfect conductor (assuming weak Landau diamagnetism). Even if it were transformed into a non-superconducting material with the strongest known metallic diamagnetism, such as that observed in graphite, the effect would be at least two orders of magnitude smaller than what is observed in this measurement. The colossal photo-induced diamagnetism observed is rather reminiscent of the field-cooled susceptibility of an equilibrium type-II superconductor (see Supplementary Sections S1 and S6).
As an additional check, we measured the Faraday rotation induced by the magnetic field expulsion as a function of the incoming probe polarization angle θinc. The polar plot in the inset of figure 4b shows the dependence of the measured magnetic field expulsion at the peak of the response on θinc. No variations were observed as θinc was swept from 0 to 2π, corroborating that the signal originated from a Faraday effect. Other effects, such as the Pockels effect, which we stress again should be identically zero because of the crystal cut, would also show a different dependence on the input polarization (see Supplementary Section S5 for a detailed discussion and Section S9 for other verification measurements).
We next discuss the measured signal taking into account that the changes in diamagnetic susceptibility occur dynamically rather than quasi-statically. Because the change in magnetic field inside the material takes place within approximately 1 ps, the photoexcited area of the sample hosts a rapid change in magnetic field (dB/dt), which is expected to act as the source of a picosecond-long propagating electromagnetic pulse (see the schematic representation in figure 5a). Figure 5b displays pump-induced changes in the local magnetic field measured at three selected distances of 50 µm, 110 µm, and 170 µm from the edge of the photoexcited region (along the horizontal direction in figure 5a). As the propagation distance was increased, the signal was attenuated and peaked at longer delays, as expected for a propagating electromagnetic wave. As displayed in figure 5c, from these values we extracted a propagation speed of ~c/ng, where ng is the group index of GaP at ~1 THz 32.
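The quoted propagation speed can be obtained from a linear fit of peak delay versus probe distance. A minimal sketch with invented example numbers (the distances match the probed positions, but the delays are illustrative, not the paper's values):

```python
import numpy as np

# Illustrative (made-up) peak positions: distance from the edge vs. delay
x_um = np.array([50.0, 110.0, 170.0])   # probe position, micrometers
t_ps = np.array([0.9, 1.55, 2.2])       # delay of the signal peak, ps

# Linear fit t = x/v + t0; the slope gives the inverse propagation speed
slope, t0 = np.polyfit(x_um * 1e-6, t_ps * 1e-12, 1)
v = 1.0 / slope                          # meters per second
c = 2.998e8
print(f"speed ~ {v:.2e} m/s = c/{c/v:.1f}")
# -> ~9.2e7 m/s = c/3.2 for these numbers, of the order of the
#    THz group velocity in GaP
```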
Lastly, we display the dependence of the measured effect on temperature and applied magnetic field. In figure 6a the peak of the pump-induced magnetic field expulsion is displayed as a function of temperature (full time traces are reported in Supplementary Section S8). These data follow approximately the same temperature dependence as the photo-induced superfluid density extracted from transient THz data 19, underscoring a common origin of the two observations and a correlation with the temperature scale of the pseudogap. As already shown in figure 4, the present data show that the mid-infrared drive applied here can generate a colossal diamagnetic response (χcalc(300 K) ≈ χcalc(100 K)/2) even at room temperature.
Figure 6b shows the same quantity as figure 6a, but measured as a function of the applied magnetic field at a fixed temperature T = 100 K. Up to the highest B-field that could be generated (~12.5 mT), the value of the expelled field increases monotonically.
We next consider possible explanations for the observed magnetic field enhancement, which is an effect that depends on the applied magnetic field and must be related to a change of magnetic susceptibility in the irradiated area. As discussed above, a quench of the magnetic susceptibility χv from virtually zero to a value of order -0.3 would give rise to a reduction of the magnetic field in the photo-irradiated area and an enhancement outside it, explaining the data well. Alternative explanations could involve the interaction of the drive field with small diamagnetic currents that may already exist in the material before excitation. Indeed, if one assumes that pairing and local superconducting coherences exist throughout the pseudogap phase, an amplification mechanism similar to the one discussed for Josephson plasma polaritons and reported in reference [6] could produce a sizeable magnetic field expulsion qualitatively similar to the one observed.
Both of these effects would underscore some form of photo-induced superconductivity.
The first mechanism would be based on a quench of the magnetic susceptibility and hence be compatible with the notion that a true transient superconducting phase is formed, potentially underscoring the existence of a hidden superconductor in the pseudogap phase.The second mechanism would instead amount to an amplification of pre-existing superconducting fluctuations in the pseudogap phase, hence a truly dynamical phenomenon reminiscent of "Floquet" superconductivity, however loosely defined.
Both of these scenarios highlight the highly unconventional nature of this class of physical phenomena, and the role played by coherent electromagnetic fields to engineer quantum materials phases away from equilibrium.
Figure S2.1a shows a micrograph of the YBa2Cu3O7 film after patterning. The thin films and the GaP (100) detector were then mounted on an Al2O3 plate that could be fixed directly to the cold finger of the cryostat. The choice of Al2O3 as the material for the sample holder ensures that the effect of eddy currents on the applied field is minimized. A GaP (100) crystal (SurfaceNet GmbH) with a ~1.5° wedge was used as a detector and had a thickness of ~50 μm near its thinner edge. The wedge separated the front and back reflections from each other, allowing collection of only the back reflection, which contains the Faraday rotation accumulated across the detector. This detector was put in close contact with the sample (see figure S2.1b), while additionally making sure that its back surface and the sample plane were not coplanar, to avoid interference between the reflections from these two surfaces. The gap between the detector and the patterned YBa2Cu3O7 film was ~10 μm. This experimental geometry was used for the equilibrium superconductivity and disruption measurements reported in figures 2 and 3 of the main text.
The YBa2Cu3O6.48 single crystals were polished after growth to expose an ac-oriented surface that allowed access to the crystal c-axis. The single-crystal sample was glued on the edge of a half-disc-shaped Al2O3 plate. On the top face of the same Al2O3 plate, a GaP (100) detector analogous to the one used for the disruption measurements was glued in contact with the YBa2Cu3O6.48 crystal. A 30 μm thick Al2O3 crystal was placed on top of the GaP crystal and acted as a shield preventing the 15 μm wavelength pump pulses from reaching the GaP detector. A second 30 μm thick Al2O3 crystal was placed on the side to also protect the detector from that direction. Note that while transparent for light at 800 nm, these Al2O3 shields are opaque at the 15 μm pump wavelength.
S3. Experimental Setups and Data Acquisition
The equilibrium spatial scans and superconductivity disruption measurements shown in figures 2 and 3 of the main text were performed using the experimental setup sketched in figure S3.1. Ultrashort (100 fs) 800 nm laser pulses were produced by a commercial Ti:Al2O3 oscillator/amplifier chain delivering pulse energies up to 2 mJ at a repetition rate of 900 Hz. These pulses were split into two branches using a beamsplitter. The lower-intensity branch was used, after attenuation, to probe the polarization rotation in the GaP (100) magneto-optic detector. To minimize noise sources in the measurement, the polarization of the beam was set using a nanoparticle high-extinction-ratio linear polarizer. As non-normal-incidence reflections introduce a phase delay between s and p polarization, incidence-angle fluctuations can give rise to polarization noise. To minimize this, only reflections close to normal incidence were used in the setup, and a commercial system with active feedback was used to stabilize the laser beam pointing. After traversing and being reflected from the second surface of the Faraday detector, the polarization state of the light was analyzed using a half-waveplate, a Wollaston prism and a balanced photodiode setup that allowed us to quantify the Faraday effect in the magneto-optic detection crystal. The higher-intensity branch was mechanically chopped at a quarter of the repetition rate (225 Hz) and frequency doubled in a β-BaB2O4 (BBO) crystal to obtain 400 nm pulses that were used to photo-excite the YBa2Cu3O7 thin-film samples. A mask, illuminated by these ultraviolet pulses, was imaged onto the back surface of the sample to create a half-gaussian beam with an edge matching the long edge of the half-disc-shaped YBa2Cu3O7 sample. This, together with YBa2Cu3O7 being fully opaque to 400 nm radiation, ensured that the GaP detector was not exposed to the pump light. For the mid-infrared experiments, a laser system operating at a 1 kHz repetition rate was used to pump a home-built three-stage OPA that generated signal and idler pulses with ~2 mJ total energy. These pulses were mixed in a 0.4 mm thick GaSe crystal to obtain ~150 fs long, ~20 μJ pulses centered at ~20 THz, close to resonance with the B1u apical oxygen phonon modes of YBa2Cu3O6.48. These pulses were then chirped using a 10 mm NaCl rod to a duration of ~1 ps, in order to match the optimum pulse length for inducing superconducting-like optical properties in YBa2Cu3O6.48 2. While the sample stages and cryostat were similar between the two setups, in this case the polarization analysis setup was fully "in-line", i.e. the beam travelled directly from the polarizer to the Wollaston analyzer without being reflected by additional mirrors other than the detector. This further reduced spurious sources of polarization noise. A magnetic field was applied at the sample position using a pair of Helmholtz coils whose polarity was switched at ~10 Hz and which could reach a maximum amplitude of 12.5 mT.
In both experimental setups, the polarity of the magnetic field was cycled periodically at a sub-harmonic of the pump and probe repetition rates. To obtain differential pump-probe measurements, the electrical pulses from the balanced photodetector were digitized using a commercial 8-channel 40 MS/s data acquisition card, triggered at the lowest frequency used in the experiment. These signals, acquired in the time domain, were then integrated after applying boxcar functions, yielding the amplitude of the signal from the sum and difference channels of the balanced photodetector for each probe laser pulse. Since the acquisition of a full pulse sequence required many pump-probe cycles, the sample clock of the data acquisition card was derived, using direct digital synthesis, from the oscillator repetition rate. In this way, drifts in the cavity length and repetition rate of the system do not affect the relative timing of the boxcar functions with respect to the arrival time of the electrical pulse.
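As an illustration of this per-pulse boxcar reduction, a minimal sketch follows; the array sizes and window indices are invented for the example:

```python
import numpy as np

def boxcar_amplitudes(traces, sig_win, bg_win):
    """Per-pulse amplitudes from digitized photodetector traces.

    traces: (n_pulses, n_samples) array of digitized electrical pulses.
    sig_win, bg_win: (start, stop) sample indices of the boxcar windows
    over the pulse and over a baseline region (illustrative choices).
    """
    sig = traces[:, sig_win[0]:sig_win[1]].mean(axis=1)
    bg = traces[:, bg_win[0]:bg_win[1]].mean(axis=1)
    return sig - bg  # baseline-subtracted amplitude, one value per pulse

# Toy data: 8 pulses of 400 samples each with a rectangular "pulse"
rng = np.random.default_rng(1)
traces = 0.01 * rng.standard_normal((8, 400))
traces[:, 150:200] += 1.0
print(boxcar_amplitudes(traces, (150, 200), (0, 100)))  # ~1.0 each
```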
S4. Data Reduction and Analysis
As mentioned in the previous section, in all the experiments the polarity of the magnetic field was cycled periodically, and measurements with and without the pump were acquired to yield differential pump-probe measurements and to isolate the contributions to the polarization rotation induced by the applied magnetic field. In the following we discuss this approach in detail and its impact on the measured quantities.
For the measurements shown in figures 2 and 3, the field-induced rotations were obtained by subtracting signals acquired at opposite field polarities, and the pump-induced change was computed as Δθpp = Δθpump-on − Δθpump-off, where Δθpump-off and Δθpump-on (each averaged over n pulses) are the magnetic-field-induced polarization rotations measured with the pump off and on, respectively, and Δθpp is the magnetic-field-induced change in polarization rotation due to the pump. These quantities yielded the amplitude of the magnetic field and its pump-induced changes after calibration of the Faraday effect in the GaP (100) detector (see Supplementary Section S5). To cancel out residual drifts, the phase of the magnetic field, as well as that of the pump laser, with respect to the probe laser, was periodically alternated between 0 and π. The measurements reported in figures 4, 5, and 6 of the main text were acquired using a slightly different scheme to ensure that the sample was excited in a constant magnetic field. Here the probe repetition rate was 2 kHz and the pump struck the sample on every second probe pulse (i.e. at 1 kHz), while the magnetic field polarity was modulated following a square wave at a lower frequency of ~10 Hz. The same quantities as described above were calculated, yielding double-differential pump-probe measurements that distilled only the contributions to the polarization rotation arising from pump-induced changes in the magnetic properties of the sample.
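A compact sketch of this double-differential reduction, assuming four averaged rotation readings indexed by field polarity and pump state (all names hypothetical, with the θ = V·B·L calibration of Section S5 folded into `verdet_x_length`):

```python
def field_induced_rotation(theta_pos_b, theta_neg_b):
    """Isolate the magnetic-field-induced rotation by polarity cycling."""
    return 0.5 * (theta_pos_b - theta_neg_b)

def pump_induced_field_change(on_pos, on_neg, off_pos, off_neg,
                              verdet_x_length):
    """Double-differential signal: (pump on - pump off) of the
    field-induced rotation, converted to a field via theta = V*B*L."""
    d_theta = field_induced_rotation(on_pos, on_neg) - \
              field_induced_rotation(off_pos, off_neg)
    return d_theta / verdet_x_length  # Tesla, for V*L given in rad/T
```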
S5. GaP (100) as a Magneto-Optic Detector
The ultrafast optical magnetometry technique introduced in the main text relies on the Faraday effect, which directly relates the magnetic field applied to a material to the polarization rotation of a linearly polarized beam traversing the medium. This relation is normally written as

$\theta = V B L,$

where θ represents the rotation of the polarization of the input beam, B is the magnitude of the magnetic field along the light propagation direction inside the medium, and L is the thickness of the medium. The proportionality constant V is known as the Verdet constant, a material-dependent quantity that also depends on other parameters such as the wavelength of the incoming polarized light.
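As a quick worked example of this relation, the snippet below estimates the rotation expected for a double pass through a thin detector; the Verdet constant used is only a placeholder order of magnitude, not a measured value for GaP at 800 nm.

```python
# theta = V * B * L for a thin magneto-optic detector.
V = 1e2        # rad T^-1 m^-1, assumed placeholder for a diamagnetic semiconductor
B = 12.5e-3    # T, maximum field of the Helmholtz coils quoted above
L = 2 * 70e-6  # m, double pass through a 70 um thick detector (reflection geometry)

theta = V * B * L
print(f"expected rotation: {theta * 1e6:.1f} urad")
```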
In the past, the Faraday effect in ferromagnetic crystals and thin films has been used to image the magnetic properties of superconductors at equilibrium (refs. 3, 4). While these types of detectors (such as Bi:Y3Fe5O12, EuS, and EuSe) offer very high sensitivity (V ~ 10^5 rad T^-1 m^-1), they have limited time resolution, down to 100 ps at best, due to the presence of low-lying magnetic excitations (e.g. ferromagnetic resonance) at sub-THz frequencies. Diamagnetic II-VI and III-V semiconductors such as ZnSe, ZnTe, and GaP have a magneto-optic response featuring Verdet constants that are two to three orders of magnitude smaller than those observed in ferromagnetic materials. Although less sensitive, these materials have the advantage of not being magnetically ordered and offer significantly better time resolution (refs. 5, 6). Furthermore, their Verdet constant is mostly temperature independent, ensuring a flat detector response over a broad temperature range.
The measurements shown throughout the manuscript were performed using GaP detectors prepared as detailed in Supplementary Section S2. The sensitivity of these detectors was calibrated using the same polarization analysis setup used for the measurements.
This was done by recording the field-induced polarization rotation while applying known magnetic fields with the in-situ Helmholtz coil pair, which was independently calibrated using a Lakeshore 425 gaussmeter. The results of this calibration measurement, performed at T = 100 K, are shown in figure S5.1. Since GaP is an optically isotropic material, the Faraday effect is also expected to be isotropic, and its strength depends only on the orientation of the light propagation direction with respect to the magnetic field inside the crystal. Hence, the measured magnetic-field-induced polarization rotation is expected to be independent of the input polarization angle with respect to the crystal axes, as shown in figure S5.2a.
In GaP, inversion symmetry is broken: besides being magneto-optically active, the crystal also features an electro-optic (Pockels) effect, that is, an electric-field-induced birefringence.
Since ultrafast magnetic field pulses are propagating electromagnetic waves, a time-dependent electric field at THz frequencies is also present, and care must be taken to distinguish the two effects. In the geometry of our experiment, with an applied magnetic field perpendicular to the plane of the detector, we expect electric fields associated with ultrafast changes in the local magnetic field to be polarized in the plane of the detector. Based on the symmetry of the electro-optic tensor, electric fields polarized in the plane of a (100)-oriented GaP crystal will not cause any birefringence for probe beams propagating along the [100] crystal axis.
Small contributions due to misalignment of the crystal orientation, finite incidence angles, or electric fields polarized out of the (100) plane will, by symmetry, give rise to a signal at the balanced photodetector that depends on the angle of the probe beam input polarization. These contributions can be calculated by extracting the field-induced birefringence from the eigenvalues and eigenvectors of the dielectric impermeability tensor and computing the expected signal using Jones calculus (ref. 8). For example, in figure S5.2b we report the polarization dependence of the photodetector signal expected for an electric field in the (100) plane and a finite incidence angle. The signal should exhibit an eight-fold dependence on the input polarization. This is in contrast with the polarization dependence reported in the inset of Figure 4b of the main text and corroborates that the measured signal originated from a Faraday effect.
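The qualitative difference between the two contributions can be illustrated with a minimal Jones-calculus sketch: a Faraday rotation yields the same balanced-detector signal at every input angle, while a retardance tied to the crystal axes yields an angle-dependent one. The magnitudes below are exaggerated placeholders, and the model is deliberately simplified with respect to the full impermeability-tensor calculation behind figure S5.2.

```python
import numpy as np

def rot(a):
    """2x2 rotation matrix acting on a Jones vector."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]], dtype=complex)

def balanced_signal(jones, psi):
    """Difference channel of a Wollaston analyzer held 45 deg from the
    input polarization (the balanced point), for input angle psi."""
    e_out = jones @ rot(psi) @ np.array([1.0, 0.0], dtype=complex)
    a, b = rot(-(psi + np.pi / 4)) @ e_out
    return abs(a) ** 2 - abs(b) ** 2

theta_f, delta = 0.05, 0.1                 # exaggerated rotation / retardance
faraday = rot(theta_f)                     # magneto-optic rotation (isotropic)
pockels = np.diag([np.exp(1j * delta / 2),
                   np.exp(-1j * delta / 2)])  # retarder fixed to crystal axes

for psi in np.deg2rad([0, 30, 60, 90]):
    print(f"psi={np.degrees(psi):5.1f} deg  "
          f"Faraday={balanced_signal(faraday, psi):+.2e}  "
          f"Pockels={balanced_signal(pockels, psi):+.2e}")
# The Faraday signal is identical at every input angle, while the
# electro-optic contribution oscillates with psi.
```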
S6. Magnetostatic Calculations
The changes in the magnetic field surrounding the sample were calculated in COMSOL using a finite element method to solve Maxwell's equations, taking into account the geometry of the experiment. The solution domain was defined as a spherical region of 1 mm radius in which a constant uniform magnetic field was applied. A half-disc-shaped region characterized by a constant, field-independent, spatially homogeneous magnetic susceptibility χ0 was placed in the center of the spherical region and was used to model the magnetic response of either the patterned YBa2Cu3O7 thin film or the photo-excited region in YBa2Cu3O6.48.
While the size of the half disc in the simulation exactly matched the one used in the experiments for the patterned YBa2Cu3O7, assumptions had to be made regarding the size of the photo-excited region in YBa2Cu3O6.48. The latter was modelled as a half disc of 375 μm diameter, matching the measured spot size of the 15 μm-wavelength pump beam, with different thickness values corresponding to different assumptions on the pump penetration depth, as discussed below. The weak magnetic response of the substrate or of the unperturbed YBa2Cu3O6.48 bulk was not included in the modelling, as it is expected to be several orders of magnitude smaller due to the much lower magnetic susceptibility. To account for the detector response which, as written in the Main Text, generates a polarization rotation proportional to the average of the magnetic field in the volume probed by the light pulse, the results of the calculation were integrated along the detector thickness, reducing the three-dimensional field maps to two-dimensional maps of the magnetic field averaged along the detector thickness. These were then convolved with a two-dimensional Gaussian function to account for the spatial resolution given by the finite size of the probe beam focus.
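This post-processing amounts to a thickness average followed by a Gaussian blur, as in the sketch below; the grid spacing, array sizes, and probe width are illustrative values rather than the actual simulation parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

dx = 5e-6                          # m, assumed grid spacing of the exported field map
bz = np.random.rand(200, 200, 14)  # stand-in for Bz(x, y, z) inside the 70 um detector

bz_2d = bz.mean(axis=2)            # average over the detector thickness
probe_sigma = 10e-6 / dx           # assumed Gaussian width of the probe focus, in pixels
bz_seen = gaussian_filter(bz_2d, probe_sigma)  # field profile the probe actually senses
```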
Figure S6.1 shows a comparison between a line scan measured across the straight edge of the YBa2Cu3O7 half disc and the results of a magnetostatic calculation performed using geometrical parameters that reflect the experimental conditions. In this simulation χ0 was varied to achieve the best agreement with the experimental data. We extracted a value of χ0 ≈ −1, which is compatible with the zero-field-cooled magnetic properties of YBa2Cu3O7 thin films (ref. 9).
We used the same calculations to quantify the magnetic susceptibility that the photo-excited region in YBa2Cu3O6.48 should acquire after photo-excitation to produce a magnetic field change equal to the one measured at the peak of the pump-probe response. This was achieved by running the calculations for a set of χ0 values and thicknesses of the photo-excited region to obtain "calibration curves" that related the average magnetic field 50 μm away from the edge to the susceptibility χ0.
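Such calibration curves can then be inverted by simple interpolation, as sketched below; the simulated values and the measured field change are made-up placeholders meant only to show how the assumed thickness affects the extracted χ0.

```python
import numpy as np

chi0_grid = np.linspace(-1.0, 0.0, 21)
# dB(chi0) 50 um from the edge, one curve per assumed excited-layer thickness
# (made-up numbers standing in for actual simulation output).
db_sim = {2e-6: chi0_grid * -0.8e-5,
          5e-6: chi0_grid * -1.9e-5}
db_measured = 1.2e-5  # T, hypothetical peak pump-probe field change

for thickness, curve in db_sim.items():
    # Curves decrease with chi0, so reverse them for np.interp.
    chi0 = np.interp(db_measured, curve[::-1], chi0_grid[::-1])
    print(f"thickness {thickness * 1e6:.0f} um -> chi0 = {chi0:+.2f}")
```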
The curve reported in Figure 4c of the Main Text is calculated under the assumption of a thickness t of the photo-excited region equal to 2 μm, corresponding to the electric field penetration depth of the pump, defined as $t = \lambda / (2\pi\,\mathrm{Im}[\tilde{n}_c])$, where $\tilde{n}_c$ is the stationary complex refractive index of YBa2Cu3O6.48 along the c-axis at the pump frequency (ref. 10). This assumption is justified given the sublinear fluence dependence reported in figure S10.1.
S7. Spatially Resolved Pump-Probe Scans in YBa2Cu3O7
The data shown in figure 3 were acquired at two different positions (above the sample center and outside of it, near the edge) as a function of the time delay between the 800 nm probe pulse and the ultraviolet (400 nm) pump pulse. In Figure S7.1 we report spatially dependent measurements of the pump-induced magnetic field changes as superconductivity is disrupted in YBa2Cu3O7, at one selected time delay t = 10 ps. The pump-probe signal shows a spatial dependence similar to that of the static magnetic field expulsion. On the edge of the superconductor, where an enhanced magnetic field is observed at equilibrium, destruction of superconductivity induced a negative pump-induced magnetic field change, indicating that the applied magnetic field penetrated back into the sample. Above the sample center, instead, a reduced magnetic field is observed at equilibrium, and disruption of superconductivity induced a positive signal, indicating that magnetic field shielding ceased after photo-excitation. Due to the specific pulse sequence used for the measurement (see Supplementary Section S4), spatially inhomogeneous trapped magnetic flux was present, and the amplitude of the magnetic field change is slightly altered compared to what one would expect from the equilibrium measurements.

S8. Temperature Dependence of the Magnetic Field Expulsion

In analogy with the data in figure 4b, a time delay scan was acquired for each temperature.
The peak value of each of these scans was extracted via a Gaussian fit of the data and plotted as a function of temperature in Figure 6a. The data reported in figure S8.1 show that the dynamics of the magnetic field expulsion are mostly independent of temperature, and only the peak value decreases as the temperature is increased.
Figure 1 | Transient optical properties of YBa2Cu3O6+x upon photo-excitation. (a) Below Tc, femtosecond near-infrared pulses polarized in the ab-plane of YBa2Cu3O6.5 are used to break superconducting pairs and disrupt superconductivity. The ab-plane optical conductivity of the transient state is probed with THz time-domain spectroscopy. (b) Imaginary part of the optical conductivity σ2(ω) measured at equilibrium (red line) and at the peak of the pump-probe response (blue line). At equilibrium σ2(ω) shows a 1/ω behavior, indicative of dissipationless transport. After photo-excitation, the 1/ω divergence is dramatically reduced. (c) Time evolution of ωσ2(ω)|ω→0, a quantity that at equilibrium is indicative of the superfluid density in a superconductor. The time dependence shows a prompt reduction of the Cooper pair density after photo-excitation, persisting for several picoseconds. These data are reproduced from Ref. 21. (d) Above Tc, YBa2Cu3O6.48 is excited with intense mid-infrared pulses, resonant with the apical oxygen phonon mode. Broadband THz pulses probe the c-axis optical conductivity of the sample. (e) Same quantity as in (b) but measured along the sample c-axis, both at equilibrium (red line) and after mid-infrared excitation (blue line). (f) Same as in (c) but measured along the sample c-axis following mid-infrared irradiation. Here, after excitation, a finite transient "superfluid density" appears and persists.
Figure 2 | Ultrafast Optical Magnetometry probing the magnetic response of a superconductor. (a) At equilibrium below Tc, a lithographically defined YBa2Cu3O7 half-disc of 400 μm diameter expels a static, externally applied magnetic field B_ext (see supplementary S2 for a full 3D view of the geometry). The local changes in the magnetic field are represented using a color plot obtained from a magnetostatic calculation assuming that the superconducting film behaves as a medium with homogeneous magnetic susceptibility χ ∼ −1. This spatially inhomogeneous magnetic field is quantified by measuring the polarization rotation, induced by the Faraday effect, of a linearly polarized 800 nm probe pulse, reflected after propagation through a GaP (100) crystal placed in close proximity to the sample. To isolate the magnetic contributions to the polarization rotation, the static applied magnetic field polarity is cycled at a frequency of 450 Hz, while the laser probe pulses impinge on the sample at a 900 Hz frequency. (b) Ratio of the measured local magnetic field B to the applied one B_ext, as a function of distance across the edge of the sample. An increased local magnetic field is measured near the sample edge with vacuum (red) and a reduced one above the sample near its center (blue). The error bars denote the standard error on the mean.
Figure 3 | (a) The same YBa2Cu3O7 half-disc shown in figure 2 is cooled below Tc and photo-excited with an ultraviolet laser pulse to disrupt the superconducting state (see supplementary S2 and S7 for a full 3D view of the geometry). The time- and spatially-dependent pump-induced changes in the local magnetic field are quantified, analogously to figure 2a, through the Faraday effect in a GaP (100) crystal. To isolate the magnetic contributions to the polarization rotation and extract pump-induced changes, the applied magnetic field polarity is cycled at a 450 Hz frequency while the pump is chopped at a 225 Hz frequency. (b) Pump-induced changes in the local magnetic field ΔB normalized to the external magnetic field B_ext, measured near the edge (red) and near the center of the half disc (blue) as a function of pump-probe delay.
Figure 4 | Magnetic field expulsion after phonon excitation in YBa2Cu3O6.48. (a) Schematic of the experiment. A thin Al2O3 crystal is placed on top of and next to the exposed side of the GaP (100) detection crystal to completely reflect the 15 μm pump and prevent it from generating a spurious nonlinear optical response in the GaP (100) crystal. The thin Al2O3 crystal also creates a well-defined edge in the mid-infrared pump beam, shaping the photo-excited region into a half disc of ~375 μm diameter. The time-dependent changes in the local magnetic field expulsion are probed by positioning the probe beam in the vicinity of the edge of the photo-excited region. (b) Pump-induced change in the measured magnetic field (ΔB) as a function of pump-probe delay, measured at two different temperatures of 100 K (red) and 300 K (yellow). The upper plot shows the cross-correlation of the pump and probe pulses measured in-situ at a position adjacent to the sample. Its peak defines the time zero of the delay scan. The inset shows the dependence of the peak value of the pump-induced magnetic field expulsion (ΔB_peak), measured via polarization rotation, on the input polarization angle θ_inc chosen for the probe pulse. (c) Results of a magnetostatic calculation, accounting for the geometry and placement of the detector, that relates the sensed magnetic field change to a change in the magnetic susceptibility of the photo-excited region in YBa2Cu3O6.48. Details of this calculation are contained in Supplementary Section S6. The error bars denote the standard error on the mean.
Figure 5 | Propagating electromagnetic wave emerging from the photo-excited region. (a) Section view of the experimental configuration. Here, measurements are performed at a base temperature T = 100 K, at different distances from the edge of the photo-excited region. (b) Pump-induced change of the magnetic field as a function of pump-probe delay, for three selected positions of 50 μm (blue symbols), 110 μm (red symbols), and 170 μm (green symbols). The solid lines are Gaussian fits to the data used to extract the peak amplitudes and arrival times. (c) Magnetic field peak arrival times extracted from Gaussian fits to the time delay traces measured at different distances from the edge. The grey dashed line shows the expected increase of propagation time with distance based on the group velocity of a 1 THz electromagnetic wave in GaP. The error bars denote the standard error on the mean.
Figure 6 | Scaling of the magnetic field expulsion with experimental parameters. (a) Dependence of the photo-induced magnetic field expulsion on temperature (red circles). For each data point a time delay trace was acquired and fitted to extract the peak value ΔB_peak reported here. See supplementary section S8 for more details. The dashed line shows, for comparison, the temperature dependence of the photo-induced superfluid density obtained from optical experiments in Ref. 19. (b) Peak magnetic field expulsion ΔB_peak measured for different applied magnetic fields at a fixed time delay t = 0.75 ps and at a base temperature T = 100 K. The error bars denote the standard error on the mean.
Figure S2.1. (a) Micrograph of the YBa2Cu3O7 thin film patterned into a 400 μm diameter half-disc shape. (b) Sketch of the sample configuration highlighting the positioning of the GaP (100) magneto-optic detector with respect to the YBa2Cu3O7 half disc.
Figure S2.2. (a) Sketch of the sample and detector assembly highlighting the positioning of the Al2O3 filters and GaP (100) detector with respect to the YBa2Cu3O6.48 single crystal. (b) Top-view micrograph of the sample and detector assembly showing the GaP (100) detector (yellow) seen through the Al2O3 filter and positioned in the vicinity of the YBa2Cu3O6.48 single crystal (black).
Figure S3.1. Experimental setup used for the superconductivity disruption measurements shown in figures 2 and 3 of the main text.
Figure S3.2. Experimental setup used for the mid-infrared pump, Faraday effect probe measurements shown in figures 4, 5, and 6 of the main text.
For the measurements shown in figures 2 and 3 of the Main Text, the magnetic field polarity was cycled following a sine wave at a 450 Hz frequency and the pump was mechanically chopped at 225 Hz. A timing diagram of the acquisition scheme is shown in figure S4.1. The amplitude of the signal of the balanced photodetector difference channel is normalized by that of the sum channel in a pulse-by-pulse manner. For convenience we label these signals $S^{\mathrm{off}}_{n\pm}$, to indicate those acquired with the pump off for positive and negative polarities of the applied magnetic field, and $S^{\mathrm{on}}_{n\pm}$, to indicate the same signals acquired with the pump on. The subscript n runs over the pulse number in the sequence. The following quantities are then calculated:

$\Delta\theta_{\mathrm{pump\text{-}off}} = \frac{1}{N}\sum_{n}\left(S^{\mathrm{off}}_{n+} - S^{\mathrm{off}}_{n-}\right), \qquad \Delta\theta_{\mathrm{pump\text{-}on}} = \frac{1}{N}\sum_{n}\left(S^{\mathrm{on}}_{n+} - S^{\mathrm{on}}_{n-}\right), \qquad \Delta\theta_{\mathrm{pp}} = \Delta\theta_{\mathrm{pump\text{-}on}} - \Delta\theta_{\mathrm{pump\text{-}off}}.$
Figure S4.1. Timing diagram of the acquisition scheme used for the superconductivity disruption measurements shown in figures 2 and 3 of the main text.
Figure S4.2. Timing diagram of the acquisition scheme used for the mid-infrared pump, Faraday effect probe measurements shown in figures 4, 5, and 6 of the main text.
Figure S5.1. Sensitivity calibration measurement of the 70 μm thick GaP (100) detectors used for the measurements presented in this work. The measurements were performed at a temperature T = 100 K as a function of the applied magnetic field.
Figure S5.2. (a) Input polarization dependence of the measured signal expected for the Faraday effect in a GaP (100) crystal in the presence of a magnetic field parallel to the [100] direction. (b) Same as in (a) but due to the electro-optic effect, assuming a finite incidence angle and an electric field applied in the (100) plane.
Figure S6.1. (a) Ratio of the measured local magnetic field B to the applied one B_ext, as a function of distance across the straight edge of the YBa2Cu3O7 half disc (see figure S2.1). The solid line is a guide to the eye. (b) Results of a magnetostatic calculation using the method described above for the set of parameters discussed in the text. Despite the simplicity of the model, the simulated B/B_ext shows good agreement with the experimental data.
Figure S7.1. (a) Sketch of the experimental geometry. The pump beam is kept fixed and the probe beam is moved across the edge of the YBa2Cu3O7 half-disc-shaped film. (b) Space-dependent pump-induced changes in the local magnetic field measured at T = 30 K, with an applied magnetic field of 2 mT and a pump fluence of 5 mJ/cm².
Figure S8.1. Time dependence of the local magnetic field near the edge of the YBa2Cu3O6.48 crystal measured upon photo-excitation with 15 μm pump pulses at temperatures of 100 K, 150 K, 200 K, 250 K, and 300 K. The shaded areas display the Gaussian fits used to extract the amplitude of the pump-probe signal. These data were measured at a constant pump peak electric field of 2.5 MV/cm, a pump pulse duration of ~1 ps, and an applied magnetic field of 10 mT. The error bars denote the standard error on the mean. The data at 100 K and 300 K have been averaged for significantly longer than the other temperatures, yielding a noticeably lower noise.
Figure S9.1. (a) Sketch of the experimental geometry. The pump and probe beams are both moved parallel to the edge of the detector, from an area where the YBa2Cu3O6.48 crystal is present underneath the GaP detector layer to one where it is not. (b) Measured pump-induced changes in the local magnetic field at a temperature T = 100 K, time delay t = 0.75 ps, and peak electric field of 2.5 MV/cm. The solid line is a guide to the eye. The error bars denote the standard error on the mean.
Figure S10.1. Fluence dependence of the pump-induced changes in the local magnetic field near the edge of the YBa2Cu3O6.48 crystal measured upon photo-excitation with 15 μm pump pulses at a temperature of 100 K. These data were measured at the peak of the response and at an applied magnetic field of 10 mT. The error bars denote the standard error on the mean.
"Physics"
] |
Electrostatic structures associated with dusty electronegative magnetoplasmas
By using the hydrodynamic equations for positive and negative ions, the Boltzmann electron density distribution, and the Poisson equation with stationary dust, a three-dimensional (3D) Zakharov–Kuznetsov (ZK) equation is derived for small- but finite-amplitude ion-acoustic waves. However, the ZK equation is not appropriate to describe the system either at critical plasma compositions or in their vicinity. Therefore, the modified ZK (MZK) and extended ZK (EZK) equations are derived. The generalized expansion method is used to analytically solve the ZK, MZK, and EZK equations. A new class of solutions that admits a train of well-separated bell-shaped periodic pulses is obtained. Under certain conditions, the latter degenerates to either solitary or shock wave solutions. The effects of the physical parameters on the nonlinear structures are examined in many plasma environments having different negative ion species, such as the D- and F-regions of the Earth's ionosphere, as well as in laboratory plasma experiments. Numerical analysis of the solutions revealed that the profile of the nonlinear pulses suffers amplitude and width modifications due to enhancement of the dust particle density, negative ion density, positive-to-negative ion mass ratio, and positive/negative ion cyclotron frequency. Furthermore, the necessary conditions for both soliton and shock propagation, as well as their polarity, are examined.
Introduction
It is well known that dust particles are common in the universe and that they represent much of the solid matter in it. Dust particles often contaminate fully ionized or partially ionized gases and form the so-called 'dusty plasma', which occurs frequently in nature. In astrophysics, as early as the 1930s, dust was shown to be present in the interstellar clouds, where it appears as a selective absorption of stellar radiation (interstellar reddening). Dust particles play a very important role in the solar system, in cometary tails, in planetary rings, and also in the evolution of the solar system from its solar nebula to its present form. They are also found in environments such as production processes, flames, rocket exhausts, etching experiments, and experiments on dust plasma crystals [1]. The dust particles are of micrometer or submicrometer size and their mass is large compared to the masses of the ion species. Due to the presence of such heavy particles, the plasma normal modes can be modified. For example, the ion-acoustic wave is one of the modified normal modes, giving the so-called dust ion-acoustic waves (DIAWs). Shukla and Silin [2] were the first to report theoretically the existence of DIAWs in unmagnetized dusty plasmas. Later, the DIAWs were observed in laboratory experiments [3,4]. Furthermore, many efforts have been made to understand the properties of linear and nonlinear DIAWs in dusty plasmas ([5]-[7]; [8] and references therein; [9,10]).
Negative ion plasma is a plasma containing both negative and positive ion species in addition to electrons. This type of plasma is of great importance for various fields of plasma science and technology. The existence of a considerable number of negative ions in the Earth's ionosphere [11] and in cometary comae [12] is well known. Positive-negative ion plasmas are found in plasma processing reactors [13], in neutral beam sources [14], and in low-temperature laboratory experiments [15,16]. Moreover, negative ions have been found to outperform positive ions in plasma etching. Therefore, the importance of negative ion plasmas for the field of plasma physics is growing. To treat such plasmas skillfully, basic studies and the development of diagnostic techniques for negative ion plasmas are indispensable. Recently, the Cassini spacecraft conclusively demonstrated the presence of heavy negative ions in the upper region of Titan's atmosphere [17]. These particles may act as organic building blocks for even more complicated molecules. In a negative ion plasma, the number of electrons decreases according to charge neutrality, i.e. n_e = n_+ − n_−, where n_e, n_+ and n_− are the electron, positive ion, and negative ion densities, respectively. The resulting decrease in the shielding effect produced by electrons, which is one of the main effects governing the behavior of plasmas, characterizes the specific phenomena of negative ion plasmas. From this perspective, it might seem that the negative ions influence plasmas only secondarily. Although this is partly true, most of the phenomena are actually affected by the negative ions themselves as well as by the lack of electrons [16].
Recently, it was found that the presence of negative ions in dusty plasma could change the plasma composition and plasma transport properties [18], as well as the dust charges [19,20]. For example, Kim and Merlino [19] reported the conditions under which dust grains could be positively charged in an electron-ion plasma with both positive and negative ions. Later, positively charged nanoparticles in the nighttime polar mesosphere were observed by Rapp et al [21]. Rosenberg and Merlino [22] investigated the effect of positive and negative dust grains on the ion-acoustic wave instability in a plasma with negative and positive ions. In the presence of a magnetic field, the behavior of the ion-acoustic waves with negatively charged ions and dust particles can change drastically. On the other hand, the presence of negative ions, as well as of either positive or negative dust particles, can produce various nonlinear structures, such as solitons and shocks. There remains a wealth of other sources that point towards the existence of dust particles in space observations (cf [19,22]). Mamun et al [23] have investigated the properties of solitary waves and double layers in an electronegative dusty plasma in a planar geometry, and compared the results with experiment [24]. It should be mentioned here that the origin and mechanism of the generation of dust particles in space plasma remain an important problem. Therefore, it is of practical interest to examine the effect of dust particles on the properties of the ion-acoustic excitations in dusty electronegative magnetoplasmas. This paper is organized as follows. In section 2, we present the governing equations for the nonlinear DIAWs in positive-negative ion plasma. The reductive perturbation method is employed to derive the Zakharov-Kuznetsov (ZK) equation describing the system. Furthermore, we obtain appropriate equations, including higher-order nonlinearity, describing the evolution of the nonlinear pulses at critical plasma compositions and in the vicinity of the critical plasma compositions. The generalized expansion method is used to solve analytically the evolution equations, and to obtain a train of well-separated bell-shaped periodic pulses that can change to solitary excitations as well as shock pulses. Section 3 contains the numerical results and discussion. Finally, the results are summarized in section 4.
Basic equations and formulation of the problem
We consider 2D, magnetized, and collisionless four-component plasmas consisting of positive ions, negative ions, electrons, and stationary dust particles. The external magnetic field is directed along the x axis, i.e. B = B₀x̂, where x̂ is the unit vector along the x axis. The propagation of the nonlinear electrostatic excitations is governed by a system of fluid equations for the positive and negative ion fluids, distinguished by the indices '+' and '−', respectively. The dynamics are governed by the continuity equations

$\frac{\partial n_{\pm}}{\partial t} + \nabla\cdot(n_{\pm}\mathbf{u}_{\pm}) = 0, \qquad (1)$

and the momentum equations

$\left(\frac{\partial}{\partial t} + \mathbf{u}_{\pm}\cdot\nabla\right)\mathbf{u}_{\pm} = \mp\frac{e}{m_{\pm}}\nabla\phi \pm \frac{e}{m_{\pm}c}\,\mathbf{u}_{\pm}\times\mathbf{B}. \qquad (2,3)$

The Poisson equation reads

$\nabla^2\phi = 4\pi e\left(n_e + n_- - n_+ - \delta Z_d n_d\right). \qquad (4)$

In equations (1)-(4), n_{+,−} is the positive (negative) ion number density, while n_d and n_e (= n_{e0} exp(eφ/k_B T_e)) are the dust and electron densities, respectively. Furthermore, u_{+,−} is the positive (negative) ion fluid velocity, φ is the electrostatic wave potential, e (Z_d) is the magnitude of the electron (dust) charge, m_{+,−} is the mass of the positive (negative) ion, B₀ is the magnitude of the ambient magnetic field, and c is the speed of light. δ = +1 for positive dust particles and δ = −1 for negative dust particles. Equations (1)-(4) may be cast in a reduced (non-dimensional) form, for convenience of manipulation (equations (5)-(14), written here compactly in vector form). For positive ions,

$\frac{\partial n_+}{\partial t} + \nabla\cdot(n_+\mathbf{u}_+) = 0, \qquad \frac{\partial \mathbf{u}_+}{\partial t} + (\mathbf{u}_+\cdot\nabla)\mathbf{u}_+ = -\nabla\phi + \omega_{c+}\,\mathbf{u}_+\times\hat{\mathbf{x}};$

for negative ions,

$\frac{\partial n_-}{\partial t} + \nabla\cdot(n_-\mathbf{u}_-) = 0, \qquad \frac{\partial \mathbf{u}_-}{\partial t} + (\mathbf{u}_-\cdot\nabla)\mathbf{u}_- = Q\nabla\phi - \omega_{c-}\,\mathbf{u}_-\times\hat{\mathbf{x}};$

and for electrons,

$n_e = \alpha\,e^{\phi}.$

Finally, the system is closed by the Poisson equation,

$\nabla^2\phi = n_e + n_- - n_+ - \delta\gamma.$

Adopting the neutrality hypothesis, the heavy plasma species density n_d (including positive and negative dust as an ensemble) will be taken to be fixed and continuous, thus providing a globally neutralizing background. The variables appearing in equations (5)-(14) have been scaled by appropriate quantities. Thus, the density n_j (for j = +, −, d, and e) is normalized by the unperturbed positive ion density n_{+0}, u_{+,−} is scaled by the positive ion sound speed C_{s+} = (k_B T_e/m_+)^{1/2}, the potential φ is normalized by k_B T_e/e, and the positive (negative) ion cyclotron frequency ω_{c±} = eB₀/(m_± c) is normalized by the ion plasma frequency ω_{p+} = (4πe²n_{+0}/m_+)^{1/2}. The space and time variables are in units of the characteristic Debye length λ_{D+} = (k_B T_e/4πe²n_{+0})^{1/2} and the ion plasma period ω_{p+}^{-1}, respectively. We define the mass ratio Q = m_+/m_−. The neutrality condition implies

$1 + \delta\gamma = \alpha + \beta, \qquad (15)$

where α = n_{e0}/n_{+0}, β = n_{−0}/n_{+0}, and γ = Z_d n_{d0}/n_{+0} (the index '0' denotes the unperturbed density states). The overbars on the normalized variables in equations (5)-(14) are omitted henceforth.
Derivation of the ZK equation
To investigate the nonlinear propagation of the electrostatic DIAWs, we employ the reductive perturbation method [25]. According to this method, the independent variables are stretched as

$X = \varepsilon^{1/2}(x - \lambda t), \qquad Y = \varepsilon^{1/2}y, \qquad \tau = \varepsilon^{3/2}t, \qquad (16)$

where ε is a small (real) parameter and λ is the wave propagation speed. The dependent variables are expanded around their equilibrium values in powers of ε,

$n_{\pm} = n_{\pm}^{(0)} + \varepsilon\, n_{\pm}^{(1)} + \varepsilon^2 n_{\pm}^{(2)} + \cdots, \qquad \phi = \varepsilon\,\phi^{(1)} + \varepsilon^2\phi^{(2)} + \cdots, \qquad (17\text{-}18)$

with analogous expansions for the parallel velocities. The transverse velocity (y and z components) of the positive/negative ion fluid enters at higher order,

$u_{\pm,y} = \varepsilon^{3/2}u_{\pm,y}^{(1)} + \varepsilon^{2}u_{\pm,y}^{(2)} + \cdots, \qquad (19\text{-}20)$

and so forth, i.e. similar expressions hold upon switching the sign + → − and/or the axis y → z in the index notation. Employing the stretching (16) and the expansions (17)-(20) in equations (5)-(14), we may now isolate distinct orders in ε and derive the corresponding variable contributions. The lowest order in ε yields the linearized relations (21) and (22), expressing the first-order densities and parallel velocities in terms of φ^(1).
The Poisson equation at this order provides the compatibility condition (23), which fixes the phase speed λ in terms of the plasma parameters. The next order in ε yields the second-order continuity and momentum relations. Combining these with (21), (22), and (23), we obtain the desired ZK equation,

$\frac{\partial\phi^{(1)}}{\partial\tau} + A\,\phi^{(1)}\frac{\partial\phi^{(1)}}{\partial X} + B\,\frac{\partial^3\phi^{(1)}}{\partial X^3} + D\,\frac{\partial^3\phi^{(1)}}{\partial X\,\partial Y^2} = 0, \qquad (30)$

where the nonlinearity coefficient A and the dispersion coefficients B and D are given by equations (31)-(33), as functions of the plasma parameters α, β, γ, Q, ω_{c±} and the phase speed λ.
Derivation of the MZK and EZK equations
It is obvious that, at some critical compositions, e.g. for the critical negative ion density β = (3 − αλ⁴)/3Q², the quadratic nonlinearity coefficient A becomes negligible. The simple ad-hoc stretching (16) is then no longer adequate, and we must adjust the perturbative scaling appropriately. Let us now introduce new stretched space-time coordinates [7],

$X = \varepsilon(x - \lambda t), \qquad Y = \varepsilon y, \qquad \tau = \varepsilon^{3}t. \qquad (34)$

We shall use the expansions (17)-(19), but the transverse velocities are now expanded starting at the next higher order in ε (equation (35)). Recall that similar expressions hold upon switching the sign + → − and/or the axis y → z in the index notation of equation (35). We substitute the stretching (34) and the expansions (17)-(19) and (35) into the basic equations (5)-(14). Of course, the lowest order in ε recovers the linearized solutions (21) and (22) as well as the compatibility condition (23). The next order in ε yields the second-order relations (36)-(41), while Poisson's equation gives (42). The coefficient of φ^(2) in the latter is identically zero, due to the compatibility condition (23), while the coefficient of φ^(1)² is precisely A/B, vanishing in the case at hand. Thus, Poisson's equation is automatically satisfied. The next order in ε gives the third-order equations (43)-(46). Eliminating the third-order variables from equations (43)-(46), we finally obtain a modified ZK (MZK) equation,

$\frac{\partial\phi^{(1)}}{\partial\tau} + C\,\phi^{(1)2}\frac{\partial\phi^{(1)}}{\partial X} + B\,\frac{\partial^3\phi^{(1)}}{\partial X^3} + D\,\frac{\partial^3\phi^{(1)}}{\partial X\,\partial Y^2} = 0, \qquad (47)$

where B and D are given by (32) and (33), respectively, and the cubic nonlinearity coefficient C is given by (48a). Recall that equation (47) was derived at A = 0, i.e. it is correct only at the vanishing of the nonlinear coefficient A. We therefore have to enforce A = 0 in equation (48a): from equation (31) we obtain βQ² = 1 − (αλ⁴/3), which, inserted into (48a), yields the reduced expression (48b), denoted C̃ below. Summarizing the above analysis, we have developed a theory for nonlinear excitations in multicomponent plasmas, in fact reducing the system of plasma-fluid equations to the ZK equation. At the critical composition of negative ion concentration, i.e. β ≡ β_c = (3 − αλ⁴)/3Q², the nonlinear coefficient A vanishes. Since the pulses evolve when there is a balance between the effects of dispersion and nonlinearity, the required balance is missing at β ≡ β_c. Equation (47) describes the pulse excitations in this case. However, an important question arises here regarding the strength of the nonlinearity A. Specifically, one may wonder whether the coefficient A may acquire small values and what happens then. We may anticipate that higher-order nonlinearity enters into play if A is of the order of, say, ε. To answer this question, one should obtain an appropriate equation that describes the evolution of the system in this case. We shall therefore explicitly assume that A ~ ε, in order to examine the behavior of the nonlinear waves in multicomponent plasma in the vicinity of the critical composition β_c. We have shown that the right-hand side of Poisson's equation is given to O(ε²) by equation (49). We now assume that the deviation of the system from the state of critical density is O(ε), i.e. the coefficient of φ^(1)² is O(ε²), and thus the right-hand side of (49) is O(ε³). So, in Poisson's equation to O(ε³), we have to take this quantity into account [26]. Then (46) is replaced by equation (50), where n_e^(2), n_+^(2), and n_−^(2) are still given by (28), (36), and (39), respectively. Omitting the rather lengthy and cumbersome calculations, we obtain the EZK equation,

$\frac{\partial\phi^{(1)}}{\partial\tau} + A\,\phi^{(1)}\frac{\partial\phi^{(1)}}{\partial X} + \tilde{C}\,\phi^{(1)2}\frac{\partial\phi^{(1)}}{\partial X} + B\,\frac{\partial^3\phi^{(1)}}{\partial X^3} + D\,\frac{\partial^3\phi^{(1)}}{\partial X\,\partial Y^2} = 0, \qquad (51)$

which is the evolution equation in the vicinity of the critical negative ion density.
It consists of the nonlinear terms of the ZK equation (30) and the MZK equation (47). Thus the evolution equation (51) can be studied in the particular limits of the ZK or the MZK equation and serves especially as the transitive link between the various ZK equations. Note that the nonlinear and dispersion coefficients are still given by (31), (32), (33), and (48a).
Analytical solutions of the ZK, MZK and EZK equations
The analytical solutions of equations (30), (47), and (51) can be obtained using the generalized expansion method [27]. The mathematical details are given in the appendix. For simplicity, we use the notation ϕ = φ^(1) when summarizing the final solutions of the evolution equations.
(i) Equation (30) has the solitary pulse solution

$\phi^{(1)} = \phi_0\,\mathrm{sech}^2\!\left(\frac{L_X X + L_Y Y - \vartheta\tau}{W}\right), \qquad (52)$

where φ₀ = 3ϑ/(A L_X) is the maximum (potential perturbation) amplitude and W = √(4Γ/ϑ) is the pulse width, with Γ = L_X(B L_X² + D L_Y²). Here L_X and L_Y are the direction cosines (i.e. L_X² + L_Y² = 1) and ϑ is the soliton speed, to be determined later. (ii) Equation (47) has a localized-pulse solution of the sech type (equation (53)), whose amplitude is controlled by the cubic nonlinearity coefficient. (iii) Equation (51) admits both solitary and shock wave solutions, given by equations (54) and (55), respectively, where A, B, D, C, and C̃ are given by (31), (32), (33), (48a), and (48b), respectively.
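The scaling of solution (52) is easy to check numerically; the sketch below evaluates φ₀ and W for a few soliton speeds using arbitrary illustrative coefficients (not values derived from the plasma parameters), confirming that faster pulses are taller and narrower.

```python
import numpy as np

# Illustrative coefficient values, assumed for demonstration only.
A, B, D = 1.5, 0.5, 0.3
LX = 0.9
LY = np.sqrt(1 - LX**2)          # direction cosines satisfy LX^2 + LY^2 = 1
Gamma = LX * (B * LX**2 + D * LY**2)

for v in (0.05, 0.1, 0.2):        # soliton speed (vartheta)
    amp = 3 * v / (A * LX)        # phi_0 = 3*vartheta / (A * L_X)
    width = np.sqrt(4 * Gamma / v)  # W = sqrt(4*Gamma / vartheta)
    print(f"speed {v:4.2f}: amplitude {amp:.3f}, width {width:.2f}")
```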
Numerical analyses and discussion
There are many types of plasmas present in space and used in laboratory experiments. For example, (H⁺, O₂⁻) and (H⁺, H⁻) plasmas have been found in the D- and F-regions of the Earth's ionosphere, while an (Ar⁺, F⁻) plasma was used to study ion-acoustic wave propagation in laboratory experiments [28]. It is clear that the three types of plasmas have different components, which result in different mass ratios Q (= m₊/m₋). As expected, the mass ratio plays a role in the properties of the wave propagation. Therefore, part of our interest is in investigating the effect of the mass ratio Q on the existence of the pulse excitations. The mass ratios for (H⁺, O₂⁻), (H⁺, H⁻), and (Ar⁺, F⁻) plasmas are 0.03, 1, and 2.1, respectively. Firstly, we investigate the properties of the solitary excitations represented by equation (30) (which has the solution (52)). Note that the sign of the nonlinear coefficient A affects the sign of φ₀, which in turn determines the polarity of the potential perturbation φ^(1) ≡ ϕ. Therefore, both positive and negative pulses may be obtained for different parameter values, representing a potential hump or a dip (an electrostatic potential hole), respectively. Furthermore, note that the soliton amplitude φ₀ is proportional to the soliton speed ϑ and inversely proportional to the soliton width W. Thus, faster solitons will be taller and narrower, while slower ones will be shorter and wider. The qualitative features of the soliton properties are thus recovered [29]. Figure 1 presents the contour plot of the wave amplitude φ₀ versus the dust density/concentration γ and the negative ion density/concentration β for positive dust particles at different values of the mass ratio Q. Firstly, the light-colored regions correspond to higher (lower) values of the positive (negative) wave amplitude. For (H⁺, O₂⁻) plasma in the Earth's ionosphere, the mass ratio Q = 0.03 is low; it is seen that the positive potential occupies a wider region than the negative potential (cf figure 1(a)). The behavior of the wave amplitude φ₀ for the other plasma types is presented in figures 1(b) and (c) for the Earth's ionosphere (H⁺, H⁻) and the laboratory experiment (Ar⁺, F⁻), with the larger mass ratios Q = 1 and 2.1, respectively. It is seen that, for larger mass ratio, the negative potential becomes much more dominant than the positive potential. Increasing the dust density γ leads to a decrease (increase) in the positive (negative) wave amplitude. However, increasing the negative ion density β enhances (decreases) the positive (negative) pulse amplitude. The contour plot of the wave amplitude φ₀ versus the dust concentration γ and the negative ion concentration β for negative dust particles and different values of the mass ratio Q is depicted in figure 2. It is obvious that increasing the dust density γ and the negative ion density β increases (decreases) the positive (negative) pulse amplitude. From equation (52), it is clear that the width W of the solitary pulses depends on the dust density γ, the negative ion density β, and the positive/negative ion cyclotron frequency ω_{c+,−}. The effects of the dust density γ and the negative ion density β on the solitary width W are depicted in figure 3 for (H⁺, H⁻) plasma in the Earth's ionosphere, where the mass ratio Q = 1. It is seen that, for positive dust particles (cf figure 3(a)), an increase in the dust concentration γ (negative ion concentration β) shrinks (enhances) the pulse width.
For negative dust particles (cf figure 3(b)), increasing both the dust density γ and the negative ion density β makes the solitary pulses much wider. For the different mass ratios of the Earth's ionosphere plasma (H⁺, O₂⁻) and the laboratory plasma (Ar⁺, F⁻), we obtain the same qualitative behavior as in figure 3. The effects of the positive/negative ion cyclotron frequency ω_{c+,−} on the solitary pulse width are depicted in figure 4. Here, we consider the positive dust case, since the negative dust case gives the same qualitative behavior. It is seen that increasing the positive ion cyclotron frequency ω_{c+} has an important role, i.e. it decreases the solitary pulse width for (H⁺, O₂⁻) plasma with the low mass ratio Q = 0.03, while increasing the negative ion cyclotron frequency ω_{c−} has no significant effect. For (H⁺, H⁻) and (Ar⁺, F⁻) plasmas with the higher mass ratios Q = 1 and 2.1, respectively (cf figures 4(b) and (c)), both the positive and negative ion cyclotron frequencies ω_{c+,−} make the width narrower (i.e. the pulses more spiky).
Recall that, at a critical composition of the negative ion concentration (i.e. β ≡ β_c = (3 − αλ⁴)/3Q²), the nonlinear coefficient A in equation (30) becomes negligible. Since the solitons evolve when there is a balance between the effects of dispersion and nonlinearity, the required balance is missing at β_c. Equation (47) describes the pulse excitations in this case, with the solitary pulse solution given by equation (53). Now, it is important to examine the properties of the electrostatic excitations for equation (53). It is clear that the width has the same behavior as in equation (52), but the amplitude now depends on the new nonlinear coefficient C̃ (given by equation (48b)). It is obvious that, to have a real amplitude, the nonlinear coefficient C̃ must be positive, which we examine now through contour plots of C̃. The pulse becomes wider with an increase in the positive dust particle density γ. The effect of the negative dust particles follows a different scenario, as depicted in figure 9(b). Increasing the negative dust particle density γ and the negative ion density β causes the propagation of taller and narrower positive pulses but shorter and wider negative pulses. Now, we discuss the existence of the double layers (shocks). As seen from equation (55), the existence of the double layer requires C < 0 (with C given by equation (48a)). The Earth's ionosphere plasma (H⁺, H⁻) will be used as an example to investigate the nonlinear coefficient C numerically. The numerical analyses in figures 5(a) and 6(a) show that, for low negative ion concentration β, the dominant situation corresponds to C > 0, while for high negative ion density β, the nonlinear coefficient C < 0. However, it is clear from figures 5(a) and (b) and 6(a) and (b) that the region of negative C usually overlaps with the region of positive C̃. Physically, double layers propagate when the nonlinear-dispersion balance is missing, in which case solitary pulses cannot survive. On the other hand, when the nonlinear-dispersion balance is achieved, the double layers cannot exist. Thus, even for C < 0, the solitary pulse still exists (due to C̃ > 0), and therefore the double layers cannot propagate in the present plasma system. It is expected that, for non-Maxwellian electron distributions (such as nonthermal or non-isothermal distributions), the double layers could exist; this will be considered in the future.
First of all, in view of the analysis and interpretation of our results, we should point out that solutions (A.5)-(A.7) admit a train of well-separated bell-shaped periodic pulses. The latter can degenerate to solitary pulses at certain values of m (where m is the modulus of the three Jacobi elliptic functions cn, dn, and sn). On the one hand, when m → 0, the Jacobi elliptic functions degenerate to the trigonometric functions, i.e. sn → sin, cn → cos (which support periodic solutions). On the other hand, for m → 1, the Jacobi elliptic functions degenerate to the hyperbolic functions, i.e. sn → tanh, cn → sech (which support solitary solutions). In figure 10, we have numerically analyzed solution (A.6), as an example, for different values of m. It is clear that, for small values of m, solution (A.6) admits periodic pulses, whereas increasing m → 1 leads to the propagation of solitary pulses. This result could not have been obtained using the travelling wave ansatz directly (see, e.g., [8]); it is thus obvious that the generalized expansion method supports periodic, solitary, and double layer solutions, depending on the physical parameters of the system.
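These limits are straightforward to verify numerically; the sketch below uses scipy (whose elliptic routines are parameterized by m) to check that sn approaches sin for small m and tanh as m approaches 1.

```python
import numpy as np
from scipy.special import ellipj

# Check the degenerate limits of the Jacobi elliptic function sn(u, m).
u = np.linspace(-3, 3, 7)
for m in (1e-6, 0.9, 1 - 1e-9):
    sn, cn, dn, _ = ellipj(u, m)
    print(f"m={m:.1e}  max|sn - sin| = {np.max(np.abs(sn - np.sin(u))):.1e}"
          f"  max|sn - tanh| = {np.max(np.abs(sn - np.tanh(u))):.1e}")
# Small m: sn tracks sin (periodic pulse trains); m -> 1: sn tracks tanh
# (solitary/shock-like profiles), as stated in the text.
```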
Summary
In this paper, we have studied the nonlinear propagation of ion-acoustic waves in dusty electronegative magnetoplasmas, where a background of stationary dust was considered. We have derived the ZK equation, the MZK equation (at the critical plasma composition), and the EZK equation (in the vicinity of the critical plasma composition). Using the generalized expansion method, a new class of solutions of the evolution equations that admits a train of well-separated bell-shaped periodic pulses is obtained. Under certain conditions, these solutions degenerate to solitary and shock wave solutions. We have used the present model to investigate the behavior of the nonlinear structures in different plasma environments with different negative ion species, such as the D- and F-regions of the Earth's ionosphere, (H⁺, O₂⁻) and (H⁺, H⁻), as well as in a laboratory plasma experiment, (Ar⁺, F⁻). Numerical analysis of the solutions revealed that the profile of the nonlinear pulses suffers amplitude and width modifications due to enhancement of the dust particle density γ, the negative ion density β, the positive-to-negative ion mass ratio Q, and the positive/negative ion cyclotron frequency ω_{c+,−}. Furthermore, the necessary conditions for the propagation of both solitary pulses and shock waves, as well as their polarity, were examined.
Appendix. Solutions of equations (30), (47) and (51) using the generalized expansion method
To obtain the possible analytical solutions of equations (30), (47), and (51), we assume that ϕ = φ^(1)(ξ) with

$\xi = L_X X + L_Y Y - \vartheta\tau, \qquad (A.1)$

where L_X and L_Y are the direction cosines (i.e. L_X² + L_Y² = 1) and ϑ is the acoustic speed, to be determined later. Substituting (A.1) into (51), we obtain

$-\vartheta\phi' + A_0\,\phi\phi' + B_0\,\phi^2\phi' + \Gamma\,\phi''' = 0, \qquad (A.2)$

where A₀ = A L_X, B₀ = C̃ L_X, and Γ = L_X(B L_X² + D L_Y²), and primes denote differentiation with respect to ξ. According to the generalized expansion method [27], and due to the mixed nonlinearity, the solution of equation (A.2) can be represented by a finite series in the Jacobi elliptic functions; the same reduction applies to equation (51), where both A and C̃ are nonzero (so, in this case, one must use the expression for C from (48a)).
"Physics"
] |
Ammonia Emission Factors and Uncertainties of Coke Oven Gases in Iron and Steel Industries
In this study, the uncertainties related to the NH3 concentration, the emission factor, and the emission factor estimation in the exhaust gas of a steel sintering furnace using COG (coke oven gas), one of the by-product gases generated in steel production, were estimated to identify a missing emission source. Measurements of the NH3 concentration in the exhaust gas of the steel sintering furnace using COG yielded concentrations between 0.02 and 0.12 ppm, with an average of 0.06 ppm, confirming the emission of NH3. Using these measurements, an NH3 emission factor of 0.0061 kg NH3/ton was derived. The uncertainty of the developed NH3 emission factor of the sintering furnace using COG was analyzed using a Monte Carlo simulation. Consequently, the uncertainty range of the NH3 emission factor of the sintering furnace using COG was derived to be −11.4% to +12.89% at the 95% confidence level. According to the results of this study, NH3 emissions occur from the use of COG, yielding values higher than the NH3 emission factor resulting from the use of LNG (liquefied natural gas) in combustion facilities. It should be recognized that this is a missing emission source and the corresponding emissions should be calculated.
Introduction
The PM2.5 concentration level in Korea was 24 µg/m³ as of 2018. Korea's average PM2.5 concentration in 2018 was ranked as the 27th highest in the world, according to a survey of 3000 cities in 73 countries by Air Visual, an air-quality monitoring company. When considering only OECD (Organization for Economic Cooperation and Development) countries' average PM2.5 concentrations in 2018, Korea exhibited the second-poorest air quality after Chile (24.9 µg/m³), with a PM2.5 level twice as high as that of major developed countries such as France (13.2 µg/m³), Japan (12.0 µg/m³), the United Kingdom (10.8 µg/m³), and the United States (9.0 µg/m³) [1].
One of the main causes of this problem is the increase in secondary fine dust precursors; the factors contributing to the generation of secondary fine dust include NOx, SOx, VOCs, and NH3 [2-5]. Among these air pollutants, Korea manages NOx and SOx in accordance with the air pollution total amount system and measures air pollution in real time through the air pollution measurement network [6-8]. However, NH3 is mainly managed in terms of its odor, and measurements are not performed in real time. In addition, studies related to NH3 emission sources are insufficient.
According to previous studies analyzing the change in Korea's PM2.5 concentration in response to air pollutant reductions using an air quality model, NH3 emission reduction was revealed to provide the greatest potential for reducing the PM2.5 concentration among all pollutants in Korea [9,10].
In Korea, the emission factors provided by the United States Environmental Protection Agency (US EPA) or by CO-oRdination d'INformation Environnementale AIR (CORINAIR) from the European Environment Agency are used for the estimation of ammonia emissions. In addition, there are many unknown sources of pollution [11-13]. Accordingly, the importance of related studies is increasing, addressing issues such as missing sources associated with NH3 generation and the development of NH3 emission factors that reflect the situation in Korea.
Recently, recycled energy sources have been used as a component of measures to reduce air pollution and greenhouse gas emissions. In the steel industry, coke oven gas (COG), blast furnace gas (BFG), and Linz-Donawitz gas (LDG), which are discharged from the steel production process, are being recycled as energy sources within the process and for power generation [14-17].
Among the NH3 emission sources of steel mills in Korea, the emissions of the sintering furnace currently refer to the US EPA 1994 value, and the NH3 emission factor is applied only to fuels such as coal, petroleum, and liquefied natural gas (LNG) used in the sintering furnace. However, for the by-product gases (COG, BFG, LDG) recently used in the steel production process, NH3 emission calculation and emission factor development have not been properly performed. Therefore, this study aims to identify the missing source by estimating the uncertainties related to the NH3 concentration, the emission factor, and the emission factor calculation in the exhaust gas of the steel sintering furnace using COG, one of the by-product gases used in the steel production process.
Selection of Objective Facilities
In this study, NH3 samples were collected to determine the NH3 concentrations at the COG gas outlets of the sintering furnaces in steel mills. The fuel types and number of samples collected at the target facilities are shown in Table 1. Sample collection at the target facility was conducted on more than three occasions, and 10 samples were collected in total. The COG composition of the steel company was dominated by H2 (52.92%), followed by CH4 (21.65%).
Ammonia Analysis at Iron and Steel Production Facility
To measure the NH3 emission concentration at the sintering furnace COG gas outlets, the indophenol method was used in this study, among the methods suggested for the measurement of NH3 emissions in the odor process test method and air pollution process test method of Korea [18]. The indophenol method estimates the NH3 content by measuring the absorbance of the indophenols produced by the reaction with ammonium ions when sodium hypochlorite and phenol-sodium nitroprusside solutions are added to the sample solutions for analysis. For the NH3 sample collection, NH3 absorption solution (50 mL of boric acid solution in total) was placed in two 50 mL volumetric flasks, through which 80 L of exhaust gas was passed at 4 L/min for approximately 20 min using a mini-pump (SIBATA MP-ΣNII, Japan). To remove the moisture contained in the gas discharged from the exhaust gas outlet of the sintering furnace, a desiccator bottle containing silica gel was connected in front of the NH3 sample collection device. The schematic for the NH3 sample collection is shown in Figure 1. The absorbance of the absorption solution used to collect NH3 was measured at a 640 nm wavelength by spectrophotometry (Shimadzu 17A, Japan). The accuracy of the spectrophotometry equipment is 0.5% of reading, and the linearity test using ammonia standard samples of 0.0009, 0.0022, 0.0134, 0.0538, and 0.0627 mmol/L yielded an R² value of 0.9992, indicating high linearity.
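The linearity check described above amounts to a simple least-squares fit; the sketch below uses the listed standard concentrations with assumed absorbance readings (the actual readings are not reported here) to compute R².

```python
import numpy as np

conc = np.array([0.0009, 0.0022, 0.0134, 0.0538, 0.0627])   # mmol/L standards
absorbance = np.array([0.006, 0.015, 0.090, 0.360, 0.419])  # assumed 640 nm readings

slope, intercept = np.polyfit(conc, absorbance, 1)           # linear calibration
pred = slope * conc + intercept
r2 = 1 - np.sum((absorbance - pred) ** 2) / np.sum((absorbance - absorbance.mean()) ** 2)
print(f"slope = {slope:.2f} L/mmol, R^2 = {r2:.4f}")
```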
Development of NH 3 Emission Factor
The NH3 emission factor calculation is shown in Equation (1):

$\mathrm{EF} = C_{\mathrm{NH_3}} \times 10^{-6} \times \frac{M_w}{V_m} \times \frac{Q_{day}}{FC_{day}} \qquad (1)$

The flow data needed for the development of the NH3 emission factor of sintering furnaces using COG in steel mills were taken from the CleanSYS data of the target facility, and the flow rate was measured via the daily integrated flow data. CleanSYS is an air pollution measurement network managed by the Ministry of Environment of Korea, which measures air pollutants such as NOx, SOx, and PM, as well as the flow rate and temperature of the exhaust gas, in real time. For the COG fuel usage, data from the target facility were used.
EF = C_NH3 × 10^−6 × (M_w / V_m) × Q_day × 10^−6 / FC_day, (1)

where EF is the emission factor (ton NH3/m3); C_NH3 is the NH3 concentration in the exhaust gas (ppm); M_w is the molecular weight of NH3 (constant) = 17.031 g/mol; V_m is the volume of one mole of ideal gas under standardized conditions (constant) = 22.4 × 10^−3 m3/mol; Q_day is the daily accumulated flow rate (Sm3/day) (based on dry combustion gas); and FC_day is the daily fuel consumption (m3/day). The first factor of 10^−6 converts the ppm concentration into a volume fraction, and the second converts grams into tonnes.
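A minimal Python sketch of Equation (1) is given below; it follows the variable definitions above, and the plant data in the example call are hypothetical rather than the CleanSYS values used in the study.

```python
M_W = 17.031   # g/mol, molecular weight of NH3
V_M = 22.4e-3  # m^3/mol, molar volume of an ideal gas at standard conditions

def nh3_emission_factor(c_ppm, q_day, fc_day):
    """Emission factor in ton NH3 per m^3 of COG burned.

    c_ppm  -- NH3 concentration in the exhaust gas (ppm)
    q_day  -- daily accumulated dry flue-gas flow (Sm^3/day, from CleanSYS)
    fc_day -- daily COG consumption (m^3/day, from plant records)
    """
    grams_per_day = c_ppm * 1e-6 * (M_W / V_M) * q_day  # g NH3 per day
    return grams_per_day * 1e-6 / fc_day                # ton NH3 per m^3 fuel

# Illustrative call with hypothetical plant data:
print(nh3_emission_factor(c_ppm=0.06, q_day=5.0e6, fc_day=4.0e4))
```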
Uncertainty Analysis by Monte Carlo Simulation
In this study, Monte Carlo simulation was used for the uncertainty analysis of the NH3 emission factor of the sintering furnace using COG at the steel works. Monte Carlo simulation is a method for analyzing uncertainties from probability density functions through the generation of random numbers, and it is widely used for uncertainty analysis in environmental science, especially for greenhouse gases [19,20]. The Monte Carlo analysis proceeds in four stages, as shown in Figure 2. The first step is the selection of the model and the construction of the NH3 emission factor calculation worksheet. Second, the probability density functions of the input variables required for the development of the NH3 emission factor were determined through goodness-of-fit tests; the confidence level for hypothesis testing was set at 95%, and a probability density function was fitted to each data series, such as the NH3 emission concentration and the emission flow needed for the development of the NH3 emission factor. The third step is to perform the Monte Carlo simulation; random sampling was carried out using Monte Carlo simulation software, ver. 11.1.2.4 (Oracle Crystal Ball, Oracle, Redwood City, CA, USA). The fourth step is to calculate the uncertainty range of the 95% confidence interval from the simulation results.
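The sketch below mirrors steps two to four using NumPy in place of Crystal Ball and reuses the nh3_emission_factor function from the sketch after Equation (1); the three input distributions are hypothetical stand-ins for the probability density functions fitted to the measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of random draws

# Hypothetical fitted input distributions:
c_ppm = rng.lognormal(mean=np.log(0.06), sigma=0.5, size=N)  # NH3 concentration
q_day = rng.normal(5.0e6, 2.5e5, size=N)                     # flue-gas flow
fc_day = rng.normal(4.0e4, 2.0e3, size=N)                    # COG consumption

ef = nh3_emission_factor(c_ppm, q_day, fc_day)  # vectorized Equation (1)

mean = ef.mean()
lo, hi = np.percentile(ef, [2.5, 97.5])  # bounds of the 95% confidence interval
print(f"uncertainty range: {100 * (lo - mean) / mean:+.1f}% "
      f"to {100 * (hi - mean) / mean:+.1f}%")
```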
NH3 Emission Concentration at Iron and Steel Production Facility
For the NH3 concentration analysis, the sintering furnace of a steel production facility was visited three times, and 10 samples were collected. The NH3 concentration analysis results are shown in Table 2. For the sintering furnace using COG in the steel production facility, the measured concentrations ranged between 0.02 ppm and 0.12 ppm, with an average concentration of 0.06 ppm. The standard deviation of the NH3 concentration was 0.04 ppm. The daily average concentration was 0.05 ppm on the first day and 0.07 ppm on the second and third days, so the difference between daily averages was not large.
NH3 Emission Factor at the Iron and Steel Production Facility

The NH3 emission factor of the sintering furnace using COG in the steel production facility was developed by combining the NH3 concentration measured in this study with data received from the measured facility, and the results are shown in Table 3. The resulting NH3 emission factor of the sintering furnace using COG in the steel production facility was 0.0061 ton NH3/m3. Currently, Korea's national statistics do not apply an NH3 emission factor depending on COG usage, and there are no comparable data, as no relevant overseas studies were found. Therefore, to confirm the level of the developed emission factor value, we compared it with the LNG NH3 emission factor from the EPA among the emission factors currently applied to sintering furnaces in Korean steel mills. By this comparison, the value was about 20% higher than the EPA emission factor, which is 0.0051 ton NH3/m3.

Table 3. NH3 emission factor of the investigated iron and steel production facility.
NH3 emission factor of COG (this study): 0.0061 ton NH3/m3
NH3 emission factor of LNG (US EPA): 0.0051 ton NH3/m3
Based on these results, it is necessary to develop NH3 emission factors according to the use of COG, and they should be included in the NH3 inventory as a missing source.
Uncertainty of the NH3 Emission Factor at the Iron and Steel Production Facility

The Monte Carlo simulation developed in this study was used to estimate the uncertainty of the NH3 emission factor of the sintering furnace using COG in the steel production facility, and the calculation results are shown in Figure 3.

The probability density function of the NH3 emission factor developed in this study was analyzed as a log-normal distribution. The analysis gave a mean of 0.00613 ton NH3/m3; at the 95% confidence level, the bottom 2.5% corresponded to 0.00543 ton NH3/m3, whereas the top 97.5% corresponded to 0.00692 ton NH3/m3. The uncertainty range of the NH3 emission factor calculated from these values was −11.4% to +12.89% at the 95% confidence level. As NH3 uncertainties are not currently reported as numerical values or ranges, the comparison of related cases is difficult; greenhouse gas uncertainties, by contrast, are reported as uncertainty ranges and values.
Conclusions
Korea's PM2.5 concentration level was 24 µg/m3 in 2018, the second-poorest air quality among the OECD countries, after Chile. One of the causes is secondary fine particle formation, and NH3 is one of the main precursors of secondary PM2.5. If the inventory reliability is improved and managed properly, it may contribute to the reduction of PM2.5.
As the by-product gas generated in the steel industry has recently been utilized as a means of reducing greenhouse gases and air pollution, NH3 emissions can occur from by-product gas combustion. Currently, there are only a few studies on NH3 emissions from the use of by-product gas, and no relevant statistics have been collected. Therefore, in this study, the NH3 concentration in the exhaust gas of a steel sintering furnace using COG, the corresponding emission factor, and the uncertainty of the emission factor estimate were determined in order to identify this missing source.
By measuring the NH3 concentration in the exhaust gas of the steel sintering furnace using COG, concentrations between 0.02 ppm and 0.12 ppm were found, with an average of 0.06 ppm, confirming the emission of NH3. Using this measured NH3 concentration, an NH3 emission factor of 0.0061 ton NH3/m3 was derived. Comparing the developed COG NH3 emission factor with the EPA's NH3 emission factor for LNG, the value was approximately 20% larger than the EPA's emission factor (0.0051 ton NH3/m3), indicating the need to develop an NH3 emission factor for the use of COG.
The uncertainty of the developed NH3 emission factor of the sintering furnace using COG was analyzed using a Monte Carlo simulation. The uncertainty range of the NH3 emission factor was derived to be −11.4% to +12.89% at the 95% confidence level. Compared with the IPCC greenhouse gas uncertainty range, this is lower than the ±25% uncertainty range for the basic emission factor of steel production facilities proposed by the 2006 IPCC Guidelines. In Korea, the uncertainty of air pollutants is evaluated using the rank method based on expert opinion, as suggested by the US EPA [21,22]. If the uncertainties of air pollutants were presented in the same way as the uncertainty ranges of greenhouse gases, they could be evaluated quantitatively.
According to the results of this study, NH3 emissions occur from the use of COG, with values higher than the NH3 emission factor for the use of LNG in combustion facilities; thus, there is the possibility of a missing source. Therefore, it is necessary to develop emission factors by measuring NH3 concentrations at more facilities and for other by-product gas fuels in order to establish statistics related to the use of by-product gas fuels.
In addition to by-product gas, discovering and supplementing missing emission sources related to NH3 emissions would significantly improve the reliability of the NH3 inventory in Korea. Furthermore, if the reliability of the NH3 inventory were improved, it would be of great help in establishing PM2.5 reduction policies.
In Korea, studies on ammonia, which contributes to the secondary generation of PM2.5 and had previously received little attention, have recently been conducted as the PM2.5 problem has become serious. Recent work has evaluated the ammonia emission factor and its uncertainty for bituminous coal-fired power plants, and the present study has developed an emission factor for ammonia generated by steel companies among the ammonia emission sources and quantitatively calculated its uncertainty. In the case of COG, although ammonia is currently being discharged, it is still necessary to collect the related activity data and establish an emission inventory. If future studies develop country-specific emission factors for Korea's ammonia emission sources, inventory reliability can be improved and quantitatively evaluated uncertainty data can be accumulated.
"Materials Science",
"Environmental Science"
] |
Machine learning for chemical discovery
Discovering chemicals with desired attributes is a long and painstaking process. Curated datasets containing reliable quantum-mechanical properties for millions of molecules are becoming increasingly available. The development of novel machine learning tools to obtain chemical knowledge from these datasets has the potential to revolutionize the process of chemical discovery. Here, I comment on recent breakthroughs in this emerging field and discuss the challenges for the years to come.
Chemical discovery and ML are bound to evolve together, but achieving true synergy between them requires solving many outstanding challenges. The potential of using ML for increasing the accuracy and efficiency of molecular simulations has been established beyond any doubt [3][4][5][6] . Data-driven high-throughput materials discovery has also been established as a field of its own 7 . Physically inspired ML algorithms can identify new drug candidates 8 , find new phases in amorphous materials 9 , carry out molecular dynamics with essentially exact quantum forces 10 , and offer unprecedented statistical insights into chemical environments 11,12 . Up to now, most of these applications were done under idealized conditions. Future work should concentrate on enabling tighter embedding of molecular simulations and ML methods, combining QM and statistical mechanics via ML algorithms, developing universal ML approximations for covalent and non-covalent molecular interactions, and developing algorithms for targeted exploration of large chemical spaces. Obviously, all of these advances should be continuously assessed on growing community-curated datasets of microscopic and macroscopic molecular properties.
From molecular big data to chemical discovery

The quality and reliability of ML models in any scientific domain depend on the increasing availability of data. The first applications of ML to molecular and materials modeling in 2010-2012 relied on small datasets containing QM properties for 10^2-10^3 systems. The development of physics-inspired ML models and sophisticated atomistic descriptors has been crucial for increasing the predictive power of ML models by at least two orders of magnitude in the past 8 years 3 - an incredible scientific advance. Today, advanced ML models are capable of achieving predictive accuracy in the QM properties of large molecular datasets by learning from just 1 to 2% of the data 3. Such data efficiency and accuracy are essential for enabling in silico chemical discovery.
Recently, focus has been shifting towards constructing and exploring increasingly larger chemical spaces. Datasets such as QM9 13, ANI-1x 14, and QM7-X 15 contain QM properties for up to 10^7 molecular structures and enable essentially complete coverage of the chemical space of small drug-like molecules. These data have been used in many applications, for example to construct fast-to-evaluate neural network potentials for small molecules 11,16, develop improved semiempirical quantum methods 17,18, and obtain new insights into the partitioning of molecular quantum properties into atomic and fragment-based contributions 11,12.
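As a toy illustration of such data-efficient property learning, the sketch below trains a kernel ridge regression model on 2% of a synthetic descriptor dataset and evaluates it on the rest; the descriptors, the target property, and the hyperparameters are placeholders, not those used with QM9 or the cited models.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

# Stand-in data: rows are molecular descriptors (e.g. sorted Coulomb-matrix
# eigenvalues), targets a QM property such as the atomization energy.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 30))
y = X @ rng.normal(size=30) + 0.1 * rng.normal(size=5000)  # synthetic property

# Train on a small fraction of the data, as in the data-efficiency claims above
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.02, random_state=0)

model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=0.05)
model.fit(X_tr, y_tr)
mae = np.abs(model.predict(X_te) - y_te).mean()
print(f"MAE on held-out molecules: {mae:.3f}")
```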
Another unique application of ML for molecular modeling is ML-driven molecular dynamics simulations. ML force fields are able to combine the accuracy of high-level QM with the efficiency of classical force fields. For example, the gradient-domain ML force fields enable MD simulations of small molecules with essentially exact quantum treatment of both electrons and nuclei 10 -a task which was considered unattainable just a few years ago. For elemental solids, Gaussian approximation potentials (GAP) 19 are nowadays used to carry out MD simulations of unit cells with thousands of atoms and to obtain new insights into, for example, amorphous states of matter 9 .
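The generic structure of such ML-driven dynamics can be sketched as a velocity Verlet integrator that queries an arbitrary force callable at every step; a trained GAP or gradient-domain model would be wrapped in force_fn, while here a harmonic toy force stands in and all units and parameters are arbitrary.

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, masses, dt=0.5, n_steps=1000):
    """Integrate Newton's equations with forces from any fitted model.

    force_fn(pos) -> forces with the same shape as pos; in practice this
    would wrap a trained ML force field.
    """
    f = force_fn(pos)
    traj = [pos.copy()]
    for _ in range(n_steps):
        vel += 0.5 * dt * f / masses[:, None]
        pos += dt * vel
        f = force_fn(pos)
        vel += 0.5 * dt * f / masses[:, None]
        traj.append(pos.copy())
    return np.array(traj)

# Toy stand-in for an ML force field: a harmonic well centered at the origin.
harmonic = lambda r: -0.1 * r
trajectory = velocity_verlet(pos=np.random.randn(8, 3),
                             vel=np.zeros((8, 3)),
                             force_fn=harmonic,
                             masses=np.ones(8))
print(trajectory.shape)  # (n_steps + 1, 8, 3)
```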
Both wide exploration of chemical space and long time-scale MD simulations for single molecules are enabling tools for chemical discovery. Another important application of ML is inverse design of molecules with targeted properties. Ultimately, ML should also enable in silico guided discovery of novel molecules and materials and confirm such discoveries with experimental data. Indeed, successful ML-driven discoveries have been made in the search for organic light-emitting diodes 20 , redox-flow batteries 21 , and antibiotics 22 , among many other examples. The most remarkable aspect of ML for chemical discovery is that the corresponding statistical view on chemical space often enables asking new questions and obtaining novel insights. The holistic analysis of large swaths of chemical space leads to discoveries of molecules with unexpected properties 12 , offers hints for new chemical reaction mechanisms 23 , or even suggests new physicochemical relations 24,25 . Such novel discoveries are often made by interdisciplinary teams of researchers that are able to synergetically combine their knowledge of physical laws and constraints, chemical intuition, and sophisticated ML algorithms.
Future of ML for chemical discovery

Current successful applications of ML for chemical discovery have only scratched the surface of possibilities. There are many conceptual, theoretical, and practical challenges waiting to be solved to enable the "chemical discovery revolution". Here I discuss the challenges that I consider to be the most pressing and interesting at this moment.
A universal ML approach should have the capacity to accurately predict both energetic and electronic properties of molecules. In addition, such an approach should uniformly describe compositional (chemical arrangement of atoms in a molecule) and configurational (physical arrangement of atoms in space) degrees of freedom on equal footing. Most existing ML approaches only describe a restricted subset of relevant degrees of freedom and physicochemical observables. Further progress in this field requires developing universal ML models for a diverse set of systems and physicochemical properties shown in Fig. 1.
From the perspective of atomic interactions, current ML representations are successful in describing local chemical bonding, but they completely miss long-range electrostatics, polarization, and van der Waals dispersion interactions. Combining intermolecular interaction theory with ML is an important direction for future progress towards studying complex molecular systems.
An emerging idea is to combine ML with approximate Hamiltonians for electronic interactions based on density-functional theory, tight-binding, molecular orbital techniques, or the many-body dispersion method. The ML approach is used to predict Hamiltonian parameters, and the quantum-mechanical observables are calculated via diagonalization of the corresponding Hamiltonian. The challenge is to achieve tighter integration between ML and approximate Hamiltonians and to find an appropriate balance between prediction accuracy and computational efficiency.
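A minimal sketch of this division of labor is given below: a stand-in "ML model" maps interatomic distances to tight-binding hopping parameters, and the observables then come from diagonalizing the assembled Hamiltonian rather than from the ML model directly. The exponential functional form, its parameters, and the toy chain geometry are illustrative assumptions.

```python
import numpy as np

def predict_hoppings(distances, params):
    # Stand-in for an ML model mapping geometry to tight-binding hoppings;
    # here a simple exponential distance dependence with parameters (a, b).
    a, b = params
    return -a * np.exp(-b * distances)

# Toy 1-D chain of 6 sites with nearest-neighbour distances perturbed from 1.0
dists = 1.0 + 0.05 * np.random.default_rng(1).normal(size=5)
t = predict_hoppings(dists, params=(1.0, 2.0))  # "ML-predicted" parameters

# Assemble and diagonalize the tight-binding Hamiltonian; orbital energies
# follow from the eigendecomposition, i.e. from quantum mechanics, not ML.
H = np.diag(t, k=1) + np.diag(t, k=-1)
print(np.linalg.eigvalsh(H))
```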
Validation of ML predictions ultimately requires comparison to experimental observables, such as reaction rates, spectroscopic observations, solvation energies, melting temperatures, among other relevant quantities. Calculating these observables demands a tight integration of QM, statistical simulations, and fast ML predictions, all integrated in a comprehensive molecular simulations framework 6 .
Solving many of the challenges posed above will require coming up with creative interdisciplinary approaches combining quantum and statistical mechanics, chemical knowledge, and sophisticated ML tools, firmly based on growing datasets that cover increasingly broader domains of the vast chemical space.
"Computer Science"
] |
Chemical-neuroanatomical organization of peripheral sensory-efferent systems in the pond snail (Lymnaea stagnalis)
Perception and processing of chemical cues are crucial for aquatic gastropods for the proper elaboration of adaptive behavior. The pond snail, Lymnaea stagnalis, is a model species of invertebrate neurobiology in which peripheral sensory neurons with different morphology and transmitter content have partly been described, but we have little knowledge regarding their functional morphological organization, including their possible peripheral intercellular connections and networks. Therefore, the aim of our study was to characterize the sensory system of the tentacles and the lip, as primary sensory regions, and the anterior foot of Lymnaea, with special attention to the transmitter content of the sensory neurons and their relationship to extrinsic elements of the central nervous system. Numerous bipolar sensory cells were demonstrated in the epithelial layer of the peripheral organs, displaying immunoreactivity to antibodies raised against tyrosine hydroxylase, histamine, glutamate and two molluscan-type oligopeptides, FMRFamide and Mytilus inhibitory peptide. A subepithelial plexus was formed by extrinsic serotonin and FMRFamide immunoreactive fibers, whereas in deeper regions axon processes of different origin with various immunoreactivities formed networks, too. HPLC-MS assay confirmed the presence of the low molecular weight signal molecules in the three examined areas. Following double-labeling immunohistochemistry, close arrangements were observed, formed by sensory neurons and extrinsic serotonergic (and FMRFamidergic) fibers at axo-dendritic, axo-somatic and axo-axonic levels. Our results suggest the involvement of a much wider repertoire of signal molecules in the peripheral sensory processes of Lymnaea, which can be locally modified by central input, hence directly influencing the responses to environmental cues.
Introduction
The capturing and subsequent interpretation of external signals from the surroundings are pivotal for optimal adaptation in the animal kingdom, including invertebrates. After arthropods, mollusks are the second most important phylum among invertebrates, represented by about one hundred thousand different species, in which the largest class, the gastropods, includes a number of species, such as the opisthobranchs Aplysia californica and Tritonia diomedea or the pulmonates Lymnaea stagnalis, Helix pomatia and Limax maximus, which are well-known model animals for neuroscience research (Chase 2002). These species possess well-developed sensory organs, primarily the tentacles (rhinophores) and lip, although other anatomical regions such as the mantle edge and the body surface are also supplied with sensory elements (Chase 2001). These organs serve chemo- and/or mechanosensation. In addition, aquatic pulmonate species possess a specific chemosensory organ, the osphradium, whereas the statocysts serve for gravireception in all gastropod species. Because of their poor visual perception, the role of chemoreception in collecting even distant information from the surroundings is essential. Most of the sensory elements responsible for catching chemical cues, and so playing an initiating role in elaborating different behaviors like food-finding, foraging, mating, escape or avoidance, are concentrated in the tentacles and lip.
There are a number of reports dealing with peripheral information processing by sensory neurons located in the gastropod central nervous system (CNS) (Kandel 1976, 1979; Chase 2001). Furthermore, early and recent neuroanatomical studies, including ultrastructural investigations, have described the presence, distribution, morphology and organization of sensory cells in the periphery of several gastropod species. In Helix, bipolar sensory neurons were visualized in the epithelial and subepithelial layers of the anterior tentacles, lips and foot (Benedeczky 1978, 1994; Hernádi 1981, 1982). Zaitseva and Bocharova (1981) classified six different types of sensory neurons in the head regions of Helix and Viviparus, based on their location, anatomy, presence and number of cilia and microvilli, and their central projections. In the tentacles of Limax, Ito et al. (2000) identified three sensory neuron subtypes, namely the round, the spindle-shaped and the small ones. In terrestrial snails (e.g. Helix, Achatina) and Limax, sensory cells were shown to project mainly to the tentacular ganglion and then enter the procerebrum, while a small part of them reached the CNS directly (Chase 2001, 2002; Chase and Tolloczko 1993; Ierusalimsky and Balaban 2010). Nevertheless, the organization of the rhinophores in Aplysia was found to be slightly different. The bipolar sensory cells projected first to olfactory glomeruli located beneath the sensory epithelium, which were connected thereafter to the rhinophore ganglion (Wertz et al. 2006). The peripheral nervous system of the pond snail Lymnaea stagnalis has also been studied in detail, regarding the types of sensory neurons, the cellular organization of the sensory system and the possible role of the peripheral structures in forwarding information to the CNS. Several types of sensory dendrites were distinguished, depending on the presence or lack, as well as the number and position, of cilia (Zylstra 1972a, b; Roubos and Van der Wal Divendal 1982; Dorsett 1986; Chase 2002). Recently, four different types of ciliated bipolar sensory neurons were distinguished by Wyeth and Croll (2011) in the cephalic sensory region containing the lip and tentacles, based on the position and form of the sensory dendrites, the clustering of the cell bodies, and the presence or lack of the sensory axon. By applying histo- and immunohistochemical methods, dopamine (DA/tyrosine hydroxylase [TH]), histamine (HA), glutamate (Glu), nitric oxide (nitric oxide synthase [NOS]/dihydronicotinamide adenine dinucleotide phosphate diaphorase [NADPHd]) and the oligopeptide FMRFamide (Fa) were demonstrated in the sensory neurons of different peripheral regions of gastropods, including the cephalic sensory organs (tentacles, lip), the foot and the mantle (TH/DA: Lymnaea, Voronezhskaya et al. 1999; Croll et al. 1999; Wyeth and Croll 2011; Aplysia, Croll 2001; Phestilla, Croll et al. 2003; Pleurobranchea, Faller et al. 2008; Brown et al. 2018; Biomphalaria, Vallejo et al. 2014; HA: Lymnaea, Helix, Hegedűs et al. 2004; Lymnaea, Wyeth and Croll 2011; Biomphalaria, Habib et al. 2015; Glu: Lymnaea, Hatakeyama et al. 2007; NOS/NADPHd: Lymnaea, Elphick et al. 1995; Serfőző et al. 1998; Wyeth and Croll 2011; Aplysia, Moroz 2006; Fa: Limax, Suzuki et al. 1997; Aplysia, Wollesen et al. 2007). The possible transmitter content was also correlated with the morphology of the sensory cells in the cephalic sensory organs of Lymnaea (Wyeth and Croll 2011).
Although the sensory cell types and a good part of their neurotransmitter content have been described in the cephalic region of gastropods, including Lymnaea (see above), the spatial organization and the functional-anatomical relationship between neurochemically different peripheral sensory structures and efferent elements originating from the CNS have not yet been investigated. Therefore, the aim of our present study was, first, to perform a detailed chemical-neuroanatomical analysis following both single- and double-labeling immunohistochemistry to widen our knowledge on the presence of signaling molecules in sensory cells of the pond snail, Lymnaea stagnalis. In the course of this, aminergic (DA, HA), amino acidergic (Glu) and peptidergic (Fa, MIP) afferent components were visualized in the sensory epithelium. Next, the possible functional-anatomical relationship of the sensory elements to other neuronal components of extrinsic (central) origin was analyzed, focusing on the serotonin (5-HT) immunoreactive (5-HT-IR) elements. 5-HT is perhaps the most studied signaling molecule in the gastropod nervous system (Walker 1986; Walker et al. 2009; Gillette 2006). 5-HT-containing processes have been regarded as a major extrinsic component of central origin in the gastropod periphery, and they have been identified in both somatic and visceral regions. As to the innervation of the cephalic area, the 5-HTergic cerebral giant neuron was shown to play a key role (Pentreath and Cottrell 1974; Gillette 1991; Chase 2002). The role of 5-HTergic pedal neurons was also demonstrated in the innervation of the foot, including the ciliary movement of Lymnaea (Syed et al. 1988; Syed and Winlow 1989; McKenzie et al. 1998). The presence of 5-HT in peripheral nerve cells was only demonstrated in transient apical (sensory) cells of gastropod embryos (Marois and Croll 1992; Kempf et al. 1997; Marois and Carew 1997; Page and Parries 2000; Voronezhskaya et al. 2004). Our investigations were also coupled with an HPLC-MS assay to quantify and support the presence of small molecular weight neurotransmitters (5-HT, DA, HA, Glu) in the cephalic organs.
Animals
Adult specimens of the pond snail Lymnaea stagnalis were used for the experiments. The animals were collected from the Kis-Balaton reservoir and other inlets, then maintained in aquaria supplied with oxygenated Balaton water under a 16:8 h light-dark cycle at room temperature (~16-20 °C) and fed lettuce ad libitum.
All procedures were performed according to the protocols approved by the Scientific Committee of Animal Experimentation of the Balaton Limnological Institute (VE-I-001/0189010/2013).
Immunohistochemical procedure
Sixteen-µm-thick serial sections were cut on a cryostat (Leica Jung 1800) and mounted on chrome alum-gelatin-coated slides. The immunolabeling was accomplished by a two-step indirect immunofluorescence technique as follows. Sections were rinsed several times in PBS, followed by blocking in PBS containing 0.25% bovine serum albumin (BSA, Sigma) and 0.25% Triton X-100 (Sigma) (PBS-TX-BSA) for 1 h at 4 °C. After that, sections were incubated for 24 h at 4 °C with the different primary antibodies (Table 1) diluted in PBS-TX-BSA. Following washing in PBS-TX, the sections were incubated with a secondary antibody (Table 1) diluted in PBS-TX-BSA overnight at 4 °C, then mounted in a 3:1 mixture of glycerol and PBS. In the case of double labeling, the sections were incubated for 24 h at 4 °C with the appropriate combination of the monoclonal anti-5-HT and one of the polyclonal primary antibodies. This was followed first by an incubation with an anti-mouse secondary antibody and then by a second incubation with an anti-rabbit secondary antibody, each for 6 h at 4 °C. Finally, the sections were mounted in a 3:1 mixture of glycerol and PBS. The sections were viewed in a Leica TCS SP8 confocal laser scanning microscope (Leica Microsystems, Germany) equipped with the appropriate wavelength-filter configuration settings. The necessary number of optical sections (15-30) with a 0.5-0.8 μm step size was acquired to capture all visualized details. Image processing was performed with the LasX software (Leica Microsystems, Germany).
Sample preparation
The transmitter (5-HT, DA, HA, Glu) content of different peripheral tissues (tentacle, lip, foot) and, for comparison, that of the whole CNS was measured. For the extraction of the signal molecules, acetonitrile (50 µL/mg) containing 0.1% formic acid and 0.01 m/v% dithiothreitol was applied. Dopamine-1,1,2,2-d4 HBr (Sigma-Aldrich) was added to the tissues as an internal standard, at a final concentration of 100 ng/mL. Thereafter, the tissues were homogenized and sonicated with a high-energy ultrasonicator UIS250V (Hielscher Ultrasound Technology) at 6 × 10 s, with ice cooling between cycles. Samples were then vortex-mixed and centrifuged (Heraeus Biofuge Pico, Thermo Fisher Scientific) at 10,000 rpm for 5 min. The supernatants were transferred into clean tubes, and the solvents were evaporated with a SpeedVac concentrator (Eppendorf Life Sciences) at room temperature. The samples were dissolved in 150 µL ultra-pure water containing 0.1% formic acid and loaded into autosampler vials for the HPLC-MS measurements.
Measurements
Analyses were performed with an Ultimate 3000 micro HPLC system (Dionex, Sunnyvale, USA) coupled to a Q Exactive UHR mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA). For the separations, gradient elution (Maasz et al. 2017) was performed on a Kinetex PFP column (100 mm × 2.1 mm i.d., particle size 2.6 µm, Phenomenex, Torrance, USA) equipped with a security guard column. The mass spectrometer, equipped with a HESI source, was used in positive ion mode for mass detection. Parent ion scan (SIM, single ion monitoring) and fragment ion scan (MS/MS) modes were used for the selective and sensitive detection of the analytes. The most intense precursor-to-fragment transitions were used for quantitative analysis; for DA: 154.
Quantification
Five-point calibration curves were made for the quantitative analysis, using 10.0, 50.0, 100.0, 500.0 and 1000.0 pmol/mL monoamine and Glu standards. Correlation coefficients (r2) were between 0.9674 and 0.9950 for all acceptable calibration curves in both parent ion scan (MS) and fragment ion scan (MS/MS) modes (not shown). The limit of detection and the limit of quantification were between 2.9-6.5 pmol/mL and 5.8-9.4 pmol/mL, respectively. The signal molecules were identified in the tissue homogenates by their exact molecular weight and by their fragments (mentioned above). The quantification was performed in parallel in both MS and MS/MS modes.
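The quantification arithmetic can be sketched as below: a least-squares calibration line over the five standards, an r2 check against the range reported above, and an internal-standard recovery correction. The peak areas and recovery values are invented for illustration; only the standard concentrations and the r2 acceptance range come from the text.

```python
import numpy as np

# Five-point calibration (standards in pmol/mL) with hypothetical peak areas
conc = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])
area = np.array([820.0, 4100.0, 8300.0, 40900.0, 81500.0])  # illustrative

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"r2 = {r2:.4f}")  # acceptable curves in the study fell in 0.9674-0.9950

# Quantify an unknown from its peak area, correcting by the deuterated
# internal standard to compensate for extraction losses.
sample_area, is_area, is_expected = 12500.0, 9800.0, 10000.0
recovery = is_area / is_expected
print((sample_area / recovery - intercept) / slope, "pmol/mL")
```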
Immunohistochemical demonstration of signal molecules in the lip, tentacle and foot of Lymnaea
Following the application of single-labeling immunohistochemistry, 5-HT-IR, TH-IR, HA-IR, Glu-IR, Fa-IR and MIP-IR neuronal elements were present throughout the sensory region and subepithelial layer of the investigated peripheral organs, however with partly different localization and density of occurrence (Fig. 1). 5-HT immunoreactivity was bound exclusively to efferent axon processes forming a dense network at the subepithelial level and below it in deeper regions (Fig. 1A). 5-HT-IR cell bodies did not occur at all. The localization of 5-HTergic neurons is restricted to the CNS in Lymnaea stagnalis (and other gastropod species as well). The cerebral giant cell (CGC) is responsible for the 5-HTergic innervation of the cephalic regions (see "Introduction"), whereas in the case of the foot the 5-HT-containing efferent neurons are located in the pedal ganglia, although McKenzie et al. (1998) described the presence of a few 5-HT-IR cells in the foot. The other immunolabeled (TH-IR, HA-IR, Glu-IR, Fa-IR, MIP-IR) elements were represented partly by bipolar sensory neurons and partly by axon processes, forming either a subepithelial plexus or running in deeper regions beneath the plexus (Figs. 1B-H, 2C, 3A, 4A, E). In the case of the lip and tentacle, the surface epithelium displayed a wavy form along certain segments, where the epithelial protrusions were supplied by parallel-running immunolabeled fibers (Fig. 2A).
The sensory axons were collected first into smaller bundles and then, beneath the subepithelial layer, into larger, thick ones (Figs. 3A, 4E). However, whether the latter contained only afferent (sensory) elements or efferent fibers as well could only partly be determined unequivocally (see also below). The distribution pattern and organization of the IR sensory and other neuronal elements were similar in the lip and tentacle, whereas the picture in the foot showed certain differences. In the foot, TH-IR sensory neurons occurred less frequently, whereas a TH-IR subepithelial plexus and numerous TH-IR fibers could be observed beneath the plexus (Figs. 1B, 2C). Numerous Glu-IR and MIP-IR bipolar neurons were found in the anterior region of the foot, whereas only a few HA- and Fa-IR sensory cells were present (Fig. 1E, G, H). Along certain surface segments, perpendicularly running Glu-IR and MIP-IR dendrites were lined up close to each other (Figs. 1E, H, 3D, 4F). Both Fa-IR and MIP-IR subepithelial plexuses, as well as Fa-IR and MIP-IR fiber networks, could be observed in the deeper levels of the foot (Figs. 1G, 4G).
Relationship of neuro-chemically different sensory (afferent) elements to 5-HTergic efferent innervation of central origin in the lip, tentacle and foot of Lymnaea
Double immunostaining was performed in the three peripheral organs to study the putative functional-anatomical relationship established between the sensory elements containing different signaling molecules and the 5-HT-IR efferent innervation. Following double labeling, a distinct, separate intracellular localization of the different immunolabels could be observed. Co-localization of 5-HT-IR with other immunolabeled elements was not found. In the epithelial and subepithelial layers, efferent 5-HT-IR innervation was found in four different positions related to the other five neurochemically different (TH-IR, HA-IR, Glu-IR, Fa-IR, MIP-IR) sensory structures. In all cases, the 5-HT-IR subepithelial plexus running parallel with the surface of the sensory epithelium was found in close localization with different parts (dendrite, cell body, axon) of labeled sensory neurons (e.g. Figs. 2F, 3A, D, 4G). In detail, (1) the plexus was crossed perpendicularly by sensory dendrites (Figs. 2A, B, D, E, F, 3A, 4F); (2) a single 5-HT-IR fiber projected to a sensory dendrite in the outermost layer of the sensory epithelium (Figs. 2A, inset; 4E, inset); (3) labeled sensory cell bodies were embedded in the plexus (Fig. 4E, inset); and finally (4) 5-HT-IR and other IR fibers appeared together, mixed, either running parallel (Fig. 2A) or organized in an intimate network-like arrangement of varicose fibers at the wavy protrusions of the subepithelial region in the lip and tentacle (Fig. 4B). However, in these cases, the sensory and/or efferent character of the varicose fibers occurring together in close arrangement with the efferent 5-HT-IR axons could not be defined unequivocally. When certain segments of the epithelium were characterized by a wavy appearance in the tentacle and lip, 5-HT-IR and other immunolabeled processes invaded the subepithelial area of the protrusions (Fig. 2A). Finally, 5-HT-IR elements located in the deeper regions beneath the subepithelial layer appeared in various interaction-like arrangements with any of the other five immunolabeled systems (Figs. 2C, 4A, D, E).
Close relationship between 5-HT-IR and other (sensory and non-sensory) IR structures could also be observed both in the subepithelial and deeper regions, suggesting interaction between these neuronal elements. Examples of sites of possible close intercellular contacts occurred in both regions. Also, in deeper regions beneath the subepithelial layer, 5-HT-IR varicose fibers were found located over, or running in close vicinity of, other immunolabeled thick axon bundles (Fig. 4E).

[Fig. 2 legend: TH-IR (A-C, red) or HA-IR (D-F, red) sensory elements and their relationship to 5-HT-IR elements (green) in the tentacle, lip and foot of Lymnaea. A: innervation of a subepithelial segment in the lip by parallel-running 5-HT-IR and TH-IR varicose fibers, with TH-IR sensory dendrites in the epithelial layer; inset: a TH-IR sensory neuron projecting with its dendrite to the surface near a 5-HT-IR varicose fiber. B: TH-IR dendrites and perikarya among scattered 5-HT-IR elements in the tentacle. C: dense innervation of the subepithelial layer in the foot by 5-HT-IR and perpendicularly running TH-IR fibers, with TH-IR processes projecting into the deeper region. D: HA-IR sensory cells with their dendrites embedded in a network of varicose 5-HT-IR fibers in the subepithelial layer of the tentacle. E: HA-IR sensory neurons with dendrites and axons, the latter crossing 5-HT-IR varicose fibers. F: HA-IR sensory neurons projecting with their dendrites toward the epithelial surface through the subepithelial 5-HT-IR system. Scale bars: A, D, E, F 20 μm; B 25 μm; C 30 μm; inset 10 μm.]
Concentration of signal molecules in peripheral organs (lip, tentacle and foot) of Lymnaea
Associated with our immunohistochemical investigations, HPLC-MS measurements were carried out to identify and quantify the concentrations of the different signal molecules (5-HT, DA, HA, Glu) in the foot, lip and tentacles. All the signal molecules assayed were detected in the peripheral organs, although with markedly different concentrations (ng/mg): Glu and 5-HT were detected at the highest levels, followed by DA, whereas HA was present at the lowest concentrations in the lip, foot and tentacles. As can be seen, there were orders of magnitude of difference between the concentrations of Glu and 5-HT versus those of DA and HA.
Discussion
Our present results obtained on three different peripheral organs (lip, tentacle, foot) of the pond snail, Lymnaea stagnalis, indicate the following. (1) A broad range of (partly still unidentified) signaling molecules is present in the neuronal elements of the epithelial and subepithelial area of these organs. (2) The signaling molecules visualized are possibly involved both in forwarding sensory information to the CNS and in the efferent innervation of the subepithelial and even deeper layers. (3) It is suggested that the subepithelial and deeper layers are the site of local processing and modification of the sensory information by elements of central origin, although the primary site of interactions seems to be the subepithelial layer containing the plexus. (4) The subepithelial and deeper layers may also be the site of the innervation of muscular and glandular elements, as well as the source of the 5-HTergic innervation of the ciliary cells, hence contributing both to exploring the source of the chemical and physical stimuli of the surroundings and to elaborating responses.

[Fig. 3 legend: Glu-IR (red) sensory elements and their relationship to 5-HT-IR elements (green) in the tentacle, lip and foot of Lymnaea. A: Glu-IR sensory dendrites running to the surface of the epithelial layer through the subepithelial 5-HT-IR network in the lip, with thick Glu-IR sensory axon bundles. B, C: Glu-IR sensory neurons in the tentacle, with sensory dendrites in the epithelium and perikarya and sensory axons in the subepithelial layer. D: series of Glu-IR sensory dendrites running perpendicularly to the epithelial surface across the 5-HT-IR subepithelial plexus. Scale bars: A, D 30 μm; B 40 μm; C 20 μm.]

The presence of several signal molecules (DA [TH], HA, Glu, NO) in neuronal elements located in the sensory epithelium of different peripheral regions of adult and developing Lymnaea has been described earlier using fluorescence histochemistry and/or immunohistochemical staining. Glu-containing bipolar sensory cells were described by Hatakeyama et al. (2007) and HAergic sensory elements were demonstrated by Hegedűs et al. (2004) in the lip and foot, whereas earlier and more recent data also indicated the presence of NO (NADPHd) in the lip, tentacle and foot (Serfőző et al. 1998; Wyeth and Croll 2011), and DA [TH], among others, in the foot, lip and tentacle (Wyeth and Croll 2011). In the current study, the presence of two additional signal molecules, the neuropeptides Fa and MIP, was demonstrated in sensory cells. Consequently, the pool of chemical substances involved in sensory signaling processes is wider than previously known (Wyeth and Croll 2011; Wyeth 2019). Analyzing the neuronal localization of the Fa gene encoded peptides (Benjamin and Burke 1994) in the developing, post-metamorphic Lymnaea, Voronezhskaya and Elekes (2003) demonstrated sensory neurons in the lip, mantle and foot displaying immunoreactivity to an antibody raised against EFLRIamide, a member of the encoded peptide family, but no Fa-IR sensory neurons were found. At the same time, Fa-containing sensory cells were described in the tentacular olfactory system of other gastropod species like Limax (Suzuki et al. 1997) and Aplysia (Wollesen et al. 2007).
In a preliminary study, MIP-IR bipolar sensory cells were demonstrated in the mantle edge of late, post-metamorphic Lymnaea embryos (Elekes, unpublished), whereas in a detailed immunohistochemical study performed on adults only efferent MIP-IR fibers could be visualized in the different peripheral regions. Regarding other small molecular weight transmitters found in the Lymnaea periphery, HA and Glu were also demonstrated in insect photoreceptor cells (Sarthy 2013; Nässel 2013).
Although qualitative immunohistochemistry cannot provide any information on the quantity of the signal molecules visualized in the periphery, certain conclusions can nevertheless be drawn based on the results of our HPLC assay. Of the transmitters visualized both in the sensory elements and in other neuronal components of the three peripheral organs studied, Glu was detected in the highest concentration, followed by DA. The high Glu concentration appears to be in good correlation with literature data reporting that Glu is an important neurotransmitter in the gastropod periphery (Walker 1986; Walker and Holden-Dye 1991; Kononenko and Zhukov 2005), whereas the relatively high concentration of DA may correspond well to the frequent occurrence of TH-IR elements both at the sensory and at the lower subepithelial and deeper levels. It is also to be noted that, of the three peripheral regions, the highest concentrations of the immunohistochemically visualized neurotransmitters DA, HA and Glu were detected in the tentacles, suggesting a role for these substances in transmitting sensory stimuli. The high concentration of 5-HT detected in the lip, tentacle and foot corresponds to the overall presence of the dense 5-HT-IR innervation. According to earlier literature data, there is a rich projection system of central 5-HTergic neurons to the periphery, including the cephalic regions and the foot (Walker 1986; Syed et al. 1988; Walker and Holden-Dye 1991; Chase 2002; Balog et al. 2012). McKenzie et al. (1998) also detected significantly high 5-HT and DA concentrations in the Lymnaea foot and correlated the presence of 5-HT with the demonstration of 5-HT-IR elements. The ciliated epithelial cells of the Lymnaea foot were shown to stand under 5-HTergic regulation (Audesirk et al. 1979; Syed et al. 1988). The role of 5-HT in the early embryonic rotation performed by ciliary cells in Lymnaea was also demonstrated (Diefenbach et al. 1991).
In a detailed IHC analysis, Wyeth and Croll (2011) demonstrated sensory cells containing HA, NO and catecholamines (DA), respectively, in the cephalic sensory organs (lip, tentacle) of Lymnaea. The study revealed that the three afferent signaling systems also formed a network beneath the sensory epithelium. In our present study, it has been shown that the subepithelial layer contains a dense 5-HT-IR nerve plexus running parallel with the surface epithelium, whereas another, differently organized labeled network is located below it. In the double-labeling experiments, close, nearby localization of 5-HT-IR varicose elements, known to mark elements of exclusively extrinsic (central) origin, and labeled sensory axons could be observed. This anatomical arrangement raises a possible modulatory role of central (5-HTergic) input influencing the sensory information en route, before it reaches the CNS. However, an inverse action cannot be excluded either, in which the sensory signal may exert a modulatory effect on elements of central origin. The morphological background of the possibility of peripheral modulation has earlier been revealed, for example, in the snail (Helix) visceral nerve (Elekes et al. 1985) and the crayfish (Orconectes) stretch receptors (Elekes and Florey 1987). In other marine gastropod species, e.g. Aplysia, Tritonia, Pleurobranchea, Phestilla, 5-HT-IR elements were shown to form glomerular structures in the rhinophores (Moroz et al. 1997; Wertz et al. 2006; Wollesen et al. 2007; Faller et al. 2008), while in a detailed study on different peripheral tissues of Pleurobranchea and Tritonia, Moroz et al. (1997) described both glomerular and transversally running organization of 5-HT-IR processes in sensory areas, proposing for the latter a direct modulatory role on afferent pathways. At the ultrastructural level, Gobbeler and Klussmann-Kolb (2007) have also demonstrated the presence of glomeruli in the cephalic sensory organs of different opisthobranch species, responsible for processing the sensory information; meanwhile, like us, neither Zylstra (1972a, b) nor Zaitseva and Bocharova (1981) reported the presence of subepithelial glomerular structures or peripheral ganglia in Lymnaea.
In our present investigations, Fa-IR elements were also observed contributing to the subepithelial plexus, even if the density of this plexus was considerably lower compared to that established by 5-HT-IR fibers. The presence of Fa-containing axon processes and the role of Fa (and other oligopeptides) were demonstrated in the periphery of Lymnaea and closely related species (Planorbis, Helisoma) (Schot and Boer 1982; Sonetti et al. 1988; Bulloch et al. 1988; Buckett et al. 1990; Voronezhskaya and Elekes 2003; see also Walker et al. 2009). In addition, Glu, HA and MIP cannot be excluded either as signal molecules functioning in central efferents, forming connections with the sensory and other components (muscle fibers and gland cells) of the cephalic organs and the foot. Previous immunohistochemical studies delivered evidence that HA-IR, Glu-IR and MIP-IR fibers innervate several peripheral organs, including the lip and foot of Lymnaea (Hegedűs et al. 2004; Hatakeyama et al. 2007). Although not in Lymnaea but in Helix, immunogold electron microscopic studies demonstrated close but unspecialized neuromuscular and neuroglandular membrane contacts established by Fa-IR (Elekes and Ude 1994) and MIP-IR (Elekes 2000) axon varicosities. It seems that the efferent innervation of the subepithelial and deeper regions in the studied peripheral organs consists of a rich combination of signaling systems, supposedly involved in a complex form of local regulatory processes.
Recently, Wyeth (2019) reviewed the neuronal (sensory-motor) background of olfactory navigation in aquatic gastropods. Two ways of processing sensory information were suggested (see Fig. 7 in Wyeth 2019): one entering the CNS directly, and another connected to a peripheral ganglion (or glomeruli), from where, via inserted interneurons, efferent (motor) output is sent to the muscle system and the ciliary epithelial surface. In contrast, based on our observations, summarized in Fig. 6, local subepithelial and deeper networks seem to be potential sites of sensory information processing in the tentacle, the lip and the foot of Lymnaea, without the involvement of a peripheral ganglion or subepithelial glomeruli. These types of local networks, including interaction between the sensory and efferent elements, may ensure fast and definitive responses to different sensory stimuli. In a recent study on Lymnaea, Vehovszky et al. (2019) demonstrated that following the application of the allelochemical tannic acid, both afferent and efferent peripheral functions were affected. Feeding activity was reduced by blocking the sensory pathway, and locomotion activity was also inhibited, supposedly also through the sensory afferents. These observations may also point to the important role of local (peripheral) elements in processing and/or modulating sensory information and in the execution of appropriate efferent responses to various environmental cues. Peripheral circuits involved in sensory-motor processes were also suggested by Peretz et al. (1976) and Croll (2003) in the case of the Aplysia gill withdrawal reflex. Croll (2003) also presented a scheme for the complex central and peripheral innervation of the siphon and gill, in which two types of catecholaminergic neurons were demonstrated: one a bipolar sensory cell and the other a multipolar cell. Each projected to CNS motoneurons, but the multipolar cell also ended on the catecholaminergic sensory cells, hence forming a subepithelial modulatory level of the sensory information. This scheme, however, differs from that suggested by us for Lymnaea, in which no peripheral multipolar cells with any kind of signal molecule content were detected.
"Biology",
"Chemistry"
] |
Infrared and Visible Image Fusion Based on Iterative Control of Anisotropic Diffusion and Regional Gradient Structure
To improve the fusion performance of infrared and visible images and effectively retain the edge structure information of the image, a fusion algorithm based on iterative control of anisotropic diffusion and regional gradient structure is proposed. First, the iterative control operator is introduced into the anisotropic diffusion model to effectively control the number of iterations. Then, the image is decomposed into a structure layer containing detail information and a base layer containing residual energy information. According to the characteristics of the different layers, different fusion schemes are utilized. The structure layer is fused by combining the regional structure operator and the structure tensor matrix, and the base layer is fused through the Visual Saliency Map. Finally, the fusion image is obtained by reconstructing the structure layer and the energy layer. Experimental results show that the proposed algorithm can not only effectively deal with the fusion of infrared and visible images but also has high computational efficiency.
Introduction
In recent years, UAVs have played an increasingly important role in many fields due to their high flexibility, low cost, and easy operation, and they are often used for battlefield reconnaissance, battle situation assessment, target recognition, and tracking in the military. Image sensors in UAVs can now acquire multiple types of images, such as multispectral images, visible images, and infrared images [1]. However, due to the limitation of environmental conditions such as light, imaging with only one sensor will be affected by certain factors and cannot meet the requirements of practical applications. The combination of multiple imaging sensors can overcome the shortcomings of a single sensor and obtain more reliable and comprehensive information. The imaging sensors commonly used in UAVs are infrared sensors and visible sensors. Infrared sensors use the principle of thermal radiation to obtain images in which infrared targets appear prominent, but the targets are not clear and the edges are blurred [2]. Visible sensors use the principle of light reflection to obtain clear images with rich details, but under low-visibility conditions these images have limitations. Research has found that the effective combination of infrared and visible images can yield a more comprehensive and accurate description of the scene or target, which provides strong support for subsequent task processing [3].
The methods most widely used in the field of infrared and visible image fusion can be roughly classified into MST-based methods [4], sparse-representation-based methods [5], spatial-domain-based methods [6], and deep-learning-based methods [7]. At present, the most researched and applied methods are MST-based methods, including the wavelet transform [8], the Laplacian pyramid transform [9], the nonsubsampled shearlet transform [10], and the nonsubsampled contourlet transform [11]. These methods decompose the source images at multiple scales, fuse them separately according to certain fusion rules, and finally obtain the fusion result through the inverse transformation, which can extract the salient information in the images and achieve good performance. For example, the nonsubsampled contourlet transform is utilized by Huang et al. [11] to decompose the source images and obtain a precise decomposition. However, due to the lack of spatial consistency in traditional MST methods, structural or brightness distortion may appear in the result.
In addition, image fusion methods with edge-preserving filtering [12] are also receiving attention. Edge-preserving filtering can effectively reduce the halo artifacts around the edges in the fusion results while retaining the edge information of the image contour, and it has good visual performance. Popular methods are mean filtering [13], bilateral filtering [14], joint bilateral filtering [15], and guided filtering [16]. These methods complete the decomposition according to the spatial structure of the images to achieve spatial consistency, so as to smooth the texture while preserving edge detail information. For example, Zhu et al. [16] proposed a novel fast single-image dehazing algorithm that uses guided filtering to decompose the images, and it obtained good performance. Edge-preserving fusion algorithms maintain spatial consistency and effectively mitigate fusion image distortion and artifacts, but they have certain limitations: (1) they can introduce detail "halos" at the edges; (2) when the input images and the guide images are inconsistent, the filtering becomes insensitive or even fails; and (3) it is difficult to meet the requirements of fusion performance, time efficiency, and noise robustness simultaneously.
Inspired by the previous research, this article focuses on reducing "halos" at the edges to retain the edge structure information and obtaining better decomposition performance in both noise-free and noise-perturbed images. In this paper, a new infrared and visible image fusion method based on iterative control of anisotropic diffusion and regional gradient structure operator is proposed. Anisotropic diffusion is utilized to deconstruct the source image into a structure layer and a base layer. Then, the structure layer is processed by using the gradient-based structure tensor matrix and the regional structure operator. Due to the weak detail and high energy of the base layer, the Visual Saliency Map (VSM) is utilized to fuse the base layer. By reconstructing the two prefusion components, the final fusion image can be obtained.
The main contributions of the proposed method can be summarized as follows. (1) A novel method of infrared and visible image fusion is proposed. The anisotropic diffusion model with a control iteration operator adaptively controls the number of iterations, so the image is decomposed adaptively into a structure layer with rich edge and detail information and a base layer with pure energy information; in particular, the computational efficiency is greatly improved. (2) The regional structure operator is introduced into the structure tensor matrix, which can effectively extract information such as image details, contrast, and structure. It also greatly improves the detection of weak structures and yields structure images with good prefusion performance. (3) Since anisotropic diffusion can effectively deal with noise, the proposed method also performs well on noisy image fusion. In addition, the algorithm is widely applicable and is also suitable for other types of image fusion.

The paper is organized as follows. Section 2 briefly reviews the anisotropic diffusion and structure tensor theory and introduces the new operators. Section 3 describes the proposed infrared and visible image fusion algorithm in detail. Section 4 introduces related experiments and compares with several current advanced algorithms. Finally, the conclusion is given in Section 5.
Related Theories
2.1. Anisotropic Diffusion Based on Iterative Control. Anisotropic diffusion [17] can be utilized to smooth the image while maintaining the image details and edge information. Compared with other filtering methods, it is more suitable for image decomposition processing. The anisotropic diffusion equation is expressed as

$$\frac{\partial I}{\partial t} = \operatorname{div}\big(v(x, y, t)\,\nabla I\big) = v(x, y, t)\,\Delta I + \nabla v \cdot \nabla I, \tag{1}$$

where $v(x, y, t)$ is the flux function or diffusion rate, $\nabla$ is the gradient operator, $\Delta$ is the Laplacian operator, and $t$ is the time, scale, or iteration index. Equation (1) can be discretized on the image grid, and the four-nearest-neighbour discretization of the Laplacian can be used:

$$I_{i,j}^{t+1} = I_{i,j}^{t} + \mu\,\big[\, v_N \cdot D_N + v_S \cdot D_S + v_W \cdot D_W + v_E \cdot D_E \,\big]_{i,j}^{t}, \tag{2}$$

where $I_{i,j}^{t+1}$ is the coarser-resolution image at scale $t+1$, which is computed from $I_{i,j}^{t}$, and $\mu$ is a constant with $0 \le \mu \le 1/4$. $D_N$, $D_S$, $D_W$, and $D_E$ are the nearest-neighbour difference values in the four directions North, South, West, and East, respectively, defined by

$$D_N = I_{i-1,j} - I_{i,j}, \quad D_S = I_{i+1,j} - I_{i,j}, \quad D_W = I_{i,j-1} - I_{i,j}, \quad D_E = I_{i,j+1} - I_{i,j}, \tag{3}$$

and $v_N$, $v_S$, $v_W$, and $v_E$ are the conduction coefficients or flux functions in the four directions,

$$v_d = g(\lvert D_d \rvert), \qquad d \in \{N, S, W, E\}, \tag{4}$$

where $g(\lvert\cdot\rvert)$ is a monotonically decreasing function with $g(0) = 1$; $g(\cdot)$ is the "edge stop" function or diffusion coefficient, which has a very important influence on the noise suppression and edge retention ability of anisotropic diffusion.
The scale spaces weighted by these two functions differ. The classical choices of the edge-stop function are

$$g_1(\lvert\nabla I\rvert) = \exp\!\left(-\left(\frac{\lvert\nabla I\rvert}{k}\right)^{2}\right), \qquad g_2(\lvert\nabla I\rvert) = \frac{1}{1 + \left(\dfrac{\lvert\nabla I\rvert}{k}\right)^{2}}. \tag{5}$$

The first function favours the abrupt areas with large gradients, namely, the edge and detail areas; the second favours flat areas with small gradients. Both functions contain a free parameter $k$.
Anisotropic diffusion is a differential iterative process in which the number of iterations is a key issue. If it is over-iterated, the result will be over-smoothed; but if the number of iterations is insufficient, the detail components cannot be separated effectively. Moreover, the appropriate numbers of iterations for noisy images and for noise-free images are also uncertain. Therefore, an iterative control operator θ is introduced to control k, thereby adaptively controlling the number of iterations and reasonably separating structural information such as gradients and details. This also improves computational efficiency.
Here, K₀ is the empirical value controlling the diffusion strength, which is usually set to 30. It can be seen from Equation (6) that the value of k is related to the edge strength of the region boundary, and the value of k is updated through positive and negative excitation by θ to obtain the optimal number of iterations and the most effective and accurate separation results.
The anisotropic diffusion of the image I is denoted simply by aniso(I). After the image is diffused anisotropically, since the iterative control operator can precisely control the number of iterations, almost all of the oscillatory and repetitive texture content is effectively preserved in the structure layer, while the energy information and weak edges are preserved in the base layer. Figure 1 shows the base layer and structure layer images obtained after anisotropic diffusion decomposition; they are consistent with the theoretical analysis.
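To make the decomposition step concrete, the following minimal Python/NumPy sketch implements the discrete diffusion update of Equation (2) and the base/structure split described above. The fixed iteration count, the parameter values, and the periodic boundary handling are illustrative simplifications; the paper's adaptive iterative control operator θ is not reproduced here.

```python
import numpy as np

def perona_malik(img, n_iter=15, k=30.0, mu=0.2):
    """Discrete anisotropic diffusion (Eq. (2)); returns the smoothed base layer."""
    I = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Nearest-neighbour differences in the four directions (Eq. (3)).
        # np.roll gives periodic boundaries, kept for brevity.
        dN = np.roll(I, 1, axis=0) - I
        dS = np.roll(I, -1, axis=0) - I
        dW = np.roll(I, 1, axis=1) - I
        dE = np.roll(I, -1, axis=1) - I
        # Edge-stop function g1 from Eq. (5) gives the conduction coefficients (Eq. (4)).
        vN = np.exp(-(dN / k) ** 2)
        vS = np.exp(-(dS / k) ** 2)
        vW = np.exp(-(dW / k) ** 2)
        vE = np.exp(-(dE / k) ** 2)
        # Diffusion update with 0 <= mu <= 1/4 for stability.
        I += mu * (vN * dN + vS * dS + vW * dW + vE * dE)
    return I

def decompose(img, **kwargs):
    """Split an image into a smooth base layer and a detail-rich structure layer."""
    base = perona_malik(img, **kwargs)         # aniso(I)
    structure = img.astype(np.float64) - base  # I - aniso(I)
    return base, structure
```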
2.2. Gradient-Based Structure Tensor Matrix. Gradient is the rate of change, reflected by the difference between a central pixel and its surrounding pixels. It can be used to accurately capture the texture details, contour features, and structural components of an image. The structure tensor is an effective tool for analysing the gradient and has been applied to a variety of image processing tasks.

The gradient operator [18] is described as follows. For a local window $\Theta(x, y)$ and any $\varepsilon \to 0^{+}$ in the direction $\beta$, the squared change of the image $I(x, y)$ at the point $(x, y)$ yields, in the limit, the rate of change $C(\beta)$ of the local features of the image $I(x, y)$ in the direction $\beta$. To better analyse the gradient features and effectively realize image processing, the structure tensor matrix $S$ is introduced, and $C(\beta)$ can be expressed as

$$C(\beta) = \beta^{T} S\, \beta, \qquad S = \sum_{(x,y) \in \Theta} \begin{pmatrix} I_x^{2} & I_x I_y \\ I_x I_y & I_y^{2} \end{pmatrix},$$

where $I_x$ and $I_y$ denote the partial derivatives of $I$ along the two image axes.
The two extreme values of the structure tensor $S$ are its eigenvalues,

$$\lambda_{1,2} = \frac{(S_{11} + S_{22}) \pm \sqrt{(S_{11} - S_{22})^{2} + 4\,S_{12}^{2}}}{2}.$$

The structural characteristics of the local area of the image are related to these extreme values. Generally, if both extreme values are relatively small, the region does not have gradient characteristics; that is, the region lies in an isotropic part. Otherwise, the local area exhibits obvious changes and contains structural information. Because image-area saliency measurement involves a wide range of structure types, the structural saliency operator SSO is finally defined, according to [19], as a function of these two extreme values.
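As a concrete illustration, the sketch below computes the windowed structure tensor and its two eigenvalues with NumPy. The window size and the simple box sum are illustrative choices; the exact saliency operator SSO of [19] is not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def structure_tensor_eigs(img, win=5):
    """Per-pixel eigenvalues of the windowed structure tensor S."""
    I = img.astype(np.float64)
    Iy, Ix = np.gradient(I)                    # image derivatives along rows/cols
    # Window sums of the tensor entries (box window of size win x win).
    Sxx = uniform_filter(Ix * Ix, size=win)
    Syy = uniform_filter(Iy * Iy, size=win)
    Sxy = uniform_filter(Ix * Iy, size=win)
    trace = Sxx + Syy
    root = np.sqrt((Sxx - Syy) ** 2 + 4.0 * Sxy ** 2)
    lam1 = 0.5 * (trace + root)                # larger eigenvalue
    lam2 = 0.5 * (trace - root)                # smaller eigenvalue
    return lam1, lam2
```

Regions where both eigenvalues are small are isotropic (flat); a large λ₁ with small λ₂ indicates an edge, and two large eigenvalues indicate a corner-like structure.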
Fusion Framework
Based on the above theories, a new image fusion framework is constructed, as shown in Figure 2. Different from the traditional decomposition scheme, in order to make better use of the useful information in the original image, first, the iterative-control anisotropic diffusion is utilized to decompose the source image into base and structure components. At this time, most of the gradients and edges can be effectively preserved in the structure layer, and the base layer contains the remaining energy information. Then, according to the characteristics of each layer, different fusion rules are introduced to acquire the prefusion of each layer. Among them, for the fusion of the structure layer, the prefusion is effectively realized through the regional gradient structure; for the base layer, the prefusion is performed through the VSM. Finally, the fusion result is obtained by reconstructing the two prefusion layers.
3.1. Anisotropic Decomposition. Let the source images $\{I_n(x, y)\}_{n=1}^{N}$ all be coregistered. The base layer, with smooth edges, is obtained through the anisotropic diffusion model of the previous section:

$$I_n^{B}(x, y) = \operatorname{aniso}\big(I_n(x, y)\big),$$

where $I_n^{B}(x, y)$ is the $n$th base layer and $\operatorname{aniso}(I_n(x, y))$ represents the anisotropic diffusion process applied to the $n$th source image; the corresponding structure layer is $I_n^{S}(x, y) = I_n(x, y) - I_n^{B}(x, y)$. After anisotropic decomposition, a structure layer with rich outline and texture details and a base layer with intensity information are obtained.
3.2. Fusion of Structure Layers.
Since the structure saliency operator (SSO) of the previous section can effectively detect the gradient structure information of the images, SSO can be used to prefuse the structure layers. However, due to the lack of an intensity variable, SSO cannot accurately detect weak feature information in the images. In order to improve the structure detection ability, the regional structure operator (RSO) is introduced to improve the performance of SSO. RSO is the regional structural component centred at position $(x, y)$; the regional gradient structure (RGS) then combines $SS_I(x, y)$, the salient image produced by SSO, with $RS_I(x, y)$, the regional structure feature at position $(x, y)$, computed over a neighbourhood whose size is controlled by $N$, which influences the efficiency and effect of fusion. Through comparing the RGS values of the input images, the structure saliency map $M_1(x, y)$ of the image $I_1^{S}(x, y)$ is calculated.
Here, Θ is a central local area around $(x, y)$ whose size is $T \times T$. The prefusion structure layer image $F_I^{S}(x, y)$ can then be expressed as

$$F_I^{S}(x, y) = M_1(x, y)\, I_1^{S}(x, y) + \big(1 - M_1(x, y)\big)\, I_2^{S}(x, y).$$

3.3. Fusion of Base Layers. Since the base layers contain few details, a weighted-average technique based on the VSM [20] is used to fuse the base layers into $F_I^{B}$. First, the VSM is constructed. Let $I_p$ denote the intensity value of a pixel $p$ in the image $I$. The saliency value $V(p)$ of pixel $p$ is defined as

$$V(p) = \sum_{j=0}^{L-1} M_j\, \lvert I_p - j \rvert,$$

where $j$ represents a pixel intensity, $M_j$ is the number of pixels whose intensity equals $j$, and $L$ is the number of gray levels (here, 256). After obtaining the two prefusion components, the final fusion image $F_I$ is

$$F_I = F_I^{S} + F_I^{B}.$$
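The following sketch, assuming two coregistered 8-bit grayscale inputs, illustrates the histogram-based saliency computation and the resulting base/structure fusion. The normalized-saliency weight and the binary structure map are simplified stand-ins for the paper's exact VSM weighting and RGS comparison, respectively.

```python
import numpy as np

def visual_saliency_map(img, levels=256):
    """V(p) = sum_j M_j * |I_p - j|, computed via the intensity histogram."""
    I = img.astype(np.int64)
    hist = np.bincount(I.ravel(), minlength=levels)       # M_j
    j = np.arange(levels)
    # Precompute V for every possible intensity, then look it up per pixel.
    V_per_level = np.abs(j[:, None] - j[None, :]) @ hist
    V = V_per_level[I].astype(np.float64)
    return V / V.max()                                    # normalize to [0, 1]

def fuse(img1, img2, base1, base2, struct1, struct2):
    """Combine prefused structure and base layers into the final image."""
    # Base layers: VSM-weighted average (simplified weighting).
    v1, v2 = visual_saliency_map(img1), visual_saliency_map(img2)
    w = v1 / (v1 + v2 + 1e-12)
    fused_base = w * base1 + (1.0 - w) * base2
    # Structure layers: keep the layer with larger local magnitude
    # (a stand-in for the RGS-based saliency map M1).
    m1 = (np.abs(struct1) >= np.abs(struct2)).astype(np.float64)
    fused_struct = m1 * struct1 + (1.0 - m1) * struct2
    return fused_base + fused_struct                      # F_I = F_S + F_B
```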
Experimental Analysis and Results
In order to verify the effectiveness and reliability of the proposed algorithm, multiple pairs of images are used for experimental verification, and the results are analysed through subjective visual inspection and objective quantitative evaluation. After setting the algorithm parameters, the experimental results are displayed and discussed. The comparison methods include fusion through infrared feature extraction and visual information preservation (IFEVIP), proposed by Zhang et al. [24], and multisensor image fusion based on fourth-order partial differential equations (FPDE), proposed by Bavirisetti et al. [25], among others. In addition, the fusion performance is quantitatively evaluated by six indicators: entropy (EN) [26], edge information retention (Q^{AB/F}) [27], Chen-Blum's index (Q_{CB}) [28], mutual information (MI) [29], structural similarity (SSIM) [30], and peak signal-to-noise ratio (PSNR) [31]. Some comparison methods preserve the structure well, but their details are relatively weakened or lost. The IFEVIP method maintains good contrast, but the visual effect is over-enhanced, especially in the partially enlarged areas, resulting in obvious errors in the result. The FPDE method suffers from blurred internal features. The CNN method obtains a relatively good fusion result, but its output is somewhat unnatural, and the colour of the result in Figure 5(c4) contains errors. Therefore, the proposed method can effectively separate the component information of different images, preserve the useful information of the source images in the fused images, and achieve the best visual performance in terms of edge and detail preservation.
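For reference, two of the six indicators have simple closed forms; the sketch below computes image entropy and PSNR for 8-bit images. The remaining indices, such as Q^{AB/F} and Q_{CB}, require the full formulations in [27, 28] and are not reproduced here.

```python
import numpy as np

def entropy(img, levels=256):
    """EN = -sum_i p_i * log2(p_i) over the normalized intensity histogram."""
    hist = np.bincount(img.astype(np.int64).ravel(), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]                        # ignore empty bins
    return float(-(p * np.log2(p)).sum())

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio between a reference image and the fused result."""
    ref = reference.astype(np.float64)
    out = fused.astype(np.float64)
    mse = np.mean((ref - out) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```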
Objective Evaluation.
Besides the subjective evaluation, the fusion results are quantitatively evaluated, and the results are shown in Table 1, in which the best results are labelled in bold. According to the data in the table, the objective evaluation of the proposed method is significantly higher than that of the other methods. In all quantitative evaluations, only a few entries are not optimal, and they do not affect the overall advantage of the proposed method. In addition, Figure 6 shows the bar chart comparison of the EN, Q^{AB/F}, Q_{CB}, MI, SSIM, and PSNR values of the various fusion methods for the car example.
In summary, for infrared and visible fusion, the proposed method performs well both subjectively and objectively. As shown in Figure 7, the fusion results have both high spatial resolution and high spectral resolution, and the fused images have a strong ability to express structure and details. The objective evaluation results are shown in Figure 8. The visual and objective results indicate that the algorithm can effectively retain high-spatial and hyperspectral information and can improve the accuracy of subsequent processing of remote sensing images.
4.4. Computational Efficiency. The methods tested in this paper were all run in the same experimental environment. The average running times over six pairs of images are compared in Table 2. It can be seen that the computational efficiency of the proposed algorithm has a considerable advantage over the comparison algorithms.
Conclusions
In this paper, an infrared and visible image fusion algorithm based on iterative control of anisotropic diffusion and regional gradient structure is proposed. The algorithm makes full use of the advantages of anisotropic diffusion and improves the decomposition efficiency and effect through the iterative control operator. The regional gradient structure operator is introduced to fully extract the detailed information in the structure layer and obtain better fusion performance. Extensive experimental results show that this algorithm is significantly better than existing methods in terms of subjective and objective evaluation. In addition, it achieves higher computational efficiency and stronger anti-noise performance, and it can be effectively applied to other types of image fusion.
Data Availability
The data used to support the findings of this paper are available from http://imagefusion.org/.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
"Computer Science",
"Engineering"
] |
Tracing the origins of Plasmodium vivax resurgence after malaria elimination on Aneityum Island in Vanuatu
Background Five years after successful malaria elimination, Aneityum Island in Vanuatu experienced an outbreak of Plasmodium vivax of unknown origin in 2002. Epidemiological investigations revealed several potential sources of P. vivax. We aimed to identify the genetic origin of the P. vivax responsible for the resurgence. Methods Five P. vivax microsatellite markers were genotyped using DNA extracted from archived blood samples. A total of 69 samples from four P. vivax populations were included: 29 from the outbreak in 2002, seven from Aneityum in 1999 and 2000, 18 from visitors to Aneityum in 2000, and 15 from nearby Tanna Island in 2002. A neighbour-joining phylogenetic tree was constructed to elucidate the relationships among P. vivax isolates. STRUCTURE and principal component analysis were used to assess patterns of genetic structure. Results Here we show distinct genetic origins of P. vivax during the outbreak on Aneityum. While the origin of most P. vivax lineages found during the outbreak remains unidentified, limited genetic diversity among these lineages is consistent with a rapid expansion from a recent common ancestor. Contemporaneous P. vivax from neighboring Tanna and potential relapse of P. vivax acquired from other islands in 1999 and 2000 are also identified as minor contributors to the outbreak. Conclusions Multiple reintroductions of P. vivax after elimination highlight the high receptivity and vulnerability to malaria resurgence in island settings of Vanuatu, despite robust surveillance and high community compliance with control measures.
The article presented by the authors offers a compelling case study of a malaria outbreak on an island long after the disease had been eliminated from that location. The authors have clearly described their aim to investigate the origins of the malaria parasites responsible for the 2002 outbreak in Aneityum, Vanuatu, alongside their hypotheses. Given the increasing trend of malaria elimination across many countries, it is indeed crucial to grasp how we can sustain these successes, making this topic particularly relevant and pressing.
While I understand that some aspects of the data and the context of this outbreak have been explored in previous publications, this paper provides a fresh lens by incorporating genetic analysis and additional data from a neighbouring island. I'd like to note that while my expertise does not extend to the nuances of genetic analysis and the specific methods applied in this manuscript, I am keen to offer my comments on the epidemiological aspects of the paper: • In Figure 3, I noticed an intriguing observation where a sample from the 2002 Aneityum samples closely aligns with the 1999-2000 Aneityum and 2000 non-Aneityum clusters. This particular observation doesn't seem to be emphasised in the manuscript. Do the authors consider the proximity of this sample to be insignificant? If so, elaborating on the reasons might be important, given this observation might explain the connections between the 2002 and 2000 populations.
• While the authors have underscored the significance of relapses in P. vivax control in the introduction, there is limited discussion on the potential role of relapses in seeding the 2002 outbreak. The 2000 study [1] indicated that relapses could be observed up to 5 years after the initial infection. Could these long-latency relapses from prior infections have been a contributing factor to the onset of the outbreak? • Taking into account the immunity resulting from previous infections (predominantly in adults), the frequency of travel (primarily by adults, I assume), which together might result in importation of asymptomatic cases, and coupled with the potential long-latency of relapse: How probable is it, in the authors' opinion, that the outbreak's seeding occurred in the past, possibly linked to travels beyond Tanna (explaining the minimal connection with Tanna samples) and was potentially triggered by a recent relapse?
• What is the level of receptivity of Aneityum to malaria transmission? An outbreak resulting in 10% of the population being infected at its tail end suggests that the island remains highly receptive to transmission. Were there subsequent outbreaks in the following years? It's curious that while the introduction of cases seemed to occur somewhat frequently, and the island appeared highly receptive to transmission, actual outbreaks were comparably infrequent. Delving deeper into this discrepancy could provide valuable insights and lessons from this outbreak and how Aneityum managed to prevent and control them.
• Finally, exploring how many samples could have originated from relapses might be worthwhile. This direction could present an intriguing line of inquiry for subsequent research if it is technically feasible. For clarity, I'm saying this as a possibility of future research, not to be addressed in this work.
A minor comment: • In line 208, the authors wrote about 'intervening period'. I'm not sure which specific period was referred to by this.
Reviewer #3 (Remarks to the Author):
This study typed samples from a Plasmodium vivax outbreak on Aneityum Island, Vanuatu, after malaria had been eliminated.
Of 219 positive samples, 69 were successfully typed for 7 microsatellite markers (out of an initial panel of 14 markers). The genotyping methods used are outdated. The P. vivax population in the south Pacific is well characterized using microsatellites. More advanced methods such as genome-wide SNP panels, multiplexed amplicon sequencing, or selective whole-genome sequencing have been published. Such methods would also allow for more advanced analytical approaches in addition to those included in the manuscript (which includes Fst values, PCA, STRUCTURE analysis, etc.). Given that the same authors typed the same samples before using amplicon sequencing of two markers, it is unclear what can be gained with a panel of 7 microsatellites.
In line with previous studies which typed the same samples using amplicon sequencing of two other markers, and in line with the travel histories of infected individuals, the study found reduced genetic diversity and high similarity to parasites on neighbouring Tanna Island. No new insights into the origin of the outbreak are presented.
Minor comments:
Line 45: Is there a typo/missing word? I don't understand this sentence.
Line 100 hints that species other than P. vivax were found. This is crucial: if indeed multiple species were present, then it certainly can't be the introduction of a single parasite.
Dear Dr. Walker and reviewers, We thank you for your careful and thorough review of our manuscript and insightful comments on important points that needed to be included or addressed in our original submission. We greatly appreciate the opportunity to revise and resubmit our manuscript. Below are our point-by-point responses to the reviewers' comments.
Response to Reviewer 1 >> PCR amplification of at least two nuclear loci used for DNA quality verification failed in 72 of the 182 mitochondrial cox3 PCR-positive P. vivax samples: does this raise any concern about the performance (false positives?) of the cox3 PCR?
Gene copy number is a main determinant of molecular diagnostic performance. We target the mitochondrial cox3 gene to detect P. vivax infections because of the high gene copy number (20 to 150 copies in each parasite) that greatly enhances diagnostic sensitivity, especially for samples with low parasitemia. In contrast, the nuclear microsatellite markers used in this study are single-copy loci. Since we intended to use multiple microsatellite markers to characterize parasite genetic diversity, we set the threshold of successful amplification of at least two single-copy nuclear loci in the DNA quality check step. Of the 72 samples that did not meet this threshold, 23 successfully amplified one nuclear locus. Of the 182 cox3-positive samples, 133 (73.1%) were also positive by PCR of one or more nuclear loci. While we cannot definitively prove that there were no false positives with our cox3 PCR, the proportion (26.9%, or 49/182) of cox3-positive but nuclear-marker-negative samples in this study is in line with the 30% PCR efficiency we reported previously in a comparison of the cox3 and the standard 18S PCR 31.
We added the following in the discussion to explain the difference in PCR efficiencies due to the difference in gene copy numbers between cox3 and microsatellite markers (lines 218-227).
"Of the 182 P. vivax-positive samples by the mitochondrial cox3 PCR, 49 (26.9%)failed to amplify any nuclear loci while 23 (12.6%) amplified only one nuclear locus.These 72 samples were deemed to have low DNA concentrations and were excluded from microsatellite marker analysis (Fig. 2).Molecular diagnostic yields are mainly determined by blood volume analysed and the gene copy number of the amplified molecular marker 30 .The discrepancy in PCR amplification efficiencies in our study likely reflected such a difference in gene copy numbers between the mitochondrial cox3 and the nuclear microsatellite markers.Within each parasite, there are 20 to 150 copies of the mitochondrial cox3 gene 31 , but only one copy of nuclear microsatellite markers, making the former an ideal target for PCR detection of Plasmodium We added standard deviation as a dispersion measure of the mean number of allele differences between paired samples in table 3 and We were less confident about the relationship among these three lineages than the relationship between haplotypes 26 and 42 shared between Aneityum and Tanna (they share the same alleles in four of five loci genotyped).Nonetheless the observation that these three lineages showed genetic affinity in both neighbor-joining tree and PCA warrants further examination.We added the following in the discussion (lines 173-180), "Phylogenetic analysis (Fig. 3A) and PCA (Fig. 3C This is an expanded and more nuanced discussion of the point raised in the previous comment. acknowledge that the scenario presented by the reviewer is possible, though we have no way of quantifying that probability given the data we have.We added in the discussion the following, (lines 245-248), "Since P. vivax infections in adults were sub-microscopic due to persisting immunity acquired before malaria elimination and undetectable by diagnostics available locally, coupled with the potential of long-latency relapse, the 2002 outbreak's seeding event could have occurred years ago and the outbreak itself could have been triggered by a recent relapse."
>> Finally, exploring how many samples could have originated from relapses might be worthwhile. This direction could present an intriguing line of inquiry for subsequent research if
it is technically feasible. For clarity, I'm saying this as a possibility of future research, not to be addressed in this work.
We agree with the reviewer that the ability to distinguish between primary infections and relapses in P. vivax cases will be tremendously useful for control and elimination. We added the following in the discussion (lines 249-252): "Determining the proportion of infections resulting from relapse during P. vivax outbreaks can inform the most optimal response strategy; however, distinguishing between primary infections and relapses in settings such as Aneityum or Vanuatu, where conditions conducive to transmission remain, is difficult."

>> What is the level of receptivity of Aneityum to malaria transmission? An outbreak resulting in 10% of the population being infected at its tail end suggests that the island remains highly receptive to transmission. Were there subsequent outbreaks in the following years? It's curious that while the introduction of cases seemed to occur somewhat frequently, and the island appeared highly receptive to transmission, actual outbreaks were comparably infrequent. Delving deeper into this discrepancy could provide valuable insights and lessons from this outbreak and how Aneityum managed to prevent and control them.
Receptivity is a difficult measure to quantify.We preface our response by stating that we have no data on vector density, sporozoite rate, and biting rate during the outbreak in 2002.We believe that the level of receptivity to malaria transmission was fairly high during our survey in July 2002.
Our genetic data showed that most parasites were genetically similar, which is consistent with the rapid expansion of a limited number of parasite clones via mosquito transmission.We detailed the subsequent community responses to the outbreak, and how high ITN usage among community members might have curtailed the extent of the outbreak (lines 253-263).
"Although we lack entomological data, the observation that most P. vivax isolates during the 2002 outbreak were genetically very similar suggests that transmission by Anopheles mosquitoes played a major role in sustaining the outbreak.Therefore, it can be inferred that the level of receptivity was probably very high in 2002.In response, a second round of MDA was implemented, targeting individuals under 20 years of age.At the same time, provisions of ITNs were strengthened, coupled with a robust malaria health promotion programme to raise awareness of the continuous risk of malaria resurgence among local communities 15 .Subsequent annual cross-sectional surveys revealed low prevalence (1.9% to 3.9%) of P. vivax infections by PCR between 2003 and 2007.Since 2010, no Plasmodium infections have been detected by microscopy and PCR 32 .Nonetheless, 74.1% and 67.4% of the island's residents reported ITN use in 2016 and 2019, respectively (unpublished data).We believe that high ITN usage likely suppressed malaria receptivity on Aneityum Island.">> In line 208, the authors wrote about 'intervening period'.I'm not sure which specific period was referred to by this.
We meant the period between the start of the outbreak in early 2002 and our survey in July 2002.
For clarity, we revised the sentence to the following (lines 234-236): "Additional but undetected parasite reintroduction events between the onset of the outbreak in early 2002 and our survey in July 2002 could have obscured or even replaced the original P. vivax clone that triggered the outbreak."

Response to Reviewer 3 >> More advanced methods such as genome-wide SNP panels, multiplexed amplicon sequencing, or selective whole-genome sequencing have been published. Such methods would also allow for more advanced analytical approaches in addition to those included in the manuscript (which includes Fst values, PCA, STRUCTURE analysis, etc.). Given that the same authors typed the same samples before using amplicon sequencing of two markers, it is unclear what can be gained with a panel of 7 microsatellites.
We fully acknowledge the reviewer's concern. More advanced methods and analytical approaches using high-resolution data may be helpful. However, most of our samples were from asymptomatic and submicroscopic infections mixed with human DNA because they were extracted from DBS. Our previous experience with DBS samples from submicroscopic P. falciparum infections in Kenya #1 suggests that our samples from Vanuatu are unlikely to be amenable to existing advanced genomic methods to characterize parasite genetic diversity. Given these constraints, we chose to genotype some of the most polymorphic markers using technologies and methods available to us.
Although we fully understand the importance of further advancing malaria genomics from an academic standpoint, there is some discussion that advanced malaria genomics may have several gaps for a "real-world malaria control and elimination strategy" #2 and may not be necessary for malaria control or even its elimination #3.
We also want to point out that the samples analysed in this manuscript differ from those used in the previous paper. In our previous work, we obtained DNA sequences from only microscopically positive samples, whereas in the current study the majority of data comes from submicroscopic (PCR-positive but microscopy-negative) infections. Previously we speculated that infected church meeting attendees in 2000 could represent the sources of imported parasites that triggered the outbreak two years later. Those samples were not analysed in the previous paper but are included here. We added the following in the discussion: "Low P. vivax nuclear DNA concentrations in most samples precluded using advanced methods and analytical approaches to gain a more comprehensive understanding of the population genomic diversity of P. vivax parasites. Nevertheless, there is some discussion that advanced malaria genomics may have several gaps for an immediate 'real-world malaria control and elimination strategy' 32 and may not be necessary for malaria control or even its elimination 33."

>> In line with previous studies which typed the same samples using amplicon sequencing of two other markers, and in line with travel histories of infected individuals, the study found reduced genetic diversity and high similarity to parasites on neighboring Tanna Island. No new insights into the origin of the outbreak are presented.
We believe our findings are valuable, at the least, in building a baseline that would be useful for formulating malaria control and elimination policies in Vanuatu and beyond.
>> Line 45: Is there a typo/missing word? I don't understand this sentence.
See response to comment by reviewer 1.
>> Line 100 hints that species other than P. vivax were found. This is crucial: if indeed multiple species were present, then it certainly can't be the introduction of a single parasite.
The statement in line 100 refers to the detection of Plasmodium spp. in all samples collected from different islands in Vanuatu from 1999 to 2002. On Aneityum Island, only P. vivax was detected during the entire study period. We added the following for clarity (lines 102-105): "Throughout the study period, neither microscopy nor PCR detected any species other than P. vivax on Aneityum Island, and the prevalence of P. vivax infection by PCR increased from 0.8% (6/709) in 1999 to 10.1% (77/759) in 2002 (Table 1)."
*Note to editor
In the original submission, the colours representing samples from 1999/2000 Aneityum and 2000 non-Aneityum in Fig. 3C were reversed; this has been corrected in this revised submission. Minor edits in the text are highlighted in green.
REVIEWERS' COMMENTS:
Reviewer #1 (Remarks to the Author): My comments and questions were correctly addressed. I do not have any further comment.
Re-review in response to Reviewer 3's comments to the authors: I consider myself satisfied with the responses given to the comments by reviewer 3, which were mainly about: - the use of more advanced genetic methods (the authors argue these are not feasible given the low parasite densities in the samples); - no new insights into the origin of the outbreak are presented (the authors argue that the findings are valuable in building a baseline for formulating malaria control and elimination policies in Vanuatu and beyond).
Reviewer #2 (Remarks to the Author): I would like to thank the authors for their efforts in addressing my comments. I am happy with the authors' responses and their additions/revisions to the manuscript in regards to my initial review and hence have no additional comments to add.
infections.">> Adding a dispersion measure of the mean number of allele differences between paired samples would make more robust the claims of lower mean in the 2002Tanna and 2002 Aneityum populations than in the 1999-2000 Aneityum and the 2000 non-Aneityum populations.
We added the following in the result section (lines 133-134): "The mean was lower in the 2002 Tanna (mean ± SD: 2.857 ± 1.267) and 2002 Aneityum (2.091 ± 1.562) populations than in the 1999-2000 Aneityum (4.238 ± 0.865) and the 2000 non-Aneityum (4.562 ± 0.538) populations."

>> Check line 45 (some letters are missing). Some words were accidentally deleted during the formatting of the manuscript. The sentences have been revised as follows (p. 2, lines 46-48): "Islands provide natural ecological experiments with considerable potential for malaria intervention studies 8, and some have demonstrated early success towards malaria elimination 9,10. Vanuatu is an archipelago of 82 islands spanning 1,176 km in the South Pacific."

Response to Reviewer 2 >> In Figure 3, I noticed an intriguing observation where a sample from the 2002 Aneityum samples closely aligns with the 1999-2000 Aneityum and 2000 non-Aneityum clusters. This particular observation doesn't seem to be emphasised in the manuscript. Do the authors consider the proximity of this sample to be insignificant? If so, elaborating on the reasons might be important, given this observation might explain the connections between the 2002 and 2000 populations.
We added the following in the discussion (lines 173-180): "Phylogenetic analysis (Fig. 3A) and PCA (Fig. 3C) also indicated genetic affinity among one 2002 Aneityum sample, one 1999/2000 Aneityum sample, and one 2000 non-Aneityum sample. The 1999/2000 Aneityum sample was derived from a male child who had moved to Aneityum from Pentecost. The child was diagnosed with P. vivax in May 1999 and again in July 1999 during our survey. The 2000 non-Aneityum sample was derived from a male adult church meeting attendee from Tanna. Not only did these infections exemplify the vulnerability of Aneityum to P. vivax importation, but they also suggested the possibility that one of the seeds for the 2002 outbreak may have been introduced in 1999 or 2000."

>> While the authors have underscored the significance of relapses in P. vivax control in the introduction, there is limited discussion on the potential role of relapses in seeding the 2002 outbreak. The 2000 study [1] indicated that relapses could be observed up to 5 years after the initial infection. Could these long-latency relapses from prior infections have been a contributing factor to the onset of the outbreak? Yes, long-latency relapses from infections acquired years ago on Aneityum or other islands in Vanuatu could have contributed to the outbreak in 2002. We added this possibility in the discussion (lines 162-165): "Notably, we previously indicated the potential of P. vivax relapses occurring up to five years after the initial infection 16, raising the possibility that the 2002 outbreak was partially seeded from long-latency relapses of infections initially acquired years before."

>> Taking into account the immunity resulting from previous infections (predominantly in adults), the frequency of travel (primarily by adults, I assume), which together might result in importation of asymptomatic cases, and coupled with the potential long-latency of relapse: How probable is it, in the authors' opinion, that the outbreak's seeding occurred in the past, possibly linked to travels beyond Tanna (explaining the minimal connection with Tanna samples) and was potentially triggered by a recent relapse?
"Environmental Science",
"Medicine",
"Biology"
] |
Root-Associated Endophytic Bacterial Community Composition of Pennisetum sinese from Four Representative Provinces in China
Pennisetum sinese, a source of bio-energy with high biomass production, is a species with high crude protein content that will be useful for alleviating the shortage of forage grass following the implementation of the "Green for Grain" project in the Loess Plateau of Northern Shaanxi in 1999. Plants may receive benefits from endophytic bacteria, such as the enhancement of plant growth or the reduction of plant stress. However, the composition of the endophytic bacterial community associated with the roots of P. sinese is poorly elucidated. In this study, P. sinese from five different samples (Shaanxi province, SX; Fujian province, FJ; the Xinjiang Uygur Autonomous Region, XJ; and Inner Mongolia, including sand (NS) and saline-alkali land (NY), China) was investigated by high-throughput next-generation sequencing of the 16S rDNA V3-V4 hypervariable region of endophytic bacteria. A total of 313,044 effective sequences were obtained by sequencing the five samples, and 957 effective operational taxonomic units (OTUs) were yielded at 97% identity. The phylum Proteobacteria, the classes Gammaproteobacteria and Alphaproteobacteria, and the genera Pantoea, Pseudomonas, Burkholderia, Arthrobacter, Psychrobacter, and Neokomagataea were significantly dominant in the five samples. In addition, our results demonstrated that the Shaanxi province (SX) sample had the highest Shannon index value (3.795). We found that the SX (308.097) and NS (126.240) samples had the highest and lowest Chao1 richness estimator (Chao1) values, respectively. Venn graphs indicated that the five samples shared 39 common OTUs. Moreover, according to the results of the canonical correlation analysis (CCA), soil total carbon, total nitrogen, effective phosphorus, and pH were the major contributing factors to the differences in the overall composition of the bacterial community in this study. Our data provide insights into the composition and structure of the root-associated endophytic bacterial community of P. sinese. These results might be useful for growth promotion in the different samples, and some of the strains may have the potential to improve plant production in future studies.
Introduction
In both natural and anthropic ecosystems, plants interact with a wide range of microorganisms, including bacteria. Recently, authors in [1] described endophytes as "all microorganisms which for all
Sample Collection
The roots of P. sinese were collected from August to October 2018 from specimens growing at five distinct sites in four eco-regions (>400 km apart) in China; the geographic locations of the sites are presented in Table 1. Sites were chosen based on their different bioclimatic conditions. To ensure that the experiment was representative, we randomly selected five plants from each geographic location at the same growth phase, and undamaged, healthy roots were sampled in the field. Specifically, we first selected five plants randomly at each position according to the five-point sampling method. All samples were cut with sterile scissors. We then collected equivalent upper, middle, and lower root portions from each plant and mixed them for a total weight of 300 g. We placed them in a sterile plastic bag, transported them to the lab, and processed them within 24 h. The materials from the other regions and altitudes were also collected as described above. Meanwhile, in order to remove other microbial interference on the surface of the roots, a surface sterilization procedure was conducted: roots were carefully rinsed free of soil under running water, blotted with filter paper, and surface-sterilized by immersion in 95% ethanol for 30 s, then in 5% sodium hypochlorite for 5 min, and finally rinsed eight times with sterile distilled water. To confirm that the surface sterilization process was successful, the surface-sterilized roots were rolled on a potato dextrose agar (PDA) medium containing (in grams per liter) potato, 200; glucose, 20; and agar, 18. Aliquots of the sterile distilled water from the final rinse were plated onto PDA plates as controls to detect possible contaminants. Roots without growth on the control plates were considered effectively surface-sterilized. All samples were immediately put on ice and then stored at −80 °C until total DNA extraction.
The physicochemical characteristics of soil samples from the collection sites were analyzed for their chemical composition according to the procedure described by the USDA (1996) [17]. The altitudes and geographical coordinates of the sampling sites were determined (Table 1).
Genomic DNA Extraction and PCR Amplification
All five root samples from the same site were pooled as one sample and mixed thoroughly. Approximately 300 g of roots were used for each individual DNA extraction. Finally, six samples were generated for genomic DNA extraction. Genomic DNA was extracted with a DNA Quick Plant System kit (Tiangen, China) after maceration in liquid nitrogen, following the manufacturer's instructions. After extraction, DNA concentration and purity were checked using 1% agarose gel electrophoresis. According to its concentration, each DNA sample was diluted to a final concentration of 1 ng/µL with sterile distilled water and then used as the DNA template. PCR amplification of the 16S rDNA V3+V4 region was conducted. PCR experiments were performed with Phusion High-Fidelity PCR Master Mix with GC Buffer (New England Biolabs) to ensure amplification efficiency and accuracy, and the reactions were run in an Eppendorf Gradient Thermocycler (Brinkmann Instruments, Westbury, NY). Using diluted genomic DNA as the template, the 16S rDNA V3+V4 region was amplified with the specific primers 341F (5′-CCTAYGGGRBGCASCAG-3′) and 806R (5′-GCCAATGGACTACHVGGGTWTCTAAT-3′) with the barcode [18,19].
Library Construction and Sequencing
Following the above amplification, the PCR products were mixed with an equal volume of 1× loading buffer (containing SYBR Green), and the amplicons were detected using 2% agarose gel electrophoresis. All of the amplicons were then pooled in equimolar ratios into a single tube, and the target sequences were extracted using a Qiagen Gel Extraction Kit (Qiagen, Germany). The libraries were constructed using a TruSeq DNA PCR-Free Sample Preparation Kit (Illumina, USA) following the manufacturer's recommendations, and index codes were added. The library quality was assessed on a Qubit 2.0 Fluorometer (Thermo Scientific) and an Agilent Bioanalyzer 2100 system. Finally, the library was sequenced on the Illumina HiSeq 2500 platform, and 250 bp paired-end reads were generated.
Statistical Analysis
To perform an accurate taxonomic assignment for each sequence, quality control and length trimming for the raw reads were needed. The paired-end reads obtained by sequencing were divided into six groups according to their unique barcodes and truncated by cutting off the barcodes and primer sequences. The remaining reads of each sample were then assembled to generate raw tags [20]; quality filtering on the raw tags was performed to obtain high-quality clean tags [21,22]. Clean tags were compared with the reference database to detect and remove chimera sequences to generate tags [23,24]. Finally, the effective tags were obtained.
Uparse (Uparse v7.0.1001, http://drive5.com/uparse/) [25] was used to cluster all of the effective tags. Effective tags with ≥97% identity were clustered into the same operational taxonomic unit (OTU). The sequence with the highest frequency in each OTU was selected as its representative sequence. We removed OTUs with only one sequence from the dataset, since such singleton OTUs could result from sequencing errors. The representative sequence for each OTU was annotated using the GreenGenes database based on the RDP classifier, and multiple sequence alignment was performed with MUSCLE software [26][27][28].
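As a schematic illustration of identity-threshold clustering (not the actual UPARSE algorithm, which also performs abundance sorting and chimera filtering), a greedy centroid-based grouping at 97% identity might look like the sketch below; the pairwise_identity helper is a naive stand-in for a proper alignment-based identity computation.

```python
def pairwise_identity(a, b):
    """Naive identity between two sequences (illustrative only, no alignment)."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def greedy_cluster(seqs, threshold=0.97):
    """Greedy centroid clustering: assign each sequence to the first centroid
    it matches at >= threshold identity, else open a new cluster."""
    centroids, clusters = [], []
    # Real pipelines process sequences in descending abundance order;
    # here we simply take them in the given order.
    for s in seqs:
        for i, c in enumerate(centroids):
            if pairwise_identity(s, c) >= threshold:
                clusters[i].append(s)
                break
        else:
            centroids.append(s)
            clusters.append([s])
    return clusters
```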
Alpha and beta diversity analyses — including observed species, Chao1, the Shannon index, the Simpson index, the abundance-based coverage estimator (ACE), Good's coverage, rarefaction analysis, rank-abundance analysis, principal component analysis (PCA), principal coordinate analysis (PCoA), unweighted pair-group method with arithmetic mean (UPGMA) clustering, nonmetric multidimensional scaling (NMDS), and T-test analysis — were performed with QIIME and displayed with R software [22].
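For reference, the main alpha diversity statistics used in this study have simple closed forms; a minimal sketch computing them from a vector of per-OTU read counts follows (natural-log Shannon is shown, though some tools use log2; F1 and F2 denote singleton and doubleton counts for the bias-corrected Chao1).

```python
import numpy as np

def alpha_diversity(counts):
    """Shannon, Simpson, Chao1 and Good's coverage from per-OTU read counts."""
    counts = np.asarray([c for c in counts if c > 0], dtype=np.float64)
    n = counts.sum()
    p = counts / n
    shannon = float(-(p * np.log(p)).sum())            # H' = -sum p_i ln p_i
    simpson = float(1.0 - (p ** 2).sum())              # 1 - sum p_i^2
    s_obs = len(counts)                                # observed species
    f1 = int((counts == 1).sum())                      # singletons
    f2 = int((counts == 2).sum())                      # doubletons
    chao1 = s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))   # bias-corrected Chao1
    coverage = 1.0 - f1 / n                            # Good's coverage
    return {"shannon": shannon, "simpson": simpson,
            "chao1": chao1, "coverage": coverage}
```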
Sequencing Results
Illumina sequencing generated a total of 434,468 raw tags from the five samples, with per-sample read counts ranging from 81,329 to 92,592. After quality control, high-quality reads with an average length of 419 bp remained. After qualification and removal of chimeras from the raw tags, 313,044 effective tags were finally obtained by high-throughput sequencing. The Q20 values ranged from 98.24 to 98.41, indicating that the datasets were of high quality (Table 2). In order to study the species diversity of the samples, the effective tags were grouped into OTUs at 97% identity. As shown in Figure 1, after removing singletons, 957 valid OTUs remained, with an average of 62,129 annotated sequences per sample. The top 10 microbial populations of the five samples were enumerated, and the 10 largest phyla are shown in Figure 2. Proteobacteria dominated the observed sequences at the phylum level, representing 84.8%, 82.5%, 44.1%, 96.5%, and 39.1% of the total in SX, FJ, XJ, NY, and NS, respectively. In addition, Actinobacteria were highly abundant in NS (38.56%) and XJ (28.15%). Firmicutes were also abundant in the NS and XJ samples, accounting for 27.00% and 21.89%, respectively. This was followed by Cyanobacteria, which accounted for 16.29% in FJ.
Alpha Diversity Analysis
The trend of the rarefaction curves suggested that there was sufficient sampling of the microbial communities and indicated that each sample was different (Figure 6). Good's coverage estimator values ranged from 99.9% to 100% (Table 3), indicating that the sequence numbers per sample were high enough to capture the majority of the 16S rRNA gene sequences and thus reflect the bacterial diversity.
The alpha diversity parameters of each sample are displayed in Table 3. The observed species count was highest in the SX sample (295) and lowest in the NS sample (110). Moreover, the Shannon index of the SX sample was the highest (3.795), whereas that of the FJ sample was the lowest (2.165). SX also had the highest Chao1 (308.097), ACE (309.216), and PD_Whole Tree indices.
Beta Diversity Analysis
A heat map of the beta diversity index was constructed (Figure 7). The results revealed that the FJ samples shared the highest species correlation rates with the other sampling sites: 0.538 with NY, 0.585 with NS, 0.537 with XJ, and 0.449 with SX.
Meanwhile, principal coordinate analysis (PCoA), the unweighted pair-group method with arithmetic mean (UPGMA), and canonical correlation analysis (CCA) were performed to visualize and compare the relationships of the microbial communities among the different samples. The PCoA based on unweighted UniFrac distances demonstrated that the XJ and NS samples tended to cluster together according to PC1 (50.18%) and PC2 (32.14%), representing a strong separation among the different samples (Figure 8). For the diversity analysis, a UPGMA tree was constructed; the results showed that the samples from XJ and NS clustered together, and these two together with SX clustered separately from the other samples (FJ and NY). The results of the UPGMA clustering tree confirmed those of the PCoA. At the phylum level, FJ contained the lowest abundances of Actinobacteria and Firmicutes but the highest abundance of Proteobacteria (Figure 9).
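PCoA on a distance matrix is classical metric multidimensional scaling; a minimal sketch, assuming a precomputed symmetric pairwise distance matrix such as unweighted UniFrac, is shown below.

```python
import numpy as np

def pcoa(D, n_axes=2):
    """Principal coordinate analysis of a symmetric distance matrix D."""
    n = D.shape[0]
    # Double-centre the matrix of squared distances: B = -1/2 * J D^2 J.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # Eigendecomposition; keep the largest positive eigenvalues.
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    pos = vals > 0
    coords = vecs[:, pos] * np.sqrt(vals[pos])   # sample coordinates
    explained = vals[pos] / vals[pos].sum()      # variance explained per axis
    return coords[:, :n_axes], explained[:n_axes]
```

The `explained` values correspond to the percentages reported on the PC1 and PC2 axes of an ordination plot such as Figure 8.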
The canonical correlation analysis (CCA) indicated that the total nitrogen content was the major factor contributing to the differences between the endophytic bacterial communities and environmental factors. The first ordination axis was strongly correlated with the soil effective phosphorus and total carbon and nitrogen contents and explained 37.98% of the total variability. The second ordination axis (29.13% of the contribution rate) was mainly associated with pH. According to the CCA results, the soil total carbon content, total nitrogen, effective phosphorus, and pH were the major factors explaining the variation in the overall community structure in this study (Figure 10).
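As an illustration of relating community composition to environmental variables, the sketch below applies scikit-learn's canonical correlation analysis to a hypothetical OTU-abundance matrix X and a matrix Y of soil variables (total C, total N, effective P, pH). Note that ecological packages such as vegan implement the constrained-ordination variant of CCA used in studies like this one, which differs from classical canonical correlation; the data here are randomly generated placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
# Hypothetical data: 5 samples x 20 OTUs, and 5 samples x 4 soil variables.
X = rng.poisson(10, size=(5, 20)).astype(float)  # OTU abundances
Y = rng.normal(size=(5, 4))                      # total C, total N, effective P, pH

cca = CCA(n_components=2)
X_scores, Y_scores = cca.fit_transform(X, Y)     # paired ordination axes
# The correlation of each canonical axis pair indicates how strongly the
# community composition tracks the soil variables.
for i in range(2):
    r = np.corrcoef(X_scores[:, i], Y_scores[:, i])[0, 1]
    print(f"canonical axis {i + 1}: r = {r:.2f}")
```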
T-tests were used to reveal statistically significant different species (p < 0.05) in different samples at distinct taxonomy levels. As a result, significant differences were found in the bacterial community composition among all sampling locations, except between NY and XJ (p = 0.045). The effect of the sample origin was significant.
Discussion
The information gathered in this study provides a baseline of information on the composition of endophytic microbial communities in P. sinese roots in five samples. In addition, this information could provide a starting point for future investigations directed toward developing a better understanding of the role of each member within these microbial communities and optimizing plant growth promotion for endophytic microbial communities with the aim of improving production and quality.
In the current study, it was suggested that endophytic fungi provide essential nutrients for their hosts' growth and defend hosts from biotic and abiotic stresses. In return, the host plant alters the composition of the microbial community to a large extent [4][5][6][7][8][9]. Although several investigations have already revealed many important aspects of P. sinese endophytic bacteria [10][11][12], little information exists about indispensable functions in P. sinese.
In our study, we surveyed the endophytic bacterial composition and diversity in P. sinese based on a high-throughput sequencing method, which can provide a large amount of data with high accuracy and at a low cost. Many endophytic bacteria were found to exist in the roots of P. sinese. A total of 313,044 effective sequences and 957 OTUs were obtained from the five samples. The geographic conditions had a certain impact on the endophytic bacterial diversity of P. sinese among the different sampled sites.
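For reference, the two alpha-diversity indices used later in this section (Shannon and Chao1) reduce to short formulas; the sketch below computes both from a toy OTU count vector. The counts are invented, and a real analysis would use the rarefied OTU table.

```python
# Sketch of the alpha-diversity indices referred to in this study.
import numpy as np

counts = np.array([120, 80, 40, 10, 3, 1, 1, 1])  # reads per OTU (toy data)

p = counts / counts.sum()
shannon = -np.sum(p * np.log(p))                # H' = -sum(p_i * ln p_i)

s_obs = (counts > 0).sum()
f1 = (counts == 1).sum()                        # singletons
f2 = (counts == 2).sum()                        # doubletons
chao1 = s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))  # bias-corrected Chao1

print(f"Shannon H' = {shannon:.3f}, Chao1 = {chao1:.1f}")
```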
P. sinese, used as a feedstock in graziery and agroforestry for biomass production, reforestation, or site restoration, was introduced from FJ into the other sites in recent years. We found that a given plant genotype apparently selects a particular microbiome and that the structure of the endophytic bacteria is correlated with the host plant. For example, Proteobacteria was the dominant phylum in all samples, followed by Actinobacteria, Firmicutes, and Cyanobacteria. This result agrees with previous studies reporting that these phyla are present in many environments [29,30]. Moreover, these results agree with those obtained by Lin et al. (2018) [31], who detected that Proteobacteria was the main phylum. Gammaproteobacteria and Alphaproteobacteria were the two main classes within Proteobacteria. Similar findings were reported in [30,32], where these classes were also found in other plant species. Cyanobacteria are ubiquitous microorganisms and constitute a high portion of soil microbes [33]. The genera Pantoea, Pseudomonas, Burkholderia, Arthrobacter, Psychrobacter, and Neokomagataea were significantly dominant in the five samples. Furthermore, in agreement with a previous study [34], these P. sinese root-associated microbes are beneficial to the plants.
Additionally, for the endophytic bacterial diversity analysis in P. sinese, Lin et al. [31] screened and analyzed the dynamic endophytic bacteria in roots, stems, and leaves at different growth stages of P. sinese. Their results revealed various diversities of endophytic bacteria in P. sinese and showed that Ralstonia and Lactococcus were dominant at the genus level. However, in our study, Pantoea, Pseudomonas, Burkholderia, Arthrobacter, and Neokomagataea were the most dominant genera. This difference may result from different methods having different detection capabilities for endophytic bacteria. The results of our study indicate that these top genera may play pivotal roles in maintaining and shaping the structures and functions of bacterial communities in P. sinese (data not shown).
On the other hand, evidence was found that strains restricted to an ecological niche generally retain genetic characteristics and cluster according to their geographical origins [35]. Study of this phenomenon could give important information about the abundance of bacterial species on the planet and their ecological roles. In our present study, the endophytic bacterial structure differed among the five samples. Interestingly, the samples from SX contained the most unique OTUs. The genera Pantoea and Pseudomonas were mainly present in the samples from SX, Burkholderia in FJ, Pseudomonas and Arthrobacter in XJ, Arthrobacter and Psychrobacter in NS, and Neokomagataea and Acinetobacter in NY. Thus, different ecological types of endophytic bacteria exist in distinct geographic regions, and this should be an important consideration for the selection of endophytic bacteria for inoculation. Although NY accounted for a lower proportion of the Shannon index, it contained 20.58% of the OTUs (more than the other samples) due to its different environmental factors. Additionally, compared to the other sampled sites, we found that all of the alpha indices of the SX sample were higher, suggesting that the SX sample possessed a higher bacterial richness and diversity; SX contains over 17.7 times more unique OTUs than NS. In addition, the Venn diagrams of the five samples supported this conclusion. The PCoA, UPGMA, and CCA analyses clearly demonstrated that the bacterial diversity differed among the five samples and that particular environmental factors affected the variation among the microbial communities.
In general, our results demonstrated clear differences in the relative abundances of certain species among the five samples examined, suggesting that some endophyte species may preferentially proliferate in a certain eco-region and play ecological roles that are distinct from those of other endophytes.
Previously, most researchers [36][37][38][39] demonstrated that both abiotic conditions (temperature, soil pH, rainfall, etc.) and biotic conditions (genotypes of host plants and their distribution) might affect the diversity and composition of endophytic bacterial species. Consistent with these reports, our results showed that the endophytic bacterial communities can be grouped into two larger ecological regions (Fujian province and the northwest region) according to their geographic origins (Figure 9). The high correlation between the geographic regions and the bacterial genotypes may be attributed to the different environmental factors and soil characteristics of the sampling sites. The physical and chemical properties of the soil, such as soil texture, mineral composition, and organic matter, affect the community structure of endophytic bacteria in plants [40]. Our results are similar to those reported in that study. We found that the majority of genotypes were associated with a certain geographic location (Figure 4); this might be explained by the fact that these sites had diverse soil characteristics. The available phosphorus content in the soil of the SX samples was significantly higher than that in the other four soil samples. The CCA analysis showed that the endophytic bacterial communities of the SX samples were mainly affected by the content of available phosphorus in the soil. This indicates that the available phosphorus content of the soil had a certain effect on the structure of the endophytic bacterial community in the roots of P. sinese.
Additionally, previous studies have demonstrated that the physical and chemical properties of soil change with a decrease in soil fertility and that this is associated with a decrease in microbial community abundance and diversity [40,41]. In contrast to the other sites, the lowest abundance of endophytes was detected in the NY sample, as indicated by its minimal Shannon index. Similarly, the Chao1 index was the lowest in the NS sample, which may be related to the soil factors in the growing area. Moreover, our results showed that the total nitrogen, available phosphorus, and total carbon contents in the sandy land and saline-alkali land of NY are markedly lower than in the other three samples; this unique property is correlated with the genotypes, as NY formed a single distinct group in the cluster (Figure 9). These results are consistent with earlier reports and suggest that soil characteristics might be more important than climate factors in determining bacterial diversity and community composition.
Overall, our results imply that the bacterial community might be determined by the geographic origin. Furthermore, these results also support the hypothesis that soil properties and climate factors may drive the structure and composition of bacterial populations. The living environment was the major factor contributing to the differences; the relationships between endophytic bacterial diversity and other environmental factors, such as salinity-alkalinity and light, need further study.
In conclusion, the alpha and beta diversity analyses showed that many endophytic bacteria were present in the roots of P. sinese from the five samples. The endophytic bacterial structure and composition differed among the samples, and the geographic conditions and climate factors had certain impacts on the endophytic bacterial diversity and abundance of P. sinese from the different samples. Further studies are required to characterize the roles of these endophytic bacteria. This study might be useful for improving the growth, production, and quality of P. sinese.
Conclusions
It is essential to investigate the endophytic bacterial diversity in the roots of P. sinese. In this study, the composition, diversity, and differences of the endophytic bacteria in roots associated with P. sinese in different growth eco-regions were analyzed using high-throughput sequencing technology. Similar to many studies, we found that Proteobacteria was the most abundant phylum in all samples; Gammaproteobacteria and Alphaproteobacteria were dominant at the class level in the five samples, indicating that there is host selection of host-specific endophytes in distinct eco-regions. However, our investigation also revealed that the composition and diversity of the endophytic bacterial communities were distinctly different among these samples. The different soil characteristics can provide an important contribution to the understanding of their effects on the bacterial communities associated with P. sinese. Our results demonstrated that P. sinese modulates the bacterial microbiota composition by recruiting specific endophytic bacteria, which may help to improve its protection and growth, and further provides new opportunities for exploring their | 6,027 | 2019-02-01T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
MAPK/ERK overrides the apoptotic signaling from Fas, TNF, and TRAIL receptors.
The tumor necrosis factor (TNF), Fas, and TNF-related apoptosis-inducing ligand (TRAIL) receptors (R) are highly specific physiological mediators of apoptotic signaling. We observed earlier that a number of FasR-insensitive cell lines could redirect the proapoptotic signal to an anti-apoptotic ERK1/2 signal resulting in inhibition of caspase activation. Here we determine that similar mechanisms are operational in regulating the apoptotic signaling of other death receptors. Activation of the FasR, TNF-R1, and TRAIL-R, respectively, rapidly induced subsequent ERK1/2 activation, an event independent from caspase activity. Whereas inhibition of the death receptor-mediated ERK1/2 activation was sufficient to sensitize the cells to apoptotic signaling from FasR and TRAIL-R, cells were still protected from apoptotic TNF-R1 signaling. The latter seemed to be due to the strong activation of the anti-apoptotic factor NF-kappaB, which remained inactive in FasR or TRAIL-R signaling. However, when the cells were sensitized with cycloheximide, which is sufficient to sensitize the cells also to apoptosis by TNF-R1 stimulation, we noticed that adenovirus-mediated expression of constitutively active MKK1 could rescue the cells from apoptosis induced by the respective receptors by preventing caspase-8 activation. Taken together, our results show that ERK1/2 has a dominant protecting effect over apoptotic signaling from the death receptors. This protection, which is independent of newly synthesized proteins, acts in all cases by suppressing activation of the caspase effector machinery.
sponse to external and internal signals. Proapoptotic signals may be transduced through a subset of cytokine receptors of the tumor necrosis factor (TNF) family, termed death receptors (DRs). Members of the TNF receptor family are characterized by similar extracellular domains containing cysteine-rich repeats (2). The DRs also share a common intracellular domain, the death domain (DD), which confers to them the ability to induce apoptosis. By way of DD interaction, proteins of the death-inducing signaling complex (DISC) will be recruited to the receptor, and the apoptotic machinery will be activated. In parallel to this, other adaptor molecules may bind to the complex and modulate the response, some of them inducing survival. The number of known DRs has been growing since the first one was discovered, and it seems that additional receptors are yet to be discovered (for reviews, see Refs. [3][4][5]). The Fas receptor (FasR) or CD95/APO-1, TNF receptor 1 (TNF-R1) and TRAIL receptors 1 and 2 (TRAIL-R1 or DR4, TRAIL-R2 or DR5) are members of this family of proteins (6-8). Although the four receptors share some common features in their structures, they also have specific characteristics. The FasR binds the adaptor protein FADD (9), which in turn recruits and activates procaspase-8 (7,10). TNF-R1, however, does not bind FADD directly, but TRADD has to be engaged before FADD (11) and procaspase-8 (12) can be recruited to the receptor. Defining the components of the TRAIL-R DISC is still a controversial matter. Both caspase-8 and caspase-10 have been implicated as crucial mediators of TRAIL-induced apoptotic signaling, and both caspases have been identified as part of the TRAIL DISC (13)(14)(15). It has been proposed that TRAIL would induce apoptosis through FADD-dependent and -independent pathways (16,17). However, recent studies suggest that both TRAIL-R1 and TRAIL-R2 recruit FADD and caspase-8, although TRAIL-R1 could still induce FADD-independent apoptosis in some situations (14,15,18). In all cases, recruitment and activation of caspase-8 leads to induction of effector caspases and ultimately to apoptosis (19).
Besides the FADD/caspase-8 signaling cascade, a number of other signaling pathways are activated by the DRs, most likely involving adaptor/regulator proteins specific to each receptor.
The TNF-R especially has been implicated in several important signaling functions apart from apoptotic signaling. TRAF2 and RIP, the latter of which was first identified as a component of the Fas DISC, have been shown to bind to TRADD and are thus recruited to the TNF-R, both of them contributing to JNK and NF-κB activation (11,20). Likewise, new signaling functions are emerging for the FasR. Among the molecules involved in Fas-mediated signaling, RIP (21), Daxx (22,23), and FAP-1 (24,25) have been shown to bind to the FasR, modulating its signal. Some inhibitor proteins act by mimicking one or another protein of the apoptotic cascade, diverting it into inactivation. Among them, FLIP, a caspase-8-like protein lacking proteolytic activity, has been shown to block caspase-8 activation (26,27). Members of the mitogen-activated protein kinase (MAPK) family have also been shown to be involved in the signaling downstream of the Fas or TNF receptors (28-31). Particularly, the MAPK ERK1/2, which is known to induce cell growth and differentiation, has been shown to promote survival in a number of situations (32,33). It has also been suggested that TRAIL-R could, with the intermediary of FADD, recruit TRADD and activate NF-κB in this way (17). However, TRADD and RIP were reported absent from the TRAIL-induced DISC in vivo (14), although they could exist in different cell lines than the ones investigated.
We have previously shown that ERK1/2 activation is able to suppress Fas-induced apoptosis in activated T-cells (34,35). Therefore, we wanted to examine whether the same protective effect could be seen in cancer cells. As members of the TNF receptor family are involved in removal of tumor cells by the immune system, development of resistance to such killing would impair the defense mechanism. In this context, we recently demonstrated that ERK1/2 has a role in rendering cells insensitive to FasR killing, as inhibition of ERK1/2 activation sensitizes HeLa cells to FasR-mediated apoptosis (36). Since HeLa cells also express TNF and TRAIL receptors (37), we wanted to investigate in this cell model whether ERK1/2 could protect against apoptosis induced through these other DRs in the same way as in Fas signaling, especially because the apoptotic signaling pathways induced by the three cytokines present common features in terms of their DISC composition. In this study we show that inhibition of ERK1/2 is able to enhance the sensitivity of HeLa cells to TRAIL, as well as to Fas, whereas TNF-treated cells remain insensitive. The insensitivity in the case of the TNF-R seems to be caused by the additional protection provided by NF-κB activation. However, when protein synthesis was blocked and cells were sensitized to Fas, TNF, and TRAIL, activation of ERK1/2 was able to protect the cells in all the cases. Furthermore, all three receptors were able to activate ERK1/2. The protection seems to occur at the level of or upstream of caspase-8. Thus, ERK1/2 activation has a dominant-negative effect on the apoptotic signaling of all receptors, although differences could be detected in the degree of ERK1/2 activation as well as the anti-apoptotic effect on the different DRs.
Adenovirus-mediated Gene Transfer-Recombinant replication-deficient adenovirus RAdlacZ (RAd35) (38), which contains the Escherichia coli β-galactosidase (lacZ) gene under the control of the cytomegalovirus (CMV) IE promoter, was kindly provided by Gavin W. G. Wilkinson (University of Cardiff, Wales). Construction and characterization of the replication-deficient adenovirus RAdMEK1ca containing the coding region of constitutively active MKK1 with the HA tag (RAd-CA-MKK1-HA) driven by the CMV IE promoter (a kind gift from Marco Foschi, University of Florence, Italy) has been described previously (39). In experiments, adenovirus RAdLacZ was used to determine the multiplicity of infection (MOI) required to infect 100% of HeLa cells. RAdLacZ virus was serially diluted at different MOI in culture medium, and HeLa cells were incubated for 16 h with different concentrations as described previously (40). Cells were then washed with PBS, fixed with glutaraldehyde and stained for LacZ expression with X-gal. For experiments with RAd-CA-MKK1-HA, we chose 300 plaque-forming units per cell, which gives 100% transduction efficiency in this cell line. Cells were incubated for 16 h with RAd-CA-MKK1-HA, then washed with PBS and further incubated in medium with the treatments described.
DNA Fragmentation Analysis-For microscopy, cells grown on coverslips were fixed with 3% paraformaldehyde in PBS for 30 min, permeabilized with 0.1% Triton X-100 in PBS for 5 min, and blocked with 3% bovine serum albumin in PBS overnight at 4°C. Staining of MKK1-HA-positive cells and nuclei was performed by 90-min incubation with anti-HA, three PBS washes, and further incubation for 60 min with fluorescein isothiocyanate-conjugated anti-mouse secondary antibody (Zymed Laboratories Inc., San Francisco, CA) and Hoechst 33342 (Molecular Probes, Eugene, OR). After three washes, cells were mounted in Mowiol (Sigma) and visualized with a fluorescence microscope (Leica, Wetzlar, Germany). Flow cytometry of propidium iodide (PI; Molecular Probes)-stained nuclei was performed as described (41,42). Briefly, cells were harvested in PBS, 5 mM EDTA, washed with PBS, and the cell pellet (1 × 10⁶ cells) was gently resuspended in 1 ml of hypotonic fluorochrome solution (50 µg/ml PI in 0.1% sodium citrate plus 0.1% Triton X-100). The cells were incubated at 4°C overnight before analysis on a FACScan flow cytometer (Becton Dickinson). The subdiploid peak was considered to represent apoptotic cells, as previously described (42).
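As a rough illustration of the sub-G1 quantification just described, the following sketch gates simulated PI intensities below an assumed G0/G1 threshold; the distribution parameters and the cutoff are invented for demonstration and are not settings from this study.

```python
# Sketch of sub-G1 (subdiploid) gating: events whose PI fluorescence
# falls below the G0/G1 peak are counted as apoptotic.
import numpy as np

rng = np.random.default_rng(0)
# Simulated PI intensities: a G0/G1 peak at ~200 a.u. plus a smaller
# subdiploid population from fragmented nuclei.
pi = np.concatenate([rng.normal(200, 15, 9000), rng.normal(90, 25, 1000)])

sub_g1_gate = 140  # assumed cutoff below the G0/G1 peak
apoptotic_fraction = (pi < sub_g1_gate).mean()
print(f"sub-G1 events: {apoptotic_fraction:.1%}")
```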
Electrophoretic Mobility Shift Assay-Cells were harvested in cold PBS/EDTA, lysed by freeze-thaw in buffer C (25% glycerol, 0.42 M NaCl, 1.5 mM MgCl2, 0.2 mM EDTA, 20 mM HEPES) containing phenylmethylsulfonyl fluoride and dithiothreitol (0.5 mM each), and the supernatant was recovered by centrifugation at 4°C. Whole cell extract (15 µg) was incubated with a ³²P-labeled oligonucleotide reproducing the consensus NF-κB binding sequence. The protein-DNA complexes were then resolved by 4% native polyacrylamide gel electrophoresis.
Inhibition of MAPK kinase 1 (MKK1) Sensitizes HeLa Cells to Fas and TRAIL-mediated Apoptosis but Not TNF-We compared the kinetics of apoptosis induction by the Fas, TNF, or TRAIL receptor after sensitization with either the protein synthesis inhibitor CHX or the MKK1 inhibitor PD98059. In the absence of any specific treatment, HeLa cells were resistant to FasR and TNF-R1 stimulation, but showed some sensitivity to TRAIL-R-mediated apoptosis (Fig. 1), although to a lesser degree than many other tested TRAIL-responsive cell lines (data not shown). As previously shown, HeLa cells were markedly sensitized to Fas-induced apoptosis by cotreatment with PD98059. Cotreatment with PD98059 rendered HeLa cells even more sensitive to TRAIL-R-mediated apoptosis, showing that ERK1/2 signaling has a role in the protection against both TRAIL-R- and FasR-induced apoptosis. In contrast, cells remained resistant to TNFα-induced apoptosis after PD98059-mediated inhibition of ERK1/2 activation, revealing that the TNF-R has additional survival mechanisms in action. To confirm the presence of functional DRs, we treated the cells with CHX, as many cells resistant to DR stimulation have been found to be sensitized by treatment with CHX. Cotreatment with CHX was efficient in sensitizing the cells to FasR-, TNF-R1-, and TRAIL-R-mediated cell killing, respectively, demonstrating that the receptors were present and functional, and that receptor stimulation was able to promote cell death (Fig. 1).
Adenovirus-mediated Expression of Constitutively Active MKK1 Rescues HeLa Cells from Fas, TRAIL, and TNF-induced Apoptosis-Whereas ERK1/2 inhibition was not sufficient to sensitize the cells to stimulation by TNFα, there is still the possibility that ERK1/2 could act as a secondary survival pathway in TNF-R1 signaling. Because HeLa cells can be sensitized to Fas-, TNF-, and TRAIL-mediated apoptosis by treatment with CHX, we asked whether the survival mechanism activated by ERK1/2 would be able to override the CHX-induced sensitization. For this purpose, we used the RAd-CA-MKK1-HA adenovirus for expression of hemagglutinin (HA)-tagged constitutively active (CA) MKK1, the ERK1/2 activator, in HeLa cells. The amount of virus particles necessary for infection of 90-100% of the cells was determined by using an adenovirus construct containing the LacZ gene, followed by X-gal in situ staining. Surprisingly, we observed that expression of CA-MKK1-HA was not as efficient under the same conditions of infection, because only 25-35% of the cells expressed CA-MKK1-HA (Table I). However, a higher penetrance of CA-MKK1-HA-positive cells could be obtained by longer incubation after infection (Table I). As the aim was to study direct signaling effects without involvement of ERK-mediated transcriptional activation, a short time period after expression of the initiating signaling protein was used. Despite the relatively low penetrance of CA-MKK1-HA expression at these early time points, the percentage of positive cells in the sample was sufficient to detect the effects of CA-MKK1-HA by microscopy and Western blot analysis (Figs. 2 and 5). Cells were sensitized to apoptosis by CHX, subjected to Fas, TNF, and TRAIL treatments, respectively, and analyzed by fluorescence microscopy. This method enabled us to segregate the cells that actually expressed CA-MKK1-HA from the negative cells. Representative micrographs clearly show that the apoptotic cells did not express MKK1 (Fig. 2A), whereas the HA-positive cells were not apoptotic. Quantitative data obtained by manual counting of apoptotic cells, as determined by Hoechst staining, confirmed that only a minute fraction of apoptotic cells (<2%) was found to be HA-positive (Table I). As a comparison to this assay, we counted surviving cells remaining on equal areas of each coverslip (Fig. 2B). Because TNFα alone did not induce apoptosis in HeLa cells, we used TNF-treated CA-MKK1-HA-expressing cells as a positive control to calculate the percentage of survival after the respective treatments (Fig. 2B). The results show that expression of CA-MKK1-HA rescued the cells from CHX-induced sensitization. Taken together, because the protection occurred shortly after infection, when the expressed kinase did not have time to induce newly synthesized proteins, and also in the presence of CHX, these data show that ERK1/2-mediated protection is independent of protein synthesis. Furthermore, the effect of CA-MKK1-HA expression not only counteracted the CHX sensitization, but also protected against the normally occurring apoptosis induced by TRAIL alone (Fig. 2B). Interestingly, activation of ERK1/2 was also able to rescue the cells from TNF-induced apoptosis in CHX-sensitized cells. The survival data also provided a control showing that the adenovirus by itself did not affect our results, because the RAdLacZ-transfected cells reacted in a similar way as the nontransfected population (Fig. 2B; data not shown).
The ERK1/2 Pathway Is Independent from the NF-κB Antiapoptotic Pathway-Triggering of the TNF-R1 is generally known to activate the NF-κB signaling cascade, a major anti-apoptotic pathway of the TNF-R1. TRAIL-R1, TRAIL-R2, and FasR have also been suggested to activate NF-κB under specific conditions and treatments (44-46). The activation steps involve induction of a kinase cascade comprising NIK and IKK, resulting in phosphorylation of the NF-κB inhibitor IκB and release of the transcription factor (47). Some reports suggest that NIK could in some situations be replaced by MEKK1 (48,49). We wanted to know whether the difference in sensitivity toward stimulation of the respective receptors was because of activation of NF-κB. As expected from earlier studies (6,20), TNF induced a marked binding activity of the NF-κB transcription factor with rapid kinetics (Fig. 3). However, neither Fas nor TRAIL activated NF-κB, thereby providing an explanation of why inhibition of ERK1/2 activation did not sensitize HeLa cells to TNF-induced apoptosis. Furthermore, the results suggest that the ERK1/2 survival mechanism would be independent from NF-κB activation. All Death Receptors Can Activate ERK1/2-Because the resistance mechanism rendering a tumor cell line insensitive to DR-induced apoptosis could be caused by a high basal level of ERK1/2 in those cells, we studied the activation state of ERK1/2 in HeLa cells after receptor triggering. We treated the cells with anti-Fas, TNFα, or TRAIL for different time periods. The activation of ERK1/2 was determined by Western blot using a phospho-ERK1/2 antibody, which specifically recognizes the activated form of the kinase (Fig. 4A). Stimulation of FasR, TNF-R1, as well as TRAIL-R rapidly induced ERK1/2 phosphorylation. The activation of ERK1/2 appeared 5 min after treatment and was down-regulated after 1 h. The overall results show that the survival mechanism is not constitutive to the cell but rather activated by the apoptotic stimulus itself. Because ERK1/2 activation could be mediated by caspases in the same way as indicated for the stress-activated protein kinases (SAPK; Refs. 37, 50), we tested the effect of the general caspase inhibitor Z-VAD-fmk on Fas-induced ERK1/2 phosphorylation. Although Z-VAD-fmk was an efficient inhibitor of FasR-induced cleavage of caspase-8 (Fig. 4B), it clearly did not affect FasR-mediated activation of ERK (Fig. 4A).
Activation of ERK1/2 Blocks the Apoptotic Cascade Above the Level of Caspase-8-Cleavage and activation of caspase-8 is an early step in the apoptotic process triggered by the FasR, TNF-R1, and TRAIL-R, occurring at the level of DISC recruitment (10,12,15). The full-length procaspase is first cleaved once, releasing an intermediate 43-kDa fragment, which is further processed into the active 18-kDa fragment (51). To find out at what level activation of the ERK1/2 pathway would stop the cell death cascade, we infected HeLa cells with RAd-CA-MKK1-HA and observed whether procaspase-8 would still be cleaved into the active form. In nontransfected cells, the amount of cleaved caspase in the absence of sensitization is consistent with the amount of apoptotic cells measured earlier under the same conditions (Figs. 5 and 2A), suggesting that in resistant cells the apoptotic signal does not reach the activation of caspase-8 but is stopped at an earlier stage. Sensitization of the cells to DR-induced apoptosis by CHX induced a massive appearance of p18 and disappearance of the full-length procaspase-8. However, when CA-MKK1 is expressed, the amount of active fragment diminishes, and more procaspase-8 can be detected (Fig. 5).
FIG. 3. Only TNF strongly activates NF-κB. HeLa cells were treated for 15 min, 1 h, 3 h, and 6 h with anti-Fas, TNFα, or TRAIL-L (100 ng/ml each). The NF-κB activity was measured by electrophoretic mobility shift assay with a specific oligonucleotide probe. A representative autoradiograph is shown. As reported previously, TNF rapidly activates NF-κB, whereas neither Fas nor TRAIL has any effect on the transcription factor.
DISCUSSION
The ERK1/2 pathway is of major importance in controlling cellular differentiation and growth (52)(53)(54), and it has also been shown to act as an important modulator of various apoptosis-inducing signals in different systems (32,55,56). Our interest was to study whether this signaling cascade is involved in the signaling from other DRs, in addition to its established role as a regulator of FasR signaling. Although we and others have described the ERK1/2 pathway as a mechanism preventing cell death induced by the FasR (34-36, 57, 58), the involvement of ERK1/2 in protection against TNF or TRAIL receptor-induced cell death has not been established so far, except for a study suggesting that ERK1/2 is involved in FGF-2-mediated protection against TNFα-induced apoptosis (33). We now provide evidence that ERK1/2 controls the responses from the other DRs too. ERK1/2 is likely to represent a mode of apoptosis regulation that would be important especially during dynamic situations, when cells have to rapidly switch off the apoptotic signaling machinery. This inhibitory mechanism would then act in concert with inhibitor proteins, such as FLIP. The involvement of several regulatory pathways provides a multifaceted control system to direct the signals from these receptors.
Differences between Death Receptors in the Response to Receptor Activation and Inhibition of Mitogenic Signaling-HeLa cells have been shown to express the FasR, the TNF-R1, the TRAIL-R1, and the TRAIL-R2, each of them containing a death domain and able to induce apoptosis (45). However, there are some fundamental differences between the signaling responses elicited by activation of the respective receptors. As we show in this article, triggering of the receptors induces different responses in HeLa cells. Whereas HeLa cells were completely resistant to both Fas and TNF, they showed some sensitivity toward TRAIL-mediated apoptosis, although the degree of survival was still higher than in other tested TRAIL-sensitive cell lines (data not shown). In addition, ERK1/2 inhibition could sensitize HeLa cells to Fas and TRAIL killing, whereas it did not affect the resistance of the cells to TNF-mediated apoptosis. These results suggest differences in the modulation of DR responses. Interestingly, the responses we observed correlate with the known physiological functions of each receptor. The primary function of the FasR has long been characterized as induction of apoptosis in different situations that include the immune response and regulation of the immune system. It has been suggested that tumor cells have developed several strategies to escape immune surveillance by turning down this apoptotic pathway (59,60), one strategy clearly being ERK-dependent protection. The TNF-R1, however, seems to be mainly involved in inflammatory responses, through activation of the NF-κB transcription factor (61), which diverts its signaling from the death pathway in most cell lines (6). Recent studies suggest that the TRAIL-Rs, the most novel of the four DRs, are especially important in removing virus-infected cells, as well as tumor cells (62)(63)(64), which would explain the higher sensitivity of HeLa cells toward this receptor. The existence of two DRs both activated by TRAIL renders the interpretation more delicate, as the resulting effect observed is most likely a combination of different responses from the respective TRAIL-Rs, rather than one single and redundant effect from both receptors or an effect from one receptor alone.
Several Survival Pathways Can Modulate/Reinforce the Sensitivity of the Cells at a Given Time-TNF-R1 has been known more generally to direct its signaling toward transcription rather than apoptosis, as mentioned above, mainly by activating the transcription factor NF-κB, whereas both the FasR and the TRAIL receptors are considered to be primarily apoptosis-directed, although they have also been suggested to have the capacity to activate NF-κB (17,65,66). It has been suggested that Fas- and TRAIL-mediated FADD-dependent activation of NF-κB in HeLa cells could be triggered by CHX, combined with caspase inhibition to prevent apoptosis, by way of an unknown CHX-sensitive factor. This experimental setup is, however, far from the physiological situation (45,46). Our system reflects the more common responses of these receptors, as only TNFα was able to induce activation of the transcription factor NF-κB. This would explain why inhibition of ERK1/2 activation was not sufficient to induce TNF-mediated apoptosis, whereas it did sensitize cells to Fas and TRAIL killing. However, TNF-induced apoptosis could be triggered by CHX cotreatment, and expression of CA-MKK1-HA was able to protect the cells in the same way as it could protect against Fas- and TRAIL-mediated apoptosis. This is interesting, because it shows that ERK1/2 has a generally protective effect on DR-induced cell death, which could be used under specific conditions, when DR responses have to be rapidly modulated. It is interesting to speculate that, in case of failure of the main anti-apoptotic pathway, when NF-κB is not functional, the ERK1/2 pathway could take over the protection system. Additional protein synthesis-dependent anti-apoptotic factor(s) seem to be involved in DR signaling, considering the fact that CHX was able to sensitize the cells to Fas and TRAIL. FLIP has been suggested as a candidate for such a factor (46). The protecting effect of ERK1/2, however, is independent of protein synthesis, as cells can be rescued by CA-MKK1 even in the presence of CHX.
Mechanism and Function of ERK1/2-mediated Survival-We have shown earlier that the ERK1/2 activation which protects cells from the apoptotic signal generated by the FasR originated from activation of the receptor itself (36). We show here that the same applies to TNF-R1 and TRAIL-R, i.e., each receptor rapidly activates ERK1/2 upon stimulation. The fact that Fas-induced ERK1/2 activation was more rapid and transient in the current study than in the previous one is likely to be explained by the difference in the Fas antibody used in the experiment.
The mechanism of DR-mediated ERK1/2 activation is not yet understood. A recent study suggests that FLIP could mediate rapid activation of ERK1/2 by recruitment of Raf-1 to the Fas DISC (67). Additional candidates for a role in ERK1/2 activation are the 14-3-3 proteins. It has been reported that 14-3-3 would be necessary for serum-stimulated ERK1/2 activation, and that dominant-negative 14-3-3 would sensitize NIH3T3 cells to TNF-R1-mediated apoptosis (68). Moreover, 14-3-3 has been implicated in Raf-1-mediated mitotic response (69). Concerning the downstream targets of ERK1/2 activation in this context, our data suggest that the apoptotic cascade is stopped either at an early stage before activation of the initiating factor caspase-8, or at the level of caspase-8. In a previous study (35), we showed that although the ERK1/2-mediated protection is active, the FasR-induced DISC can be assembled without overall activation of caspase-8. Therefore, the apoptotic signal is inhibited either at the level of caspase-8 or at some upstream step of the feedback amplification loop, such as that involving cleavage of Bid and activation of cytochrome c release. The exact mechanism remains, however, to be investigated.
Intervention by ERK1/2 in the proapoptotic signaling cascades mediated by each of the three DRs adds to the similarities observed between the members of this family of proteins. It is interesting to note that the strong resemblance between the complexes assembled around the receptors does not prevent differences in their final functions. The complexity of the signals involved at the level of, and downstream of, the respective DISCs allows precise modulation of the outcome and differences in the functions, with the possibility of redundancy. Although many interacting pathways related to DR stimulation have been deciphered, much remains to be resolved, both in normal cells and in cancer cells, in which disruption of some apoptotic signaling pathways has reduced the susceptibility to being killed by the immune system. Understanding the mechanism of ERK1/2-mediated protection both in normal and in pathological situations could lead to a better understanding of pathological conditions related to defects in the apoptotic machinery.
"Biology"
] |
Utilization of ferronickel slag in hot mix asphalt
The growing concern in minimizing the disorderly disposal of waste in nature has been influencing measures that seek to give new environmentally sustainable and economically viable purposes for these materials. The use of steel slag aggregate in road paving emerges as an alternative for reducing the storage of this material in industrial yards, as well as contributing to a significant reduction in the cost of building a flexible road pavement. This study aimed to verify the technical feasibility of using ferronickel slag as an aggregate in the composition of hot mix asphalt. To this end, physical, chemical, mineralogical and environmental characterization tests of ferronickel slag were performed. The asphalt mixtures were dosed in accordance with the Marshall methodology, using the DNIT Range C with the use of ferronickel slag in the granulometric portions corresponding to the coarse aggregate, fine and filler, and the petroleum asphalt cement (PAC) 50/70. Based on the results, it can be stated that ferronickel slag demonstrates technical feasibility to be used as an aggregate in the composition of hot mix asphalt, meeting the requirements established by Brazilian standardization. In addition, it is an excellent environmental alternative because it uses a material previously treated as an environmental liability, avoiding the exploration of new natural deposits of stone aggregates.
Introduction
Brazil is a country of continental dimensions whose main transportation mode is the road system, which is responsible for more than 61% of the cargo transportation matrix and 95% of passenger transportation. Transportation infrastructure plays a fundamental role in a nation's economic development, influencing its productivity and foreign trade, since import and export activities are directly linked to the state of the country's transportation infrastructure.
The Brazilian road network comprises 1,720,700.30 km of highways, of which only 213,452.80 km are paved, representing 12.4% of the total extension. The expansion of the paved highway network also does not keep pace with the growth of the vehicle fleet (CNT, 2019). The engineering techniques currently used in road projects, as well as the methods applied in the sizing of flexible pavements, follow the so-called "general prescriptions" determined by specific standards. These standards favor the use of granular and stone materials in the asphalt coating layer. Such materials include crushed granite and basalt, which have a high cost and cause associated environmental impacts at all stages of production. In this sense, the high demand for natural aggregates in pavement projects and the high cost associated with this product have caused researchers from all over the world to look for alternative materials with physical, mechanical and environmental characteristics equal or superior to those of conventional aggregates.
In recent decades, interest in the application of industrial waste, steel slag aggregates in particular, in other areas has grown widely. This can be attributed to both economic and environmental issues. It should also be noted that the additional revenues and the reduction of storage costs, in addition to the possibility of adding value through reuse, are reasons that strengthen the case for reuse (Fernandes, 2016).
In this context, there is a need to continue the search for new materials that, in addition to mitigating the damage caused to the environment, offer a significant reduction in the cost of a paving project. In this regard, the world's nickel reserves are estimated at 78,000,000 tons, with a production in 2016 of 2,250,000 tons. Brazil has the third largest nickel reserve in the world, with reserves estimated at 11,000,000 tons and a production in 2019 of 67,000 tons (USGS, 2020).
Ferronickel slag is a co-product generated from the production process of ferronickel alloys, which are usually used in the manufacture of stainless steel and iron alloys. It can be classified into two different types according to the cooling method: air-cooled slag or water-cooled (granulated) slag. Air-cooled slag is slowly cooled outdoors in an open well, while water-cooled slag is quickly cooled using water (Choi and Choi, 2015; Saha et al., 2018).
Also, according to Saha and Sarker (2016), the chemical composition of ferronickel slag consists mainly of SiO2, MgO and Fe2O3. This material is formed by amorphous silica, as well as crystalline minerals such as enstatite, forsterite and diopside. The chemical composition of the slag may differ depending on its source, processing, and cooling methods (Lemonis et al., 2015; Maragkos et al., 2009; Komnitsas et al., 2007; Saha and Sarker, 2016).
It is important to highlight that the properties of ferronickel slag do not follow a general rule. Therefore, for each new deposit, its properties should be determined, because they can vary depending on the origin and the method of processing the ore (Saha et al., 2018).
It is important to highlight the differential of this research: it involves a new use of ferronickel slag within the state of Pará, since almost all production is stored in industrial yards and therefore contributes to a significant environmental impact. In addition, data from the Highway Department indicate that predominantly natural aggregates are used to produce asphalt mixtures there. These data underline the importance of this study, which may lead to the reduction of the stocks allocated in industrial yards.
Materials and methods
The air-cooled ferronickel slag used in this research comes from a large mine located in the state of Pará. The material was sampled and transported to the Laboratory of Railways and Asphalt (LFA), in the Federal University of Ouro Preto (UFOP), where the tests were carried out. Figure 1 shows the ferronickel slag.
The testing campaign of ferronickel slag was based on the physical, chemical, and environmental characterization of the slag.
In this study, the chemical characterization was performed using X-Ray Fluorescence to quantify the oxides present in ferronickel slag, while the X-Ray Diffraction and Scanning Electron Microscopy (SEM) with EDS analyzer tests sought the microstructural and morphological characteristics. These analyses were carried out on ferronickel slag samples crushed and passed through a No. 200 (75 µm) sieve. For the chemical characterization tests there is no specific standard; for this research, references that proposed the use of steel slag for paving served as the basis for the execution of the characterization, mainly Fernandes (2016), Graffitti (2002), and Castelo Branco (2004). It is also necessary to differentiate the chemical characterization tests from those intended to determine morphological and microstructural characteristics. In the environmental characterization, Leaching (NBR 10005/2004) and Solubility (NBR 10006/2004) tests were performed.
Physical characterization
All the laboratory tests of physical characterization and the project range are presented below. Figure 2 shows the granulometry test of ferronickel slag. Table 1 shows the final granulometric distribution of the mixing project in accordance with DNIT Range C, with a composition of 12% coarse aggregate, 45% fine aggregate and 43% filler.
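To make the blending step concrete, the sketch below combines hypothetical percent-passing curves for the three fractions using the 12/45/43 proportions quoted above; only those proportions come from the mix design, and the per-sieve values are placeholders.

```python
# Combined gradation of a three-fraction blend: P_blend = sum(w_i * P_i).
sieves_mm = [19.0, 12.5, 4.75, 2.0, 0.42, 0.075]
passing = {            # % passing per sieve (invented for illustration)
    "coarse": [100, 60, 5, 0, 0, 0],
    "fine":   [100, 100, 95, 70, 30, 5],
    "filler": [100, 100, 100, 100, 95, 70],
}
weights = {"coarse": 0.12, "fine": 0.45, "filler": 0.43}

blend = [sum(weights[f] * passing[f][i] for f in weights)
         for i in range(len(sieves_mm))]
for s, p in zip(sieves_mm, blend):
    print(f"{s:>6.3f} mm: {p:5.1f} % passing")
```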
After the characterization, the specimens were molded and the stability and flow of the mixture were determined following the Marshall mix design, according to the DNER-ME 043/95 specification. Tensile strength by static diametral compression at 25 °C was determined in accordance with the DNER-ME 138 specification.
Through experimental procedures, the design of an asphalt mixture provides an optimal content of asphalt binder from a predefined particle size range. The selection of the particle size range used was within the limits of the range C of DNIT for asphalt concrete, which is specified by DNIT-ES 031/2006.
The percentages of asphalt binder used in the mixtures were 4.0%, 4.5%, 5.0%, 5.5%, 6.0% and 6.5%. Three specimens were made for each asphalt content. The samples were used to determine the bulk specific gravity (Gmb), voids in the mineral aggregate (VMA), voids filled with asphalt (VFA), air voids (Va), stability and flow for each specimen.
For the calculation of the optimum binder content, a graph is drawn with the mixture's binder content (%) on the abscissa and the values of air voids (Va) and voids filled with asphalt (VFA) on the ordinates. Through the specification of the limits determined by DNIT for these evaluated parameters (Va between 3% and 5% and VFA between 75% and 82%), four values of asphalt binder content are obtained through vertical lines originating from the specification limits. The design content is calculated as the average of the two central values of PAC (Bernucci et al., 2008).
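A minimal sketch of this graphical procedure, assuming invented Va and VFA measurements: the binder contents at which Va crosses 5% and 3% and VFA crosses 75% and 82% are interpolated, and the design content is taken as the average of the two central values.

```python
# Sketch of the optimum-binder-content selection described above.
# The Va/VFA measurements are invented; only the DNIT limits and the
# averaging rule come from the text.
import numpy as np

pac = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5])    # % binder
va  = np.array([7.8, 6.5, 5.2, 4.1, 3.2, 2.5])    # air voids, %
vfa = np.array([58., 64., 70., 75.5, 80.5, 85.])  # voids filled, %

# Va decreases with binder content, so reverse it for np.interp.
p_va  = [np.interp(v, va[::-1], pac[::-1]) for v in (5.0, 3.0)]
p_vfa = [np.interp(v, vfa, pac) for v in (75.0, 82.0)]

candidates = sorted(p_va + p_vfa)
optimum = np.mean(candidates[1:3])   # average of the two central values
print(f"candidate contents: {np.round(candidates, 2)}")
print(f"design binder content = {optimum:.2f} %")
```

With real Marshall data, the same interpolation reproduces the vertical-line construction described above.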
X-Ray fluorescence analysis
The result of the quantitative chemical analysis by X-ray fluorescence performed on the ferronickel slag sample is shown in Table 3.
Table 3 - Chemical species and their mass percentages.
It is possible to observe that the chemical composition of the ferronickel slag studied is composed mainly of SiO2 (silica), MgO (magnesium oxide) and Fe2O3 (iron oxide), which are the same components found by Santos (2013), Wang (2016) and Saha and Sarker (2016).
The results obtained for the X-ray diffraction test for ferronickel slag are shown in Figure 3 and represent the occurrence of the mineralogical phases of the diffractometric standards. The X-ray diffractogram obtained for the ferronickel slag sample shows a high incidence of peaks, indicating that the slag structure is predominantly crystalline. The diffractogram shows the presence of the mineral enstatite. Enstatite has the chemical formula (Mg,Fe)2Si2O6 and is a mineral composed of silicon dioxide (SiO2) and magnesium oxide (MgO). Its hardness is approximately 5.5 and its bulk density is from 3.26 to 3.28. According to several authors, ferronickel slag basically consists of amorphous silica, as well as crystalline minerals such as enstatite (Lemonis et al., 2015; Maragkos et al., 2009; Komnitsas et al., 2007).
The presence of the mineral enstatite in ferronickel slag is extremely beneficial from a mechanical perspective in asphalt mixtures, since this mineralogical compound does not present expansive potential, unlike steel slag. Therefore, its use in asphalt mixtures does not compromise the integrity of the pavement.
Scanning electron microscopy with EDS analyzer
Scanning electron microscopy (SEM) is used for the analysis and characterization of the different mineral phases in the samples. Figure 4 shows the morphological distribution obtained in the analysis of the ferronickel slag sample powder; it is possible to observe the heterogeneity of the fines of the sample by the different contrast in the staining of the particles, an observation that can be confirmed by the analysis of its chemical composition performed in this same test. Regarding the result presented in Figure 5, the predominance of the pink color is highlighted, which represents the predominance of silica in the sample. This corroborates the X-ray diffraction results already presented, which pointed out mineralogical compounds related to silica (enstatite). Table 4 shows the result of the quantitative analysis by microscopy with EDS analyzer considering the entire sample. The results of the quantitative analysis by scanning electron microscopy performed with the EDS analyzer on ferronickel slag were very close to the results found in the quantitative analysis performed by X-ray fluorescence, reaffirming the compounds previously found. This result is extremely positive for the use of ferronickel slag in asphalt mixtures. Since the material does not leach compounds and does not solubilize, it will not present impacts to the environment.
Marshall mix design
To determine the design content of the research, all volumetric parameters of mixtures with asphalt content of 4.0%, 4.5%, 5.0%, 5.5%, 6.0% and 6.5 % were calculated.
With the determination of the volumetric parameters Va and VFA for each specimen, it was possible to construct a graph with the abscissa represented by the percentage of asphalt binder by weight and the ordinates being the values of Va and VFA, as shown in Figure 6.
Based on this graph, the optimum asphalt binder content was defined, and the chosen value should be among the central values of asphalt binder content. Therefore, the optimum asphalt binder content defined for the mixture was 6.00%, which is the lowest whole number within the range indicated by the Va and VFA limits (5.95% to 6.35%).
After determining the volumetric measurements and choosing the design content, the mechanical stability and flow parameters for the optimum asphalt binder content of 6.0% were obtained through the Marshall press; these are presented in Table 5.
Table 5 - Results of stability and flow tests for the optimum asphalt binder content of 6.0%.
From the results of the volumetric parameters obtained for the mixture of hot mix asphalt with the addition of ferronickel slag aggregate, it is possible to state that the mixture can be executed because it fits all the parameters recommended by DNIT 031/2006.
As a proposal for future study, the researchers intend to analyze the same ferronickel slag using dynamic tests, such as resilient modulus and fatigue tests. These tests aim to study the elastic behavior of the material under traffic demands, instead of the plastic behavior.
Conclusions
The study presents the characterization of the ferronickel slag produced in the state of Pará for reuse in asphalt mixtures and the corresponding analysis of its performance against the limits set by Brazilian standards. One of the main differentials of this research is the reutilization of ferronickel slag in asphalt mixtures in a Brazilian state with a large part of its road network unpaved and a lack of natural inputs for the execution of the works. Therefore, an attempt is being made to solve a problem in the expansion of the state highway network with the use of a waste that previously had industrial yards as its main destination, causing environmental problems.
Based on the analysis of the results presented, it can be stated that the physical characterization of the aggregates showed conformity in almost all the tests performed. The exception was the water absorption test, which resulted in 2.3% when the limit is 2%. This result is explained by the high porosity of the slag; however, it does not compromise the slag's mechanical competence, as evidenced in the tests.
Concerning the chemical and morphological characterization, there is a clear predominance of SiO2 (silica), MgO (magnesium oxide) and Fe2O3 (iron oxide). In addition, the presence of the mineral enstatite was observed in the morphological analysis. These compounds do not give ferronickel slag an expansive potential, so its use in asphalt mixtures will not affect the integrity of the pavement under saturated conditions. Through the environmental characterization, it was possible to determine that the aggregate does not represent risks to the environment, as it is a class IIB (non-hazardous/inert) waste. From the results of the volumetric parameters obtained for the mixture of hot mix asphalt with the addition of ferronickel slag aggregate, an optimum asphalt binder content of 6.0% was obtained.
From an economic and technical standpoint, the higher consumption of asphalt binder by ferronickel slag compared to natural aggregates should be mitigated by its final cost, which is much lower than that of the natural aggregate. Therefore, for paving roads in the state of Pará, ferronickel slag has a lower cost than conventional aggregate.
"Materials Science"
] |
The eighth SINQ Target Irradiation Program, STIP-VIII
Introduction
Since 1996, the SINQ (Swiss Spallation Neutron Source) Target Irradiation Program (STIP) has received continuous interest from nuclear materials communities for spallation, fusion and fission applications [1][2][3][4]. After seven irradiation experiments and the irradiation of large numbers (>8000) of specimens over a wide range of doses (5-30 dpa) and temperatures (80-600 °C), this interest is waning. Nonetheless, STIP-VIII has been implemented to meet some requirements from Europe, China and Japan.
Due to increased safety requirements following the incident that occurred in SINQ Target-11 in 2016 [5], the configuration of the SINQ targets was changed, especially in the high proton and neutron flux region where previous STIP specimens were located. Instead of zircaloy-2-cladded lead rods or STIP specimen rods, solid zircaloy-2 rods were used in this area. STIP specimens had to be moved to the upper part of the target, where the proton and neutron fluxes are lower. As a result, the maximum irradiation dose of the STIP-VIII specimens is significantly lower, around 10 dpa (in steels), instead of >20 dpa in previous STIP experiments.
In STIP experiments, the production rate of spallation transmutation elements in specimens, especially helium (He) and hydrogen (H), depends strongly on the local proton and neutron spectra [4,5]. In high-dose specimens, compared to low-dose specimens, the contribution of high-energy protons to displacement damage and to He and H production is higher, which results in high He-to-dpa ratios, typically 60-80 appm He/dpa. In STIP-VIII specimens, the He-to-dpa ratio is much lower, about 35 appm He/dpa, which is closer to the 10-15 appm He/dpa in the first wall of a fusion reactor. Therefore, STIP-VIII is attractive for fusion materials research. Similarly, fission materials research also benefits from a lower production rate of transmutation elements, since the rate is generally low in fission neutron irradiations. Consequently, STIP-VIII is dominated by materials for fusion and fission applications.
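A quick arithmetic check of these figures: total helium content in appm is simply the dose multiplied by the He-to-dpa ratio. The mid-range values used below for the earlier STIP and fusion cases are our own interpolation of the ranges quoted above.

```python
# Total He (appm) = dose (dpa) x He-to-dpa ratio (appm/dpa).
for label, dpa, ratio in [
    ("STIP-VIII (max)",     10, 35),
    ("earlier STIP (high)", 20, 70),    # midpoint of the 60-80 range
    ("fusion first wall",   10, 12.5),  # midpoint of 10-15 appm/dpa
]:
    print(f"{label:>20}: ~{dpa * ratio:5.0f} appm He at {dpa} dpa")
```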
Materials and specimens
In STIP-VIII, specimens were packed in zircaloy-2 tubes with inner/outer diameters of 8.8/10.05 mm. The spaces between specimens, and between specimens and the tube wall, were filled with spacers of T91 steel, and the gaps were filled with pure He gas. 941 specimens from 43 kinds of materials were irradiated in 7 specimen rods. Among them, 587 specimens are from 25 steels, including various ferritic-martensitic (FM) steels and ODS FM steels. The composition of the FM steels is typically 9-12Cr, 0-1Ni, 1-2W, 0.5-1Mn, 0.2-0.3V, 0.1-0.2Si, 0.1C, balanced by Fe. The ODS FM steels contain about 0.3% Y2O3. The specimens are to be used for tensile, bend and small punch (SP) testing and transmission electron microscopy (TEM) observations. 232 specimens from 8 zircaloys were cut from cladding tubes, which had either been used in SINQ targets or … The dimensions of STIP-VIII specimens are shown in Fig. 1. Tensile specimens come in two sizes, named "small tensile (ST)" and "large tensile (LT)". In addition to these specimens, some 16×2×0.75 mm³ tungsten mini-bend bars (MBB), 20×4×2 mm³ SiC bend bars and 4×4×1 mm³ SiC thermal conductivity specimens (TC) were also included. It should be noted that the LT specimens of zircaloys were cut directly from cladding tubes with an outer diameter of 10.05 mm and an inner diameter of 8.8 mm and were therefore curved in the width direction.
To increase comparability across the materials, all specimens were manufactured by the same company. Bars of the different specimen shapes were first cut from the received materials by electrical discharge machining (EDM) and then cut into pieces approximately 0.1 mm larger than the required sizes shown in Fig. 1. Finally, this 0.1 mm was removed by milling 0.05 mm from each surface. The surfaces were ground to meet the N6 requirement, namely an average surface roughness of 0.8 µm.
Specimen rods
The design of the STIP-VIII specimen rods is similar to that of previous STIP experiments. For example, Fig. 2 shows the specimens packed in Rod 1. This rod includes 12 bend bars (BB), 84 small tensile (ST) specimens, 137 TEM discs and 8 large discs (LSP). The specimens were distributed in 8 parts, from part A to part H. Each type of specimen was packed into 2-3 parts to obtain different irradiation conditions. To tune the irradiation temperature, 0.1 mm deep grooves were introduced on the outer surface of the spacers between the specimens and the zircaloy tube; the temperature was adjusted via the width of the grooves. Laser engraving was used to mark IDs on the specimen surfaces.
One of the difficulties with such irradiation experiments is to achieve the required irradiation temperature for the specimens. The previous STIP experiments have demonstrated that this is even more difficult than in irradiation experiments in fission reactors, because both the beam current and the intensity distribution profile of the proton beam injected into a SINQ target vary greatly during the 2-year operation time [1][2][3]. During the design phase, the temperature of each specimen package was carefully modeled. To keep the modeling effort manageable, only the 2-dimensional (2D) temperature distribution over the cross-section of a specimen package was simulated, using the ANSYS code. For STIP-VIII, the energy deposition values obtained from STIP-VI [4] were used to model the irradiation temperature. These modeling results have to be revised and finalized after irradiation, when the neutronic simulation is done with the actual proton beam parameters of STIP-VIII. Fig. 3 shows two examples of temperature modeling results obtained using the neutronic simulation results described in Section 8. Fig. 3(a) depicts the temperature distribution of specimen part A of Rod 1, where 6 BB specimens were included (Fig. 2), while Fig. 3(c) shows the temperature distribution of the TEM specimens in part B of Rod 1, where 4 groups of ϕ3 × 0.25 mm discs were packed. Fig. 3(b and d) show the temperature distribution along the paths indicated in Fig. 3(a and c), respectively. The asymmetric temperature distribution in Fig. 3(d) is due to the different materials. The large temperature jumps at the specimen/specimen, specimen/spacer and spacer/outer tube interfaces are caused by helium gaps with thicknesses from about 10 µm up to 110 µm. It is worth noting that, based on the experience from previous STIP experiments, assuming a gap thickness of 10 µm between specimens seems to be a good approach, although the average specimen surface roughness is only 0.8 µm.
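The effect of the helium gaps can be illustrated with a one-dimensional conduction estimate, ΔT = q″·t/k; the heat flux and conductivity below are assumed round numbers for illustration, not STIP-VIII design values.

```python
# Sketch: temperature jump across a thin helium gap treated as a 1D
# conduction layer, dT = q'' * t / k. All inputs are assumptions.
K_HE = 0.25  # W/(m K), approximate helium conductivity at a few hundred deg C

def gap_delta_t(heat_flux: float, gap_thickness: float, k: float = K_HE) -> float:
    """Temperature difference (K) across a gas gap; heat_flux in W/m^2,
    gap_thickness in m."""
    return heat_flux * gap_thickness / k

q = 5.0e4  # W/m^2, assumed heat flux driven by beam heating
for gap_um in (10, 50, 110):
    print(f"{gap_um:>3} um gap -> dT = {gap_delta_t(q, gap_um * 1e-6):.1f} K")
```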
After the specimens were inserted into the zircaloy-2 tubes, two end-plugs were welded: one end by electron beam welding, the same as for normal target rods, and the other end by laser welding. For the laser welding, a specimen rod was loaded into a specially prepared chamber, which was first evacuated to low vacuum and then filled with 2 bar of helium. The laser beam penetrates through a glass window onto the specimen rod. To check the quality of the laser welding, a metallographic inspection (Fig. 4(a)) and a burst test (Fig. 4(b)) were performed. The results demonstrate the good quality of the laser welding.
The 7 specimen rods were inserted into the middle part of rows 20 to 24 of the target, as shown in Fig. 5. Two of the 7 rods are equipped with thermocouples, namely Rod 3 in row 20 and Rod 6 in row 24, for monitoring the temperature during irradiation.
It can be seen in this figure that a large number of solid zircaloy rods were used in the high proton and neutron flux area. This was a test conducted to improve the safe operation of the SINQ target after the incident of 2016 [6]. The two 75% Pb-filled rods were used for the same reason.
Irradiation at SINQ
The irradiation took place in 2018 and 2020, with an interruption in 2019 due to an upgrade of some neutron beam lines. The overall situation is similar in both years. Fig. 6 shows the maximum and the average proton beam current for each irradiation week in 2018 and 2020. In the first 1-3 weeks there were many beam tuning tests, during which the beam current gradually increased. Afterwards, the beam current was maintained at the desired level. A smaller difference between the maximum and the average values means that the proton beam was more stable during that week. For example, Fig. 7 presents the beam status in two weeks, week 37 and week 38 of 2020. Week 37 was normal, while week 38 had more than 3 days dedicated to accelerator service. In a service week, the proton beam is less stable due to beam tests, resulting in a larger difference between the maximum and the average beam current, as shown in Fig. 6. It can also be seen from Fig. 9 that this kind of service (indicated on the X-axis) was performed every month in 2018 and 2020.
In Fig. 7, the temperature measured at the center of the central target rod in row 12 (CT009) is also presented. As usual, the temperature changed with the proton beam current [1]. The temperature excursions were caused by proton beam excursions, the majority of which were introduced by the operation of the UCN (Ultra Cold Neutron source) facility [7]. UCN is operated in pulsed mode using the full proton beam from the accelerator at a repetition rate of one pulse every 5 min, as shown in Fig. 8. Each beam excursion lasts about 30 s and consists of a sudden beam interruption of 8 s followed by a slow rise of the beam current over the remaining 22 s. With the proton beam absent from the SINQ target, the temperature drops to ~50 °C during the 8 s of the beam interruption. There were 38,362 beam excursions for Target-13: 22,299 in 2018 and 16,063 in 2020. This caused the same number of temperature excursions in the target, and in all the irradiated specimens as well.
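The quoted excursion counts can be inverted into the time during which UCN was actually kicking the beam (one excursion per 5 min of UCN operation); a small sketch:

```python
# Sketch: UCN operating time implied by the excursion counts above,
# at one beam excursion per 5 minutes of UCN operation.
PULSE_PERIOD_MIN = 5.0

def ucn_days(n_excursions: int) -> float:
    return n_excursions * PULSE_PERIOD_MIN / (60.0 * 24.0)

for year, n in (("2018", 22_299), ("2020", 16_063)):
    print(f"{year}: {n} excursions -> ~{ucn_days(n):.0f} days of UCN operation")
print(f"total: ~{ucn_days(22_299 + 16_063):.0f} days")
```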
The irradiation temperature of the STIP specimens was monitored by the two thermocouples in row 20 (Rod 3) and row 24 (Rod 6) (Fig. 5). The measurement results are presented in Fig. 9, where each data point is averaged over one hour. The short temperature excursions (Fig. 8) therefore cannot be seen here. During a long beam interruption, the temperature drops to 30 °C, the temperature of the cooling water.
In 2018, the temperature was relatively stable except for the first week, as shown in Fig. 9. The average temperature for the full-beam-on period was 229 °C in row 20 and 179 °C in row 24. It should be noted that the thermocouple in row 20 was placed 8 mm from the central line of the target and that in row 24 was placed 28 mm from it. Therefore, the measured temperatures were not the highest ones in the corresponding rods.
The temperature measurements of 2020 are similar to those of 2018, although the overall temperature values are slightly higher. The reason for this small increase is not clear; it could be due to a slight change in the proton beam intensity profile, which was adjusted at some point by the SINQ operation group. Data are missing for the period from September 9 to October 10 due to a technical problem. However, the irradiation during this period was normal, as shown by the beam current measurement (Fig. 6).
Although some temperature data are missing, it should be mentioned that the online temperature measurements in four target rods indicated that no overfocused beam occurred during the entire irradiation period of STIP-VIII.
Extraction of specimen rods and target beam window
In May 2022, Target-13 was loaded into the hot cell (ATEC) near the SINQ target station and the 7 specimen rods were removed from the target. As an example, Fig. 10 shows Rod 4 as it was pulled out of the target. All 7 rods were pulled out smoothly. Video inspections did not reveal any damage such as pits or cracks. The proton beam trace can be seen from the color variation on the surface of the rod.
The target beam window, namely the calotte of the target aluminum (Al) container, was cut off with a band saw, as shown in Fig. 11. The appearance of the beam window differs from previous ones, which showed a dark oval proton beam footprint on the outer surface [1,2]. For this target, the central area of the calotte looks only slightly brown, which indicates a better vacuum in the proton beam line.
Gamma mapping of the Al calotte
In order to accurately evaluate the irradiation dose of the irradiated specimens, it is essential to obtain the distribution profile of the accumulated proton fluence. For STIP irradiation experiments, except for STIP-I, this is done by gamma mapping of the calottes of the Al containers in the PSI Hot Laboratory (Hotlab). An area of 160×160 mm² was scanned in 4 mm steps [2][3][4]; 1681 points were measured and each measurement lasted 10 minutes. The activity distribution of ²²Na was determined. Since ²²Na is mainly produced by protons with energies above about 30 MeV [8], the distribution of ²²Na reflects the distribution of the accumulated proton fluence. The derived proton fluence distribution is shown in Fig. 12. The maximum proton fluence is 4.1×10²⁵ p/m². Comparison with the 2D plots in Refs. [2][3][4] shows that the distribution of accumulated proton fluence of STIP-VIII differs from all previous ones, which demonstrates the necessity of the gamma mapping for determining the irradiation dose of STIP specimens. It should be noted, however, that the gamma mapping provides only a best estimate of the proton beam distribution accumulated throughout the irradiation period. The intensity profile of the proton beam at SINQ is far from stable, and during operation there is no precise measurement of the actual beam intensity profile, although the beam position and beam over-focus are monitored.
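The scan geometry and the fluence normalization can be sketched as follows; the activity map is a random placeholder, and the proportionality between ²²Na activity and proton fluence is the assumption stated above.

```python
# Sketch: gamma-mapping grid bookkeeping and fluence normalization.
import numpy as np

side_mm, step_mm, dwell_min = 160, 4, 10
n_axis = side_mm // step_mm + 1            # 41 points per axis, edges included
n_points = n_axis ** 2                     # 41 x 41 = 1681, as quoted above
print(n_points, "points,", f"{n_points * dwell_min / 60 / 24:.1f}", "days of counting")

peak_fluence = 4.1e25                      # p/m^2, maximum from the text
activity = np.random.rand(n_axis, n_axis)  # placeholder 22Na activity map
fluence = activity / activity.max() * peak_fluence  # scaled fluence estimate
```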
Unpacking irradiated specimens
The irradiated specimens were unpacked in a hot cell of Hotlab, which was very time-consuming. First, the end-plugs of the specimen rods had to be cut off (Fig. 13(a)); then the rod had to be cut open along its length (Fig. 13(b)). A diamond disc saw was used for the cutting.
More than 98% of the specimens were recovered; only a few small SiC bend bars were broken during unpacking. Fig. 14 presents two photos of some unpacked tensile specimens of zircaloy cladding and some TEM/SP specimens. These specimens were contaminated during the above cutting process and were further cleaned with ethanol in an ultrasonic bath. After cleaning, most of them were identified and stored in a lead-shielded cupboard, while some specimens were taken directly for testing.
Neutronic calculation of irradiation parameters
Using the proton distribution profile (Fig. 12) as input data, the irradiation parameters such as the displacement damage dose (dpa), the He and H concentrations and the energy deposition in the specimens were calculated using the MCNPX code. Representative results are shown in Figs. 15-19 for the three rods in the central column, namely Rods 3, 1 and 6 in rows 20, 22 and 24 (Fig. 5).
The proton and neutron spectra at the centers of Rods 1, 3 and 6 are presented in Fig. 15. As the penetration depth increases from row 20 (Rod 3) to row 24 (Rod 6), both the energy and the intensity of the proton beam decrease. The maximum proton fluxes are 9.1×10¹³, 8.3×10¹³ and 7.8×10¹³ p/cm²/s for the three rods, a decrease of about 15% from row 20 (Rod 3) to row 24 (Rod 6).
The distribution of the proton and fast neutron fluences of the three rods is presented in Fig. 16. The difference between the values for the three rods becomes smaller at the edges of the rods. The same trend can also be expected for the other irradiation parameters.
The energy deposition is determined directly from the MCNP simulation. In order to avoid differences caused by the different densities of the materials in the specimen rods, the energy deposition in the zircaloy cladding tubes was calculated. The results for the three rods are shown in Fig. 17 for the case of an average proton beam current of 1.25 mA. The trend is similar to that of the proton fluences in Fig. 16, because the energy deposition is primarily generated by protons.
In the same way as in the previous STIP experiments, the values of the dpa and the He and H concentrations were calculated from the above proton and neutron spectra and the corresponding cross-section data [2][3][4][9][10]. Fig. 18 presents the distribution of the irradiation dose (dpa) as well as the He and H concentrations for the steel, zircaloy and tungsten specimens in the three rods. From these data, the He-to-dpa and H-to-dpa ratios can be easily calculated. Since the results are almost the same for the three rods, those of Rod 1 are plotted as an example in Fig. 19. As expected, the He-to-dpa ratio of the steel specimens varies between 25 and 40 appm He/dpa.
As gas release measurements show [11,12], the H content of a specimen depends strongly on its material, its irradiation temperature and the other specimens/materials in the same rod. The difference between the calculated and measured H concentrations can be very large; the calculated H concentrations are therefore for reference only.
Table 1 presents a brief summary of the irradiation parameters of the specimens in the different rods. From these parameters, one can see both the importance and the limitations of the irradiated specimens for fusion, fission and spallation applications. For fusion applications, the helium and hydrogen content of the specimens is about 2-3 times that of structural materials in fusion reactors. Nevertheless, these specimens are still among the best available, with relatively high doses and reasonably good He-to-dpa ratios, and they can provide bulk mechanical and thermal properties. For nuclear fission applications, the helium concentrations of STIP-VIII specimens are very high compared to materials irradiated in fission reactors. However, the results of ferritic/martensitic steels irradiated in STIP show that helium effects are not pronounced when the helium concentration is below about 500 appm [13,14]; in that regime, the mechanical properties of STIP specimens are consistent with those of fission-neutron-irradiated specimens. Since the helium concentrations of the current specimens are below 450 appm, it is believed that the results of these specimens will not differ significantly from those obtained after fission neutron irradiation at similar doses and temperatures. In any case, the results of STIP specimens, even if not fully realistic, are conservative for nuclear fission and fusion applications due to the higher helium and hydrogen content. For spallation applications, STIP-VIII covers the irradiation conditions of most spallation targets worldwide today. Its limitation is the relatively low irradiation dose for a spallation source such as SINQ, which can reach higher doses.
Summary and outlook
In STIP-VIII, a total of 941 specimens of steels, zircaloys, W-alloys and SiC composites were irradiated in 2018 and 2020 in SINQ Target-13. With the exception of a few small SiC bend bars, more than 98% of the specimens were successfully recovered. The maximum doses obtained were 10.4 dpa for steels, 14.8 dpa for zircaloys, 11.5 dpa for W-alloys and 3.5 dpa for SiC composites. The He-to-dpa ratio is in the range of 22-42 appm/dpa for steels, 13-16 appm/dpa for zircaloys, 30-36 appm/dpa for W-alloys and 90-125 appm/dpa for SiC composites. The specimens were irradiated at temperatures between 100 and 450 °C.
The post-irradiation examination (PIE), particularly the mechanical testing of the steel and zircaloy specimens, is being performed. Many PIE results will be obtained in 2024 and 2025, and some will be published elsewhere soon.
Fig. 2. A sketch showing the cross-sections of the different parts and specimens in Rod 1.
Fig. 3. Simulation of the temperature distribution in parts A and B of Rod 1.
Fig. 4. (a) A metallographic picture of a laser-welded dummy sample; (b) a photo of a laser-welded tube after burst testing.
Fig. 5. A sketch showing SINQ Target-13. The insert on the right shows the positions of the 7 STIP rods.
Fig. 6. The maximum and average proton beam currents of Target-13 in each irradiation week in 2018 and 2020.
Fig. 7. The proton beam current at the target and the temperature at the center of the central target rod in row 12 (CT009), (a) for week 37 and (b) for week 38 in 2020.
Fig. 8. The proton beam current and the temperature at the center of the central target rod in row 12 (CT009) over a short period. The regular beam excursions were introduced by the operation of the UCN facility.
Fig. 11. The calotte of Target-13 being cut with a band saw in ATEC.
Fig. 15. The proton and neutron spectra of Rods 1, 3 and 6. Note: in (a) the energy bin is 5 MeV; in (b) the energy bin differs between the energy regions <1 MeV, 1-20 MeV and >20 MeV.
Fig. 17. The distribution of the energy deposition along the rod axis for Rods 1, 3 and 6.
Fig. 18. The distribution of the dpa, He and H concentrations of steels, zircaloys and tungsten for Rods 1, 3 and 6.
Fig. 19. The distribution of the He-to-dpa and H-to-dpa ratios for steels, zircaloys and tungsten in Rod 1.
Table 1. A summary of irradiation parameters of materials irradiated in the different rods of STIP-VIII.
"Physics",
"Engineering"
] |
A 26-Membered Macrocycle Obtained by a Double Diels–Alder Cycloaddition Between Two 2H-Pyran-2-one Rings and Two 1,1'-(Hexane-1,6-diyl)bis(1H-pyrrole-2,5-dione)s
With the application of a double dienophile, 1,1'-(hexane-1,6-diyl)bis(1H-pyrrole-2,5-dione), in a [4+2] cycloaddition with a substituted 2H-pyran-2-one, a novel 26-membered tetraaza heteromacrocyclic system 3 was prepared via a direct method under solvent-free conditions with microwave irradiation. The macrocycle is composed of two units of the dienophile and two of the diene. Its structure was characterized on the basis of IR, 1H and 13C NMR and mass spectrometry, as well as by elemental analysis and melting point determination. By X-ray diffraction of a single crystal of the macrocycle we determined that the two acetyl groups (attached to the bridging double bond of the bicyclo[2.2.2]octene fragments) are oriented towards each other, and also towards the inside of the cavity of the macrocycle, thereby largely filling it.
Introduction
Macrocycles are privileged molecular structures of paramount importance in many areas of chemistry, including drug development,1 the formation of coordination compounds and metal-organic frameworks.2 Generally they possess structural, chemical, physical and biological properties that set them apart from their linear or small-ring analogues. The reason is that they can often provide sufficient flexibility for interactions with other molecules (e.g. for binding to an enzyme's active site or for coordination to a guest ion during phase catalysis) combined with the advantage that they often contain more than one binding motif. This means that all of the interactions between the host (or enzyme) and the macrocycle take place between two molecules only, and consequently the entropy of the interaction is not as unfavourable as it would be if several smaller ligands interacted simultaneously with the host.
Even though the synthesis of macrocycles has achieved some remarkable successes, a general approach towards them is still lacking.3 There have been many successful attempts at the preparation of macrocycles, among the most often used being dilution techniques triggering the macrocyclization via lactonization, lactamization, metathesis etc. (techniques recently used for the first asymmetric total synthesis of aspergillide D4 and for the total synthesis of mandelalide A5). Other options include template-induced cyclization (around a host ion)6 and cyclization on a solid support (like the Merrifield-based synthesis of cyclic peptides or syntheses inspired by non-ribosomal peptide aldehydes).7 More contemporary approaches are based on multiple multicomponent macrocyclizations including bifunctional building blocks (MiBs).8 However, neither of the above-mentioned approaches can be applied universally, so there is still room for new routes. Recently, much effort has been devoted to multicomponent reactions that efficiently offer access to various macrocycles, including the possibility to incorporate points of diversity, which are, nevertheless, generally introduced before or after the key cyclization step.9-14 Of interest are also preparations of calix[4]arene systems linked with 1,2,4-triazole and 1,3,4-oxadiazole derivatives,15 as well as other tetraaza macrocycles applied as ligands in various coordination compounds.16 Herein we present another approach, in which double Diels-Alder cycloadditions between two molecules of a substituted 2H-pyran-2-one (each acting as a "double" diene)17 and two molecules of a double dienophile provide a 26-membered tetraaza macrocyclic system. This strategy can be termed a multicomponent reaction (as four molecules react to form the macrocycle), with four individual [4+2] pericyclic reactions representing the crucial ring-closing steps.
1. Materials and Measurements
Melting points were determined on a micro hot-stage apparatus and are uncorrected. 1H NMR spectra were recorded at 29 °C with a Bruker Avance III 500 spectrometer at 500 MHz using Me4Si as an internal standard. 13C NMR spectra were recorded at 29 °C with a Bruker Avance III 500 spectrometer at 125 MHz and were referenced against the central line of the solvent signal (CDCl3 triplet at 77.0 ppm or DMSO-d6 septet at 39.5 ppm). The coupling constants (J) are given in Hertz. IR spectra were obtained with a Bruker Alpha Platinum ATR FT-IR spectrometer on a solid support as microcrystalline powder. MS spectra were recorded with an Agilent 6624 Accurate Mass TOF LC/MS instrument (ESI ionization). Elemental analyses (C, H, N) were performed with a Perkin Elmer 2400 Series II CHNS/O Analyzer. TLC was carried out on Fluka silica-gel TLC cards.
The starting 2H-pyran-2-one 1 was prepared by the method devised by Kepe, Kočevar et al.18 as follows: from acetylacetone, N,N-dimethylformamide dimethyl acetal (DMFDMA) and hippuric acid, heating in acetic anhydride according to the published procedure gave 5-acetyl-3-benzoylamino-6-methyl-2H-pyran-2-one; removal of the benzoyl group (in concentrated H2SO4 upon heating), analogously as previously described,19,20 and subsequent derivatization of the free 3-amino group with acetyl chloride afforded the 2H-pyran-2-one 1.21 Dienophile 2 was prepared by a modification of the procedures published by Cava et al.22 All other reagents and solvents were used as received from commercial suppliers.
Microwave reactions were performed in air using a focused microwave unit (Discover by CEM Corporation, Matthews, NC, USA). The machine consists of a continuous, focused microwave power-delivery system with an operator-selectable power output ranging from 0 to 300 W. Reactions were conducted in darkness in glass vessels (capacity 10 mL) sealed with a rubber septum. The pressure was controlled by a load cell connected to the vessel via the septum. The temperature of the reaction mixtures was monitored using a calibrated infrared temperature controller mounted below the reaction vessel, which measures the temperature of the outer surface of the vessel. The mixtures were stirred with a Teflon-coated magnetic stirring bar in the vessel. Temperature, pressure, and power profiles were recorded using commercially available software provided by the manufacturer of the microwave unit.
Synthesis of 1,1'-(Hexane-1,6-diyl)bis(1H-pyrrole-2,5-dione) (2)
To a clear solution of maleic anhydride (2.03 g, 20 mmol) in diethyl ether (30 mL), a separately prepared mixture of hexane-1,6-diamine (2.07 g, 10 mmol) in diethyl ether (10 mL) is added dropwise at room temperature. The viscous suspension is stirred further at room temperature for 1 h and thereafter cooled on ice. The precipitated product is isolated by vacuum filtration and used in the next step without drying or additional purification.
The entire solid obtained is slowly added to a mixture of sodium acetate (0.66 g, 8 mmol) and acetic anhydride (8 mL) in an Erlenmeyer flask with vigorous stirring at room temperature. After completion of the addition, the reaction mixture is heated on a water bath (approx. 100 °C) for 1 h, cooled to room temperature and poured onto an ice-water mixture (30 g). The precipitated product is isolated by vacuum filtration, rinsed 3 times with distilled water and once with a few mL of petroleum ether, yielding crude 2 (0.56 g, 20%), which is further crystallized from ethanol.
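The quoted 20% crude yield can be cross-checked against the stoichiometry (diamine as the limiting reagent, one mole of 2 per mole of diamine); the molecular formula C14H16N2O4 of 2 used below is our inference from its structure, not a value given in the paper.

```python
# Sketch: yield check for dienophile 2.
# Formula C14H16N2O4 (hexane-1,6-diyl bridge + two maleimide rings) is
# an inference from the structure of 2; verify before relying on it.
ATOMIC = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula: dict) -> float:
    return sum(ATOMIC[el] * n for el, n in formula.items())

mw2 = molar_mass({"C": 14, "H": 16, "N": 2, "O": 4})  # ~276.3 g/mol
mmol_product = 0.56 / mw2 * 1000   # mmol of 2 isolated (0.56 g)
mmol_diamine = 10.0                # mmol of hexane-1,6-diamine (limiting)
print(f"M(2) = {mw2:.1f} g/mol, yield = {100 * mmol_product / mmol_diamine:.0f} %")
```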
Synthesis of the Macrocycle 3
A 10 mL quartz microwave vessel is loaded with 2H-pyran-2-one 1 (105 mg, 0.5 mmol), dienophile 2 (152 mg, 0.55 mmol) and n-butanol (100 mg). A stirring bar is added and the vessel is closed with the rubber septum. The reaction mixture is irradiated with microwaves (150 W) at 150 °C for 45 min. Thereafter, the reaction mixture is cooled to room temperature and diisopropyl ether (0.5 mL) is added. The precipitated product is collected by vacuum filtration, providing crude macrocycle 3 (150 mg, 34%), which is further crystallized from DMF.
Crystallography
Single-crystal X-ray diffraction data were collected at room temperature on a Nonius Kappa CCD diffractometer using graphite-monochromated Mo-Kα radiation (λ = 0.71073 Å). The data were processed using DENZO.24 Structures were solved by direct methods implemented in SIR9725 and refined by a full-matrix least-squares procedure based on F² with SHELXL-2014.26 All non-hydrogen atoms were refined anisotropically. Hydrogen atoms were readily located in difference Fourier maps and were subsequently treated as riding atoms in geometrically idealized positions, with C-H = 0.93 (aromatic), 0.98 (methine), 0.97 (methylene) or 0.96 Å (CH3), N-H = 0.86 Å and with Uiso(H) = kUeq(C or N), where k = 1.5 for methyl groups, which were permitted to rotate but not to tilt, and 1.2 for all other H atoms. To improve the refinement results, two reflections in the case of 2a, eleven in the case of 2b and twenty-eight in the case of 3·2DMF with too-high values of δ(F²)/e.s.d. and with Fo² < Fc² were deleted from the refinement. For 2b a proposed twin law was applied according to Platon analysis and the R1 factor improved from 8.13% to 7.75%; however, instead of the estimated BASF of 0.19, the refined BASF was found to be 0.00939. In the crystal structure of 3·2DMF a solvate DMF molecule is disordered over two positions with a refined ratio of 0.82:0.18, and the ISOR instruction was used for the refinement of the C25B atom in DMF. Crystallographic data are listed in Table 1, where n is the number of reflections and p is the total number of parameters refined. X-ray powder diffraction data were collected at room temperature using a PANalytical X'Pert PRO MPD diffractometer with θ-2θ reflection geometry, a primary-side Johansson-type monochromator and Cu-Kα1 radiation (λ = 1.54059 Å).
1. Synthesis
It was already observed that 2H-pyran-2-ones can act as "double" dienes, reacting in two consecutive Diels-Alder reactions with two distinct molecules of dienophiles and yielding bicyclo[2.2.2]octenes.30 The initial cycloaddition step leads to the formation of CO2-bridged oxabicyclo[2.2.2]octenes, which in the next step eliminate a molecule of CO2 (via a retro-hetero-Diels-Alder reaction), providing cyclohexadiene systems that act as new dienes for another molecule of dienophile, finally giving the double cycloadducts.30 On the other hand, if two molecules of the dienophile were connected by a suitable tether, the second cycloaddition step could be expected to take place intramolecularly. At least in theory, the smallest possible cyclic product would consist of just one bicyclo[2.2.2]octene fragment (formed from one 2H-pyran-2-one ring) and one molecule of the double dienophile. Related examples have already been described with the application of cycloocta-1,4-diene.32 Of course, larger cycles could also be obtained, for example ones containing two bicyclo[2.2.2]octene moieties and two molecules of the double dienophile.
Here, we have focused our attention on 3-acetylamino-6-methyl-2H-pyran-2-one (1) and 1,1'-(hexane-1,6-diyl)bis(1H-pyrrole-2,5-dione) (2) as the double dienophile (Scheme 1). In this case one could expect the formation of various cyclic systems, the simplest consisting of one bicyclo[2.2.2]octene fragment and one fragment stemming from 2. The other possibility would be the formation of a macrocyclic ring from two dienes 1 and two molecules of 2. Even larger systems could form in theory but were not expected, as their formation is entropically less likely; on the other hand, the formation of a linear polymer containing bicyclo[2.2.2]octene fragments (stemming from 1) alternating with the dienophile parts (from 2) could not be excluded.
The preparation of the double dienophile 2 was carried out according to the literature procedure,22 starting from the commercially available maleic anhydride and hexane-1,6-diamine in diethyl ether at room temperature. The first reaction step gives an open-ring intermediate consisting of a terminal carboxylic acid group (formed by the opening of the anhydride ring) and a new amide fragment arising from the reaction between the remaining carbonyl group and an amine group of the hexane-1,6-diamine. In the next step this intermediate is mixed with a solution of sodium acetate in acetic anhydride and, upon heating to 100 °C, re-cyclized into the new maleimide ring. Because the starting hexane-1,6-diamine is a bifunctional compound containing two suitable amine groups, the above reaction sequence takes place on both sides of the diamine, furnishing the desired double dienophile 2. The 2H-pyran-2-one derivative 1, applied in this synthetic approach, can be straightforwardly accessed via a one-pot synthesis starting from simple commercially available precursors: a carbonyl compound containing an activated CH2 group (i.e. acetylacetone), a C1-synthon such as N,N-dimethylformamide dimethyl acetal (DMFDMA) and hippuric acid as an N-acylglycine derivative, as previously described by Kepe, Kočevar and co-workers.18 The synthesis takes place upon heating (approx. 65-70 °C) in acetic anhydride (or in a mixture with acetic acid) as the solvent, yielding the substituted 3-benzoylamino-2H-pyran-2-one. To convert this into the desired 3-acylamino derivative 1, cleavage of the amide bond is carried out (in conc. H2SO4 at approx. 80 °C); the product containing a free NH2 group is isolated by extraction into CH2Cl2 (after addition of water and neutralization with sodium hydrogen sulfate)19 and further acetylated with acetyl chloride at room temperature in CH2Cl2 with the addition of pyridine as the base.21
Compound 2 was then applied as the double dienophile in the Diels-Alder reaction with 5-acetyl-3-acetylamino-6-methyl-2H-pyran-2-one (1) as the diene component. Because 2H-pyran-2-one skeletons can in general participate in two separate Diels-Alder reactions, the combination of the double dienophile 2 and the 2H-pyran-2-one derivative 1 was deemed appropriate for the preparation of a macrocyclic system. We assumed that the most probable outcome would be the reaction of two molecules of the dienophile 2 with two molecules of 2H-pyran-2-one 1. In this way the former 2H-pyran-2-one skeleton would be transformed into a new bicyclo[2.2.2]octene moiety (as described above); however, due to the bifunctional nature of the dienophile 2, both bicyclo[2.2.2]octenes would be connected with two -[CH2]6- tethers.
Scheme 1. Reaction sequence leading to the macrocycle 3.
The cycloaddition between the dienophile 2 and the 2H-pyran-2-one derivative 1 was carried out employing microwave irradiation33 at 150 °C in a closed vessel under solvent-free conditions, with just a small addition of n-BuOH (100 mg for a 10 mL vessel). Its function was to prevent the deposition of the dienophile 2 on the upper (colder) parts of the reaction vessel as a consequence of its sublimation, as we have described previously.34 Indeed, after cooling of the reaction mixture and addition of diisopropyl ether, product 3 was isolated and further crystallized from DMF. From the 1H NMR analysis of 3 it was clear that a macrocycle had been obtained, composed of two -[CH2]6- chains (multiplets at δ 1.03, 1.23 and 3.18 ppm, each integrating for 8H and belonging to the two central γ-CH2 groups of the chains, the two β-CH2 units and the two α-CH2 groups, respectively), two methyl groups (singlet at δ 1.85 ppm integrating for 6H), two acetylamino groups (singlet at δ 1.95 ppm integrating for 6H) and two acetyl groups (singlet at δ 2.02 ppm integrating for 6H). Furthermore, two most characteristic doublets (each for 4H) were observed at δ 3.00 and 4.11 ppm, corresponding to the two sets of protons on the bicyclo[2.2.2]octene fragments. From our experience with NMR spectra of such systems, the existence of only two doublets and the coupling constant observed (i.e. 7.5 Hz) allow us to conclude that the bicyclo[2.2.2]octene fragment has the symmetric exo,exo structure, consistent with our previous results.30 Furthermore, a singlet at δ 6.82 (2H), corresponding to the protons attached to the double bonds of the bicyclo[2.2.2]octene fragments, and a singlet at δ 8.43 (2H) for the two NH groups were also observed in the 1H NMR spectrum. In the IR spectrum of 3, bands corresponding to the NH group at 3368 cm⁻¹ and to the carbonyl groups at 1698 cm⁻¹ were observed. These data were further corroborated by 13C NMR and mass spectrometry, as well as by elemental analysis, establishing the structure of 3 as a novel 26-membered tetraaza heteromacrocyclic system.
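As a consistency check, the reported 1H integrals can be summed against the proton count of the proposed composition (two units each of 1 and 2 minus two CO2, i.e. C46H54N6O12 by our count; this formula is our inference, not one stated in the paper):

```python
# Sketch: do the reported 1H integrals of macrocycle 3 add up to the
# 54 H expected for the inferred composition C46H54N6O12?
integrals = {
    "gamma-CH2, 1.03 ppm": 8, "beta-CH2, 1.23 ppm": 8, "alpha-CH2, 3.18 ppm": 8,
    "CH3, 1.85 ppm": 6, "NHC(O)CH3, 1.95 ppm": 6, "C(O)CH3, 2.02 ppm": 6,
    "bicyclooctene CH, 3.00 ppm": 4, "bicyclooctene CH, 4.11 ppm": 4,
    "=CH, 6.82 ppm": 2, "NH, 8.43 ppm": 2,
}
total = sum(integrals.values())
print(f"{total} H observed vs 54 H expected -> consistent: {total == 54}")
```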
Crystal Structures
The X-ray crystal structures of 2 and 3 were determined, and two polymorphs of 1,1'-(hexane-1,6-diyl)bis(1H-pyrrole-2,5-dione) (2) were observed. All bond lengths in 2a, 2b and 3 are within normal ranges.35 Conventional re-crystallization of 2 from ethanol, followed by cooling to 5 °C, provided the crystalline form 2a. On the other hand, slow precipitation of 2 from its mixture with starting 1 in toluene (i.e. a reaction mixture remaining after an unsuccessful attempt to prepare 3) upon evaporation of the solvent at 5 °C provided a different polymorph, 2b. Polymorph 2a crystallizes in the triclinic P-1 space group and polymorph 2b in the monoclinic P2₁/a space group (Figure 1a,b). In both polymorphs the asymmetric unit is composed of half of molecule 2 due to the inversion center in the middle of the C7-C7ⁱ bond.
In 2a and 2b the 1H-pyrrole-2,5-dione ring is planar. The maximum deviation from the mean plane described by the ring atoms is +0.004(1) and -0.004(1) Å for the C4 and N1 atoms in 2a, and a negligible deviation in the range +0.001(1) to -0.001(1) Å for the C1, C3, N1 and C2 atoms in 2b. Such small deviations from planarity were also observed in the two known polymorphs of 1H-pyrrole-2,5-dione (maleimide).36,37 The main difference in molecular geometry between the polymorphs of 2 is the N1-C5-C6-C7 torsion angle, which is -175.71(13)° in form 2a (where the CH2 hydrogens are eclipsed over the succinimide ring) and 68.4(3)° in 2b (where the CH2 hydrogens are eclipsed only over themselves and not over the succinimide ring) (Figure 1c,d).
X-ray analysis of the product 3 confirmed the results of the NMR analysis, namely that this is a 26-membered tetraaza macrocyclic system composed of two bicyclo[2.2.2]octene moieties, each of them fused with two succinimide rings; both fragments are additionally connected with two -[CH2]6- tethers into the macrocyclic structure 3 (Figure 5). Macrocycle 3·2DMF crystallizes in the orthorhombic Pcan space group and the asymmetric unit …
Figure 1. Molecular structure and atom numbering scheme for a) 2a and b) 2b. Probability ellipsoids are drawn at the 50% level. c) View along the C6 chain for 2a (left) and 2b (right). d) Molecular overlay of polymorphs 2a (green) and 2b (orange).
Figure 5. Molecular structure of 3. Probability ellipsoids are drawn at the 50% level (left). Solvate molecules have been removed for clarity. Intramolecular N-H···O hydrogen bonding is indicated by blue dashed lines (right). Hydrogen atoms not involved in the motif shown have been omitted for clarity.
Figure 6. 2D layer formation in 3 generated by C-H···O hydrogen bonding (blue dashed lines). Hydrogen atoms not involved in the motif shown have been omitted for clarity.
Table 1. Crystal data and refinement parameters for the compounds 2a, 2b and 3·2DMF.
"Chemistry"
] |
COMBINED ATMOSPHERIC AND OCEAN PROFILING FROM AN AIRBORNE HIGH SPECTRAL RESOLUTION LIDAR
First-of-its-kind combined atmospheric and ocean profile data were collected by the recently upgraded NASA Langley Research Center (LaRC) High Spectral Resolution Lidar (HSRL-1) during the 17 July – 7 August 2014 Ship-Aircraft Bio-Optical Research Experiment (SABOR). This mission sampled a region that covered the Gulf of Maine, the open ocean near Bermuda, and coastal waters from Virginia to Rhode Island. The HSRL-1 and the Research Scanning Polarimeter from the NASA Goddard Institute for Space Studies collected data onboard the NASA LaRC King Air aircraft, and flight operations were closely coordinated with the Research Vessel Endeavor, which made in situ ocean optical measurements. The lidar measurements provided profiles of atmospheric backscatter and particulate depolarization at 532 nm and 1064 nm, and extinction (532 nm), from approximately 9 km altitude. In addition, for the first time, HSRL seawater backscatter, depolarization, and diffuse attenuation data at 532 nm were collected and compared to both the ship measurements and the Moderate Resolution Imaging Spectroradiometer (NASA MODIS-Aqua) satellite ocean retrievals.
INTRODUCTION
A combined ocean and atmospheric lidar has the potential to provide key measurements needed to better understand both the ocean optical properties, and hence the biogeochemical properties of the global oceans, and the air-sea exchange processes important for climate studies. For example, it has been shown that particulate organic carbon stocks in the ocean are better estimated from particle backscatter than from chlorophyll concentrations [e.g. 1]. In addition, lidar can provide vertical distributions of the optical properties within the ocean that are not possible from current passive satellite measurements. Recent results using the CALIPSO satellite measurements have revealed that global measurements [2] of vertically integrated backscatter within the ocean are possible, even from an instrument not designed for ocean profiling. As noted in [2], combining a High Spectral Resolution Lidar with a passive ocean color sensor could yield a 3-D distribution of the upper ocean ecosystem.
To demonstrate the quantitative capabilities of such a system, the airborne HSRL that has flown on the King Air aircraft during many field campaigns since 2006 [3] was recently configured to provide high-resolution vertical profiling to 3 optical depths below the ocean surface. The atmospheric HSRL measurements have been compared to both in situ and other remote sensors [5] and continue to provide key validation data for the CALIOP sensor on the CALIPSO satellite [6].
METHODOLOGY
The HSRL technique relies on the difference between the spectral distributions of 180° backscatter from particles and molecules [2]: backscatter from particles (e.g., plankton or atmospheric aerosols) is at the same frequency as the transmitted laser pulse, whereas molecular backscatter is shifted in frequency. A basic diagram of the lidar system is provided in Fig. 1A. As shown, an iodine vapor cell functions as a spectral filter in the receiver, and frequency discrimination is achieved by tuning the narrowband laser to an absorption line of the iodine gas. The spectral distribution of the backscatter is shown in Fig. 1B for both the atmosphere and the ocean. The co-polarized and cross-polarized channels measure both molecular and particulate backscatter, while the molecular channel measures only the molecular backscatter. Backscatter from water molecules in the ocean (blue) is dominated by Brillouin scattering, which is shifted both up and down in frequency from the source laser by ~7-8 GHz, while backscatter from air molecules (red) is both Brillouin- and Doppler-shifted (Cabannes) scattering, resulting in broadening of about 3 GHz. Molecular scattering is therefore transmitted by the iodine vapor filter (the shaded regions shown in Fig. 1B), while backscatter from particles in the atmosphere and ocean is effectively blocked. The particular iodine line can be chosen such that the ocean Brillouin scattering is insensitive to changes in ocean temperature and salinity. The instrument, measurement technique, and calibration methods for the HSRL instrument are outlined in [3] for the atmospheric measurements and are applied similarly for the ocean measurements. As noted above, the instrument was recently upgraded to provide high-resolution sampling of the ocean from near the surface to approximately three optical depths (40-45 m in the clear open ocean) at a resolution of 1 m in the ocean (120 MHz sampling). New detection electronics were developed, along with the integration of new Hybrid Photodetectors (HPD) to reduce the effects of after-pulsing. In addition, the laser was upgraded to narrow the pulse width (6 ns) and increase the pulse repetition rate (4000 Hz) with slightly higher pulse energies (2.5 mJ).
RESULTS
During SABOR we conducted 24 science flights, split evenly between bases in Portsmouth, NH, St. George's, Bermuda, and Hampton, VA, onboard the NASA LaRC King Air aircraft, which flew at an altitude of 8-9 km. The flights included overpasses of the RV Endeavor research vessel during the campaign, thus providing coordinated remote and in situ measurements of the seawater.
The flight track of the first science flight, on 18 July 2014, is shown in Fig. 2 (inset); the track is color-coded with aerosol optical depth (532 nm). The combined atmosphere and ocean backscatter for the region in the highlighted red box is also shown. This segment covered a large dynamic range in aerosol optical depth (~0.04-0.14) and a large dynamic range in the particulate backscatter in the ocean (0.3-16.7 km⁻¹ sr⁻¹).
Note that the altitude scale for the atmosphere is in kilometers and for the ocean in meters. The large atmospheric backscatter above 4 km in the northern portion of the flight was dominated by smoke advected into the region from forest fires in the western US; this condition was prevalent during most of the flights conducted from Portsmouth, NH. It provides an ideal case to compare the atmospheric corrections required for the ocean color retrievals from MODIS (Aqua satellite) and VIIRS (NPP satellite). Also note that the backscatter in the ocean increases around 19 UT as the flight track approaches the George's Bank region off the coast of Massachusetts, which is known for high phytoplankton stocks and productivity. The Gulf of Maine also has higher backscatter values, varying with depth, compared to the open ocean. This particular flight also offered a first-of-its-kind opportunity to compare airborne lidar data with satellite retrievals of the ocean optical properties. It is expected that, for the flight altitude and instrument parameters used, the lidar attenuation is approximately equal to the exponential decrease of the downwelling irradiance with depth, known as the diffuse attenuation coefficient [8], which is retrieved by MODIS. The fundamental backscatter measurement from the lidar is the particulate volume backscatter coefficient β_p(π) [3], while the fundamental retrieval from the satellite is the hemispherical particulate backscatter, bbp. The lidar hemispherical backscatter is retrieved through an approach similar to that of Boss and Pegau [7], which assumes a relationship of the form bbp = 2πχβ_p(π). The conversion factor χ between the lidar backscatter and the hemispherical backscatter can vary depending on the sizes, shapes, and composition of the particle assemblage. However, during the SABOR campaign we evaluated the correlation between the in situ measurements of bbp, as outlined in [7], and the corresponding values derived from the lidar data, and found a strong correlation when using χ = 0.5 for the lidar, as shown in Fig. 3. The slope of the correlation was 0.95 with an R² = 0.79 using all profile data, including the complex coastal waters and the open ocean. The data shown in this figure are in situ profile data collected at specific locations when the aircraft overflew them within 1 km. This result, based on an extensive number of comparisons, is encouraging and is unprecedented in terms of using the quantitative lidar measurements enabled by the HSRL technique to compare backscatter retrievals with those of passive remote sensors.
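In code, the Boss-and-Pegau-style conversion used above takes the following form; this is a sketch of the stated relationship with χ = 0.5, not the authors' processing code, and the example β value is assumed.

```python
import math

def bbp_from_lidar(beta_p_pi: float, chi: float = 0.5) -> float:
    """Hemispherical particulate backscatter b_bp (1/m) from the lidar
    particulate volume backscatter at 180 deg, beta_p(pi) (1/m/sr),
    via b_bp = 2 * pi * chi * beta_p(pi)."""
    return 2.0 * math.pi * chi * beta_p_pi

beta = 3.0e-4  # 1/m/sr, assumed open-ocean value for illustration
print(f"b_bp = {bbp_from_lidar(beta):.2e} 1/m")
```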
Comparisons of lidar- and satellite-based retrievals of diffuse attenuation coefficients are shown in Figs. 4 and 5. The flight track, color-coded to show the lidar retrieval of diffuse attenuation, is shown with the MODIS QAA retrievals (4-km resolution) of diffuse attenuation [9]. A correction for the water absorption between 532 nm, the wavelength the lidar measures, and the MODIS retrievals at 488 nm is applied; no correction is made for the particulate spectral dependence in this case. For this flight there was a high correlation (R = 0.94, slope = 0.97) between the lidar and MODIS retrievals. Compositing the comparisons for all flights that had measurements from both instruments, the correlation was impressive over a range of approximately 0.03-0.23 m⁻¹, with R = 0.96 and slope = 1.0, as shown in Fig. 5. Similar comparisons with the MODIS GSM retrievals gave R = 0.9 and slope = 0.75. As with the diffuse attenuation results, comparisons were also made between estimates of the particulate hemispherical backscatter (bbp), with no spectral corrections. Although the correlations are not as high for the backscatter data, the results offer an unprecedented set of measurements to evaluate the various ocean retrieval models. For example, the correlations of backscatter between the lidar and the satellite retrievals were R = 0.76, 0.88, and 0.82 with slopes of 1.1, 1.3, and 1.5 for the different retrieval algorithms QAA, GSM, and GIOP [9], respectively.
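A minimal sketch of the kind of slope retrieval implied above: with the attenuated subsurface signal modeled as S(z) ∝ exp(−2Kz), the lidar attenuation (≈ Kd) follows from a log-linear fit. This is an illustration under that assumption, not the SABOR retrieval code.

```python
import numpy as np

def k_from_profile(depth_m: np.ndarray, signal: np.ndarray) -> float:
    """Lidar attenuation coefficient (1/m) from the slope of the log
    signal, assuming S(z) ~ exp(-2 * K * z)."""
    slope = np.polyfit(depth_m, np.log(signal), 1)[0]
    return -0.5 * slope

# Synthetic sanity check with K = 0.10 1/m
z = np.arange(1.0, 30.0)
s = np.exp(-2.0 * 0.10 * z)
print(f"retrieved K = {k_from_profile(z, s):.3f} 1/m")
```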
SUMMARY
The results described here demonstrate that an HSRL can provide both vertical profiling and direct measurements of ocean optical properties that can be used to assess and potentially improve retrievals from current and future passive ocean color sensors. The combination of these sensors will provide significant improvements in our understanding of, and ability to assess, global ocean measurements such as particulate organic carbon and net primary productivity. It is also clear that the full benefit of lidar measurements to the ocean science community has not yet been realized.
Figure 1. (A) Block diagram of the HSRL-1 instrument and (B) the spectra of backscatter signals at 532 nm relative to the iodine vapor filter absorption features.
Figure 3. Comparisons of the RV Endeavor and the lidar-retrieved hemispherical particulate backscatter. Colors on the graph indicate different sampling locations.
Figure 2. Particulate backscatter coefficient for the atmosphere (upper) and ocean (lower). The black regions in the ocean denote where no retrieval is performed due to low SNR or during a calibration period or aircraft turn.
Figure 4. MODIS QAA retrieval of diffuse attenuation compared with the HSRL-retrieved attenuation.
Figure 5. Correlation between lidar extinction and the MODIS QAA diffuse attenuation retrievals.
"Environmental Science",
"Physics"
] |
Distributed Morphology of Akan-Twi Plurals
Following Bodomo and Marfo's (2002) morphophonological analysis of the Akan and Dagaare noun class systems using number marking (singular and plural) within the Lexical Phonology framework, the present study seeks to provide a broader analysis of Akan-Twi plurals based on the Distributed Morphology framework of Halle and Marantz (1993, 1994). The core issue of this paper is to account for whether the attachment of the prefix a- and the suffix -foɔ to some stems in Akan is due to the spell-out of a vocabulary item (VI) or to other morphological operations such as fission. Moreover, the paper attempts to address the context in which the Vocabulary Item /a-/ applies.
Introduction
This paper concentrates on the Distributed Morphology of Akan-Twi plurals. Akan is a language belonging to the Kwa sub-group of the Niger-Congo family of West Africa, spoken in Ghana. Number in Akan is based on a singular-plural dichotomy. There have been various attempts to analyze Akan plurals focusing on semantic criteria. Bodomo and Marfo's (2002) proposal on the morphophonology of the noun class systems in Dagaare and Akan shows that the most appropriate criterion for setting up noun classes is number (i.e., singular and plural) categorization. This is marked mainly by suffixes in Dagaare and by prefixes in Akan; indeed, Akan also uses suffixes to show plurality. They therefore put nouns in Akan and Dagaare into classes based on the similarity of both the singular and the plural affixes. With this criterion, nouns are always in the same class, whether in the singular or the plural; that is, nouns in the same class must have similar singular affixes and similar plural affixes. Apart from classifying the nouns based on affixation, they also relied heavily on phonological processes to refine their classification. Ten (10) and nine (9) class systems were proposed for Akan and Dagaare, respectively. Since the analysis of this paper is primarily on Akan plurals, I will focus only on the data and analysis of Akan as proposed by Bodomo and Marfo (2002). Below is the nine-class system of Akan shown in Table 1.
The noun class system of Akan is based mainly on an interface between the morphological and phonological components of the grammar; thus, the basic assumptions of their analysis are mainly phonological. They further noted that, due to the preference involved in the selection of a particular affix, the morphological facts alone cannot satisfactorily explain their criterion, and they therefore appealed to phonological information. The most prominent piece of phonological information is the advanced tongue root (ATR) vowel harmony principle. By this principle, the ten vowels in Akan fall into two phonetically distinctive classes: a vowel is either produced with an advanced tongue root ([+ATR]; /i, e, o, u, ae/) or with a retracted tongue root ([-ATR]; /ı, ε, a, ɔ, υ/). Following this distinction, all stem vowels are required to share a common ATR feature specification, and the ATR specification of the stem dictates that of the vowels in the affixes. Stem vowels that are [+ATR] select the same vowel specification in the affix, and likewise for [-ATR] stem vowels. This explains the difference between the prefixes in ε-kυɔ 'buffalo' and e-kuo 'group'. There are a few exceptional cases, though, where both specifications occur in a word.
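The harmony generalization can be stated as a toy rule: the prefix allomorph copies the ATR class of the stem vowels. A sketch, modelling only the e-/ε- alternation and ignoring the exceptional mixed-ATR words:

```python
# Sketch of ATR-driven prefix selection (e- vs. ε- only; toy model).
ATR_PLUS = set("ieou")          # plus /ae/; simplified inventory
ATR_MINUS = set("ıεaɔυ")

def prefix_for(stem: str) -> str:
    vowels = [c for c in stem if c in ATR_PLUS | ATR_MINUS]
    # Stems are assumed harmonic, so the first vowel decides.
    return "e-" if vowels and vowels[0] in ATR_PLUS else "ε-"

print(prefix_for("kuo") + "kuo")   # e-kuo 'group'
print(prefix_for("kυɔ") + "kυɔ")   # ε-kυɔ 'buffalo'
```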
It must be mentioned that Bodomo and Marfo's analysis was theoretically grounded in Lexical Phonology (LP) as developed by Kiparsky (1985) and Mohanan (1986). The aim of this study, however, is not to contradict Bodomo and Marfo's analysis of the noun class system of Akan, but rather to give an alternative theoretical basis, in this case Distributed Morphology, for analyzing plural formation in Akan. Their analysis serves as background knowledge for the present analysis.
Theoretical Framework: Distributed Morphology
The theory of Distributed Morphology is separationist in nature, in that it adopts the idea that the mechanisms responsible for producing the form of syntactically and semantically complex expressions are separate from the mechanisms that produce the form of the corresponding phonological expressions. One of the core assumptions of DM is that syntax proper does not manipulate anything resembling lexical items; rather, it generates structures by manipulating and combining abstract roots and morphosyntactic features (taken from List 1, the "narrow" lexicon in Figure 1) by means of various syntactic operations (such as movement and merger). At the post-syntactic level of Morphological Structure (MS), the arrangement and number of terminal nodes may be changed, for instance by the insertion of agreement nodes, feature copying, and morpheme insertion. Phonological matrices are assigned to terminal nodes only after syntax, at the level of Phonological Form (PF); this is referred to as "late insertion" (Marantz, 1995). Phonologically specified forms, Vocabulary Items (VIs), are drawn from List 2, the Vocabulary. A VI is not merely a phonological string; rather, it also contains information about where that particular string may be inserted. Note that various VIs may compete for insertion in a given terminal node, with the most highly specified item that does not conflict in features with the specification of this terminal node winning the competition. Moreover, at PF, phonological readjustment rules may apply that change the phonological form of already-inserted Vocabulary Items in certain syntactic contexts. Based on the noun class system of Akan proposed by Bodomo and Marfo (2002) in Section 1, the singular and plural affixes can be summarized as follows: Twi nouns usually have a prefix, which may be a vowel (e-, ε-, a-, o-, ɔ-) or the consonant m- or n-, and the suffix -ni in the singular, and a-, m- or n- in the plural, plus the suffixes -foɔ and a-___-foɔ for nouns referring to humans, -nom for kinship nouns, and Ø for others.
Vocabulary Items (VIs) for the plurals:

a. /n-/ ⇔ [pl] / elsewhere
b. /a-/ ⇔ ([pl])
c. /-foɔ/ ⇔ [pl] / ______ [+human]
d. /a-___-foɔ/ ⇔ [pl]
e. /-nom/ ⇔ [pl] / ______ [+kinship]
f. /Ø/ ⇔ [pl] / {NKATEɛ; MPA; MPABOA; ...}

In Distributed Morphology, the above Vocabulary Items compete for insertion because they all mark the plural morpheme in Akan. The question we may ask ourselves is how the correct VI is inserted. DM proposes the Subset Principle, which specifies how VIs are inserted: 'The phonological exponent of a Vocabulary Item is inserted into a morpheme if the item matches all or a subset of the grammatical features specified in the terminal morpheme. Insertion does not take place if the Vocabulary Item contains features not present in the morpheme. Where several Vocabulary Items meet the conditions for insertion, the item matching the greatest number of features specified in the terminal morpheme must be chosen.' For example, the class of words in E will have the VI /-nom/ inserted, because the context for insertion is [+kinship] and /-nom/ is the most highly specified VI for such plurals, e.g., maame-nom 'mothers'.
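To make the competition concrete, the sketch below models Subset-Principle insertion over the VI list in (a)-(f). This is a minimal illustration, not a formal fragment of Akan grammar: the feature sets, the listed-stem class for /Ø/, and the function names are my own assumptions.

```python
# Minimal sketch of Subset-Principle-driven Vocabulary Insertion.
# Feature sets and the listed-stem class are illustrative assumptions.

VOCABULARY = [
    # (exponent, features it matches, optional listed-stem restriction)
    ("-nom", {"pl", "kinship"}, None),
    ("-foɔ", {"pl", "human"}, None),
    ("n-",   {"pl"}, None),                         # elsewhere item (a)
    ("Ø",    {"pl"}, {"nkateɛ", "mpa", "mpaboa"}),  # listed class (f)
]

def insert_vi(stem, node_features):
    """Return the most highly specified VI whose features are a subset
    of the terminal node's features (the Subset Principle)."""
    candidates = []
    for exponent, feats, stems in VOCABULARY:
        if stems is not None and stem not in stems:
            continue  # (f) applies only to its listed stems
        if feats <= node_features:  # no feature absent from the node
            candidates.append((len(feats), exponent))
    return max(candidates)[1]  # greatest number of matching features wins

print(insert_vi("maame", {"pl", "kinship"}))  # -nom -> maame-nom 'mothers'
print(insert_vi("bokiti", {"pl"}))            # n-   -> n-bokiti (elsewhere)
```

On this toy encoding, /-nom/ beats the elsewhere item /n-/ for kinship nouns precisely because it matches more features of the terminal node.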
Notwithstanding the Subset Principle, there are challenges as to the context in which the plural morpheme /a-/ applies. We could suggest that /a-/ applies in the context of [-human], but that would not work because there are Akan words that carry the [+human] feature and still take /a-/ as the plural morpheme, for example a-hene 'kings'. Again, we might postulate /a-/ as the elsewhere item because its occurrence cannot be predicted. Such a postulation would be misplaced, however, in that the plural morpheme /n-/ has a wider distribution than /a-/. Words borrowed into Akan-Twi form their plural with /n-/, for example bokiti 'bucket' and mmokiti 'buckets'. It must be mentioned that the surface form mmokiti is derived after the syntax and morphology: the Vocabulary Item /n-/ is inserted as a prefix to the stem bokiti, yielding n-bokiti. The phonology then applies to derive the correct output through a phonological rule that changes n-bokiti to mmokiti.
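The division of labor just described (morphological insertion of /n-/ followed by phonological readjustment) can be sketched as below. The assimilation rule is deliberately simplified to the single pattern needed for this example and is not a full statement of Akan nasal phonology.

```python
# Sketch of the two-step derivation: the morphology prefixes /n-/ to
# the stem, then a (simplified) homorganic nasal assimilation rule
# turns n + labial stop into a geminate labial nasal:
# n-bokiti -> mmokiti.

def prefix_plural(stem):
    return "n-" + stem           # Vocabulary Insertion: elsewhere /n-/

def readjust(form):
    if form.startswith("n-b"):   # only the pattern needed here
        return "mm" + form[len("n-b"):]
    return form.replace("-", "")

print(readjust(prefix_plural("bokiti")))  # mmokiti
```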
The general question, then, is in which context the Vocabulary Item /a-/ applies. The answer I may attempt to offer is that /a-/ applies in some contexts as the default, or elsewhere, morpheme. An alternative analysis to this proposal is to list all the stems that take the VI /a-/ as the plural morpheme. The challenge with that alternative is that we would also have to list all the words in D, because those words likewise form their plurals with /a-/ and /-foɔ/. This leads us to the next problem with Akan-Twi plurals.
The words in D use the prefix /a-/ and the suffix /-foɔ/ to indicate the plural and have the same context, [+human], as the words in C. What readily comes to mind is to analyze these words as taking a circumfix, but the words in B and C show that /a-/ and /-foɔ/ should not be treated as a circumfix, since either part can appear separately. The analysis we may apply here is to assume that /a-/ and /-foɔ/ target different nodes, in this case two different number nodes. This means that the Vocabulary Items in B and C apply in a systematic fashion. The morphological operation fission, which has been applied to languages such as Tamazight Berber, will be used to analyze the Akan case at hand. Fission occurs when Vocabulary Insertion does not stop after a single Vocabulary Item is inserted; Vocabulary Items accrete on the sister of the fissioned morpheme until all Vocabulary Items that can be inserted have been, or until all features of the morpheme have been satisfactorily discharged. Noyer (1997) postulates that the features that condition the insertion of a Vocabulary Item come in two types: a Vocabulary Item primarily expresses certain features in its entry, but it may be said to secondarily express certain other features. This distinction corresponds (approximately) to the distinction between primary and secondary exponence (Carstairs, 1987). Only features that are primarily expressed by a Vocabulary Item are discharged by the insertion of that Item.
The application of fission to the Vocabulary Items listed above suggests that some Vocabulary Items must be inserted before others, in this case /a-/ and /-foɔ/. This is a challenge in Akan because we cannot predict which of the two should apply first; either can apply first and yield the same result. For example, if we assume that /a-/ must apply before /-foɔ/, the question is what motivates the application of /a-/ before /-foɔ/, given that we have not been able to establish the context for the insertion of /a-/. The alternative analysis is to take /-foɔ/ as the primary exponent, because its context is known and, indeed, morphemes carrying both Vocabulary Items tend to be [+human] nouns. The secondary exponent will then be /a-/, which is inserted after the insertion of /-foɔ/. It must be mentioned that in a fissioned morpheme, Vocabulary Items are no longer in competition for a single position-of-exponence, i.e., for the position of the morpheme itself; rather, an additional position-of-exponence is automatically made available whenever a Vocabulary Item is inserted. This seems a plausible analysis because, as indicated above, /a-/ applies in some contexts as the default VI or elsewhere morpheme.
For the fission analysis of the words in D, some of the features in the Vocabulary Item list must appear in parentheses, in this case those of /a-/. Parentheses denote features that are secondarily expressed by a Vocabulary Item, as with /a-/, while features that are primarily expressed are not parenthesized, as with /-foɔ/. For example, the word a-sua-foɔ 'students' has two affixes: the primary exponent /-foɔ/ is added first, before the secondary exponent /a-/ is added.
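The fission analysis can be rendered schematically as follows. The sketch assumes only what is claimed above: /-foɔ/ is inserted first and discharges [pl], fission then opens an additional position-of-exponence, and the secondarily expressing /a-/ occupies it without discharging anything. The data structures are illustrative, not part of the formal theory.

```python
# Sketch of fission for a-sua-foɔ 'students'. The primary exponent
# /-foɔ/ discharges [pl]; fission then makes a further position of
# exponence available, filled by the secondary exponent /a-/.

ITEMS = [
    # (exponent, primary features, secondary features, slot)
    ("-foɔ", {"pl"}, set(),  "suffix"),  # primary exponent of [pl]
    ("a-",   set(),  {"pl"}, "prefix"),  # ([pl]): secondary only
]

def fission_insert(stem, node_features):
    undischarged = set(node_features)
    prefix, suffix = "", ""
    for exponent, primary, secondary, slot in ITEMS:  # primary first
        if not (primary | secondary) <= node_features:
            continue  # item makes no reference to this node's features
        if slot == "suffix":
            suffix = exponent
        else:
            prefix = exponent
        undischarged -= primary  # only primary features are discharged
    return prefix + stem + suffix, undischarged

print(fission_insert("sua", {"pl"}))  # ('a-sua-foɔ', set())
```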
Conclusion
The analysis of the Akan-Twi plurals remains complicated because we could not establish the context for the insertion of the Vocabulary Item /a-/. If we accept that it is the default or elsewhere VI, then what happens to the VI /n-/? A new assumption, and more concrete evidence, would be needed to establish which of the two is the default. The alternative is to contradict the claim of Distributed Morphology that there is no lexicon by adopting the lexicalist idea that, in Akan, the plural prefix /a-/ has to be learned and stored in the lexicon, since its occurrence cannot be predicted.
Though we have been able to use fission to analyze the co-occurrence of /a-/ and /-foɔ/, how do we analyze the Vocabulary Items /n-/ and /-foɔ/ within the class of words in C, as in the word m-panyin-foɔ 'elders'? Fission would again be the natural way to analyze those VIs. The challenge here is that /n-/ would have to be the secondary exponent, which leaves two VIs competing for the single position of secondary exponence. The general question is how Distributed Morphology analyzes such a case. I leave this question for future research.
"Linguistics"
] |
Realization of a Complementary Full Adder Based on Reconfigurable Transistors
Fine grain reconfigurability carried out at the transistor level, i.e. the ability to switch between n- and p-type operation, offers new possibilities for highly efficient logic gates. In particular, XOR- and Majority gate circuit implementations can considerably benefit from reconfigurable transistors, as they require less than half of the transistor count needed in conventional static CMOS technology. Using a total of eight highly on-state symmetric reconfigurable field effect transistors fabricated from monolithic Al-Si heterostructures, we experimentally demonstrate a fully functional full adder, a fundamental circuit for many arithmetic applications. The two slightly adapted reconfigurable XOR gates for sum and carry output provide a full output voltage swing using only a single symmetric supply rail, while achieving very low static power consumption due to complementary circuit design and inherent leakage suppression of the devices. Furthermore, their stable operation against input voltage variations is demonstrated with static and transient measurements.
Lukas Wind, Moritz Maierhofer, Andreas Fuchsberger, Masiar Sistani, and Walter M. Weber, Member, IEEE
I. INTRODUCTION
EXCLUSIVE-OR (XOR) and Majority (MAJ) logic functions are highly relevant for the realization of arithmetic operations, but their direct implementation with conventional CMOS technology is intricate and demands considerable physical interconnect resources due to the significant number of transistors needed for their synthesis [1]. In this respect, reconfigurable field effect transistors (RFETs) are a promising concept for increasing efficiency in integrated circuits. These doping-free, multi-gate transistors are capable of switching between n- and p-type operation during runtime by tuning the charge carrier injection through a gated Schottky junction using a program gate (PG) and one or several channel barriers with a control gate (CG) [2], [3], [4]. This reconfigurability at the transistor level offers high potential for novel, highly efficient logic gates to enhance performance and extend the capabilities of classic CMOS technology [5], [6], [7], [8]. Simulations utilizing this fine-grain reconfigurability on advanced System-on-Chip (SoC) or FPGA architectures predict significant enhancements in path delays, power consumption and even chip area [9], [10], [11]. In particular, RFETs provide intrinsic XOR and MAJ gates with less than half the transistors required in conventional CMOS [12], [13]. Recently, targeting XOR and MAJ logic gate optimizations, Gauchi et al. [14] demonstrated performance improvements on large-scale integrated circuits, comparing 10 nm RFET with 12 nm FinFET technology.
In this work, we experimentally demonstrate the first complementary 1-bit full adder using only 8 physically identical RFETs, excluding the inverters that provide the inverted input signals. These RFETs are based on monolithic Al-Si heterostructures embedded in a Schottky barrier field effect transistor (SBFET) architecture with three independent top gates, exhibiting highly symmetric on-states [15], [16], which is essential for efficiently working logic circuits [2], [17]. Based on a 3-input XOR and a MAJ gate with 4 RFETs each, the logic outputs for sum and carry are calculated, as demonstrated with both static and transient measurements.
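At the logic level, the two gates compute SUM = A ⊕ B ⊕ C_in and C_out = MAJ(A, B, C_in). The short sketch below is a plain truth-table check, not a model of the RFET circuit or its device physics; it simply verifies that this pair of functions implements 1-bit binary addition.

```python
# Logic-level check of the full adder: SUM = XOR3(A, B, Cin) and
# Cout = MAJ3(A, B, Cin) together reproduce 1-bit binary addition.
from itertools import product

def xor3(a, b, c):
    return a ^ b ^ c

def maj3(a, b, c):
    return (a & b) | (a & c) | (b & c)

for a, b, cin in product((0, 1), repeat=3):
    s, cout = xor3(a, b, cin), maj3(a, b, cin)
    assert 2 * cout + s == a + b + cin   # weighted sum matches addition
    print(f"A={a} B={b} Cin={cin} -> SUM={s} Cout={cout}")
```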
A. Symmetric On-State Si RFET
For the fabrication of the RFETs, ∼300 nm wide nanosheets were patterned from a Si-on-insulator (SOI) substrate with a 20 nm thick, lightly p-doped (B, ∼10¹⁵ cm⁻³) device layer on top of a 100 nm thick buried oxide (BOX) using laser lithography and reactive ion etching. After forming the 10.3 nm thick thermal SiO2 gate oxide, the Al source/drain (S/D) contacts are defined with laser lithography, followed by a BHF dip, Al sputter deposition and lift-off. Rapid thermal annealing (RTA) at 773 K in forming gas atmosphere is then used to induce lateral intrusion of Al into the nanosheets by an Al-Si exchange reaction, forming shortened Si channels (∼2.5 µm) with flat and highly reproducible Al-Si Schottky junctions that exhibit an abruptness down to the atomic level [18]. The Ti top gates with Au bond pads are then structured using electron beam lithography, evaporation and lift-off techniques, with the PG aligned directly atop both Al-Si interfaces and the CG in between, on top of the Si channel, as can be seen in the false-color scanning electron microscopy (SEM) image in Fig. 1a. The single-RFET device characteristic is plotted in Fig. 1b for various bias voltages V_DS. High symmetry is achieved in terms of on-currents (8.84 µA/µm, 9.02 µA/µm), threshold voltages (−0.2 V, 0.33 V) and subthreshold slopes (217 mV/dec, 203 mV/dec) at V_DS = 4 V for n-mode (V_PG = 2 V) and p-mode (V_PG = −2 V), respectively, with the latter two mainly attributed to the inherent mid-gap pinning of the Al-Si Schottky junctions [18]. All these parameters can potentially be improved by device scaling down to low-nanometer dimensions [19], by the introduction of high-κ gate dielectrics, or by a Ge-rich channel for a reduced bandgap [4], [20]. Importantly, the devices are bidirectional, i.e. their IV characteristic does not change in absolute value for positive and negative V_DS. Combined with their ability to switch polarity between n- and p-mode at runtime, efficient logic gates can be built.
B. XOR Based Full Adder
A full-swing static XOR gate can be realized with only four RFETs, as proposed by [3] and [5], halving the transistor count compared with conventional CMOS technology. The input signals A and B, and their inverted counterparts Ā and B̄, are applied to the CGs and PGs. Adapted to a transmission gate architecture, an additional input signal, the carry input (C_in), can be used to switch the polarity of the circuit, and thus between XOR and XNOR, to obtain the 3-input XOR gate for the sum functionality depicted in Fig. 2a. Note that additional gates needed to drive the transmission gates are neglected, which may result in additional overhead depending on the targeted application and on whether the circuits are realized as a combination of CMOS and transmission gates or in a full transmission logic setup. The measured voltage transfer characteristic for this logic gate is shown in Fig. 2b, with the inputs B and C_in fixed at constant logic levels (V_dd = 2 V for "1", V_ss = −2 V for "0") and A swept from −2 V to 2 V. Full-swing operation is achieved, and the cross current I_C through the circuit is well suppressed (< 6 nA) at the distinct output states, peaking at the state transitions with short-circuit currents up to 170 nA. The suppression of steady-state currents, with higher current peaks at the state transitions, is characteristic of complementary circuit designs, including conventional CMOS technology. Remarkably, this is achieved using only two fully symmetric operation voltage levels of ±2 V, simplifying the circuit layout. To demonstrate the stability of the RFET-based logic gate against input voltage variations, output color maps are plotted in Fig. 3. While for constant C_in in (a,b) the SUM output switches rapidly between ±2 V, the state transitions when varying C_in (c,d) are more blurred, as the supply voltage for the circuit also changes. Nevertheless, stable operating windows of at least 0.6 V are visible for all states when varying the input levels before the output flips.
For full adder functionality, the carry bit also needs to be calculated. To this end, the same reconfigurable XOR gate can be used, where only the signals defining the polarity of the circuit, C_in and C̄_in, are exchanged for C_in and A, as shown in the schematic in Fig. 4a, resembling a 3-input MAJ gate. Since A and C_in also modulate the supply rail voltages of the circuit, the output C_out changes linearly before switching state when A is swept from −2 V to 2 V in Fig. 4b, leading to less sharp transitions than for the sum operation. This can also be seen in the output maps in Fig. 5, which clearly show stable operating regimes for all output states and therefore sufficient noise immunity. Remarkably, the complementary four-RFET design again provides low steady-state currents of I_C < 1.3 nA for low static power consumption, with short-circuit currents up to 170 nA.
The transient operation of the complete full adder based on the two reconfigurable XOR gates is shown in Fig. 6 for the complete input sequence, correctly calculating all output states for SUM and C_out. Due to the absence of a back-end-of-line interconnect technology, our lab-based structure design, with large planar contact pads on top of an SOI substrate for contacting the probes, induces extremely high parasitic capacitances, limiting the operation speed. Especially when the polarity of the logic gate is inverted with C_in or A, the transitions at the outputs are rather slow due to RC delays and capacitive coupling, resulting in logic degradation. However, Amaru et al. [12] used HSPICE circuit simulations of the proposed 1-bit full adder design based on a 22 nm technology node to estimate that switching speeds in the GHz region are feasible, even up to 3.8x faster than their conventional CMOS counterparts given the reduced propagation delay. More recently, Quijada et al. [21] presented simulations using Ge RFETs, further improving the full-adder performance. In addition, Cadareanu et al. [22] calculated an 18 % reduction in the energy-delay product due to the reduced number of transistors, despite a slight increase in parasitic capacitances from the additional gates.
III. CONCLUSION
In this letter, we report on a 1-bit full adder based on an XOR and a MAJ gate, each realized with only four ambipolar RFETs based on Al-Si-Al heterostructures, benefiting from their high on-state symmetry. With only slightly different circuitry, both the 3-input XOR for SUM and the MAJ for C_out operations have been implemented, operating reliably and with low static power consumption as a result of their complementary architecture. Compared to conventional CMOS topologies, this RFET-based implementation reduces the number of transistors by more than half. By applying scaling measures and modern interconnect technologies, propagation delays and power consumption can potentially be reduced further.
Fig. 1. (a) False-color SEM image of a single Al-Si RFET. (b) Transfer characteristic of the RFET showing symmetric n- and p-type operation at V_PG = 2 V and −2 V, with V_DS increased from 1 V to 4 V in 1 V steps (V_D = −V_S). (c,d) Schematic illustration of the energy band landscape for both operation modes.
Fig. 2. (a) Schematic for the sum operation based on four RFETs and the corresponding truth table. (b) Voltage transfer characteristic for A increased from −2 V to 2 V for all constant logic input values B and C_in. The dotted lines indicate the current flow |I_C| through the circuit.
Fig. 3. Output color maps for the sum operation, with two inputs varied between ±2 V and C_in (a,b) and B (c,d) fixed at constant values, respectively.
Fig. 4. (a) Schematic for the carry output C_out with the corresponding truth table. (b) Voltage transfer characteristic for A increased from −2 V to 2 V for all input states B and C_in. The current |I_C| through the circuit is shown by the dotted lines.
Fig. 5. Output color maps for the carry operation, with C_in (a,b) and B (c,d) fixed at constant values while the other two inputs are varied between ±2 V.
Fig. 6. Time-dependent logic operation of the full adder showing the output signals SUM and C_out in response to the input signals A, B and C_in.
"Engineering",
"Physics"
] |
Health, Economic, and Social Impacts of Substandard and Falsified Medicines in Low- and Middle-Income Countries: A Systematic Review of Methodological Approaches
ABSTRACT. Little is known about the adverse health, economic, and social impacts of substandard and falsified medicines (SFMs). This systematic review aimed to identify the methods used in studies to measure the impact of SFMs in low- and middle-income countries (LMICs), summarize their findings, and identify gaps in the reviewed literature. A search of eight databases for published papers and a manual search of references in the relevant literature were conducted using synonyms of SFMs and LMICs. Studies in the English language that estimated the health, social, or economic impacts of SFMs in LMICs published before June 17, 2022 were considered eligible. The search generated 1,078 articles, and 11 studies were included after screening and quality assessment. All included studies focused on countries in sub-Saharan Africa. Six studies used the Substandard and Falsified Antimalarials Research Impact (SAFARI) model to estimate the impact of SFMs. This model is an important contribution; however, it is technically challenging and data demanding, which poses challenges to its adoption by national academics and policymakers alike. The included studies estimate that substandard and falsified antimalarial medicines can account for 10% to ∼40% of total annual malaria costs, and SFMs affect rural and poor populations disproportionately. Evidence on the impact of SFMs is limited in general and nonexistent regarding social outcomes. Further research needs to focus on practical methods that can serve local authorities without major investments in technical capacity and data collection.
INTRODUCTION
The United Nations established universal health coverage (UHC) in all member countries as one of the Sustainable Development Goals for 2030, with a specific focus on universal access to safe and essential medicines. 1 The presence of substandard and falsified medicines (SFMs) around the world threatens the achievement of this goal; according to the WHO, 10% of all medical products in low- and middle-income countries (LMICs) are either substandard or falsified. 2,3 Historically, SFMs have been described with different terminologies, such as counterfeit, fake, and poor quality, all of which are still used somewhat interchangeably in the literature covered in this review. In 1999, the WHO produced a report 4 that defined counterfeit drugs specifically as "a medicine which is deliberately and fraudulently mislabeled with respect to identity and source." In the same report, the WHO advised governments to establish national drug regulatory authorities responsible for licensing legitimate drugs and identifying SFMs. Substandard medicines are now defined by the WHO as "authorized medical products that fail to meet either their quality standards or specification, or both"; falsified medicines are defined as "medical products that deliberately/fraudulently misrepresent their identity, composition or source." 3 Research on this subject has increased significantly since then and has demonstrated that the existence of SFMs needs to be tackled as a serious public health challenge. 3,[5][6][7][8][9][10]

The channels through which SFMs can affect a population's health, economic, and social status are manifold. 11 Patients who do not receive effective treatment can remain ill for longer periods and experience increased morbidity and mortality, with detrimental effects on both individuals and communities. [3][4][5][6][7][8][9]11,12 In the case of infectious diseases, treatment failure may also increase the risk of transmission to other individuals. 12 At the social level, there is a wide range of possible impacts of the prevalence of SFMs on values, relationships, and dynamics that have not yet been identified or quantified. Patients and informal caregivers may have to spend additional time and financial resources on health and social care. As a result, patients and carers are less productive and their personal income is reduced, with potential adverse effects on the economic welfare of communities and countries. This means an economic loss for all agents involved: patients, their families, the health system, the pharmaceutical industry, and national productivity. The prevalence of SFMs can also lead to loss of confidence in health systems and medical staff, with adverse impacts on care-seeking behavior. Moreover, scarce medical resources are wasted and are less available for the financing of good-quality medicines. 10,12 This situation frustrates both patients and providers: patients may try to procure alternative treatments that are not as effective, and health workers may feel that the value of their work is unrecognized. Substandard and falsified medicines are often consumed disproportionately by poorer households; therefore, they can aggravate existing inequalities in health and economic status. 13 It is important to understand the magnitude and direction of these impacts to inform public policies and to design them to protect, in particular, the most vulnerable patients from SFMs.
In the context of assessing the adverse impacts of SFMs, several studies have been conducted in recent years, most of them in sub-Saharan Africa (SSA) and focused on substandard and falsified (SF) antimalarial medicines. 6,14 Mathematical modeling has proved a very useful tool in these studies. However, because of the scarcity of reliable population-based data from LMICs, authors often need to simulate a variety of scenarios over a range of SFM prevalences to obtain a broad picture of the potential impacts for society. Our systematic review found estimates indicating that SF antimalarial medicines can account for 10% to 40% of total annual malaria costs and that SFMs affect rural and low-socioeconomic status (SES) populations disproportionately. To the best of our knowledge, there is no agreed or standard method that allows countries and institutions to estimate (even broadly) the magnitude of SFMs in national pharmaceutical markets. Reliable empirical estimates of the impacts of SFMs are crucial to provide rigorous evidence for policies that improve surveillance, regulation, and governance of the pharmaceutical supply chain. 5 Our systematic review aims to identify the existing methodological approaches used to estimate the health, economic, and social impact of SFMs in LMICs. It also presents a summary of the results found with each method and the current gaps in the literature, while outlining the geographic settings and medicine classes covered in existing research on SFM impacts.
MATERIALS AND METHODS
This systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. 15 Search strategy. A preliminary search was conducted in early March 2021 to find medical subject heading entry terms associated with "substandard and falsified medicines" and "low- and middle-income countries." 16 The full review started in late March 2021 and was updated in June 2022. Details on the search strategy are presented in Supplemental Tables 1-4. The Ovid search engine was used to search the Medline, Embase, Econlit, and Global Health databases. Scopus, Web of Science, the Open Grey literature database (http://www.opengrey.eu/), and the WHO Essential Medicines and Health Product Information Portal were also searched. 17,18 Because we used open-source registered guidelines for systematic literature reviews, a specific protocol for this systematic review was not registered.
Selection and synthesis. Search results were downloaded from the databases and stored in EndNote software (v.20.2, Clarivate, PA) before being uploaded to Covidence systematic review software (Veritas Health Innovation, Australia) for screening. Titles and abstracts were screened against the inclusion and exclusion criteria; when this was inconclusive, the full text was evaluated. All included work had to meet all inclusion criteria and none of the exclusion criteria. Eligibility criteria consisted of primary research papers in English estimating the health and/or economic and/or social impact of SF drugs in LMICs. Articles published in languages other than English or that were not primary original research (reviews, commentaries, and letters) were excluded. Any studies with settings not defined as LMICs by the World Bank at the time of the search (both in March 2021 and June 2022) were excluded. 18 We included work that estimated impacts based on prevalence established after laboratory testing and that focused on the health and socioeconomic outcomes and impacts of SFMs. Studies that solely estimated the prevalence of SFMs or tested protocols for detecting SFMs (laboratory studies/laboratory procedures) were excluded, as were studies using animal populations, those measuring the effects of herbal medicines, and those focusing on legislative issues surrounding medicine quality. Our inclusion criteria allowed for studies using SFM synonyms such as counterfeit, poor quality, or fake. Studies in which it was unclear whether the effects related to SFMs or non-SFMs were excluded. Literature on the physiological impacts of SFMs (e.g., adverse drug reactions) was not included.
This systematic review is limited by the existing evidence on the impacts of SF medical products, 19 which focused our search on medicines, excluding other medical products such as vaccines or diagnostic kits, as pointed out by the 2020 WHO report. [20][21][22] Vaccines are a specific medical product, distinct from drugs, typically provided at the national level and with heavily regulated distribution and commercialization. 23 This means that all the steps involved in the research process, from defining which products to sample, how and where to sample them, how to test them, and the laboratory requirements for testing, are distinct for vaccines and for drugs. References of included articles and relevant systematic reviews were hand-searched, and eligible citations were included. [6][7][8] The following data were extracted for analysis from the included articles:

1. Basic characteristics of the studies, including year of data collection, year of publication, and geographic setting
2. Drug class or molecules under study and the respective diseases they treat
3. Analysis strategy (e.g., modeling or statistical analysis) and data sources
4. Health outcomes, including individual and aggregated health measures that indicate the impact on morbidity and/or mortality (e.g., number of infected individuals, hospitalizations, deaths, disability-adjusted life years, years of life lost)
5. Economic and social impacts, including direct and indirect costs associated with monitoring, controlling, and treating a given disease, borne by the health system, patients, or society (including medication and hospitalization costs, travel and waiting costs to patients, diagnostic costs, and productivity losses). Indirect costs can include changes in socioeconomic and geographic inequalities depending on SFM distribution, unemployment resulting from prolonged illness, and changes in the demand for medicines resulting from a lack of confidence in the health-care system.
Quality assessment. An adapted version of the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist was used to assess the quality of reporting in the included studies. 24 The CHEERS checklist contains 24 items with recommendations for the minimum amount of information that should be available in economic evaluations. The checklist is widely used in health economics and was the most appropriate critical appraisal framework for the studies included in this review after screening and reference searching. Each study was assessed against the CHEERS criteria, and a mark was given for each criterion not met. All 24 criteria were taken into account to the best of our knowledge, unless they were not applicable to the study (Supplemental Table 5). This was the case, for example, for one included study in which the authors did not develop a mathematical model, so the criteria relating to model assumptions did not apply. 25 Studies were grouped into four categories based on CHEERS checklist scores, similar to the categories used by two studies in 2011 and 2015. 26,27 The categories were modified for use in this review; studies could be either excellent (0 points), good (1-6 points), average (7-12 points), or poor (11 points or more). †
RESULTS
We retrieved a total of 1,078 records, which were stored in EndNote and uploaded to Covidence for screening. Once in Covidence, 157 duplicates were removed, leaving a total of 921 records. Using our eligibility criteria, 549 records were excluded and 372 were deemed appropriate for full-text screening. None of the titles and abstracts screened on the websites of Open Grey and the WHO Essential Medicines and Health Product Information Portal were deemed eligible for screening in Covidence. 17 After full-text screening, five studies were deemed eligible. In addition, we manually searched the reference lists of these five studies and of relevant systematic reviews for additional studies. From the manual search, we found 23 potential matches and retrieved another five eligible studies, excluding two duplicates already identified in the database search. A total of 10 records matching our inclusion/exclusion criteria were included in this systematic review. 20,25,[28][29][30][31][32][33][34][35] One record was a report that includes two studies, both eligible, making a total of 11 studies selected. [20][21][22] Figure 1 shows the details of the identification of studies through databases and citation searching.
Regarding the reports assessed for eligibility, 39% (142 of 367) were excluded because they were not original research (e.g., reviews, letters, commentaries). 7,8,[12][13][14] A further 213 articles were excluded because they focused on outcomes outside the scope of this study, such as one qualitative study reviewed. 36 Literature focusing on medical adverse events fell under the exclusion criteria. 37 Three studies were excluded for not being available in English, one for studying an animal population, 38 and five for being set in a high-income country. [39][40][41][42][43] The 11 included studies were assessed for quality of reporting using the CHEERS checklist. 23 Quality assessment findings are presented in Supplemental Figure 1. Data extracted from the included studies are summarized in Tables 1 and 2. All studies included in our review had geographic settings in the region of SSA.
From the 11 included studies, nine covered the impact of having SF artemisinin-based combination therapy (ACT) medicines on the market. ‡ Of these nine studies, eight relied on modeling approaches and one used empirical data. 25 In addition, one report contained two studies estimating the impact of SF antimalarials on childhood malaria globally and in the WHO Africa Region. 20 The specific medicines used to treat childhood malaria in these settings are not specified in the study. Because of the scarcity of reliable population-based data from LMICs, most of the included studies simulated a variety of scenarios over a range of SFM prevalences. [30][31][32]35 For example, impacts of SFMs at alternative prevalences can be simulated with estimates of 10.3% and 19.1%. 30 Most of the selected studies used modeling to estimate the impacts of SFMs, parametrizing each model with the best and most recent data available from the literature and open-access databases (e.g., ACTwatch, Global Burden of Disease, Malaria Atlas Project). [28][29][30][31][32][33][34][35] In the WHO report, an expert group was consulted to refine the parameters of the study. 30 The group compared empirical estimates for the Kenyan population with their baseline model output to test whether the model predicted outputs within an acceptable range. More recent approaches used data on the reported prevalence of SF essential medicines among LMICs gathered from a systematic literature review and meta-analysis, 47 together with data on SFM prevalence and UHC indicators. 35 Only one study in our review used empirical data, from a regulatory authority and the major importers and distributors of pharmaceuticals from 2005 to 2015, to estimate the impact of SFMs. 25 Social impacts associated with losing trust in the health system and its workers, unemployment, or increased inequalities at the regional, gender, and ethnicity levels resulting from SFM prevalence were not measured explicitly in any of the 11 studies included in our review. The 11 studies used different types of methods to estimate the impact of SFMs, ranging from cross-sectional analysis to analytical models such as decision trees, logistic regression, agent-based models, and compartmental models. Six of the 11 studies used the SAFARI model, a dynamic agent-based model developed specifically to estimate the health and economic impacts of SF antimalarials.

FIGURE 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses flowchart illustrating the search and screening processes. Source: Page MJ et al., 2021. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 372: n71. For more information, visit http://www.prisma-statement.org/.

† The scale used by two reference papers was reversed for the purposes of our review because some of the CHEERS criteria were not applicable to some studies. 18,19 Allocating points according to criteria missed allowed a fair comparison among studies.

‡ Artemisinin-based combination therapies are the first-line therapy in almost all countries where malaria is endemic because of their high efficacy and tolerability and their ability to reduce ongoing transmission of the parasite. 46
An application of the SAFARI model was first published in 2019, focusing on Uganda and the Democratic Republic of the Congo (DRC). 33,34 This in itself represented a novelty at the time, because most studies on this topic conducted analyses on the entire region of SSA rather than at the national level. In the SAFARI model, agents seek care and treatment from a variety of health facilities as they move through different health states (healthy; infected, whether symptomatic or asymptomatic; or dead) over specific time intervals and time horizons. Model inputs typically rely on demographic and epidemiological data, care-seeking behavior, medication stock by facility, probability of stockouts of antimalarials, medication effectiveness, the average cost of antimalarials in facilities, nonmedication costs, and related proportions, as summarized in Table 2. The model has also been applied to measure the impact of specific interventions. As an example, SAFARI has been used to evaluate an intervention that replaced all antimalarials with good-quality ACTs in Zambia. 30 Results showed that the economic impact of SF antimalarial medicines accounted for U.S. dollars (USD) 31 million, or 8% of the total economic impact of malaria among children who sought medical care. Applying the same methods to the DRC, 34 SFMs were estimated to account for 35% of total annual malaria costs in Kinshasa Province (USD 20.9 million) and 43% in the Katanga region (USD 130 million). These costs were even greater when the model accounted for the possibility of patients developing resistance to the active substance by consuming SFMs. The authors of the SAFARI model followed up on the evidence found in Uganda and the DRC and, in a study published in 2020, investigated the relationship between UHC indicators and the prevalence of SFMs in different countries of SSA. 35 This was a novel application of the SAFARI model, in which they analyzed different scenarios of possible policies designed to improve health-care provision in each country. 35 Improving the quality of antimalarials by 10% was estimated to result in annual savings of USD 8.3 million in Zambia, USD 14 million in Uganda, USD 79 million in two DRC regions, and USD 598 million in Nigeria (excluding the intervention costs).
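For readers unfamiliar with the approach, the sketch below illustrates the bare logic of such an agent-based simulation. It is emphatically not the SAFARI model, whose structure and parameterization are given in the cited papers; every probability and cost in it is a placeholder assumption.

```python
# Bare-bones illustration of an agent-based simulation in the spirit
# of SAFARI: agents become infected, may seek care, and receive either
# a quality-assured or an SF antimalarial. All parameters are
# placeholders, not SAFARI estimates.
import random

random.seed(1)
P_INFECT, P_SEEK_CARE, P_SF = 0.30, 0.70, 0.15               # assumed
P_CURE_GOOD, P_CURE_SF = 0.95, 0.50                          # assumed
COST_DRUG, COST_HOSPITAL, P_DIE_UNTREATED = 5.0, 60.0, 0.05  # assumed

def simulate(n_agents=100_000):
    deaths, cost = 0, 0.0
    for _ in range(n_agents):
        if random.random() > P_INFECT:
            continue                           # agent stays healthy
        cured = False
        if random.random() < P_SEEK_CARE:
            cost += COST_DRUG
            good = random.random() > P_SF      # drug quality draw
            cured = random.random() < (P_CURE_GOOD if good else P_CURE_SF)
        if not cured:                          # treatment failure or no care
            cost += COST_HOSPITAL
            deaths += random.random() < P_DIE_UNTREATED
    return deaths, round(cost)

print(simulate())  # (deaths, total cost in USD) under placeholder inputs
```

Comparing runs with P_SF set to zero and to its baseline value gives a crude analogue of the counterfactual comparisons reported in the SAFARI papers.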
The SAFARI model has also been used to show that SFMs affect rural and low-SES populations disproportionately. 29 Researchers found that those in the lowest SES quintile were 23 times more likely to have been hospitalized as a result of SF antimalarials than those in the highest SES quintile. Investigating the impact of SFMs on specific subgroups is key to identifying particularly vulnerable populations that can be targeted by specific policies. 45 Studies using methods other than the SAFARI model are listed in Table 3.
The first study included in our review aimed to estimate the impacts of SFMs in LMICs. 31 The authors used an uncertainty model with Latin hypercube sampling (LHS) to estimate the number of deaths among children younger than 5 years in SSA resulting from the consumption of SF antimalarials. Latin hypercube sampling uses sampling without replacement, a method used to simulate a sample that reflects the distribution of the population. 50 After 10,000 calculations performed using LHS, the study estimated that, in 2013, there were more than 120,000 deaths among children younger than 5 years resulting from SF antimalarials across 39 countries in SSA, with Nigeria accounting for 60.6% of the deaths. 50 Based on 2010 WHO estimates of under-five malaria deaths, this number represents about 22.33% of total deaths. 31 In terms of economic costs, we found estimates of the annual expenses associated with the presence of SFMs in the market. 34 The data originated from the Tanzanian regulatory authority and the major importers and distributors of pharmaceuticals, extracted from confiscation reports covering the period 2005 to 2015. 32 Using a simplified "ingredient approach," the authors calculated that, during the study period, substandard medicines cost the national government USD 13.65 million and falsified medicines cost USD 149,369. 32 In this study, the prevalence of substandard medicines had a considerably greater economic impact than that of falsified medicines in Tanzania. 32 The ingredient approach consists of identifying, quantifying, and validating individual items; cost estimates were then computed by multiplying the tallied quantities by unit prices for each item, using median buyer prices from the International Drug Price Indicator Guide or the Tanzanian Medical Stores Price Catalogue of 2015 to 2016, depending on data availability. 32 The retrieved data also allowed the identification of SFM manufacturers in the national market, which can be a helpful tool in other geographic settings as well.
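Of the two approaches just described, the LHS-based uncertainty analysis is the easier to reproduce with standard tooling. Below is a sketch using SciPy's stats.qmc module (available in SciPy 1.7 and later); the three input ranges are invented for illustration and are not the parameters of the cited study.

```python
# Sketch of Latin hypercube sampling over uncertain model inputs.
# Parameter ranges are illustrative, not those of the cited study.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=10_000)   # stratified draws on [0, 1)^3

# columns: SF prevalence, deaths per treated case given SF, cases (millions)
lo, hi = [0.05, 0.01, 50.0], [0.40, 0.10, 120.0]
draws = qmc.scale(unit, lo, hi)   # rescale each column to its range

deaths = draws[:, 0] * draws[:, 1] * draws[:, 2] * 1e6
print(f"median: {np.median(deaths):,.0f}")
print("95% interval:", np.percentile(deaths, [2.5, 97.5]).round())
```

The stratification guarantees that each input range is covered evenly, which is why LHS needs far fewer draws than naive Monte Carlo to characterize the output distribution.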
Decision tree models, frequently used in cost-effectiveness studies, have also been applied to estimate the impact of SFM prevalence in the market. As part of a 2017 WHO report 22 on this subject, one study used a cost-effectiveness approach to investigate the impacts of SF antimalarials on childhood malaria in SSA. 22,51 According to the estimates, the use of antimalarials with an active pharmaceutical ingredient content of less than
85% caused 529 deaths per 1 million malaria cases. The same report contains another relevant contribution, which studies the impact of SF antibiotics on under-five mortality resulting from childhood pneumonia. 21 The method adopted was a multinomial logistic framework using vital registration and data-based multi- and single-cause models in India and China. 52 The authors analyzed different scenarios to estimate the change in the number of deaths associated with a change in the prevalence of SFMs. 52 For example, simulating a scenario of 1% prevalence of SF antimicrobials, the authors estimated that the pneumonia case fatality rate would double and the number of deaths in children younger than 5 years would be approximately 8,688; a 10% prevalence would result in an estimated increase of 72,430 in under-five mortality in that scenario. 21 One study from 2017 used a deterministic compartment model of a human-mosquito system to quantify the impact of sulfadoxine-pyrimethamine (SP) quality on the transmission of SP-sensitive and SP-resistant Plasmodium falciparum between humans and the female Anopheles mosquito in Kenya. 32 Compartment models divide the population into those susceptible to, infected with, and recovered from the disease, and then estimate how people move over time among the defined compartments. 32,51 In this study, humans can be either susceptible or exposed to SP-sensitive and/or SP-resistant P. falciparum, whereas the female Anopheles mosquito can be either susceptible, exposed, or recovered. 32 These compartments are defined by time-varying functions with no random fluctuations, determined by the parameters and initial conditions. Sulfadoxine-pyrimethamine is no longer recommended for the treatment of malaria because of widespread SP resistance; however, it is still commonly prescribed in Kenya. 32,49 The authors showed that the use of SF and subtherapeutic doses of this medicine increased the numbers of human malaria cases and SP-resistant infections by 776.9%. 32 The studies examined in our review show that the methods used to measure the impact of SFMs have evolved relatively quickly during the past decade, going from a simple uncertainty model using LHS in 2015 to the SAFARI agent-based model in 2019. This progress comes with an increase in model complexity that brings researchers closer to simulating how medicine quality affects patient outcomes and disease progression. The earliest studies used methods that were more static, with more demanding assumptions. The LHS used with a simple uncertainty model produced general estimates of the number of deaths resulting from SFM prevalence, but it did not allow for simulating how SFM prevalence can affect patient behavior and disease progression, nor did it allow for individual-specific variation. 29 The same is true for the costing study. 25 This is a simple method of identifying and summing up costs, which is suited only to measuring tangible costs and does not include any simulation. To overcome the restrictions of these simpler methods, produce projections over time, and test potential interventions, other studies included in this review used analytical models to examine the impacts of SFM prevalence. The decision tree model, for example, is used to simulate decision processes in which an event or choice results in a set of potential outcomes with different probabilities of happening. 24
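A decision tree of this kind reduces to an expected-value calculation over branch probabilities. The toy example below shows the mechanics; the branch probabilities, costs, and SF prevalence are all invented for illustration and are not figures from the cited report.

```python
# Toy expected-value calculation of the kind a decision tree encodes.
# All probabilities and costs are invented for illustration.

def expected_value(branches):
    """branches: list of (probability, value); probabilities sum to 1."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * v for p, v in branches)

# Episode cost (USD) after a quality-assured vs. an SF antimalarial:
good = expected_value([(0.95, 5.0), (0.05, 65.0)])  # cure vs. complication
sf = expected_value([(0.50, 5.0), (0.50, 65.0)])

p_sf = 0.10  # assumed SF prevalence at the first chance node
print(expected_value([(1 - p_sf, good), (p_sf, sf)]))  # expected cost
```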
Another study used a multinomial logistic regression to predict the impact of one or more independent variables (including SFM prevalence) on a specific outcome variable (child mortality) with more than two categories, but the results can be more difficult to interpret than those of the decision tree model. 21 Although the latter two methods can measure, to some extent, the different ways in which SFMs can affect disease progression and survival, as the length of the study and the number of events or decisions increase, they can quickly become computationally overwhelming and difficult to follow. 53 In turn, compartment and agent-based models can deal with most of these computational limitations. The compartmental model study included in our review quantifies the transmission dynamics between mosquitoes and humans and measures how medicine quality affects those dynamics and malaria transmission. 32 Agents in such models are grouped and treated as interchangeable, so they cannot be used to study individual behavior. 54 Agent-based models, in turn, can capture complex decision processes and, by allowing one to define agents and states, can produce a representation that is closer to reality. The SAFARI model in particular was developed and adapted specifically to measure how SFMs can affect populations at different levels. [33][34][35] The SAFARI model accounts for patient-specific as well as national and regional variation, and allows researchers to specify costs and financing sources. [33][34][35] Evidence has shown that compartmental and agent-based models can both be used to model transmission routes and can even produce similar results, but the latter allow for studying individual-specific variation through time, which yields further information. 55 One potential and relevant drawback of agent-based models such as SAFARI is the requirement for multiple types of high-quality data, as well as software, mathematical modeling expertise, and time. These factors can discourage national authorities from reading, trusting, and replicating the analysis in other contexts, or from promoting the study of the impacts of SFM prevalence in general. 56
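To show what "deterministic compartment model" means in practice, here is a minimal SIR-style sketch integrated with a forward Euler step. Unlike the cited human-mosquito model, it has a single host population and no SP-resistant class, and the rates are assumed values chosen only for illustration.

```python
# Minimal deterministic compartmental (SIR-style) sketch, integrated
# with forward Euler. One host population; rates are assumed values,
# far simpler than the cited human-mosquito SP-resistance model.
beta, gamma = 0.30, 0.10   # transmission and recovery rates (assumed)
S, I, R = 0.99, 0.01, 0.0  # initial population fractions
dt, days = 0.1, 200

for _ in range(int(days / dt)):
    new_inf = beta * S * I * dt   # dS/dt = -beta * S * I
    new_rec = gamma * I * dt      # dR/dt = +gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(f"t={days} d: S={S:.3f}, I={I:.3f}, R={R:.3f}")
```

Lowering the recovery rate to mimic subtherapeutic dosing raises and prolongs the epidemic peak, which is the qualitative mechanism the cited study quantifies for SP quality.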
DISCUSSION
This is the first systematic review to identify the methods used to estimate the impacts of SFMs and to outline the geographies and drug classes for which this has been done. This review acts as a resource for those who aim to work on the impacts of SFM prevalence by providing an updated, comprehensive collection and analysis of the most relevant contributions to this subject, and thus outlining the current gaps in the literature. The first objective of our systematic review was to identify the methods used to estimate the health, economic, and social impact of SFMs in LMICs. More than half of the included studies (n = 6) used the SAFARI model for estimating SFM impacts. 28,29 Other methods included a simple uncertainty model using LHS, a costing study, a decision tree model, and a deterministic compartmental model. [50][51][52][53][54] Estimates indicate that SF antimalarial medicines can account for 10% to 40% of total annual malaria costs. Results also showed that SFMs affect rural and low-SES populations disproportionately, and thus it is key to analyze the impacts on specific subgroups that should be targeted by policies. 25,[28][29][30][31][32][33][34][35] Our second objective was to identify the current gaps in the literature in terms of geographic setting, medicine classes, and impact. Notably, all included studies determined the health and economic impacts of SFMs only in the region of SSA, leaving all LMICs in other regions out of this review, as well as several classes of medicines. Nine of the 11 studies estimated the impacts of SF antimalarials only, one report included SF antibiotics, 20 and another estimated the impact of a range of SFMs, including antibiotics, antimalarials, and antivirals. 32 An important gap identified is that we could not find any study that measured the potential social impact of SF prevalence, and there is a lack of data regarding health outcomes (after treatment with medication) as well as costs associated with post-treatment health care.
Existing literature has shown that LMICs bear most of the burden of SFMs, likely as a result of poor surveillance mechanisms, governance, regulations, and management of pharmaceutical supply chains. 36 Corruption, weak governmental policies, and limited technical capacity can also enable the distribution of SFMs in LMICs and decrease the likelihood of detecting them. 36,57 Governments in many LMICs are striving to achieve UHC, which may result in political pressure to decrease drug prices so they are accessible to patients at all levels of economic status. This may jeopardize medicine quality. 36,57,58 High-income countries are also affected by the distribution of SF drugs: between 2005 and 2010, there was a reported 92% increase in falsified drug sales in the United States, amounting to a total of USD 75 billion. 58 Although the previously mentioned risk factors are common to several countries around the world, our review shows that most of the existing studies estimating the impact of SFMs were conducted in the region of SSA only. To understand the real magnitude of the prevalence of SFMs, future studies on this subject must expand geographic coverage; otherwise, the evidence will always be limited by the specificities of a single region.
Studies of SFMs have been narrow not only in geographic focus but also in the class of medicine studied, which has mostly been antimalarials and antibiotics. Conditions commonly treated with antibiotics are a great burden on LMICs, and there is evidence of antibiotics and antimalarials being targets of SFM production. 6,47,59 As for antimalarials, after 2001, when the WHO declared ACTs the first-line medicine for malaria treatment, 46 reports of various surveys in Southeast Asia showed that up to 50% of the artesunate monotherapy sold was fake, and studies predicted this situation could worsen. 60,61 This, together with ineffectual drug regulation and inadequate technical capacity in many LMICs with endemic malaria, creates a considerable challenge for the development of effective policy action on SFMs. 62 In 2018, Ozawa et al. 6 found that 19.1% of antimalarials and 12.4% of antibiotics were substandard or falsified. Although these may be reasons why SF antimalarials and antibiotics have been the focus of most SFM studies, WHO Global Health estimates show that this focus is not always well justified. 59 In 2019, although most of the top 10 causes of death in low-income countries were indeed communicable diseases, noncommunicable diseases such as ischemic heart disease and stroke ranked third and fourth, respectively. 59 In LMICs, ischemic heart disease and stroke are the number one and number two causes of death. 63 Malaria was the sixth most common cause of death, claiming 190,000 lives in 2019, whereas 416,363 people died of lower respiratory infections and 262,905 of diarrheal disease. 63 These data suggest a need for more research on medicines used to treat noncommunicable diseases such as diabetes or hypertension, diseases with an increasing burden over time, and on essential medicines, including cough syrups, antiepileptics, and anesthetics, for which there are historical records of falsification or other types of corruption, with dramatic consequences for their users, often children. [59][60][61][62][63][64][65] Access to effective and safe, essential health care for all is a public health priority, now even more topical because of the ongoing COVID-19 pandemic. In March 2020, U.S. Customs and Border Protection seized fake COVID-19 tests, and the World Customs Organization reported an increase in falsified hand sanitizers and respirator masks. 66,67 Falsified chloroquine, reported by some in the media to be a remedy for COVID-19, was discovered to be in circulation in West Africa. 68 Substandard and falsified medication can lead to an increase in the number of adverse events, and patients may become even more reluctant to seek health-care treatment, further amplifying the impacts of SFMs, with particularly severe consequences for advanced diseases. 14,69 Patients may also become disillusioned with generic brands and may purchase expensive brand-name medicines to avoid SFMs. 14,69 This is a particularly pertinent issue for many LMICs, where out-of-pocket costs for medical treatment impoverish 90 million people annually. 2,5 On the occasions that governments provide free or subsidized medicines to patients, public authorities are affected directly by the presence of SFMs in the market. 7,14,[64][65][66][67] The methods identified in our review show progressive and heroic efforts to grasp the breadth of the impacts of SFM prevalence.
There is a clear evolution in the analytical methods adopted over a relatively short period of time, improving the capacity to quantify impacts and to study the different channels through which SFMs affect populations and governments. Although our review focused on quantitative studies, qualitative methods have made extremely relevant contributions to this field, namely regarding the potential economic and political factors that enable the presence of SFMs in the market. 36 This suggests that a more interdisciplinary approach using mixed qualitative and quantitative methods can improve our understanding of the effects of SFMs on all involved parties, from patients to health-care professionals, manufacturers, and government institutions. 36,57 The findings of our systematic review generally match the results of one earlier review on methods to measure the impacts of SFMs. 70 That study, by Ozawa et al. 70 from mid-2022, identified 9 of the 11 studies in our review. The authors highlight the role of simulation models in providing estimates of the impact of SFMs, and the lack of data on the costs and effectiveness of interventions to improve medicine quality provision and monitoring. 70 Our study goes beyond what has already been published by including a review of the measurement of social impact. In fact, one of our main conclusions is that we could not find any studies covering these impacts, despite their relevance to the design of health-care policies; this is a valuable new contribution to the literature. In addition, our review further supports and complements the earlier findings by providing a systematic analysis of the existing literature, using quality checklists, and going into more detail on the current evidence and existing methods. This more detailed approach also allowed us to explore further the gaps found in terms of primary data collection and availability, and the need to expand studies on the impact of SFMs to other regions and more medicine classes.
Moving forward, this review endorses the WHO recommendation that, to improve the estimation of the public health and socioeconomic impacts of SFMs, more consistent standards for data collection, analysis, and reporting of SFM impacts should be adopted. 20 Consistency in the definitions of SFMs used in guidance and in the testing of suspected SF medical products would allow for more replicability and comparison across studies, which would improve our understanding of the nature and magnitude of the issues associated with the prevalence of SFMs. This would naturally contribute to closing the main literature gaps found in our systematic review. In addition, there is a need for more primary data collection and availability (e.g., patient characteristics before and after treatment, laboratory results) that would allow for more robust estimates of the prevalence of disease, behaviors, or SFMs. According to the WHO, this could be achieved by making national regulatory authorities responsible for recording data and by implementing post-market surveillance systems to create appropriate partnerships for sharing information at the international level. 20 We also consider that user-friendly tools to measure SFM impacts (i.e., low cost and easy to replicate) can be important to help and incentivize governments to monitor how SFMs are affecting their populations with methods they can trust, without overloading their capacity. 55 Prior to 2006, it was widely accepted that, globally, 10% of drugs were counterfeit. 8,9 In that same year, the International Medical Products Anti-Counterfeiting Taskforce published a document updating the estimates and explaining that using a blanket figure of 10% worldwide was reductive. 71 More than a decade later, the WHO's estimate of the prevalence of SFMs in LMICs stands at 10.5%. To our knowledge, none of the currently available SFM prevalence estimates was derived from laboratory trials (i.e., from medicines collected in the field and subjected to laboratory testing); instead, they were obtained by applying several strong assumptions to aggregated data, which causes large uncertainty about the real prevalence and impacts of SFMs. 72,73 Our systematic review has several limitations. Because all studies included in this review were conducted in SSA, our findings may not be generalizable to other settings. Considering that the evidence is concentrated in six countries in SSA (Uganda, Nigeria, Zambia, DRC, Kenya, and Tanzania), the findings may not be applicable to other LMICs or even to other African countries, because of the socioeconomic differences among countries and the quality of data sources. 62 Our search was also limited by the fact that, to our knowledge, the existing evidence on the impacts of SF medical products does not include vaccines or diagnostic kits. 20 Because there are limited data on the prevalence of SF antimalarials and great variation in estimated prevalence, the findings included in these studies may be underestimations. For example, one study used a prevalence of 19.1% for SF antimalarials in the DRC, while other studies have reported the prevalence of SF antimalarials to be as high as 62% in Kinshasa. 33,64 In addition, the studies included in our review did not estimate the costs of implementing interventions to reduce the prevalence of SFMs. The studies found estimate the costs of SFMs and the changes attributed to the interventions, but do not account for how much the interventions themselves cost, which is extremely relevant for policy implementation purposes.
In conclusion, the development of a standard method for measuring the health, social, and economic impacts of SFMs can improve the existing coverage of geographic regions, SF drug classes, and population characteristics.
Moreover, even though mathematical models have allowed us to grasp some of the magnitude of this problem, better observational data could improve researchers' capacity to analyze the effects of SFMs. Having better primary data to feed the analytical models would allow for more in-depth and robust evidence on this subject. At the same time, complying with all ethical considerations when developing observational studies related to health care, and improving SFM monitoring systems (including alerts, sampling, collection, testing, and identifying health impacts), would bring a valuable contribution to the existing literature. 72,73 Table 3 provides a summary of the main conclusions and the respective recommendations that resulted from our study. We found that mathematical models such as SAFARI are currently the most widely used approach to estimate the effects of SFMs in LMICs. We found several important gaps in the literature, such as the lack of studies on the impacts of SFMs in LMICs outside the region of SSA, the lack of coverage of different medicine classes (other than antibiotics and antimalarials), and the lack of evidence on the potential social impact of SFM prevalence. More methodological guidance on designing and conducting SF impact studies is needed for a more nuanced understanding of this complex topic.
"Medicine",
"Economics"
] |
Nodal solutions for the Robin $p$-Laplacian plus an indefinite potential and a general reaction term
We consider a nonlinear Robin problem driven by the $p$-Laplacian plus an indefinite potential. The reaction term is of arbitrary growth and only conditions near zero are imposed. Using critical point theory together with suitable truncation and perturbation techniques and comparison principles, we show that the problem admits a sequence of distinct smooth nodal solutions converging to zero in $C^1(\overline{\Omega})$.
The potential function $\xi \in L^\infty(\Omega)$ is indefinite (that is, sign changing) and the reaction term $f(z, x)$ is a Carathéodory function (that is, for all $x \in \mathbb{R}$, the mapping $z \mapsto f(z, x)$ is measurable and, for almost all $z \in \Omega$, $x \mapsto f(z, x)$ is continuous). We do not impose any global polynomial growth condition on $f(z, \cdot)$. All the conditions on $f(z, \cdot)$ concern its behaviour near zero. In the boundary condition, $\frac{\partial u}{\partial n_p}$ denotes the generalized normal derivative, defined by extension of the map $u \mapsto |Du|^{p-2}(Du, n)_{\mathbb{R}^N}$, with $n(\cdot)$ being the outward unit normal on $\partial\Omega$. The boundary coefficient $\beta \in C^{0,\alpha}(\partial\Omega)$ (with $0 < \alpha < 1$) satisfies $\beta(z) \ge 0$ for all $z \in \partial\Omega$. When $\beta \equiv 0$, we have the usual Neumann problem. Using variational methods, together with suitable truncation and perturbation techniques and comparison principles, and an abstract result of Kajikiya [5], we show that the problem admits an infinity of smooth nodal (that is, sign changing) solutions converging to zero in $C^1(\overline{\Omega})$. Our starting point is the recent work of Papageorgiou and Rădulescu [9], where the authors produced an infinity of nodal solutions for a nonlinear Robin problem with zero potential (that is, $\xi \equiv 0$) and a reaction term of arbitrary growth. They assumed that the reaction term $f(z, x)$ is a Carathéodory function and that there exists $\eta > 0$ such that, for almost all $z \in \Omega$, $f(z, \cdot)|_{[-\eta, \eta]}$ is odd and $f(z, \eta) \le 0 \le f(z, -\eta)$ (the second inequality follows from the first inequality and the oddness of $f(z, \cdot)$). Moreover, they assumed that, for almost all $z \in \Omega$, $f(z, \cdot)$ exhibits a concave (that is, a strictly $(p-1)$-superlinear) term near zero. So, $f(z, \cdot)$ has zeros of constant sign and it presents a kind of oscillatory behaviour near zero. In the present work we introduce in the equation an indefinite potential term $\xi(z)|x|^{p-2}x$ and we remove the requirement that $f(z, \eta) \le 0$ for almost all $z \in \Omega$. We point out that this was a very convenient hypothesis, since the constant function $\tilde{u} \equiv \eta > 0$ provided an upper solution for the problem and $\tilde{v} \equiv -\eta < 0$ a lower solution. With them, the analysis of problem (1) was significantly simplified. The absence of this condition in the present work changes the geometry of the problem, and so we need a different approach. We should mention that in Papageorgiou and Rădulescu [9] the differential operator is more general and nonhomogeneous. It is an interesting open problem whether our present work can be extended to equations driven by nonhomogeneous differential operators, as in [9].
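For orientation, the displayed statement of problem (1) did not survive extraction; a plausible reconstruction from the quantities named above (the $p$-Laplacian, the potential $\xi$, the reaction $f$, and the Robin boundary condition with coefficient $\beta$) is the following sketch, not a verbatim quotation:

```latex
\begin{equation}
\left\{
\begin{aligned}
-\Delta_p u(z) + \xi(z)\,|u(z)|^{p-2}\,u(z) &= f(z, u(z)) && \text{in } \Omega,\\
\frac{\partial u}{\partial n_p} + \beta(z)\,|u(z)|^{p-2}\,u(z) &= 0 && \text{on } \partial\Omega,
\end{aligned}
\right.
\qquad 1 < p < \infty. \tag{1}
\end{equation}
% Here \Delta_p u = \mathrm{div}(|Du|^{p-2} Du) denotes the p-Laplace
% differential operator; \Omega \subseteq \mathbb{R}^N is a bounded domain
% (the usual smoothness assumption on the boundary is ours).
```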
Wang [13] was the first to produce an infinity of solutions for problems with a reaction of arbitrary growth. He used cut-off techniques to study semilinear Dirichlet problems with zero potential driven by the Laplacian. More recently, Li and Wang [6] produced infinitely many nodal solutions for semilinear Schrödinger equations. We also refer to our recent papers [11,12], which deal with the qualitative analysis of nonlinear Robin problems.
We denote by $\|\cdot\|$ the norm of the Sobolev space $W^{1,p}(\Omega)$, defined by $\|u\| = \left(\|u\|_p^p + \|Du\|_p^p\right)^{1/p}$ for all $u \in W^{1,p}(\Omega)$.
The Banach space $C^1(\overline{\Omega})$ is an ordered Banach space with positive (order) cone $C_+ = \{u \in C^1(\overline{\Omega}) : u(z) \ge 0 \text{ for all } z \in \overline{\Omega}\}$. This cone contains the open set $D_+ = \{u \in C_+ : u(z) > 0 \text{ for all } z \in \overline{\Omega}\}$.
Also, let $\hat{D}_+ \subseteq C_+$ be defined by $\hat{D}_+ = \{u \in C_+ : u(z) > 0 \text{ for all } z \in \Omega\}$. Evidently, $\hat{D}_+ \subseteq C^1(\overline{\Omega})$ is open and $D_+ \subseteq \hat{D}_+$. On $\partial\Omega$ we consider the $(N-1)$-dimensional Hausdorff (surface) measure $\sigma(\cdot)$. Using this measure, we can define in the usual way the boundary Lebesgue spaces $L^s(\partial\Omega)$, $1 \le s \le \infty$. From the theory of Sobolev spaces, we know that there exists a unique continuous linear map $\gamma_0 : W^{1,p}(\Omega) \to L^p(\partial\Omega)$, known as the "trace operator", such that $\gamma_0(u) = u|_{\partial\Omega}$ for all $u \in W^{1,p}(\Omega) \cap C(\overline{\Omega})$. So, the trace operator assigns "boundary values" to every Sobolev function. The trace operator is compact into $L^s(\partial\Omega)$ for all $s \in \left[1, \frac{(N-1)p}{N-p}\right)$ when $p < N$, and for all $s \in [1, \infty)$ when $N \le p$. In the sequel, for the sake of notational simplicity, we will drop the use of the operator $\gamma_0$. All restrictions of Sobolev functions on $\partial\Omega$ are understood in the sense of traces.
Given $h_1, h_2 \in L^\infty(\Omega)$, we write $h_1 \prec h_2$ if and only if, for every compact set $K \subseteq \Omega$, we can find $\varepsilon = \varepsilon(K) > 0$ such that $h_1(z) + \varepsilon \le h_2(z)$ for almost all $z \in K$. We see that, if $h_1, h_2 \in C(\Omega)$ and $h_1(z) < h_2(z)$ for all $z \in \Omega$, then $h_1 \prec h_2$. The next strong comparison theorem can be found in Fragnelli, Mugnai and Papageorgiou [3].
As we have already mentioned in the introduction, the sequence of nodal solutions will be generated by using an abstract result of Kajikiya [5], which is essentially an extension of the symmetric mountain pass theorem (see also Wang [13]). Recall that, if $X$ is a Banach space and $\varphi \in C^1(X, \mathbb{R})$, we say that $\varphi$ satisfies the "Palais-Smale condition" ("PS-condition", for short) if the following holds: "every sequence $\{u_n\}_{n \ge 1} \subseteq X$ such that $\{\varphi(u_n)\}_{n \ge 1}$ is bounded and $\varphi'(u_n) \to 0$ in $X^*$ as $n \to \infty$, admits a strongly convergent subsequence".
Theorem 2.1. Let $X$ be a Banach space and suppose that $\varphi \in C^1(X, \mathbb{R})$ satisfies the PS-condition, is even and bounded below, $\varphi(0) = 0$, and for every $n \in \mathbb{N}$ there exist an $n$-dimensional subspace $V_n$ of $X$ and $\rho_n > 0$ such that $\sup\{\varphi(u) : u \in V_n, \|u\| = \rho_n\} < 0$. Then there exists a sequence $\{u_n\}_{n \ge 1} \subseteq X$ of critical points of $\varphi$ such that $u_n \ne 0$ for all $n \in \mathbb{N}$ and $u_n \to 0$ in $X$. In what follows, we denote by $A : W^{1,p}(\Omega) \to W^{1,p}(\Omega)^*$ the nonlinear map defined by $\langle A(u), h \rangle = \int_\Omega |Du|^{p-2} (Du, Dh)_{\mathbb{R}^N}\, dz$ for all $u, h \in W^{1,p}(\Omega)$. It is well known (see, for example, Gasinski and Papageorgiou [4]) that $A(\cdot)$ is monotone, continuous and of type $(S)_+$. For $x \in \mathbb{R}$, we set $x^\pm = \max\{\pm x, 0\}$. Then, given $u \in W^{1,p}(\Omega)$, we can define $u^\pm(\cdot) = u(\cdot)^\pm$, and we have $u^\pm \in W^{1,p}(\Omega)$, $u = u^+ - u^-$, $|u| = u^+ + u^-$. Finally, for $\varphi \in C^1(X, \mathbb{R})$, $K_\varphi = \{u \in X : \varphi'(u) = 0\}$ is the critical set of $\varphi$.
3. Infinitely many nodal solutions. In this section we prove our main result, namely the existence of a whole sequence of distinct nodal solutions $\{u_n\}_{n \ge 1}$ which converges to zero in $C^1(\overline{\Omega})$.
The uniqueness of this positive solution of problem (4) follows from Theorem 1 of Diaz and Saa [1].
Since problem (4) is odd (note that $k(z, \cdot)$ is odd, see (3)), it follows that $\tilde{v} = -\tilde{u}$ is the unique negative solution of (4).
Using the two constant sign solutions of problem (4) produced by Proposition 2, we introduce the following truncation-perturbation of the reaction term $f(z, \cdot)$ (recall that $\tilde{\xi}_0 > \|\xi\|_\infty$). This is a Carathéodory function. We also consider the positive and negative truncations of $\hat{f}(z, \cdot)$, that is, the Carathéodory functions $\hat{f}_\pm(z, x) = \hat{f}(z, \pm x^\pm)$, together with the corresponding truncated energy functionals, defined for all $u \in W^{1,p}(\Omega)$.
In a similar fashion, we show that $\tilde{v} \le v$ for all $v \in K_{\hat{\varphi}_-}$. Proposition 6. If hypotheses $H(\xi)$, $H(\beta)$, $H(f)$ hold and $V \subseteq W^{1,p}(\Omega)$ is a nontrivial finite dimensional subspace, then we can find $\rho_V > 0$ such that $\sup\{\hat{\varphi}(u) : u \in V, \|u\| = \rho_V\} < 0$.
"Mathematics"
] |
Multi-scale crowd feature detection using vision sensing and statistical mechanics principles
Crowd behaviour analysis using vision has been the subject of many different approaches. Multi-purpose crowd descriptors are one of the more recent approaches. These descriptors provide an opportunity to compare and categorize various types of crowds as well as classify their respective behaviours. Nevertheless, the automated calculation of descriptors which are expressed as measurements with accurate interpretation is a challenging problem. In this paper, analogies between human crowds and molecular thermodynamic systems are drawn for the measurement of crowd behaviour. Specifically, a novel descriptor is defined and measured for crowd behaviour at multiple scales. This descriptor uses the concept of Entropy for evaluating the state of crowd disorder. Our results show that the descriptor does indeed capture the desired notion of crowd entropy while utilizing easily detectable image features. Our new approach to machine understanding of crowd behaviour is promising, and it offers complementary capabilities to the existing crowd descriptors, for example, as will be demonstrated, in the case of spectator crowds. The scope and performance of this descriptor are discussed in detail in this paper.
Introduction
Various physical analogies and modelling approaches have been used in dealing with crowd motion modelling. Some of the more popular modelling analogies in this domain include cellular automata, the social force model and fluid mechanics. These methods each have their own functionalities and limitations. Cellular automata (CA) have been used to simulate crowd dynamics in situations such as evacuation [2,7,13,15] or to simulate certain effects such as line formation in the crowd [30]. CA does not aim to capture all the microscopic dynamics, but only those necessary to produce a certain macro effect. The social force model is another popular method for crowd simulation [9,10]. The social force model is based on a simple concept wherein individuals move according to their goals and environmental constraints. It is assumed that each individual has a desired direction and velocity, while seeking to keep a social distance from other members of the crowd as well as to avoid hitting boundaries. The accuracy of the social force model is directly dependent on the accuracy of the estimated desired velocities, which is in itself a challenging problem. Fluid mechanics has also been investigated for modelling pedestrian motions. Henderson was the first to propose a gas kinetic model for pedestrian flows [11]. On the basis of a Boltzmann-like gas kinetic model, Helbing [8] developed a special theory for pedestrians, distinguishing between different groups within the crowd with different types of motions and goals. However, these works do not include any experimentation with real crowds using vision or any other in situ observation or measurement.
In addition to the above-mentioned modelling approaches, where models impose on crowd motion hypothetical structures controlled by sets of parameters, other approaches seek to understand crowd behaviour through observation and learning from recurring crowd patterns of motion. Topic models have been used successfully for learning these patterns in an unsupervised fashion. In this, they have been used to learn the semantic (spatial) regions within a crowd scene [29,38]. Other methods for detecting and segmenting semantic regions have also been proposed [5,17,21,27,28]. Other approaches for analysing crowd behaviour include agent-based methods [32,36], where the behaviour of pedestrians as individuals is considered and modelled in relation to the rest of the crowd; appearance-based approaches [22], using crowd behaviour priors in the form of image patches which are learned offline; and methods which look at groups and group activities within the crowd [4,20,33].
One of the first works which proposed a descriptor for crowds and demonstrated its usefulness in analysing the behaviour of the crowd is by Zhou et al. [35,37]. Crowd Collectiveness was introduced as a measure of the degree to which individuals act as a union in a collective motion. Collectiveness is based on two key properties of collective motion: (i) behaviour consistency in neighbourhoods; (ii) global consistency among non-neighbours. Using collectiveness as a method for detecting groups within the crowd was also proposed [37]. The detection of groups within a crowd was further studied using the concepts of coherent motion [31]. A number of group-level crowd descriptors were then introduced by Shao et al. [25]. These descriptors include Stability, Uniformity and Conflict. Our approach bears similarities to these works, in that crowd descriptors are sought to assist understanding of the behaviour of the crowd.
Following the methods that consider a crowd of people to resemble a physical system, we propose Entropy as an additional descriptor for crowd analysis. Entropy has similarities to both collectiveness and stability. However, it is distinctly different in terms of its definition and computation. In practice, as will be demonstrated, these descriptors are suitable in different circumstances. A detailed comparison is made between entropy and collectiveness, while stability is also compared to these both. We will further introduce the concept of Internal energy for crowds and offer initial discussions as to its validity as a crowd descriptor.
As for the significance of defining and estimating crowd descriptors, Zhou et al. [37] note that the "lack of universal descriptors" to characterize crowd behaviour(s) is the main reason behind the inadequacy of most surveillance technologies for automatically detecting crowd behaviour(s) across different scenes. Crowd descriptors, especially when used as a set of features, provide the generality of approach which is needed to handle different types of crowds and different types of crowd behaviours. This is in contrast to the modelling approaches which have been developed to investigate specific crowds and specific behaviours. Different descriptors are therefore required to express the various aspects of crowd behaviour. In our study, we propose a novel and complementary descriptor for meso-scale crowd description. This is inspired by the fundamental principles of statistical mechanics, where the macroscopic properties of gases are derived from the statistical realisation of the motion of their constituent microscopic molecules (microscopic particles). A similarity can be found with crowds, where micro-scale (individual) motions within the crowd can influence the overall behaviour(s) of the crowd at the macro-scale levels. In this, individual motions are observed and utilized to measure crowd descriptors at the macro-level. This provides a method to quantify entropy as a computer vision descriptor for crowds. Furthermore, it opens up an opportunity to explore the knowledge of statistical mechanics for the benefit of crowd behaviour analysis from vision.
The use of an ensemble of particles for modelling people was introduced in the initial theoretical studies [8,10,11] and was further utilized for automated visual analysis of crowd behaviour. Some of the works mentioned in this section use a similar framework [1,5,18]. A survey work by Moore et al. [19] refers to this as particle-based framework and reviews the benefits and scope of such an approach.
Our contributions include (i) the introduction of entropy as a complementary descriptor for crowd analysis; and (ii) a new approach for unusual behaviour detection in crowds via the crowd space. A similar method can also be used for defining multiple crowd states and thereby detecting a change in the crowd state.
In the next section, a new crowd feature space is introduced. In this, three features of Structure, Energy and Translation are intuitively identified to facilitate the understanding of the state of a crowd and its behaviour. Also, these features can be used directly to evaluate the usualness or the unusualness of this behaviour. Section 3 provides a detailed description of structure and its usage in context of the study of crowd. It is shown here how this descriptor can be mapped onto the statistical mechanics principles of entropy. The scope and comparisons to the other descriptors are also covered. Discussions on sub-groups and homogeneity in sub-groups, as well as discussions on internal energy as a crowd descriptor, can be found in Sect. 4. Section 5 looks into unusual behaviour detection of crowds in real settings and within a context. Finally, conclusions are summarized in Sect. 6.
The crowd features
In our approach, we assume that a force keeps crowd members together. The strength of connections between the members will be referred to as Structure. Irrespective of the strength of connections, the crowd may be in an excited state (high energy) or a calm state (low energy). This feature is referred to as Energy. It is also possible to consider that the whole crowd moves in space. This is referred to as Translation. As will be discussed in greater detail later, the above features are inspired by, and translated into, the statistical mechanics concepts of entropy, internal energy and flow. Figure 1 depicts a visual representation of these features.
Following the introduction of these features, a three-dimensional crowd space can be defined. Figure 2 shows a representation of the structure-energy-translation crowd space. In this, the cube represents a space of normalised parameters representing the state of the crowd system. Table 1 offers a set of hypothetical examples of different types of crowds, while Fig. 2 shows where these would reside in the crowd space and illustrates how the different crowd types can be differentiated using the crowd feature space. Changes in the state of a crowd may also be detected using this feature space (given that these changes are within the scope of the features; the scope is addressed in the following sections). Further, unusual behaviours can be detected using this space by defining a sub-space wherein the crowd is expected to reside. Figure 3 illustrates how the state of crowds can be correctly monitored using these features. As shown in Fig. 2, a crowd may reside in any location in the crowd space. However, for any given situation or context there would be an expectation of where the crowd should reside. A divergence from this expected or desired position can be considered as an unusual crowd behaviour. Figure 3 shows the envisaged sub-spaces of usual (expected) crowd behaviour in various situations and crowds. By mapping the crowd onto the crowd space and learning the limits of usual behaviour, a crowd with unusual behaviour can be defined as a crowd which does not fall within such limits.
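As a minimal sketch of how the crowd space could be used for unusual behaviour detection, the snippet below checks a normalized (structure, energy, translation) state against an axis-aligned usual-behaviour sub-space. The feature values, bounds, and function names are our own illustrations, not the paper's code; in practice the bounds would be learned from observations of the crowd in the same context.

```python
import numpy as np

# Hypothetical normalized crowd state: (structure, energy, translation),
# each in [0, 1]. Order must match the bounds dictionary below.
crowd_state = np.array([0.85, 0.30, 0.05])  # e.g., a calm spectator crowd

# Usual-behaviour sub-space for this context, as per-feature (min, max) bounds.
usual_bounds = {
    "structure":   (0.6, 1.0),   # spectators keep strong structure
    "energy":      (0.0, 0.7),   # variable excitement is expected
    "translation": (0.0, 0.1),   # the crowd as a whole should not move
}

def is_unusual(state, bounds):
    """Flag a crowd state that falls outside the expected sub-space."""
    lows = np.array([lo for lo, _ in bounds.values()])
    highs = np.array([hi for _, hi in bounds.values()])
    return bool(np.any(state < lows) or np.any(state > highs))

print(is_unusual(crowd_state, usual_bounds))                 # False: usual
print(is_unusual(np.array([0.2, 0.9, 0.6]), usual_bounds))   # True: unusual
```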
Entropy of crowd
The concept of entropy is based on the generic observation that there are many more ways for a system of microscopic particles to be disordered than to have a certain specific order. While manifesting a specific macroscopic state, it is more probable for such a system, or statistical ensemble, to assume a level of disorder. If a certain order is observed, given that it would have been unlikely for such order to arise at random, it can be concluded with some confidence that there were other forces at play which enforced such an arrangement.
In this section, one assumes that the crowd is a homogeneous system or otherwise the concepts are considered for homogeneous groups within the crowd. Detection of homogeneous groups is achieved through the use of Collective merging [37]. These meso-scale groups are tracked in consecutive frames. This is further discussed in Sect. 4.
In classical statistical mechanics theory, entropy, $S$, is the measure of mechanical disorder for a system of microscopic particles. It is defined in the following way: $S = -K \sum_i p_i \ln p_i$, where, for a system with a discrete set of microstates, $p_i$ is the probability of occurrence of microstate $i$ and $K$ is the Boltzmann constant. Similarly, entropy (mostly denoted by $H$) is adopted as a measure of uncertainty in information theory: $H = -\sum_i p_i \log p_i$. For both the above-mentioned theories, entropy leads to an understanding of the overall macroscopic state of a system of microscopic particles, by calculating the statistical realisation of their microscopic states. The initial definition of entropy in classical statistical mechanics, $S = k_B \ln W$, connects entropy directly to the number of microstates, $W$, which corresponds to the macroscopic state of the system. Considering the states of matter, which include solid, liquid and gas, the entropy of these states can be understood intuitively. In a solid, molecules oscillate around a fixed point, and entropy remains relatively low. In a liquid system, molecules move relatively freely while keeping certain distances from one another. In such a case, entropy is usually higher in value than that of a solid system. Finally, in a gaseous system, the constituent molecules can freely move anywhere, which leads to the highest values of entropy. In other words, higher values of entropy are observed when the uncertainty about the positions of the constituent molecules of matter increases.
Fig. 3 Usual behaviour sub-spaces are shown on the right of each crowd example. a A spectator crowd is denoted by variant levels of energy at high structure with no translation. b When arriving or departing, a spectator crowd has lower structure but significant translation. c A crowd on an escalator is a good example of a low-energy crowd. d A crowd on stairs has smaller structure in comparison with (c), since each individual is moving at their own pace, and more energy, since each individual moves their limbs
One of the challenges in evaluating the value of entropy is that for each crowd example only a limited subset of all possible microstates are observed. Therefore, it is not possible to count the number of microstates or directly calculate their probabilities. For this, an extra step is devised to infer a model for all possible microstates using the set of observed microstates.
Calculation of entropy using a microstate model
We define the entropy of a crowd as the joint entropy of $N_p$ individuals who are scattered over $N_l$ locations with a probability mass function $f_{Y_i}$ on a discrete random variable, $Y_i$, defined at each spatial bin, $l_i$.
The joint entropy of two ensembles $X$ and $Y$ is [16] $H(X, Y) = -\sum_{x,y} P(x, y) \log P(x, y)$, where both $X$ and $Y$ are triples. $X$ is a triple $(x, A_X, P_X)$, where $x$ is the value of a random variable which takes on one of a set of possible values $A_X = \{a_1, \ldots, a_I\}$, having probabilities $P_X = \{p_1, \ldots, p_I\}$ with $P(x = a_i) = p_i$. Thereby, the entropy of a crowd can be described as $H(X_1, \ldots, X_{N_p})$, where $X_k$ is a triple $(x_k, L_X, P_{X_k})$; $x_k$ takes on one of a set of possible values $L_X = \{l_1, l_2, \ldots, l_{N_l}\}$, having probabilities $P_{X_k} = \{p_{k,1}, p_{k,2}, \ldots, p_{k,N_l}\}$, with $P(x_k = l_i) = p_{k,i}$. Two approaches are considered here to evaluate $H(X_1, \ldots, X_{N_p})$.
Approach 1: Complete enumeration
First, the complete enumeration of all possible microstates is considered, using the ones which have been observed to calculate the $f_{Y_i}$s. The joint probabilities, $P(x_1, \ldots, x_{N_p})$, in Eq. (4) are the other unknowns. While these probabilities can be calculated using the probability mass functions $f_{Y_i}$, assumptions regarding the dependency of the individuals need to be made. The computational cost is of the order $O(N_l^{N_p})$: this is the number of arrangements of $N_p$ individuals scattered over $N_l$ locations, and for each of these arrangements the joint probability $P(X_1, \ldots, X_{N_p})$ needs to be calculated.
The validity of this approach may be contested, since the probability mass functions $f_{Y_i}$ are calculated using a limited sample set of observed microstates, and the approach is prone to overfitting the model to the observed sample set. Thus, relaxing some of the conditions in this model may be favourable for a better coverage of the space of all possible microstates.
Approach 2: Preserving the density pattern
One of the assumptions in the above approach concerns the dependence between the positions of the individuals. In the example below, it will be shown that although there is reason to believe that these positions are dependent, sufficient information is not available to understand their dependencies in an unbiased manner.
In support of the dependency argument, consider that people tend to keep certain distances from each other, the so-called personal space. Also, depending on the relationships between the individuals, they may tend to group together or avoid others. From a different point of view, consider that a certain macrostate has been observed in a crowd: a number of clusters of people are observed in different locations. There may be different causes for this effect. Hypothesis A: some physical locations are more desirable than others, and people cluster in them for that reason. Hypothesis B: there is some social relationship between members of the crowd, and they cluster together due to that relationship. In Hypothesis B, the act of clustering is important, while the cluster positions are random. Furthermore, Hypothesis C can be added to accommodate the combination of the other two hypotheses. However, sufficient information is not given in favour of either hypothesis A, B or C in the above example.
Therefore, we propose that, when analysing crowd formation through a few correlated frames, a simpler model which exhibits similar outcomes be adopted. We hypothesize that a pattern is formed in the crowd if each individual is bounded by the same pattern. In this model, apart from the locations of people, which are considered to be independent, the individuals are considered to be identical. As a result of this approach, the calculation of entropy simplifies.
Let $n_{i,j}$ be the number of times that individual $j$ has been observed in bin $l_i$ over $N_f$ frames ($N_f$ is the number of frames in a chosen time window). The probability of bin $l_i$ being selected by individual $j$ is $P_j(x = l_i) = \frac{n_{i,j}}{N_f}$. Given that the locations of individuals are considered independent and no distinction applies between individuals, the probability of any individual selecting bin $l_i$ is the same as for any other individual. Thus, the probability of selecting bin $l_i$, $P(x = l_i)$, is estimated in the following way: $P(x = l_i) = p_i = \frac{n_i}{N_f N_p}$, where $n_i$ is the sum of all density counts at bin $l_i$ in $N_f$ frames. Since the locations of individuals are independent, the joint entropy of the crowd, $H(X_1, \ldots, X_{N_p})$, simplifies as $H(X_1, \ldots, X_{N_p}) = \sum_{k=1}^{N_p} H(X_k)$. Also, note that the locations of all the individuals are based on the same location probabilities, $P(x = l_i)$. Thus, $H(X_1, \ldots, X_{N_p}) = N_p\, H(X)$, where $X$ is a triple $(x, L_X, P_X)$ and the outcome $x$ is the value of a random variable which takes on one of a set of possible values $L_X = \{l_1, \ldots, l_{N_l}\}$, having probabilities $P_X = \{p_1, \ldots, p_{N_l}\}$.
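To make Approach 2 concrete, the following is a minimal sketch of the entropy computation from tracked positions. The function name and the synthetic toy input are our own illustrations, not the authors' code; it assumes positions already mapped to a common internal coordinate frame.

```python
import numpy as np

def crowd_entropy(positions, w_bin, extent):
    """Approach 2: joint entropy N_p * H(X) from pooled density counts.

    positions : array of shape (N_f, N_p, 2), per-frame (x, y) internal
                positions of N_p individuals over N_f frames.
    w_bin     : spatial bin width.
    extent    : ((x_min, x_max), (y_min, y_max)); points outside are ignored.
    Returns the joint entropy in nats.
    """
    n_f, n_p, _ = positions.shape
    (x0, x1), (y0, y1) = extent
    xbins = np.arange(x0, x1 + w_bin, w_bin)
    ybins = np.arange(y0, y1 + w_bin, w_bin)
    pts = positions.reshape(-1, 2)
    # n_i: pooled density counts per spatial bin over the time window
    counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=[xbins, ybins])
    p = counts.ravel() / (n_f * n_p)        # P(x = l_i) = n_i / (N_f * N_p)
    p = p[p > 0]                            # convention: 0 * log(0) = 0
    h_single = -np.sum(p * np.log(p))       # H(X)
    return n_p * h_single                   # N_p * H(X)

# Toy check: oscillating ("solid-like") vs freely moving ("gas-like") crowd.
rng = np.random.default_rng(0)
base = rng.uniform(0, 10, size=(1, 16, 2))
solid = base + rng.normal(0, 0.05, size=(50, 16, 2))   # small oscillations
gas = rng.uniform(0, 10, size=(50, 16, 2))             # unconstrained motion
ext = ((0, 10), (0, 10))
print(crowd_entropy(solid, 0.5, ext) < crowd_entropy(gas, 0.5, ext))  # True
```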
Pre-processing
Three pre-processing stages should be considered before crowd entropy is calculated. Real-world locations; the locations of individuals in an image have been subjected to a projective transform. The severity of the distortion caused by this transform is relative to the angle between the camera's image plane and the scene's ground plane. Ideally, this angle would be zero, which is the case when the camera is placed overhead, looking down at the scene. Figure 4 shows three examples where the disruptive effects of the projective transform increase from left to right. Given the head locations of the individuals, the real-world positions can be retrieved using the camera calibration matrix and assuming an average height for the entire crowd. This is done through a head-height plane homography transform [24]. However, the problem of head detection has proven difficult in the context of crowds. An alternative method using image features is discussed in Sect. 3.5. Moore et al. [19] also noted in their survey paper that side views "are least preferable for particle-based frameworks". However, a soft calibration can be considered in the case of features, as was also demonstrated by Zhou et al. [37]. Internal positions; in order to calculate entropy, the internal position of each individual within the crowd, $x_i$, is required. If the crowd is stationary, then the observed position, $x_o$, is equal to the internal position ($x_i = x_o$). However, if the crowd is moving with a flow velocity, $v_f$, the change in the internal position in a time step $dt$ can be calculated as $dx_i = (v_o - v_f)\,dt$, where $v_o$ is the observed velocity. Internal position density map; once the internal positions of individuals are known, an internal density map can be created. Note that the width of the density map bins, $w_{bin}$, is a significant parameter in the calculation of entropy. Too large a bin will mask the very information that entropy is aiming to extract; with too large a spatial bin, a gas and a solid may appear similar in the way they uniformly occupy the space, while too small a bin will be prone to noise. This is illustrated in Fig. 5. This figure shows two entropy levels, with Fig. 5a showing low entropy and Fig. 5b high entropy. A time window of two consecutive frames is also depicted, with the blue circles representing the positions of particles at time $t_0$ and the green circles the positions at time $t_1$. The spatial gridding was done using two bin sizes: large bins and small bins. Please note that each spatial bin only counts the number of particles which land in that bin, while the location of a particle within the bin is inconsequential. Conceptually, the entropy for Fig. 5a when observed with the large bin is zero, since there is no difference between the two observed microstates and the particles appear stationary. The oscillations are better observed with the small bin, where two of the particles are observed in new bins at $t_1$. In Fig. 5b, depicting a large entropy, the large bin only observes three out of 16 particles to have moved between $t_0$ and $t_1$, while this number is 15 out of 16 for the small bin. As a larger time window is considered, it is expected that the particles of example (b) would populate the available space, while the particles of example (a) are expected to oscillate around their original locations. This effect cannot be observed with the large bin, since even example (a), observed over a two-frame time window, appears to it as uniformly populating the available space.
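The following is a minimal sketch of the first two pre-processing steps, assuming a head-plane-to-ground homography matrix H from calibration and a per-frame flow estimate from an upstream module; both inputs and all names are hypothetical, not the authors' code.

```python
import numpy as np

def to_ground_plane(points_px, H):
    """Map image points (N, 2) to world coordinates via a 3x3 homography H,
    e.g. the head-height plane homography obtained from camera calibration."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]           # perspective divide

def internal_positions(observed, flow_velocity, dt):
    """Remove the overall crowd flow: dx_i = (v_o - v_f) * dt.

    observed      : (N_f, N_p, 2) world positions per frame.
    flow_velocity : (N_f - 1, 2) sampled flow velocity between frames
                    (assumed constant over the group, for simplicity).
    """
    internal = np.empty_like(observed)
    internal[0] = observed[0]
    for t in range(1, len(observed)):
        v_o = (observed[t] - observed[t - 1]) / dt  # observed velocities
        v_i = v_o - flow_velocity[t - 1]            # internal velocities
        internal[t] = internal[t - 1] + v_i * dt
    return internal
```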
Normalisation of Entropy
Non-normalised entropy can only be used to compare crowds which are composed of the same number of individuals and have the same spatial extent. Since these conditions are rarely met, the normalisation of entropy becomes a necessary step. Specific entropy; specific entropy is the entropy per unit of mass. Assuming each individual has a unit of mass, the specific entropy, $H_k$, will be the entropy of one individual in this crowd: $H_k = \frac{1}{N_p} H(X_1, \ldots, X_{N_p}) = H(X)$, where $X$ is a triple $(x, L_X, P_X)$, as in Eq. (8).
Specific entropy per unit of area; entropy is maximized if $P_X$ is uniform [16]: $H_{max} = \log N_l$. It can be seen that the maximum value of entropy increases with the number of spatial bins, $N_l$. To account for this, we borrow a concept called redundancy from information theory. Redundancy is a measure of the amount of wasted space when coding and transmitting data. The redundancy of $X$, $R(X)$, on alphabet $A_X$ measures the fractional difference between $H(X)$ and its maximum possible value: $R(X) = 1 - \frac{H(X)}{\log |A_X|}$. Complementary to the concept of redundancy is efficiency, where the redundancy and efficiency of a code add up to one. In this case, our notion of normalised specific entropy, $h_k$, is analogous to efficiency: $h_k = \frac{H_k}{\log N_l}$. Minimum entropy; the minimum value for entropy is theoretically equal to zero. This is when only one microstate is possible for the system, and therefore the probability of that microstate occurring is one. We do not differentiate between individuals, and the probability of their presence at each location is calculated from the density map of the entire crowd. Thus, unless the entire crowd is concentrated in one spatial bin (which does not sound like proper behaviour for a crowd if the bin size is set correctly), the minimum value of zero is not obtainable. Instead, the obtainable minimum value of entropy is dependent on the initial density map, which in turn depends on the number of individuals, their sparseness and the bin sizes. It is desirable to assign a small entropy to a crowd that holds its structure, no matter how dense or sparse that structure may be. In this, the focus should be on the deviation of the crowd from its original arrangement. The minimum entropy is assumed to be that of the initial state (with window size zero). This normalises for the density and sparsity of the crowd. For a crowd whose members hold their initial positions and just oscillate within the bounds of their respective positions, the entropy is considered to be minimal within that time window. The entropy of such a crowd is mapped onto zero entropy. In other words, only if the same structure is repeatedly replicated is the entropy considered to be zero. In practice, as the time windows get larger, uncertainty and noise build up, and generally entropy grows with the increase in the size of the time window. Therefore, in real examples zero entropies do not occur. Similarly, uniform coverage of the spatial bins will not be achieved in real examples, and neither will an entropy value of one. A word of caution: it is possible that in the initial state the particles are nearly uniformly distributed. In such cases, the difference between the minimum and maximum entropy is very small. This is generally a cue to an incorrect bin size. An example of this was seen with the large bin in Fig. 5. The minimum entropy is thereby defined as $h_{min} = -\frac{1}{\log N_l} \sum_i p_i^0 \log p_i^0$, where $p_i^0$ is the probability of location $l_i$ being occupied in the initial frame. Thereby, the normalised, scaled, specific entropy, $\tilde{h}_k$, is defined as $\tilde{h}_k = \frac{h_k - h_{min}}{1 - h_{min}}$. The normalised, scaled specific entropy, $\tilde{h}_k$, will be referred to as entropy hereafter.
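A minimal sketch of this normalisation chain, under our reading that $h_{min}$ is the efficiency-normalised entropy of the initial-frame density map and that the final value is a min-max rescaling (the function names are our own):

```python
import numpy as np

def efficiency(p, n_bins):
    """Normalised specific entropy h_k = H(X) / log(N_l)."""
    p = p[p > 0]                         # convention: 0 * log(0) = 0
    return float(-np.sum(p * np.log(p)) / np.log(n_bins))

def scaled_entropy(p_window, p_initial, n_bins):
    """Normalised, scaled specific entropy h~_k = (h_k - h_min) / (1 - h_min).

    p_window  : bin occupancy probabilities over the whole time window.
    p_initial : bin occupancy probabilities of the initial frame (gives h_min).
    """
    h_k = efficiency(p_window, n_bins)
    h_min = efficiency(p_initial, n_bins)
    if 1.0 - h_min < 1e-9:
        # near-uniform initial state: per the text, a cue to incorrect bin size
        raise ValueError("initial density map nearly uniform; revisit w_bin")
    return float(np.clip((h_k - h_min) / (1.0 - h_min), 0.0, 1.0))
```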
Experimental results
Three crowd examples have been used in order to demonstrate the proposed method for conceptualising a crowd as a statistical mechanical system. Experiment A (exp A) shows a crowd of people going down a staircase. The motion of the crowd in this example is unidirectional. Figure 6a shows one example frame of this crowd. It depicts an indoor scene with artificial lighting, while the crowd is viewed from an oblique frontal view. Figure 6b shows the second crowd example (exp B). This focuses on people on an escalator which is located on the left-hand side of the same video footage. Here, the pedestrians are mostly standing still while the escalator carries them upwards. Finally, Fig. 6c shows a larger crowd in an open indoor space (shopping mall) with pedestrians moving in various directions (exp C). Both exps A and B are taken from a crowd video in the data-driven crowd analysis data set [23]. This video is captured at a resolution of 640 × 360 pixels and comprises 1155 frames at 25 frames per second (fps). Exp C uses a video from the Collective Motion Database [35] and has a resolution of 1000 × 670 pixels with 600 frames captured at 25 fps. It is expected that: (i) the crowd in exp B, Fig. 6b, has the smallest entropy; (ii) the crowd in exp A, Fig. 6a, has a larger entropy than the crowd in exp B but still smaller than that of the crowd in exp C, Fig. 6c. The largest entropy is envisaged for the crowd in exp C.
In these experiments, the respective figures show three calibration planes. In this, the orange plane is the reference plane, which is manually drawn. The blue and yellow planes are the ground-level and head-level planes, respectively. These are projected back to the image plane after calibration. The red circles show the positions of the individuals' heads on the head-level plane. Entropy was initially calculated using manually labelled heads. These were projected onto the ground plane [24]. For this, a pre-processing step with a head detection algorithm was assumed to be present. Experiments were carried out for varying time window sizes ($w_{tw}$) and spatial bin widths ($w_{bin}$). The results confirmed the hypothesis. Figure 7 shows the results, where a time window size of 3 s is used. It can be seen that the order of entropy values is as expected and the separation within the error bars between the various experiment crowds is mostly achieved. This figure also demonstrates the effects of spatial bin size, where bins in the range [0.01 m, 0.6 m] are investigated. It can be seen that the smallest bin sizes do not offer a good separation between the crowds. The same also goes for the larger bin sizes. The best separation is achieved for bin sizes within the range [0.04 m, 0.2 m]. As the bins get larger, the entropy becomes unstable for the escalator case. This can be attributed to the small volume of the escalator crowd, as well as to the fact that too large bin sizes are not sensitive enough to the differences in individuals' motions. It is also observed that larger time windows offer better separation. However, it must be noted that, due to observing a non-stationary crowd with a stationary camera, it is possible that the crowd, or the section of the crowd which is being analysed, would move beyond the camera's field of view. The results for exp B, when analysed with a 5 s time window, may be less reliable for this reason. However, as it transpired, obtaining a good tracking of heads with a generic algorithm for different crowd examples was an elusive task. Thus, a series of image features that are detected readily and tracked easily were considered as the initial step. The immediate concern would be that the features are not necessarily from the head area; they can be from different parts of the body. However, if the crowd is dense enough, most features will be from the head region. But since we deal with crowds that are not sufficiently dense, the mapping of the features onto the ground plane is problematic. We have experimented with masking the non-head-plane regions to eliminate those features which are definitely not on the head plane, and assume that the rest of the features are on the head plane. However, this is a very naive assumption and introduces large errors in the positions of features. Depending on the specifics of the example, these errors may be more disruptive than the distortion caused by the projective transform. This issue will be discussed further in the next section.
Entropy via image features
Corner feature detection using the method introduced in [26] is used for feature detection in images. These features are specifically designed to be suitable for tracking. If a background image is available, the detected features are compared against the features which are detected on the background, and the background features are removed from the list of detected features. As mentioned before, a mask for the head plane is used to eliminate all the features which cannot be on the head plane. The remaining features are assumed to be on the head plane and are mapped onto the ground plane. Entropy is calculated as before.
A visual and intuitive description of how the algorithm works is shown in Figs. 8 and 9. In Eq. 6, $n_i$ was defined as the sum of all density counts at bin $l_i$ in $N_f$ frames. It can also be seen from this equation that the $p_i$s which determine the value of entropy are linearly dependent on these $n_i$s. An image in which the intensity at location $l_i$ is dependent on the value of $n_i$ is referred to here as an $n_i$-map. Note that the locations on the $n_i$-maps are the internal positions of the features, projected onto the ground plane. Figure 8 shows the $n_i$-map for exp A (stairs) over a 2 s time window. Since the locations with $n_i = 0$ do not affect the value of entropy, condensed versions of the $n_i$-maps for all three experiments are also shown in Fig. 9. We shall call these condensed $n_i$-maps profiles. Figure 9 shows the profiles for exps A, B and C in order of increasing entropy from left to right. This effect (increasing entropy) can be seen visually. In these, the probability of feature occurrence is linearly dependent on the value of the pixels. In Fig. 9a, most of the pixels are very low-valued (red in colour). Thus, they have a low probability of feature occurrence. However, note that all the points in the profile are nonzero. In contrast, there are also some isolated high-valued pixels. (These can be viewed as peaks of the probability function.) In fact, they offer a sound hypothesis for the features' respective locations. The background pixels have higher values in Fig. 9b (yellow in colour), meaning that the probability of feature occurrence is more evenly distributed over the spatial bins. However, there are still many high-valued points (peaks) where the probability of occurrence is higher. In Fig. 9c, the background is yet higher in value (green in colour), while the peaks are less prominent. In this example, the probability of feature occurrence is more evenly distributed, and thus high values of entropy are expected. Table 2 shows the respective normalised entropies calculated for these examples. The normalisation values $h_{min}$ and $h_{max}$ affect the result significantly. Also, it can be seen that the values for the normalised entropies are very high. This is due to the small size of the bins being used. In Fig. 10, these results reside in the upper left corner of the graph. Small bin sizes are depicted for more intuitive visualisation. Figure 10 shows the detected entropy of the three examples using image features. The level of separation between the entropies is understandably lower. This is due to the noise which is introduced by replacing head detection with feature detection, and the added distortion which corresponds to assuming that feature points are on the head plane. The mean value separation still holds for all the bin sizes. It was noted earlier that the distortion introduced by an approximate ground plane projection might be more disruptive than that originally introduced by the projective transform. Therefore, the results for entropy via image features using image coordinates were also produced; these are shown in Fig. 11. (Note that the entropy values need to be compared at corresponding bin sizes. Therefore, the reference plane is used to find a mapping between the corresponding bin sizes of exps A, B and C. For example, the bin size of 3 pixels in exps A and B corresponds to a bin size of 1.9 pixels in exp C, both corresponding approximately to a bin size of 5 cm.) It can be seen that the results are improved and the separation is mostly achieved for the three experiments. The effect of using larger time windows is demonstrated in Fig. 12.
As was described before, when larger windows are considered, more variability is observed, together with a natural build-up of noise. Therefore, the value of entropy increases. However, it is worth mentioning that in the case of the experiments shown in Fig. 12, where no ground plane mapping is used, the results remain consistent. In those, the mean value separation is obtained between the entropy values of the experiments at various time window sizes. As an example of execution time, the video containing both the escalator and stairs experiments is processed at a mean rate of 10 fps with a spatial bin size of 16 pixels, using an Intel Core i7-2600 CPU at 3.40 GHz. One notes that the execution speed decreases with an increasing number of crowd clusters in the frame. (Crowd clusters are discussed in Sect. 4.3.) On the other hand, the speed increases as a result of using larger spatial bins.
Entropy versus collectiveness
Collectiveness is a measure of collective motion that was introduced by Zhou et al. [37]. They define it as follows: "Collectiveness describes the degree of individuals acting as a union in collective motions". Collectiveness seeks collective manifolds wherein consistent motion is observed in neighbourhoods, while global consistency among non-neighbours is obtained through intermediate individuals in neighbourhoods on the manifold. Collectiveness assigns values in the range [0, 1] to a given crowd. It requires setting a parameter, $K$, which defines the range of neighbourhoods in the crowd under examination.
Collectiveness bears similarities with entropy. In order to be able to compare collectiveness with entropy directly, the notion of structure is introduced. As noted, entropy is basically a measure of disorder, while structure can be described as a measure of order. For a normalised entropy ranging within the interval [0, 1], structure and entropy are complementary and add up to unity: $s_k = 1 - \tilde{h}_k$, where $s_k$ is the normalised structure. Figure 13 shows a comparison between collectiveness and structure (via entropy using image features with no ground plane projection). It can be seen that collectiveness also achieves separation between these examples. Although entropy finds a larger distinction between exp A (Stairs) and exp C (Hall), collectiveness finds exp B (Escalator) and exp A (Stairs) more distinct. This is an early sign that, depending on the sample to be analysed, one or the other method may be more effective. The most important factors which may contribute here are: (i) the density and behaviour of the crowd; (ii) the camera view angle and spatial resolution; and (iii) the structure of the environment. It should be mentioned here that both collectiveness and entropy values depend on the respective adopted parameters of these methods. These include $K$ for collectiveness and the spatial bin size ($w_{bin}$) for entropy (the temporal window, $w_{tw}$, is not that significant). Here, a mid-range $K$ ($K = 20$) is used to produce the collectiveness results, and $w_{bin}$ is subsequently chosen to produce similar values for the structure in the escalator example and then used to evaluate the other two examples. Figure 14 shows an example where collectiveness fails to produce stable and reliable results. It is worth noting that collectiveness is essentially a different concept from that of entropy. Collectiveness is best for analysing crowds with discernible motions in the form of flows and limited oscillatory motions. Figure 14 depicts an example of a stadium, wherein the initial state of the crowd is calm, with sparse incoherent motions. However, an event which may occur on the pitch may trigger an increased level of excitement in the crowd in the stadium arena. 3 Figure 14c, d shows the values of collectiveness and entropy in the crowd for illustration. Here, the dotted red line indicates the time of the event, while the volatility of the crowd increases before the event in anticipation. In this circumstance, collectiveness does not seem to provide intuitive results. The initial state of the crowd has small amounts of motion, meaning that any small group with more significant motion can override the value of collectiveness. Further, in the absence of such groups, collectiveness becomes unstable as it tries to connect incoherent sparse motions within the crowd. In contrast, entropy clearly captures the increased volatility and the change in the state of the crowd.
Other crowd descriptors
The other relevant crowd descriptor, recently proposed by Shao et al. [25], is Stability. This descriptor is defined as the property which characterizes "whether a group can keep internal topological structure over time". Stability is a composite descriptor, computed using three components that each assess one of the following stability criteria for the group members. In this, the stability of the group is measured via the stability of its members. The stable members are assumed to: (i) maintain a similar set of nearest neighbours; (ii) keep a consistent topological distance to neighbours; and (iii) be less likely to leave their nearest neighbour set. Shao et al. have compared stability with collectiveness and found a weak positive correlation between the two. It has also been shown that groups with similar collectiveness can have very different stabilities. With its focus on measuring the stability of each member, this descriptor is very useful for measuring small groups, but less suitable for dense crowds viewed at a distance. Also, like collectiveness, stability was found not to be suitable for mostly stationary crowds with random oscillatory motions (e.g. spectator crowds), due to its reliance on tracklets. Stability has been shown to provide promising results alongside other descriptors for applications such as crowd monitoring, crowd classification and retrieval. However, a detailed analysis of the behaviour of this descriptor in different crowd examples was not shown.
Internal kinetic energy
The internal energy of a crowd as a thermodynamic system, $U$, can possibly be used as a measure of how excited the crowd is. Irrespective of its entropy, a crowd may be in an excited/agitated state (high energy) or a calm state (low energy). On this note, it is worth pointing out that in thermodynamics, entropy and internal energy are both state variables. $U$ is composed of two components: $U = U_{kinetic} + U_{potential}$. $U_{kinetic}$ can be computed as $U_{kinetic} = \frac{1}{2} m N_p \left(v^i_{rms}\right)^2$, where $v^i_{rms}$ is the square root of the mean of the squares of the internal velocities, $v_i$, of the particles ($v^i_{rms} = \sqrt{\overline{v_i^2}}$). Having extracted the subgroups in the crowd and detected their flow, the internal velocity for a particle $j$ at time $t$ is $v_i(j, t) = v_o(j, t) - v_f(x_i(j, t), t)$, where $v_o$ is the observed velocity and $v_f$ is the sampled flow velocity at location $x_i(j, t)$, which is the internal location of particle $j$ at time $t$.
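A minimal sketch of this computation, assuming unit mass per individual; the reading of the kinetic term as ½·m·N_p·(v_rms)², following the standard kinetic energy of N identical particles, is our reconstruction of the garbled formula:

```python
import numpy as np

def internal_kinetic_energy(v_observed, v_flow, mass=1.0):
    """U_kinetic = 0.5 * m * N_p * (v_rms)^2 over internal velocities.

    v_observed : (N_p, 2) observed velocities of the particles.
    v_flow     : (N_p, 2) flow velocity sampled at each particle's
                 internal location.
    """
    v_internal = v_observed - v_flow               # v_i = v_o - v_f
    speed_sq = np.sum(v_internal ** 2, axis=1)     # |v_i|^2 per particle
    v_rms_sq = np.mean(speed_sq)                   # (v_rms)^2
    return 0.5 * mass * len(v_observed) * v_rms_sq
```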
On many occasions, sufficient information can be gathered using solely $U_{kinetic}$. For example, for a gas at higher temperatures and lower pressures, the potential energy due to inter-molecular forces generally becomes less significant when compared with the internal kinetic energy of the particles, so that $U \approx U_{kinetic}$. $U_{potential}$ is a significantly more complex value to calculate. Two of the most prominent pedestrian modelling approaches took inspiration from gas kinetic theory, where the focus is on the kinetic energy of the crowd [8,12], with the consideration that, if more than one phase (gas, liquid, solid) is present, potential energy needs to be considered [12]. Hughes [14] defines the crowd potential energy as the "common sense of the task the pedestrians face to reach their common destination". However, a directly measurable value has not been defined. We will address the calculation of $U_{potential}$ in future work.
Homogeneity and multi-scale descriptors
Entropy and internal energy are calculated at the meso-scale (sub-group) level within the crowd. Collective merging [37] has been used here as the starting point for the detection and tracking of the sub-groups within the crowd. Collective merging has two tuning parameters: α, which indicates the scale of the cluster of interest, and K, which is a parameter for collectiveness that indicates the spatial extent of a pedestrian in pixels. α and K control the scale of the sub-groups of interest. This highlights the need for prior knowledge about the crowd and the scale of the desired behaviour analysis. The detection of putative crowd clusters is performed for each pair of consecutive frames using collective merging. Further, a mapping is made between the detected clusters in consecutive frames based on their population, motion and feature points, and thereby the clusters are tracked for as long as they are in the field of view. However, there is an inherent ambiguity in determining the sub-groups within a crowd, as was noted by [4] and [24]. Ideally, crowd attributes are assigned to the parts of the crowd which are homogeneous with regard to that attribute. Different groups can be detected within a crowd depending on the attribute which guides this segmentation. For example, it is possible to detect the segments of the crowd which have similar energy levels. Note that these segments may be different from the ones detected using entropy, for example. The most intuitive and common basis for finding groups within a crowd is through detecting segments of the crowd that demonstrate collective motion [25,31,34,37]. Note that this is different from the social group formations described by [20]. Here, the members of the detected groups do not necessarily have social attachments. Consider the example of a marathon run, where the entire participant population can be considered as one group. Using the idea of collective motion as described by [37] is similar to segmentation based on flow.
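The frame-to-frame cluster mapping is only outlined in the text; the following hypothetical sketch shows one way it could be done, scoring candidate matches by shared feature points, population, and motion. The scoring weights, threshold, and greedy assignment are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def match_score(c_prev, c_next, w=(0.4, 0.3, 0.3)):
    """Similarity between clusters in consecutive frames.
    Each cluster is a dict with 'feature_ids' (set), 'population' (int),
    and 'velocity' (np.ndarray of shape (2,))."""
    shared = len(c_prev["feature_ids"] & c_next["feature_ids"])
    f_overlap = shared / max(len(c_prev["feature_ids"]), 1)
    pop_sim = 1.0 - abs(c_prev["population"] - c_next["population"]) / max(
        c_prev["population"], c_next["population"], 1)
    v_diff = np.linalg.norm(c_prev["velocity"] - c_next["velocity"])
    motion_sim = 1.0 / (1.0 + v_diff)
    return w[0] * f_overlap + w[1] * pop_sim + w[2] * motion_sim

def track(prev_clusters, next_clusters, threshold=0.5):
    """Greedy one-to-one assignment of clusters between frames."""
    matches, used = [], set()
    for i, cp in enumerate(prev_clusters):
        scores = [(match_score(cp, cn), j)
                  for j, cn in enumerate(next_clusters) if j not in used]
        if scores:
            s, j = max(scores)
            if s >= threshold:
                matches.append((i, j))
                used.add(j)
    return matches
```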
Unusual behaviour detection
The work here has been performed within the eVACUATE project [6]. Its goal is to facilitate the safety and security of crowds as they are evacuated from confined spaces. This includes a holistic situation awareness and guidance system for sustaining the active evacuation route under different crowd evacuation scenarios. The work described here contributes to the situation awareness functionality of the system by detecting usual/unusual behaviour of crowd using computer vision with added context awareness. The earlier works have been published in two conference papers [3,24].
A series of experiments was performed within the eVACUATE project to investigate different crowd behaviour scenarios. These included experiments in an airport, a metro station and a stadium. In each, a context is established for a given crowd, taking into account the event and the spatial characteristics of the venue. For instance, for a crowd at a football match, the event is the match and the venue is the stadium. During the match, it is expected that the crowd will be mostly seated. Furthermore, one of the main features of a crowd at a stadium is that it is prone to excitement; thus, a wide range of internal energy levels is also expected for this crowd.
As an example, a series of experiments was performed in the Anoeta Stadium, San Sebastian. Different scenarios and events were enacted and recorded during the evacuation of a crowd. Initially, different social groups were established in the crowd: participants were segmented into groups of 2, 3 and 4, while some were directed to act as individuals. Each group was asked to appoint a group leader, and a susceptibility level to being led was assigned to each member of the group as a personality trait. A number of actors were also used in some of the scenarios to initiate certain behaviours in the crowd.
Examples of our prototype system are provided in Fig. 15 as a proof of concept. Here, the notion of crowd space has been used to define the thresholds of usual behaviour within a context. In future work, we will look into automatically setting these thresholds and tuning parameters such as the bin size for the evaluation of entropy. Figure 15a is a screen capture of the system detecting unusual crowd behaviour in one of the experiments conducted at the Anoeta Stadium. Results from the same system detecting unusual behaviour in the stairs and escalator example are shown in Fig. 15b.
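A minimal sketch of how such context-dependent thresholds might be applied is given below. The descriptor ranges and the dictionary interface are hypothetical, standing in for the "crowd space" boundaries discussed above.

```python
def is_unusual(entropy, energy, context):
    """Flag an observation as unusual relative to a context's usual ranges."""
    lo_h, hi_h = context['entropy_range']
    lo_e, hi_e = context['energy_range']
    return not (lo_h <= entropy <= hi_h and lo_e <= energy <= hi_e)

# A stadium crowd is prone to excitement, so a wide energy range is usual;
# the numbers here are illustrative placeholders, not calibrated values.
stadium = {'entropy_range': (0.5, 2.0), 'energy_range': (0.0, 50.0)}
print(is_unusual(entropy=2.6, energy=12.0, context=stadium))  # True
```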
Conclusions
A new crowd descriptor has been introduced to characterize the behaviour of people using vision measurements. This descriptor is inspired by the properties of statistical molecular systems and entropy. The quantification of this descriptor has been investigated, and alternative methods explored. Experiments have been performed on example crowds from two publicly available data sets and an in situ data set generated for this work. The descriptor, entropy, is shown to capture the desired behaviour of crowd entropy, and it does so consistently across several experiments while using easily detectable image features. The effects of the projective transform, and mitigation strategies using calibration, have been investigated. It has also been shown that entropy offers capabilities complementary to the existing set of crowd descriptors, including collectiveness.
In our future work, we shall explore internal energy as a crowd descriptor. We shall also systematically investigate the performance of the currently defined crowd descriptors. The descriptors will be evaluated against the inherent characteristics of the crowd, such as its density and homogeneity, as well as the recipient environment in which it moves. Visual variations of video footage, such as view angle and lighting conditions, will also be considered. Another interesting avenue which we will explore is predicting crowd behaviour through understanding the mechanics of groups and their potential dispersion in the context of venue spaces, boundaries and temporal constraints.

| 11,361 | 2020-04-21T00:00:00.000 | ["Computer Science", "Physics"] |
3-(2-Hydroxy-5-methylanilino)isobenzofuran-1(3H)-one
In the molecule of the title compound, C15H13NO3, the phthalide ring system is virtually planar, with a dihedral angle of 1.98 (3)° between the fused five- and six-membered rings. The substituted aromatic ring is oriented at a dihedral angle of 57.50 (3)° with respect to the phthalide ring system. In the crystal structure, intermolecular O—H⋯O and N—H⋯O hydrogen bonds link the molecules, forming a three-dimensional network.
S1. Comment
The present work is part of a structural study of 3-substituted phthalide compounds, and we report here the crystal structure of the title compound, (I).
In the molecule of (I) (Fig. 1), the phthalide ring system is virtually planar, and ring C is oriented with respect to this coplanar ring system at a dihedral angle of 57.50 (3)°.
In the crystal structure, intermolecular O-H···O and N-H···O hydrogen bonds (Table 1) link the molecules into C(4) chains (Fig. 2) (Bernstein et al., 1995; Etter, 1990), forming a three-dimensional network (Fig. 3) that may be effective in stabilizing the structure.
S2. Experimental
The title compound was prepared according to the method described by Odabaşoğlu & Büyükgüngör (2006), using phthalaldehydic acid and 2-aminophenol as starting materials (yield: 80%). Crystals of (I) suitable for X-ray analysis were obtained by slow evaporation of an ethanol-DMF (1:1) solution at room temperature.

[Figure caption] The molecular structure of the title molecule, with the atom-numbering scheme. Displacement ellipsoids are drawn at the 30% probability level.

[Figure caption] A partial packing diagram of (I). Hydrogen bonds are shown as dashed lines [symmetry code: (i) 1 − x, −y, z + 1/2]. H atoms not involved in hydrogen bonding have been omitted for clarity.

Special details. Geometry: all e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes. Refinement: refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > 2σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.

| 711.2 | 2008-04-02T00:00:00.000 | ["Chemistry"] |
ON CALCULATING THE VALUE OF A DIFFERENTIAL GAME IN THE CLASS OF COUNTER STRATEGIES
For a linear dynamic system with control and disturbance, a feedback control problem is considered, in which the Euclidean norm of a set of deviations of the system’s motion from given targets at given times is optimized. The problem is formalized into a differential game in “strategy-counterstrategy” classes. A game value computing procedure, which reduces the problem to a recursive construction of upper convex hulls of auxiliary functions, is justified. Results of numerical simulations are presented.
Introduction
In this paper a linear dynamical system subjected to actions of control and disturbance is considered. A feedback control problem with quality index optimization is posed. The quality index is given in the form of the Euclidean norm of a set of deviations of the system's motion from given targets at given instants of time. The "saddle point condition in a small game" [1, p. 79] (see also [5, p. 46]), also known as the Isaacs condition [2] (see inequality (2.7) below), is not assumed. Within the game-theoretic approach [1][2][3][4][5][6][7][8][9][10], the problem is formalized into a positional differential game in "strategy - counter strategy" classes (see, e.g., [1, p. 78], [5, p. 20]).
Based on methods from [4,5], a procedure reducing the considered problem under condition (2.7) to recurrent constructions of upper convex hulls of auxiliary functions was given in [9,10]. In the present paper, the applicability of that procedure is proved for the case when condition (2.7) is not imposed. To achieve this, we follow the idea of unification of differential games [3] and use constructions of characteristic inclusions from the theory of minimax solutions of Hamilton-Jacobi equations [6] (see also [7,8]).
Results of numerical simulations are presented.
Problem Statement
Consider a dynamical system described by the following equation:

ẋ = A(t)x + f(t, u, v), t ∈ [t_0, ϑ], x ∈ R^n, u ∈ P, v ∈ Q. (1.1)

Here t is time, x is a phase vector, u is a control vector, and v is a disturbance vector; t_0 and ϑ are fixed instants of time (t_0 < ϑ); P and Q are given compact sets; the matrix function A(t) is continuous on [t_0, ϑ], and the vector function f(t, u, v) is continuous on [t_0, ϑ] × P × Q. A current position of system (1.1) is a pair (t, x) ∈ [t_0, ϑ] × R^n. Here and below the symbol ||·|| denotes the Euclidean vector norm. A set K of possible positions is defined, where R_0 > 0 is some fixed number. Let a position (t_*, x_*) ∈ K, t_* < ϑ, and an instant t^* ∈ (t_*, ϑ] be given. We assume that admissible control and disturbance realizations are Borel measurable functions taking values in P and Q, respectively. From the position (t_*, x_*), such realizations uniquely generate the motion of system (1.1) as an absolutely continuous vector function. The aim of the control is to make the quality index γ (1.4) as small as possible. While solving this problem, it is convenient also to consider the problem of forming the disturbance actions most unfavourable from the control's point of view, aimed at maximizing γ.
According to [1, p. 75; 5, p. 51], these two problems may be united into an antagonistic positional differential game of two players in "strategy - counter strategy" classes. A control action u is interpreted as an action of the first player, and a disturbance action v as an action of the second player. An admissible strategy u(·) of the first player is an arbitrary function u(·) = {u(t, x, ε) ∈ P, (t, x) ∈ K, ε > 0}. An admissible counter strategy of the second player is an arbitrary function v(·) = {v(t, x, u, ε) ∈ Q, (t, x) ∈ K, u ∈ P, ε > 0} which, for fixed (t, x) ∈ K and ε > 0, is Borel measurable with respect to u ∈ P. Here ε > 0 is the accuracy parameter (see, e.g., [1, p. 68], [5, p. 47]).
Procedure for Calculating the Game Value
In accordance with [10], consider the following procedure for calculating the value of differential game (1.1), (1.4). Let t_* ∈ [t_0, ϑ). Assign a partition of the time segment [t_*, ϑ]:

∆_k = {τ_1 = t_*, τ_2, …, τ_{k+1} = ϑ}, τ_j < τ_{j+1}. (2.1)

In further considerations of partitions like (2.1), we assume that they contain the instants t[i] from (1.4). Here and below the symbol ⟨·, ·⟩ denotes the inner product of vectors.
Step by step, in reverse order, starting from the last point of the partition ∆_k (2.1), define sets G_j(t_*, τ_j ± 0) of vectors m ∈ R^n and scalar functions ϕ_j(t_*, τ_j ± 0, m); here the upper index ⊤ denotes matrix transposition. Further constructions are carried out according to the following recurrent relations. Assume that, for j + 1, 1 ≤ j ≤ k, the sets G_{j+1}(t_*, τ_{j+1} ± 0) and the functions ϕ_{j+1}(t_*, τ_{j+1} ± 0, m), m ∈ G_{j+1}(t_*, τ_{j+1} ± 0), are already defined. Then, for the current j, we define the corresponding sets and functions, where the symbol ψ(·)*_G denotes the upper convex hull of the function ψ(·) on the set G, i.e., the minimal concave function that majorizes ψ(·) for m ∈ G.
Next, if the instant τ_j is not equal to any of the instants t[i] from (1.4), then the corresponding value is set so that the maximum is calculated over all triples {ν, m_*, l} that, according to (2.3), correspond to the given vector m ∈ G_j(t_*, τ_j − 0). On this basis, the value e(·) (2.4) is defined. For t_* = ϑ, we formally assume that ∆_k denotes a degenerate partition that contains only the instant ϑ (2.5).

Theorem 1. For any number ξ > 0 there exists a number δ > 0 such that, for any initial position and any partition ∆_k (2.1) of diameter less than δ in which the instants t[i] from (1.4) are contained, inequality (2.6) holds.

In paper [10] the statement of this theorem was proved under the assumption that the saddle point condition in a small game (2.7) holds. The aim of this paper is to prove Theorem 1 without using condition (2.7).
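For intuition, the sketch below evaluates the upper convex hull (minimal concave majorant) of a function sampled on a one-dimensional grid. The paper's constructions operate on sets of vectors in R^n, so this is only a toy, one-dimensional illustration of the hull operation ψ(·)*_G used in the recurrent relations above.

```python
import numpy as np

def upper_convex_hull(m, psi):
    """Minimal concave majorant of samples psi on an increasing 1-D grid m."""
    hull = []  # vertices of the upper hull (Andrew's monotone chain)
    for p in zip(m, psi):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the last vertex while it lies on or below the chord
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx = np.array([x for x, _ in hull])
    hy = np.array([y for _, y in hull])
    return np.interp(m, hx, hy)  # concave, piecewise-linear majorant

m = np.linspace(-1.0, 1.0, 5)
print(upper_convex_hull(m, m**2))  # hull of a convex function is its chord
```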
The u- and v-stability properties of the value e(·)
In paper [10], inequality (2.6) is proved on the basis of the u- and v-stability properties of the value e(·) (2.4) with respect to system (1.1). In the case when condition (2.7) does not hold, however, a stricter u-stability property is necessary (see, e.g., [1, p. 208]). If one tries to prove this stricter property following the scheme from [10], the following substantial problem arises: when the disturbance action v is formed in response to admissible realizations of u = u(t) by the rule v = v_*(u(t)), where the function v_*: P → Q is Borel measurable, the reachable set of system (1.1) may lack compactness. That is why we consider below an auxiliary z-model, establish the proximity of motions of system (1.1) and the z-model, and prove an appropriate u-stability property of the value e(·) with respect to the z-model. The v-stability property does not depend on condition (2.7); therefore, below we use this property as it was stated in [10].
Let S ⊂ R^n be the unit sphere and q ∈ S. Motions of the auxiliary z-model are described by the following differential inclusion:

ż(t) ∈ F_*(t, z(t), q). (3.1)

Here λ_0 ≥ 0 is the constant from (1.2). Note that similar differential inclusions are considered in order to define minimax solutions of Hamilton-Jacobi equations (see, e.g., [6, p. 14], [8]). A position of z-model (3.1) is a pair (t, z) ∈ [t_0, ϑ] × R^n. A set K_z of possible positions of the z-model is defined, where α > 0 is some fixed number and λ_0 is the constant defined in (1.2). It can be proved that for any (t, z, q) ∈ [t_0, ϑ] × R^n × S the set F_*(t, z, q) is nonempty, convex and compact in R^n, and the multivalued mapping [t_0, ϑ] × R^n × S ∋ (t, z, q) ↦ F_*(t, z, q) ⊂ R^n is continuous in the Hausdorff metric. Therefore (see, e.g., [11]), for any position (t_*, z_*) ∈ K_z, t_* < ϑ, any t^* ∈ (t_*, ϑ] and any q ∈ S, differential inclusion (3.1) has at least one solution z[t_*[·]t^*] = {z(t) ∈ R^n, t_* ≤ t ≤ t^*} that satisfies the equality z(t_*) = z_*. Each such solution determines a motion of z-model (3.1) that starts from the position (t_*, z_*). For any such motion the inclusion (t, z(t)) ∈ K_z, t ∈ [t_*, t^*], is valid. Moreover, according to [11], for any fixed q the reachability set of differential inclusion (3.1) at the instant t^* from the position (t_*, z_*) is a convex compact set in R^n.
Lemma 2 (property of u-stability with respect to the z-model). Let (t_*, z_*) ∈ K_z, t_* < ϑ, and let a partition ∆_k (2.1) be chosen. Let t^* = τ_2 be the second instant of the partition ∆_k. Then for any q_* ∈ S there exists a motion z[t_*[·]t^*] of z-model (3.1), for q = q_*, that starts from the initial position (t_*, z_*) and for which the corresponding u-stability inequality holds. Here ∆*_{k*} is the partition (3.7) of the time segment [t^*, ϑ] induced by the instants from the partition ∆_k. Proof of this lemma is similar to the proof of the u-stability property from [10], with the reachability set of system (1.1) replaced by the reachability set of differential inclusion (3.1).
Lemma 3 (property of v-stability).
Let (t_*, x_*) ∈ K, t_* < ϑ, and let a partition ∆_k (2.1) be chosen. Let t^* = τ_2 be the second instant of the partition ∆_k and ∆*_{k*} be partition (3.7). Then for any control realization u_*[t_*[·]t^*) = {u_*(t) = u_* ∈ P, t_* ≤ t < t^*} there exists an admissible disturbance realization v[t_*[·]t^*) such that, for the motion x[t_*[·]t^*] of system (1.1) generated from the position (t_*, x_*) by these realizations, the corresponding v-stability inequality holds. Proof of this lemma is given in [10]. In the proof of Theorem 1 the following fact from [10] is used: for any position (t_*, z_*) ∈ K_z, t_* < ϑ, and partition ∆_k (2.1), relations (3.8) hold, where the value of h(t_*) is determined according to (1.5).
For j = 1, inequality (4.5) is derived from relation (4.2). Assuming that inequality (4.5) is valid for j, 1 ≤ j ≤ k, let us prove it for j + 1. Choose a vector q^e_j = q^e_j(τ_j, x(τ_j), ε) ∈ S from condition (4.6). By Lemma 2, for q = q^e_j there exists a motion z^(j)[τ_j[·]τ_{j+1}] of z-model (3.1) that starts from the position (τ_j, z_j) and for which inequality (4.7) holds. By Lemma 1, due to the choice of the number δ_* > 0, taking definition (4.4) of the strategy u^e(·), choice (4.6) of the vector q^e_j and inequality (4.3) into account, we obtain an intermediate estimate. Hence, taking into consideration definition (4.2) of accompanying points, we derive (4.8). From (4.7) and (4.8) we conclude (4.9). If the instant τ_{j+1} is not equal to any of the instants t[i] from (1.4), then h(τ_{j+1}) = h(τ_j), and the validity of inequality (4.5) for j + 1 follows from inequality (4.9), equality (3.8) and the induction hypothesis.
Remark 1. In a similar way, with obvious modifications, it can be checked that if, in the definition of the function ∆ψ_j(·) (2.2), the operations of minimum and maximum are exchanged, then the value e(·) (2.4) constructed on the basis of such a modified procedure approximates the value function of differential game (1.1), (1.4) in the classes "counter strategies - strategies".

Remark 2. On the basis of the value e(·) (2.4), by means of the extremal shift to accompanying points [1,5], one can construct ζ-optimal control laws of the players (see [5,13]) that guarantee inequalities (1.6) and (1.7).
Example
The example considered below is based on a model problem from [12, pp. 49-58] (see also [5, Section 38]). Consider a dynamical system described by equation (5.13), with the initial condition x(0) = (1, −1, 1, 1) and quality index (5.14). The control problem for system (5.13) with quality index (5.14) was solved by means of the constructions described above. The results of the numerical modelling are as follows. In the numerical experiments we used a uniform partition of the time segment [0, 4] with step δ = 0.02 and accuracy parameter ε = 0.2. The a priori calculated value of differential game (5.13), (5.14) in the classes "strategies - counter strategies" was ρ_u ≈ 2.46, while in the classes "counter strategies - strategies" it was ρ_v ≈ 1.52. In the picture on the left, the narrow curve depicts the motion trajectory of system (5.13) formed as a result of the actions of the ζ-optimal control laws of the first and second players in the classes "strategies - counter strategies". The realized value of quality index (5.14) was

γ = (|−1.55 + 0.5|² + |−0.91 + 2|² + |−1.16|² + |0.75 − 2|²)^{1/2} ≈ 2.28 ≈ ρ_u.
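As an arithmetic check of the reported value, the four squared deviations inside the root evaluate as follows:

$$
\gamma = \left(1.05^2 + 1.09^2 + 1.16^2 + 1.25^2\right)^{1/2}
       = \left(1.1025 + 1.1881 + 1.3456 + 1.5625\right)^{1/2}
       = \sqrt{5.1987} \approx 2.280 \approx \rho_u .
$$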
In the picture on the right, the narrow curve depicts the motion trajectory of system (5.13) formed as a result of the actions of the ζ-optimal control law of the second player in the classes "strategies - counter strategies", while the control actions of the first player were chosen randomly. The realized value of the quality index was γ ≈ 4.51 > ρ_u. The thick curve depicts the motion trajectory formed as a result of the actions of the ζ-optimal control law of the first player in the classes "counter strategies - strategies", while the control actions of the second player were chosen randomly. The realized value of the quality index was γ ≈ 0.12 < ρ_v.
The targets are shown in the pictures by small black squares. Points on the trajectories correspond to the moments of motion quality evaluation.

| 3,459.4 | 2016-07-15T00:00:00.000 | ["Mathematics", "Computer Science"] |
Hybrid in vitro diffusion cell for simultaneous evaluation of hair and skin decontamination: temporal distribution of chemical contaminants
Most casualty or personnel decontamination studies have focused on removing contaminants from the skin. However, scalp hair and underlying skin are the most likely areas of contamination following airborne exposure to chemicals. The aim of this study was to investigate the interactions of contaminants with scalp hair and underlying skin using a hybrid in vitro diffusion cell model. The in vitro hybrid test system comprised “curtains” of human hair mounted onto sections of excised porcine skin within a modified diffusion cell. The results demonstrated that hair substantially reduced underlying scalp skin contamination and that hair may provide a limited decontamination effect by removing contaminants from the skin surface. This hybrid test system may have application in the development of improved chemical incident response processes through the evaluation of various hair and skin decontamination strategies.
The adverse health effects of exposure to hazardous materials can be mitigated by decontamination: the timely removal of contaminants that may be on or near to body surfaces 1 . The emergency services' response to incidents involving large numbers of individuals is commonly termed "mass casualty decontamination" 2 ; one such protocol is the "Ladder Pipe System" of decontamination (LPS), during which a pair of fire engines park in parallel to deliver a high-volume, low-pressure water mist into a corridor through which casualties pass (https://medicalcountermeasures.gov/barda/cbrn/prism/; https://www.nfpa.org/~/media/files/news-and-research/resources/ external-links/first-responders/decontamination/ecbc_guide_masscasualtydecontam_0813.pdf?la=en) 1 . The vast majority of previous studies that evaluated the efficacy of various decontamination systems focused on removing contaminants from the skin surface [3][4][5][6][7][8][9][10] . In contrast, relatively few studies have investigated hair decontamination and there is currently no in vitro model to quantify the risk associated with spreading contamination from hair to the underlying scalp or lower body surfaces during wet decontamination processes (e.g. LPS). The limited number of previous hair decontamination studies utilised pig or human scalp skin [11][12][13][14][15][16] . Although anatomically correct, such models do not accurately reproduce the normal geometric coverage of hair over the underlying scalp skin. This is because hair naturally falls under the influence of gravity and so the number of hairs covering the scalp will tend to increase from the top of the head down. Therefore, a model incorporating a "curtain" of hair laid over a skin surface would seem more representative.
We have previously developed an in vitro diffusion cell model that reproduces LPS hydrodynamics, allows the skin to be placed in a more realistic (vertical) geometry during showering and has a relatively large (~20 cm²) area to investigate the spreading of contaminants over the skin surface 17 . A logical step to develop this model further is the inclusion of human hair. Here we describe a hybrid in vitro model comprising excised pig skin partially overlaid with a curtain of human hair. The time-resolved, compartmental distribution of four chemicals, namely a curcumin and methyl salicylate mixture (CMX), sodium fluoroacetate (SFA), potassium cyanide (KCN) and phorate (PHR), was investigated using the modified in vitro system. CMX has previously been validated for use as a simulant for medium-volatility chemical warfare agents such as sulphur mustard 18 . The other contaminants were selected as being representative of toxic industrial chemicals.
Results
A clear, consistent outcome was the predominant recovery of contaminants from within the hair (Fig. 1; Table 1) as a function of time post exposure.

[Figure 1 caption] Each data point represents the mean recovery from n = 6 replicates, with error bars representing standard deviation.
Table 1. Summary of each sample type and its corresponding compartment.

| Compartment | Description |
| --- | --- |
| Vapour loss | Amount of contaminant that volatilised from the skin/hair surfaces, recovered from Tenax tubes. |
| Hair surface | Contaminant recovered from swabs of hair curtains. |
| Hair | Contaminant remaining in the solubilised hair curtains following swabbing; assumed to be mainly representative of contaminant either trapped within the hair or bound to the hair surface. |
| Combined hair | Sum of recoveries from hair surface and hair. |
| Skin surface | Recovery of contaminant from cotton swabs of the skin surface and surrounding donor chamber. |
| Skin | Residual contaminant from within solubilised skin following skin swabbing. |
| Combined skin | Sum of recoveries from skin surface and skin. |
| Absorbed (receptor) | Amount of contaminant recovered from the fluid bathing the underside of the skin. |

The second greatest proportion of the applied dose was recovered from the hair surface (hair swabs): the time-averaged recoveries were SFA 11.5% (range 6.0-16.0), KCN 5.4% (1.3-7.9), PHR 3.0% (2.2-4.6) and CMX 2.2% (1.6-2.4), with SFA and CMX being statistically different (P = 0.0021; Friedman test with Dunn's multiple comparisons post test). When subcategorised by lipophilicity or hydrophilicity, the hair surface recoveries were attributable to solubility (P < 0.0001) and time (P < 0.05; ordinary three-way ANOVA).
Given the statistically significant time-dependencies identified above, the hair-to-hair surface ratios (H:HS) of each contaminant were further investigated (Fig. 2). There was no statistical correlation for the H:HS ratio of CMX and PHR with time. However, there was a linear increase in the H:HS ratio for SFA (slope = 0.323 ± 0.043; P = 0.00217) and KCN (slope = 0.628 ± 0.07; P = 0.0014), indicating dynamic partitioning from the hair surface into the hair over the 4-hour exposure period.
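A slope of this kind can be obtained with an ordinary least-squares fit of the H:HS ratio against time; the sketch below uses scipy, with illustrative ratio values rather than the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical H:HS ratios at each sampling time (hours post exposure);
# the numbers mimic an SFA-like upward trend and are not measured values.
time_h = np.array([0.13, 0.17, 0.5, 1.0, 2.0, 4.0])
ratio  = np.array([0.90, 1.00, 1.20, 1.40, 1.70, 2.10])

res = stats.linregress(time_h, ratio)
print(f"slope = {res.slope:.3f} +/- {res.stderr:.3f}, P = {res.pvalue:.5f}")
# A significant positive slope indicates dynamic partitioning from the
# hair surface into the hair over the exposure period.
```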
The area contaminated with ¹⁴C-CMX, ¹⁴C-PHR and ¹⁴C-KCN was consistently and significantly (P < 0.05) greater on the hair curtains than on the underlying skin (Fig. 3). This was not the case for ¹⁴C-SFA, where there was no statistically significant difference between hair and skin (Fig. 3). There was also no significant correlation between either hair or skin surface contamination and time (P > 0.05), indicating that the spreading of each contaminant occurred within the first ten minutes of exposure (Fig. 4).
The initial recoveries of ¹⁴C-radiolabelled CMX, SFA and KCN from the skin surface appeared to reach a maximum at 10 minutes and decreased thereafter (Fig. 4). This was not observed with PHR, but the inherent variability (coefficient of variance ~125%) precluded any statistically relevant comparisons between or within contaminants. The maximum skin surface recoveries were SFA (27.8% of the applied dose) > KCN (9.9%) > CMX (5.8%). Since the amounts of contaminant recovered from within the skin and receptor chamber fluid were consistently lower than the skin surface recoveries (Fig. 1), the reduction in material recovered from the skin surface after 10 minutes could not be attributed to absorption into the underlying skin and receptor chamber fluid. There was a small but significant time-related increase in vapour loss of ¹⁴C-CMX (P < 0.0007) and ¹⁴C-PHR (P = 0.0002) (Fig. 1), which equated to evaporative loss rates of ~56 and 12 µg min⁻¹, respectively.
There were no statistically significant differences between the different hair types employed in this study (Fig. 5). However, the recovery was consistently greater for hair contaminated with ¹⁴C-CMX and PHR compared to SFA and KCN.
Lateral movement of each contaminant (across the hair or skin surfaces) did not significantly vary over the duration of the study (P > 0.05; one-way ANOVA with Dunn's multiple comparisons post test), indicating that lateral diffusion was complete before the first time point (8 minutes). The spreading of each contaminant over the hair surfaces was consistently greater than the spreading over the underlying skin (P < 0.05; paired, two-tailed Wilcoxon test) with the exception of SFA (Fig. 6).
Discussion
To our knowledge, this is the first description of an in vitro skin model that integrates a realistic "curtain" of human hair to investigate the time-dependent distribution of contaminants in different compartments (within the hair, skin surface, air sampling, etc.). Several methods have previously been reported for investigating skin decontamination using human scalp 12 , decontamination of human hair in the absence of skin 11,14 , and combined hair and skin decontamination using excised pig scalp skin 13 . Hair strands tend to fall under the influence of gravity to form a sheet over the underlying scalp skin. Therefore, a human hair curtain laid over dermatomed pig skin would appear to represent a rational approach for developing a more accurate hybrid hair/skin model. This calculation is based on the understanding that it is not the hair follicle density that dictates the amount of hair per unit area, but the accumulation of hair resting against the side of an individual's head (Fig. 7). It is important to note that this in vitro model is only relevant for hair lengths in excess of 1.7 cm (represented by the distance d in Fig. 7). Thus, our hybrid model would not be applicable to individuals with shorter hair. The in vitro skin diffusion cell system used in this study has been described previously 17 and is essentially a scaled-up version of a standard, validated diffusion cell system 20 . The relatively large surface area of our model provided adequate space for the introduction of the hair curtain. However, consideration should be given to the type of skin employed and the storage conditions of the skin (fresh or frozen) as this can affect dermal penetration 21 . This hybrid system uses previously frozen porcine skin, making this model slightly more conservative as frozen skin is more permeable to chemicals than fresh human skin 21 .
A surprising outcome of this study was the level of protection afforded by the hair, which retained the majority (53-89%) of the applied dose of each contaminant. In contrast, a previous in vitro study (using the nerve agent VX in a porcine scalp model) reported a hair recovery of ~10% of the applied dose, with the majority (75%) of the chemical distributed on or within the skin 13 . Since phorate and VX have similar physicochemical properties, this disparity may be explained by the different amounts and/or orientation of hair: the porcine scalp skin model mainly comprised vertical strands of hair at a density of 82 cm −2 . In contrast, the human hair curtain in this current study was orientated horizontally and so provided greater coverage over the underlying skin. A recent human volunteer study reported a hair-to-skin ratio of ~20 for CMX at one hour post exposure 22 . This is of the same order of magnitude as the ratio of ~36 derived at the same time point for CMX in this current study. It should be noted that the volunteer study data would tend to underestimate the hair-to-skin ratio, as the hair was not excised and subjected to solvent extraction. Therefore, the in vitro model arguably provides a more realistic representation of the distribution of a contaminant between the hair and underlying skin. Collectively, these data indicate that scalp hair provides a significant degree of protection, a practical implication of which is that casualties exposed to toxic chemicals may be able to focus on other, less protected anatomical areas which may be more time-critical for effective decontamination.
No statistically significant differences were observed in the distribution of contaminants between the different hair types used in this study. Whilst subtle differences in hair types have previously been identified 23 , these do not appear to influence the distribution of contaminants following gross contamination.
An interesting observation was the dynamic redistribution of aqueous-based contaminants (SFA and KCN) between the hair and hair surface (H:HS ratio; Fig. 2). The latter was quantified as the recovery of chemicals from hair swabs, whereas the former was derived from solubilised hair following swabbing. Such compartmentalisation makes the assumption that swabbing is 100% efficient at removing unbound contaminant from the hair surfaces. It is conceivable that swabbing may underestimate the recovery of unbound contaminant to some degree (and thus artificially increase the H:HS ratio). Whilst care was taken to perform the swabbing in a reproducible manner, careful interpretation of these data is warranted. Taking this limitation into account, there was unequivocal evidence for the transfer of SFA and KCN from the hair surface to within the hair over the 4-hour exposure period. This effect was not observed with the lipophilic contaminants CMX (Log P = 2.23) and PHR (Log P = 3.67), the most plausible explanation being that partitioning of these lipophilic chemicals from the surface to within the hair was much more rapid and occurred before the first time point (i.e. within 8 minutes). This may be attributable to the presence of the sebaceous coating on the hair 24 .
A further interesting outcome of this study was the observation that recovery of contaminants from the skin surface increased transiently and then decreased 10 minutes post exposure (Fig. 4). This effect was most pronounced for the aqueous contaminants (SFA and KCN). One would expect the decrease in skin surface recoveries to be consistent with an increase in dermal absorption. However, the recoveries of contaminants from the solubilised skin and receptor chamber fluid compartments did not support this more obvious explanation. The fact that there was a corresponding increase in recoveries from the hair and hair surfaces after 10 minutes leads to the tentative conclusion that hair may absorb contaminants from the skin surface. A similar mechanism has been suggested to explain the re-coating of hair by sebum 24 .
A major limitation of this hybrid model is that it does not consider the impact of systemic absorption via the hair follicles, since the hair curtains were fixed to the skin surface and thus do not reproduce the normal anatomy of the scalp skin. Given that a (substantial) proportion of contaminants was recovered from within the hair,
Conclusion
A hybrid model has been developed that combines dermatomed pig skin with a partial overlay of human hair.
The hair layer provides a substantial degree of protection against exposure of the underlying skin, which is in agreement with a previous human volunteer study. Analysis of the temporal distribution of contaminants within the different experimental compartments has indicated a dynamic redistribution that is both time- and solubility-dependent. The two chemicals applied in aqueous solution (SFA and KCN) gradually partitioned into the hair over the 4-hour exposure period. In contrast, transfer from the hair surface to the hair was extremely rapid (<8 minutes) for the two lipophilic substances (CMX and phorate). There was evidence to suggest that hair may also have a limited capacity to remove contaminants from the skin surface. The hybrid system will be used in future studies to assess the effects of various dry and LPS decontamination strategies.

Hair and skin diffusion cell apparatus. Each human hair curtain comprised ~1785 individual strands cut to a length of 17 mm. These were transferred to a strip of self-adhesive tape (3 × 0.4 cm). A thin film of cyanoacrylate glue was placed over the lower half of the tape to ensure that the tip of each hair strand was embedded in the adhesive before the top of the tape was folded over the tips of the hairs. The resulting hair curtains (Fig. 8) were stored at room temperature for up to one month before use.
Materials
Skin diffusion cells and a manifold delivery system were manufactured by Protosheet Ltd. (Kent, UK) as previously described 17 . Full thickness skin was obtained post mortem from female pigs (Sus scrofa, large white strain, weight range 15-25 kg) from a reputable supplier. The skin was close clipped and removed from the dorsal aspect of each animal. The excised skin was then wrapped in aluminium foil and stored flat at −20 °C. Prior to the start of each experiment, a skin sample from one animal was removed from cold storage and thawed for approximately 24 hours. The skin was subsequently dermatomed to a thickness of 1000 μm (Humeca Model D80; Eurosurgical, Surrey, UK). Once dermatomed, the skin samples were cut into 10 cm diameter discs and mounted into the diffusion cells. Hair curtains were placed onto the skin 0.5 cm from the top (centre) of the interior clamp (Fig. 5) so that the side edges of the hair curtain were securely held in contact with the underlying skin. Each cell was connected to a peristaltic pump (Watson-Marlow 520S) through which receptor fluid (50% ethanol water) was infused at a rate of 0.5 mL min −1 . Each diffusion cell was placed on a silicone heat mat connected to a digital controller (both supplied by Holroyd components, UK). The temperature of each heat mat was set to achieve a skin surface temperature of 32 °C (confirmed using an infrared camera; model FLIR P620). Diffusion cells were left for 1 hour to equilibrate, after which pre-weighed 20 mL glass scintillation vials were positioned at the receptor fluid effluent port to collect serial samples of receptor chamber fluid. Experiments were performed in batches of six diffusion cells, with each allocated a specific treatment according to a pseudo-Latin square design (so that no treatment repeatedly occupied the same position within the fume cupboard). Each experiment was repeated six times to give a total of n = 6 replicates per treatment group, with each replicate being performed on skin from a separate skin donor (which was matched across all treatment groups). The hair type in each diffusion cell was randomised so that each experimental time point had at least one of each hair type. The treatment comprised exposure for a fixed duration (8, 10, 30, 60, 120 and 240 minutes) to one of the four test chemicals.
Each experiment was started by the addition of a 20 µL droplet of either ¹⁴C-MS, ¹⁴C-PHR, ¹⁴C-SFA or ¹⁴C-KCN directly to the centre of the hair curtain surface. Air from within each donor chamber was sampled using a constant volume pump (Pocket Pump model 210-1002MTX, SKC Ltd., Dorset, UK) set at a sampling volume of 75 mL min⁻¹. Glass sorbent tubes were purchased from Markes International (Llantrisant, UK). Each glass tube was filled with 150 ± 5 mg of Tenax TA 35/60 absorbent. Filled Tenax tubes were conditioned using a TC-20 tube conditioner (Markes International Ltd., UK) in accordance with the manufacturer's instructions.
After an appropriate delay (depending on the treatment group), the hair curtain was removed, placed into a pre-weighed 30 mL vial and stored at −70 °C for a maximum of 1 week. The skin and the inside surfaces of the diffusion cells were swabbed with dry cotton wool, which was placed into 10 mL ethanol; the diffusion cells were then disassembled and the skin samples were placed flat on a petri dish and stored at −70 °C for up to 1 week prior to autoradiographic analysis. The Tenax was removed from the glass sorbent tubes, placed into pre-weighed vials and then reweighed prior to the addition of 10 mL of propan-2-ol. The tubing connecting the donor chamber and the Tenax tubes was placed into pre-weighed vials containing 10 mL of propan-2-ol.
Digital autoradiography. The skin and hair samples were removed from cold storage and placed into large (35 × 43 cm) autoradiography cassettes containing a radiometric calibration slide (30-862 nCi g⁻¹; ARC, USA). A sheet of clear cellophane (38 μm thickness) was laid over the skin and hair samples, and an erased phosphor imaging film (Fujifilm 20 × 40 BAS-MS, GE Healthcare, UK) was placed on top. Each film was exposed for 3 hours and then processed using a confocal, variable-mode laser scanner (Typhoon FLA 700, GE Healthcare, UK). Each scan was set to a resolution of 25 μm per pixel and a PMT setting of 800. Images obtained via autoradiography were spatially calibrated and analysed using ImageJ v1.48p to assess skin surface spreading, using a (software) threshold of 244-62258. Regions of interest (ROI) were placed over the exposed area and each image was analysed. An average background signal (based on an ROI derived from negative controls) was subtracted from all analysed areas.
Once images were acquired, the skin samples and associated sections of cellophane were placed into pre-weighed 120 mL jars and reweighed before the addition of 50 mL of Soluene-350 tissue solubiliser (PerkinElmer, UK). Hair curtains were blotted using a single sheet of absorbent paper (Wypall L20, Kimberly-Clark Professional, USA) cut into 5 × 5 cm swatches, which were then immersed in a 20 mL vial containing 15 mL propan-2-ol. Following blotting, each hair curtain and corresponding section of cellophane was placed into glass vials containing 20 mL of scintillation fluid.

Sample analysis. Radioactivity of samples (swabs, Tenax, tubing, receptor chamber fluid, skin, etc.) was quantified using a PerkinElmer Tri-Carb liquid scintillation counter (Model 2810 TR), employing an analysis runtime of 2 minutes per sample and a pre-set quench curve. The amounts of radioactivity in each sample were converted to quantities of ¹⁴C-MS, ¹⁴C-PHR, ¹⁴C-SFA or ¹⁴C-KCN by comparison to standards (measured simultaneously) that were prepared on the day of each experiment by the addition of a known amount of test chemical (¹⁴C-MS, ¹⁴C-PHR, ¹⁴C-SFA or ¹⁴C-KCN) to (i) cotton wool swabs in 10 mL ethanol, (ii) Tenax and tubing in 10 mL propan-2-ol, (iii) individual hair curtains, and (iv) unexposed skin tissue dissolved in 50 mL Soluene-350. A standard receptor fluid solution was also prepared by the addition of 10 μL of test chemical to 990 μL of fresh receptor fluid (50% aqueous ethanol), from which a range of triplicate samples (25, 50, 75 and 100 μL) were placed into vials containing 5 mL of LSC fluid to produce a standard (calibration) curve. Aliquots (250 μL) of each sample were taken and placed into vials containing 5 mL of liquid scintillation fluid for liquid scintillation counting. A summary of each sample type and its corresponding compartment is presented in Table 1.
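The conversion from counts to chemical quantities via the standard (calibration) curve can be sketched as a simple linear fit; the concentration and count values below are illustrative assumptions, not the study's calibration data.

```python
import numpy as np

# Triplicate standards: known volumes of a standard solution and the
# hypothetical counts per minute (cpm) measured for each.
vol_uL  = np.array([25, 50, 75, 100] * 3)
cpm_std = np.array([1200, 2400, 3550, 4800,
                    1180, 2450, 3600, 4750,
                    1225, 2380, 3580, 4820])

conc_ug_per_uL = 0.05                       # assumed standard strength
amount_ug = vol_uL * conc_ug_per_uL         # known amounts in each standard

slope, intercept = np.polyfit(cpm_std, amount_ug, 1)  # linear calibration

def cpm_to_ug(cpm):
    """Convert a sample's cpm to micrograms via the calibration curve."""
    return slope * cpm + intercept

print(cpm_to_ug(3000.0))  # ~3.1 ug for these illustrative standards
```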
Statistical analysis.
A normality test (Kolmogorov-Smirnov) was performed on all data. The proportions were not found to be normally distributed. Therefore, treatment effects were subsequently analysed by non-parametric tests using proprietary software (GraphPad PRISM v7.0a, USA).
Data Availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

| 5,280.8 | 2018-10-31T00:00:00.000 | ["Medicine", "Biology"] |
HLA-B27 is associated with reduced disease activity in axial spondyloarthritis
HLA-B27 is associated with increased susceptibility and disease activity of ankylosing spondylitis, but the effect of HLA-B27 on the activity of the broader category now called axial spondyloarthritis (AxSpA) is apparently the opposite. A modified Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) was used to assess disease activity among 3435 patients with spondyloarthritis (SpA) who participated in a survey designed to assess the effect of their disease and its treatment on the susceptibility and severity of Covid-19. Chi square testing was used to compare BASDAI scores between HLA-B27 positive and negative subjects. 2836 survey respondents knew their HLA-B27 status, and 76.1% of these were positive. The average BASDAI for the HLA-B27 negative cohort was 4.92, compared with 4.34 for the HLA-B27 positive subjects. Based on linear regression, a subject's sex could not fully account for the differing BASDAI score in HLA-B27 negative subjects compared to those who are HLA-B27 positive. The difference between B27 positive and negative subjects was skewed by those with a BASDAI score of one or two. HLA-B27 positive subjects were more than twice as likely to have a BASDAI score of 1 compared to HLA-B27 negative subjects and about 60% more likely to have a BASDAI score of 2 (p < 0.0001). HLA-B27 positive subjects have less active spondyloarthritis compared to HLA-B27 negative subjects as measured by the BASDAI score. Our data indicate that patients with mild back pain and a diagnosis of AxSpA are disproportionately HLA-B27 positive. The HLA-B27 test facilitates the diagnosis of axial spondyloarthritis, such that patients from a community survey with mild back pain may be disproportionately diagnosed as having AxSpA if they are HLA-B27 positive. The test result likely introduces a cognitive bias into medical decision making and could explain our observations.
The New York criteria provide classification criteria for ankylosing spondylitis and require definite radiographic evidence of sacroiliitis on a plain x-ray 10 . The realization that sacroiliac inflammation occurs in advance of, and does not always result in, radiographic change led to the ASAS (Assessment of Spondyloarthritis) criteria for the classification of axial spondyloarthritis (AxSpA). These criteria have had a major impact on the diagnosis and understanding of axial spondyloarthritis 11 . The recognition of non-radiographic (nr) AxSpA is an advance that is benefiting many patients who were previously not diagnosable 12 . Thus, spondyloarthritis is an umbrella term that includes both ankylosing spondylitis and nr-AxSpA. It is also sometimes used to describe individuals with only peripheral joint disease that does not affect the spine 13 . Everyone with ankylosing spondylitis should meet criteria for the diagnosis of AxSpA, but many with AxSpA do not meet criteria for ankylosing spondylitis. AxSpA differs from AS in several major respects. For example, in contrast to AS, AxSpA is more common in females 14,15 . Despite the absence of radiographic changes in the sacroiliac joints of many patients with AxSpA, the disease activity in AxSpA is generally comparable to the disease activity in AS as judged by the BASDAI 14 . The effect of HLA-B27 on disease activity in AxSpA has been reported relatively infrequently, but it paradoxically appears to have the opposite effect compared with AS, i.e. HLA-B27 negative subjects with AxSpA generally report higher BASDAI scores than HLA-B27 positive subjects 16 . The reasons for this are not completely understood.
All classification or diagnostic criteria must achieve a balance between sensitivity and specificity. HLA-B27 fulfills a prominent role in the ASAS criteria for nr-AxSpA since only 3 criteria are required to make a diagnosis, and HLA-B27 can be one of the three 11 . While the presence of HLA-B27 is a useful discriminator, it also has intrinsic limitations with regard to sensitivity and specificity. For example, about 25% of patients with nr-AxSpA are HLA-B27 negative 17 .
We recently conducted a survey on spondyloarthritis and Covid-19 18,19 . This report analyzes the effect of HLA-B27 on the BASDAI scores in this cohort.
Materials and methods
We have reported details about our survey on spondyloarthritis and Covid-19 elsewhere 18,19 .
Subjects were asked questions that included a modified BASDAI. The original BASDAI is based on six questions, the last two of which both concern morning stiffness. We relied on a single question on morning stiffness in order to shorten the survey and improve completion rates. In addition, because this was a web-based survey, BASDAI scores were reported as integers, not as a continuous variable. The study was reviewed and approved by the Oregon Health & Science University IRB, study number 0021375. Because this was an electronic survey, subjects did not provide written informed consent, but they were explicitly informed that participating in the study was a form of giving consent. The research was performed in accordance with all relevant guidelines and regulations. The research complied with the recommendations made in the Declaration of Helsinki.
The chi-square test was used to compare frequencies of HLA-B27 among subjects with varying BASDAI scores or with variable use of a biologic. Linear regression was used to determine if specific variables such as treatment, HLA-B27 status, or sex could account for differences in the BASDAI score.
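The chi-square comparison can be illustrated with a 2 × 2 contingency table of HLA-B27 status against a low BASDAI score; the counts below are hypothetical stand-ins chosen only to mimic the reported pattern, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

#                    BASDAI == 1   BASDAI > 1
table = np.array([[260, 1900],    # HLA-B27 positive
                  [ 30,  646]])   # HLA-B27 negative

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.2e}")
```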
Results
As recently reported (18), we recruited 3435 respondents with a median age of 52 from 65 countries; 74.5% were from the United States and 8% from Canada. Of the respondents who stated that they have physician-diagnosed spondyloarthritis, 2836 or 82.6% knew their HLA-B27 status and 76.1% were positive. 63.7% of those answering the survey were women. As previously reported 19 , 86% of respondents reported their disease as ankylosing spondylitis and 6.8% identified themselves as having nr-AxSpA.
In analyzing results from this survey, we noted that HLA-B27 status had a statistically significant negative effect on the modified BASDAI (Table 1): the average BASDAI was 4.92 for the HLA-B27 negative group and 4.34 for the HLA-B27 positive group (p ≤ 0.0001 by chi square). Individual BASDAI questions each produced a similar difference between HLA-B27 positive and negative subjects. Women had an average BASDAI of 4.79 and men had an average BASDAI of 3.95. Women made up 73.9% of the HLA-B27 negative group and 60.5% of the HLA-B27 positive group. We used linear regression, as shown in Table 2, to determine whether sex, HLA-B27 status or use of a biologic independently affected the BASDAI. Both sex and HLA-B27 status had independent effects on the BASDAI, while the score was not different in those taking a biologic compared with those not on a biologic. Table 1 shows that HLA-B27 positive subjects were more than twice as likely to have a BASDAI of 1 and about 60% more likely to have a BASDAI of 2 compared with HLA-B27 negative subjects (p ≤ 10⁻⁴ by 2 × 2 chi square). Since this analysis included subjects with multiple different forms of spondylitis, we repeated the analysis restricted to the subset of subjects with self-described ankylosing spondylitis. Results were similar, with HLA-B27 positive respondents 2.5 times as likely to have a BASDAI score of one and 65% more likely to have a BASDAI score of 2. A potential explanation is that HLA-B27 positive subjects are treated more aggressively than HLA-B27 negative subjects. Details on therapy for subjects in this cohort have been published previously 18 . Rather than finding that HLA-B27 positive subjects were treated more aggressively, we found the
Discussion
The finding of a negative effect of HLA-B27 on the activity of spondyloarthritis was the opposite of the results from at least 7 studies that have considered the effect of HLA-B27 on the BASDAI in patients with AS [3][4][5][6][7][8][9] . We are aware of only one study which found that HLA-B27 positive subjects with AS had less active disease 20 . In contrast to the preponderance of data on AS, a single study on the effect of HLA-B27 on the activity of AxSpA concluded, as supported by our data, that HLA-B27 was associated with less disease activity 16 . A potential explanation for this paradox is that AS is more common in men, while AxSpA is more common in women 15,21 .
Women have been shown to have higher BASDAI scores than men 15,21 . Further, the survey nature of our study likely contributes by including more women than men since women are known to participate more frequently in surveys compared to men 22 . Sex differences, however, do not account fully for the difference which we found between HLA-B27 positive and negative subjects (see Table 2). Most studies on the effect of HLA-B27 on disease activity were conducted before the recognition of AxSpA using the ASAS criteria [3][4][5][6][7][8] . HLA-B27 positivity greatly facilitates the diagnosis of AxSpA by the ASAS criteria. We hypothesize that the ASAS criteria have encouraged clinicians to feel more comfortable diagnosing spondyloarthritis in an individual who is HLA-B27 positive. Thus, a patient who is HLA-B27 positive with mild back pain might be labelled AxSpA, while a subject with similar symptoms and a negative test for HLA-B27 is more likely to receive an alternative diagnosis such as fibromyalgia or mechanical low back pain, a possible example of a cognitive bias in diagnostic reasoning 23 and subsequent misclassification of subjects.
We did not anticipate that patients who are HLA-B27 positive would report lower BASDAI scores than patients who are HLA-B27 negative. Physicians, however, are relatively intolerant of uncertainty 24 . A positive test for HLA-B27 could bias the diagnostic process toward attributing back pain to spondyloarthritis. Conversely, a negative test for HLA-B27 could unconsciously bias a clinician against making a diagnosis of spondyloarthritis for a patient with mild, chronic back pain.
Our data do not distinguish between over diagnosing AxSpA in a patient who is HLA-B27 positive versus under diagnosing the condition in someone who is HLA-B27 negative. Indeed, both are likely to be true.
This study has limitations which are inherent in data obtained by a survey. Most importantly, we believe that this is a study that relates to AxSpA rather than AS despite the fact that only 6.8% of respondents self-identified themselves as having nr-AxSpA, while 86% of respondents self-reported their disease as ankylosing spondylitis. All reported diagnoses related to spondyloarthritis in this survey have been previously reported 19 .
Based on our clinical experience with patients' descriptions of their own diagnoses, we question whether respondents reliably distinguished AxSpA or nr-AxSpA from AS. Epidemiologic data indicate that AxSpA is common 15 . Further, subjects were allowed to provide more than one diagnosis, such as acute anterior uveitis and AS. Since AS is a subset of AxSpA, more subjects should have had AxSpA than AS. In fact, an important observation from our study is that the term axial spondyloarthritis is apparently not commonly recognized by the lay public. Our conclusion that this is a study on spondyloarthritis and not AS is also supported by the predominance of female respondents and a lower prevalence of HLA-B27 than would be expected in a study of just ankylosing spondylitis.
Another limitation is that HLA-B27 status and diagnosis are by self-report and not independently validated. This weakness, however, is also a potential strength because the survey nature of the study allows us to capture subjects with mild disease and it provides insight into community practice. Community standards for diagnosing AxSpA are more broadly applicable than standards which might be used in a study based on subjects being evaluated at an academic medical center. Additional limitations include that the BASDAI relies on a subjective assessment and not on objective observations; and clearly, respondents to a survey represent a self-selected group. Despite these limitations, we believe that the observation is both provocative and heuristic. It is consistent with a prior report 16 which did not attempt to explain why HLA-B27 should be associated with lower disease activity in AxSpA. It makes intuitive sense that a diagnostician would weigh the result of HLA-B27 testing excessively in assessing mild back pain. An acknowledgement of cognitive bias can lead to improved diagnostic accuracy and will encourage refinement of valuable guidelines 11 .
In summary, we are aware of only one other published study on the effect of HLA-B27 on the BASDAI score of patients with AxSpA 16 . Paradoxically, HLA-B27 is associated with lower disease activity in AxSpA and higher disease activity in AS.

Table 2. Linear regression analysis of the impact of sex, HLA-B27 positivity, or use of a biologic on the BASDAI score. (a) The intercept is the average BASDAI score for a female who is HLA-B27 negative and not taking a biologic. The regression coefficient shows how much lower the BASDAI would be on average if the respondent is HLA-B27 positive, male, or taking a biologic.

| 2,947 | 2021-06-10T00:00:00.000 | ["Medicine", "Biology"] |
Intuitions, theory choice and the ameliorative character of logical theories
Anti-exceptionalists about logic claim that logical methodology is not different from scientific methodology when it comes to theory choice. Two anti-exceptionalist accounts of theory choice in logic are abductivism (defended by Priest and Williamson) and predictivism (recently proposed by Martin and Hjortland). These accounts have in common reliance on pre-theoretical logical intuitions for the assessment of candidate logical theories. In this paper, I investigate whether intuitions can provide what abductivism and predictivism want from them and conclude that they do not. As an alternative to these approaches, I propose a Carnapian view on logical theorizing according to which logical theories do not simply account for pre-theoretical intuitions, but rather improve on them. In this account, logical theories are ameliorative, rather than representational.
Introduction
In recent years, the view that "logic is in the same epistemic boat as other scientific theories," as Bueno and Colyvan (2004, p. 156) put it, is becoming increasingly accepted among philosophers of logic. In contrast with the previous orthodoxy, which regarded logic as exceptional in that the epistemic justification of logical theories would require evidence and methods other than those commonly used in other sciences, this view was named "anti-exceptionalism about logic" (Hjortland 2017; Williamson 2007). Once logic is equated in epistemic terms with other sciences, wherein observation and experimentation constitute the main sources of evidence, the question of what counts as evidence for logical theories becomes pressing. In response to this question, a number of philosophers have proposed guidelines for theory choice in logic (e.g., Bueno and Colyvan 2004; Resnik 2004; Priest 2016; Williamson 2017; Martin and Hjortland 2020).
Two of these accounts are abductivism (Priest 2016; Williamson 2017) and predictivism (Martin and Hjortland 2020). In these accounts, logical theories are viewed as intended to provide a faithful description of pre-theoretical logical intuitions. Accordingly, conflicts between verdicts about validity given by a logical theory and verdicts about validity given by intuitions are seen as prima facie reasons to reject the former. Despite the importance these accounts attribute to intuitions, there is little clarity about what logical intuitions are and why, or whether, they are reliable. I start this paper by investigating the prevalence and the reliability of pre-theoretical logical intuitions. After outlining abductivism and predictivism as methods for theory choice in logic in Sect. 2, in Sects. 3, 4 and 5 I draw on results from the psychology of reasoning and on Dutilh Novaes's (2021) Prover-Skeptic dialogical model of deduction to argue that truly pre-theoretical intuitive judgments about validity are less reliable than judgments about validity informed by a logical theory. If this is so, intuitions should not count as a reason to reject logical theories, contrary to what abductivism and predictivism assume.
Given the unreliability of pre-theoretical intuitions, what seems more plausible, I submit, is that pre-theoretical intuitions constitute only a starting point for logical theorizing. Logical theories do not simply capture pre-theoretical intuitions, but rather improve on them. In other words, logical theories are ameliorative, rather than representational. In the two final Sects. (6 and 7) of this paper, I draw on Carnap's (1950, 1963) view of scientific and logical theorizing as explicative and on Haslanger's (2012) account of ameliorative philosophical analysis, accounts in which theory choice is pragmatic, to argue that what determines theory choice in logic are, first, the investigative aims of the theorist and, second, data about logical theories themselves (e.g., meta-theorems) showing whether a given theory can satisfy the pursued aims.
As in most anti-exceptionalist accounts, here I take logical theories to be theories of validity. On this view, however, logical theories do not model a pre-theoretical notion of validity; rather, they lay down explications (in Carnap's sense) of validity that may be more or less satisfactory depending on one's investigative aims. The result is a still anti-exceptionalist account of logical theorizing, but one that sees logical theorizing as similar to scientific theorizing not because both are representational undertakings, but rather because both improve our thoughts and practices, often flying in the face of our pre-theoretical intuitions.

Abductivism and predictivism

Priest (2016) and Williamson (2017) have advanced similar accounts of theory choice in logic. Although they disagree on the outcome of the application of the criteria they propose, they agree that logical theories should be chosen on the basis of abductive arguments, i.e., inferences to the best explanation. On their accounts, logical theories compete as candidate explanations of a given phenomenon. The best explanation is the one that satisfies a number of theoretical virtues to the highest degree, such as simplicity, strength, unifying power, and adequacy to the data. This latter criterion-adequacy to the data-is particularly important, since it relates the proposed explanation to its target phenomenon. For Priest, the target phenomenon of logic is the notion of validity. Validity, however, is not an observable phenomenon in the same sense that the phenomena studied by physics or chemistry are, which poses a difficulty for Priest from the outset.
In the criterion of adequacy to the data, what counts as data? It is clear enough what provides the data in the case of an empirical science: observation and experiment. What plays this role in logic? The answer, I take it, is our intuitions about the validity or otherwise of vernacular inferences (Priest 2016, p. 41).
The gist of Priest's answer lies in the words 'intuition' and 'vernacular.' His claim seems to be that we have pre-theoretical intuitions about validity, on which we rely for making inferences in ordinary situations. These everyday inferences are conducted "in the vernacular," rather than in a formal language; "maybe the vernacular augmented with a technical vocabulary (such as that of chess, physics or whatever); maybe the vernacular augmented with mathematical apparatus; but the vernacular nonetheless" (Priest 2006, pp. 169-170). The pre-theoretical notion of validity that guides vernacular inferences constitutes, according to Priest, the data against which competing logical theories should be tested. Some inferences conducted in the vernacular "strike us" as valid, whereas some others "strike us" as invalid. "Any account that gets things the other way around is not adequate to the data" (Priest 2016, p. 42).
Williamson has a different view about the target phenomenon of logical theories. For him, logical theories are intended to capture the most general aspects of the world. Accordingly, he accepts that empirical evidence can confirm or disconfirm logical theories, although indirectly. Roughly, Williamson claims that competing logical theories A and B can be "empirically tested" in the following way: take a set of well-confirmed scientific sentences Γ; derive all its empirical consequences according to A, generating the set of sentences Γ_A, and according to B, generating the set of sentences Γ_B; finally, compare how well Γ_A and Γ_B fit with empirical data (Williamson 2017, p. 334). However, Williamson also admits other sources of evidence, not related to predictions deduced from scientific theories: "[e]vidence here is not confined to observations. We may use anything we know as evidence" (Williamson 2017, p. 335). In particular, we can recruit pre-theoretical intuitions.
For example, in the case of propositional modal logic, we may know that the coin could have come up heads, and could have not come up heads, but could not have both come up heads and not done so, and on that basis eliminate this proposed law: (♦p ∧ ♦q) → ♦(p ∧ q). By contrast, this law identifies a useful pattern in the modal data: (♦p ∨ ♦q) → ♦(p ∨ q). In that sense, we can verify some predictions of the law by using our pretheoretic ability to evaluate particular modal claims (Williamson 2017, p. 336).
He claims that this sort of "pre-theoretical modal knowledge [is] accessible to almost any reasonable, intelligent person" (Williamson 2013, p. 427). What is more, he claims that pre-theoretical modal knowledge is much more relevant than evidence coming from natural science, at least when it comes to the study of modal logic.
Although nothing in theory precludes the application of results from any branch of natural science to the present enquiry [i.e., the study of modal logic], we have seen little evidence that they would be of much help in practice. It would hardly be relevant to carry out special experiments or make special measurements. A combination of logico-mathematical reasoning with elementary modal knowledge in particular cases turns out to be far more useful (Williamson 2013, p. 423, emphasis added).
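The contrast Williamson draws between the rejected law and the retained one can be checked mechanically. The sketch below is my own illustration, not Williamson's procedure: it reads ♦A simply as "A holds at some world of the model" (a universal-accessibility simplification) and searches all small two-proposition models for countermodels.

```python
# Minimal sketch (my own illustration): a model is a non-empty set of worlds,
# each world a (p, q) pair of truth values; diamond(f) means "f holds at some world".
from itertools import product, combinations

valuations = list(product([False, True], repeat=2))  # possible (p, q) worlds

def models():
    for r in range(1, len(valuations) + 1):  # every non-empty set of worlds
        yield from combinations(valuations, r)

def diamond(worlds, f):
    return any(f(p, q) for p, q in worlds)

# (diamond p and diamond q) -> diamond(p and q): refuted, e.g. by a model with
# one heads-world and one tails-world, mirroring the coin example.
countermodels = [ws for ws in models()
                 if diamond(ws, lambda p, q: p) and diamond(ws, lambda p, q: q)
                 and not diamond(ws, lambda p, q: p and q)]
print(countermodels[0])  # ((False, True), (True, False))

# (diamond p or diamond q) -> diamond(p or q): holds in every such model.
assert all(diamond(ws, lambda p, q: p or q)
           for ws in models()
           if diamond(ws, lambda p, q: p) or diamond(ws, lambda p, q: q))
print("disjunctive law: no countermodel found")
```

On this simple reading, the conjunctive law fails exactly as the coin case suggests, while the disjunctive law admits no countermodel.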
Thus, for Williamson, as for Priest, pre-theoretical intuitions (or pre-theoretical "knowledge," as Williamson calls it) have a central role to play as evidence for the acceptance or rejection of logical laws. The same view is shared by many other philosophers. For example, speaking of "the classical research tradition in logic," Bueno and Colyvan say that "[t]he aim of logic is taken to be to provide an account of logical consequence that captures the intuitive notion of consequence found in natural language" (Bueno and Colyvan 2004, p. 168, emphasis added). This echoes Tarski, who notes that "in making precise the content of this concept [logical consequence], efforts were made to conform to the everyday 'pre-existing' way it is used" (Tarski 2002, p. 176). Resnik elaborates on this point: When it comes to describing our inferential practice or the reasoning used in some branch of science or mathematics, logicians, like empirical linguists, try to achieve the best systematization of their data that they can … Here we find logicians relying both upon data concerning our inferential practice and their intuitions-both normative and metaphysical-concerning the facts of logic (Resnik 2004, pp. 180-181, emphases added).
In sum, the idea seems to be that logical theories are aimed at capturing either pretheoretical intuitions directly or capturing the features of an underlying phenomenon these intuitions convey information about. Either way, logical theories are thought of in representational terms: they describe a given phenomenon.
In an attempt to capture this widespread trait of logical practice-the view that intuitions about the validity of inferences expressed in ordinary language count as data for logical theories-Martin and Hjortland (2020) have recently advanced a model of anti-exceptionalist logical methodology that they call "logical predictivism." On their view, a logical theory is evaluated with respect to its ability to make successful predictions. In contrast with Williamson's account, though, in Martin and Hjortland's account the relevant predictions do not concern empirical consequences inferred from sets of well-confirmed scientific sentences, but rather predictions about which "vernacular arguments," to use Priest's term, should be counted as valid according to the logical theory at stake. If the theory predicts that a certain argument will be judged valid by competent reasoners and they do judge it so, this counts as confirmation for the theory; otherwise, this counts as disconfirmation.
On their account, logical theorizing involves two steps. For example, a logician who is concerned with the identification of the logic underpinning informal mathematical proofs would start by identifying which kinds of inference are judged correct by mathematicians. If the logician identifies the use of, say, Modus Ponens in some informal mathematical proofs mathematicians judge correct, this counts as prima facie evidence for taking Modus Ponens as a valid logical rule. The second step is testing the hypothesis that Modus Ponens is valid. Since this hypothesis allows for the prediction that the occurrence of Modus Ponens in other informal mathematical proofs will also be regarded as valid, additional mathematical proofs containing inferential steps based on Modus Ponens that are judged correct by mathematicians count as evidence for the maintenance of Modus Ponens as a logical principle; otherwise, i.e., if mathematicians saw these inferential steps as incorrect, this would count as counter-evidence for the general validity of Modus Ponens in mathematics. In sum, "what's taken as a reliable indicator of validity, and thus suitable data to test the consequences of the theory, are the judgements of mathematicians regarding acceptable informal proofs" (Martin and Hjortland 2020, p. 13). Naturally, these judgments are not made by means of formal tools and axiomatized logical theories, but rather by recruiting their pre-theoretical intuitions about the validity of mathematical proofs.
Martin and Hjortland claim that the same approach can be applied to capture the pre-theoretical notion of "what follows from what" involved in everyday arguments. In this case, not only the judgments of mathematicians, but also the judgments of laypeople "over the correctness of arguments, or over whether some conclusion 'follows from' some premises, are treated as data and taken to be prima facie reliable indicators of validity" (Martin and Hjortland 2020, p. 16). They recognize, however, that, unlike the judgments of mathematicians, the judgments of ordinary people about validity may be unreliable.
After all, we are well aware from cognitive psychology of the unreliability of individuals' logical reasoning under certain conditions (see, for an introduction, Evans [19]). Thus, it seems either we must admit that the proposed data is unreliable, or we need to pre-identify certain agents as reliable judges of which propositions follow from others in particular arguments (Martin and Hjortland 2020, p. 17).
They identify the difficulty but do not offer a solution to it, since they "only aim to show what sense can be made of a logical methodology that treats such judgements as reliable" (Martin and Hjortland 2020, p. 17). This is a difficulty not only for predictivism, but also for abductivism. If there are unreliable deductive reasoners, the abductivist also has to pre-identify them and discount their judgments as data. As a result, a problem that both accounts have to tackle is that, in order to discriminate between reliable and unreliable judges, the logician has to have a prior notion about what counts as a valid inference. Only with this prior notion in place can the logician discount some judgments as erroneous. In the next three sections, I discuss the difficulties of pre-identifying reasoners with reliable logical intuitions that could guide theory choice in logic.
Unreliable reasoners from the perspective of classical logic
The findings from cognitive psychology Martin and Hjortland mention in the quotation above belong to a tradition in psychology of reasoning that uses syllogistic and classical logic as its normative theories. Oaksford and Chater (2020) call this tradition the logical paradigm in psychology of reasoning. One of the most famous experiments of this paradigm is the Wason selection task. This task is designed to test participants' ability to reason about conditionals. In the original formulation of this task, participants are presented with four cards. Each card has a number on one side and a letter on the other. The cards are laid down on a desk so that only one of their sides is visible. For example, participants are shown the cards with N, T, 6, and 8 written on the side facing up. Then, participants are asked to verify, by turning over all and only the relevant cards, the following conditional: "If there is an N on one side of the card, then there is a 6 on the other side." This conditional has the form 'if p then q' and, according to the rules of material implication, it is false only if p is the case and q is not. Therefore, participants reasoning according to the canons of classical logic should turn over the cards with N and 8 facing up. However, this is not what most participants do; they turn over only the card N, or the cards N and 6 (Elqayam 2018). In its original formulation with numbers and letters, fewer than 20% of participants give the logically 'correct' answer (Ragni et al. 2017). The conclusion is that people without training in logic have poor intuitions about material implication.

Commenting on the consequences of this result for his claim that pre-theoretical intuitions are a reliable guide for theory choice in logic, Priest writes: it needs to be said that the intuitions in question here need to [be] of a robust kind, purged of clear performance errors. As the literature on cognitive psychology shows, people make not only mistakes, but systematic mistakes, such as those involved in the Wason Card test. What makes these clear mistakes is that once the matters have been pointed out to the people concerned, they can see and admit their errors. Neither is this done by teaching them some high powered logical theory: it can be done by showing simply that they get the wrong results. The intuitions invoked in theory-weighting have to be steeled in this way (Priest 2016, p. 16).
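For concreteness, the classical analysis of the task can be spelled out mechanically. The following is a minimal sketch of my own (not part of the experimental literature), enumerating which cards could falsify the rule under the material-conditional reading:

```python
# Minimal sketch: under the material conditional, "if N on one side then 6 on
# the other" is falsified only by a card with N and a number other than 6.
letters, numbers = ["N", "T"], ["6", "8"]

def falsifies(letter, number):
    return letter == "N" and number != "6"   # antecedent true, consequent false

for face in ["N", "T", "6", "8"]:
    hidden = numbers if face in letters else letters  # what the other side can be
    relevant = any(falsifies(face, h) if face in letters else falsifies(h, face)
                   for h in hidden)
    print(f"card {face}: worth turning over -> {relevant}")

# Only N (its number side might not be 6) and 8 (its letter side might be N)
# can falsify the rule; turning over the 6 card is classically uninformative.
```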
Priest's classification of errors in the Wason selection task as "performance errors" seemingly invokes the Chomskyan distinction between competence and performance in linguistic contexts. According to this distinction, some mistakes people make when speaking-performance errors-may not reflect what they know about the language they are speaking-their linguistic competence. Analogously, one could argue, as Priest seems to do, that the mistakes people make in the Wason selection task do not show that they do not know how to reason in accordance with material implication; it may be just that they have made performance errors when trying to apply what they know to this specific task.
However, there are reasons to suspect that mistakes in conditional reasoning cannot be fully attributed to performance errors. A way of experimentally distinguishing between performance errors and lack of competence consists in providing subjects with hints that are likely to prevent the competent ones from making performance mistakes. Markovits (1985) adopted this strategy to distinguish between the influence of performance and competence factors in conditional reasoning in adults. He observed that participants who failed in a very easy conditional reasoning task wherein no hint was provided also failed in subsequent tasks wherein hints were provided. Importantly, most of the participants who succeeded when hints were provided had also succeeded in the easier task without hints. He concludes: "[t]hese results are generally inconsistent with any purely 'accidental' or performance-based theory of incorrect conditional reasoning in adults" (Markovits 1985, p. 246). Furthermore, in the so-called "new paradigm" in the psychology of reasoning (more on this below), "mistakes" in the Wason selection task are neither performance errors nor a sign of lack of competence, but adequate answers based on probabilistic reasoning (Oaksford and Chater 1994). This is not to deny that performance factors may affect conditional reasoning; the point is that competence with conditional reasoning in accordance with the canons of material implication does not seem to be a widespread trait in adults.

In another experimental task typical of the logical paradigm in the psychology of reasoning, participants are asked to evaluate the validity of simple syllogisms. The experimental setting is simple: participants are presented with a syllogism and asked to judge whether its conclusion follows from its premises. Participants are explicitly instructed to assume that the premises are true, even if they know that the premises are false, and to evaluate the conclusion with respect to the given premises only. Typical of the logical paradigm, experimenters assume that, since syllogisms are presented in natural language, previous training is not required for participants to understand the task. In a study of this kind, Sá et al. (1999) presented participants with the following syllogism:

All living things need water.
Roses need water.
Roses are living things.
Asked to evaluate this syllogism, 68% of the participants judged it valid, although anyone trained in logic easily recognizes it as invalid. This result shows what every logic teacher perceives in the first days of a new class: syllogisms whose conclusion is believable tend to be judged valid (even if the conclusion does not follow logically from the premises), whereas syllogisms whose conclusion is unbelievable tend to be judged invalid (even if the conclusion follows logically from the premises). This result is confirmed in many other studies of the same kind. For example, in Evans et al. (1983), 71% of invalid arguments with believable conclusions were judged valid, whereas only 56% of valid arguments with unbelievable conclusions were correctly judged valid (i.e., in 44% of the cases, participants failed to recognize a valid argument because its conclusion was unbelievable).
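The invalidity can also be exhibited mechanically. Here is a minimal sketch (my own illustration) that searches a two-element domain for an interpretation making both premises true and the conclusion false, in effect automating the familiar "cars need water" style of counterexample:

```python
# Minimal sketch: brute-force search for a countermodel to the rose syllogism.
# Predicates are extensions (tuples of truth values) over a two-element domain.
from itertools import product

domain = range(2)
extensions = list(product([False, True], repeat=2))  # all predicate extensions

def all_are(p, q):  # "all p are q" / "all p need q"
    return all(q[i] for i in domain if p[i])

for living, water, rose in product(extensions, repeat=3):
    if (all_are(living, water)               # all living things need water
            and all_are(rose, water)         # roses need water
            and not all_are(rose, living)):  # ...yet not all roses are living things
        print("countermodel:", {"living": living, "water": water, "rose": rose})
        break
# A countermodel exists (e.g., read 'rose' as 'car'): both premises can hold
# while the conclusion fails, so the argument form is invalid.
```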
Experiments like these have repeatedly confirmed that, when evaluating validity, people who are not trained in logic do not take into account only the information contained in the premises-as they should, according to the canons of deductive logic-but bring prior beliefs to bear in the completion of the task. In other words, they reason non-monotonically: prior beliefs, not contained in the premises, can defeat or vindicate an argument. This tendency was dubbed "belief bias," and it was considered a bias under the assumption that deductive reasoning, particularly the rules of deductive reasoning codified in classical logic, should be the standards governing reasoning (Ball and Thompson 2018).
The influence of belief bias is not limited to the evaluation of validity. This tendency permeates many other aspects of our cognitive lives. It is closely related to confirmation bias, sometimes called myside bias, which is "a tendency [that people have] to find arguments that support their point of view, whether that means supporting a position they agree with or attacking a position they disagree with" (Mercier 2018, p. 404). In other words, just as we tend to judge positively arguments whose conclusion agrees with our beliefs (belief bias), we are also better at finding arguments to support our beliefs than to undermine them (myside bias). These two tendencies are obviously related: myside bias hinders the search for counterexamples that could invalidate an argument with a believable conclusion (i.e., a conclusion that is in line with one's point of view), and hence invalid arguments with believable conclusions are likely to go unnoticed. By contrast, an argument with an unbelievable conclusion (i.e., a conclusion in tension with one's point of view) will naturally elicit counterexamples, if there are any, and therefore it is much less likely that an invalid argument with an unbelievable conclusion goes unnoticed. (In Evans et al. (1983), only 10% of invalid arguments with unbelievable conclusions were incorrectly judged valid.)

There are various explanations for the causes of belief bias available in the literature on psychology of reasoning (for a review, see Ball and Thompson 2018). Some of them are compatible with the hypothesis that belief bias and other reasoning "mistakes" are caused by performance factors, and that competence with deductive reasoning in accordance with the canons of classical logic is widespread. However, an explanation that is gaining traction in recent years holds that belief "bias," after all, is not a bias, but a feature of human reasoning. What has been described as the "new paradigm" in psychology of reasoning has it that ordinary reasoning is best modeled by Bayesian probability theory (Elqayam 2018; Oaksford and Chater 2020). "What appear to be erroneous responses, when compared against logic, often turn out to be rationally justified when seen in the richer rational framework of the new paradigm" (Oaksford and Chater 2020, p. 305). According to the new paradigm, everyday inferences are probabilistic and, hence, are, and should be, "knowledge-rich"-i.e., based on background knowledge and therefore non-monotonic-since evaluation of probabilities requires previous knowledge of the domain the inference is about.
A simple instance of Modus Tollens, presented in Oaksford and Chater (2020), illustrates this point. If someone is told that "If John turns the key, the car starts" and "the car didn't start," she is more likely to infer that John in fact turned the key but the car had some problem, than that John didn't turn the key. "This implication is based both on our understanding of relevant aspects of the world (cars do not start spontaneously, they are mostly immobile, etc.) and on the norms of conversation … We infer the existence of a failed key-turning attempt; otherwise, the car not starting would not be worth mentioning" (Oaksford and Chater 2020, p. 305). Thus, not following what seems to be a Modus Tollens is completely rational in such an everyday situation. Certainly, in cases like this the conclusion will not follow from the premises with full certainty-there will always be the possibility that John really didn't turn the key-but the point is that in everyday situations we rarely need full certainty.
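To make the probabilistic reading concrete, here is a minimal sketch with illustrative numbers of my own choosing (they are not from Oaksford and Chater): treat the conditional as a high but imperfect conditional probability and apply Bayes' rule to the observation that the car did not start.

```python
# Minimal sketch (illustrative numbers are my own assumptions) of the
# new-paradigm reading of the car/key case.
p_turn = 0.99                # conversational context presupposes a key-turning attempt
p_start_if_turn = 0.90       # the conditional is reliable but defeasible
p_start_if_no_turn = 0.0     # cars do not start spontaneously

# Bayes' rule: P(turned key | car didn't start)
p_no_start = (1 - p_start_if_turn) * p_turn + (1 - p_start_if_no_turn) * (1 - p_turn)
p_turn_given_no_start = (1 - p_start_if_turn) * p_turn / p_no_start

print(f"P(John turned the key | car didn't start) = {p_turn_given_no_start:.2f}")
# ~0.91: under these background assumptions, resisting the Modus Tollens
# conclusion ("John didn't turn the key") is the rational response.
```

As Dutilh Novaes puts the general point: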
[D]eductive reasoning is hardly ever instantiated 'in the wild,' so to speak, as it is at odds with the strong component of defeasibility in everyday reasoning. In most everyday circumstances, deductive reasoning is overkill: the point is not to infer with absolute certainty what follows necessarily from the available information, but rather what is likely to happen given the available information and a number of background assumptions (Dutilh Novaes 2021, pp. 20-21).
If non-monotonicity is a widespread property of everyday reasoning, monotonic deductive reasoning is not to be found everywhere. This means that, from the perspective of monotonic deductive logics, the logical intuitions of most people will count as unreliable. What can the abductivist and predictivist do in the face of this fact?
Obstacles in identifying reliable reasoners
The prevalence of apparently non-monotonic reasoning in laypeople (here, people trained in neither logic nor mathematics) poses a challenge to both the abductivist and the predictivist. In order to pre-identify the reliable reasoners who will provide data for them, they have (at least) four methodological alternatives. All of them face obstacles, as we will see.
A first alternative is to assume that, contrary to the first impression of non-monotonicity, in fact most people reason deductively. It is only that in everyday situations people usually respond to richer arguments than the ones that are explicitly presented to them. The first task, in this case, would be to identify the implicit deductive arguments subjects are responding to, and then to take their judgments about these arguments as the basis for logical theorizing. The difficulty in this case is that the transformation of putative enthymemes into completely stated arguments requires the assumption, in advance, of a logic: in order to identify which premises have to be added so as to convert an enthymeme into a deductively valid argument, one needs to adopt a notion of validity. Since this notion has to be in place before any analysis of the data can start, its justification must rest on something other than the data. That is, abductivists and predictivists would have to choose a logic on some other basis than abductivism and predictivism.
A second alternative is to discount the judgments of those people who reason nonmonotonically, retaining as data only judgments that are in line with deductive canons. This would amount to the rejection of the majority of the possible data sample. If such a significant part of the data sample is discarded, what justifies the claim that everyday reasoning is a target phenomenon of logical theorizing? Furthermore, given that this rejection would be based on the view that deductive logics are normative for everyday reasoning, the abductivist and the predictivist would have to justify, in advance, why these logics are preferable. For this, they could not rely on the judgments of the reliable reasoners they have already selected, on pain of circularity. In other words, as above, they would have to explain their preference for deductive logics on some other basis than abductivism and predictivism.
A third alternative is to take as data the non-monotonic judgments commonly found in everyday situations and try to devise a non-monotonic logic that could account for them. The difficulty in this case is that, by selecting non-monotonicity as a normative principle for correct reasoning in everyday situations, abductivists and predictivists would be eliminating from their data sample exactly that minority of cases where people draw indefeasible conclusions in accordance with reasoning practices that have proved to be fruitful in mathematical and scientific contexts. This could be seen as a kind of pluralism: perhaps everyday reasoning relies on a logic other than the one employed in mathematics and some sciences. But, in this case, why should it be wrong to apply a monotonic logic in everyday contexts?
A fourth alternative would be to reject laypeople's judgments altogether when it comes to the choice of deductive logical theories. This would amount to assuming a position like Dutilh Novaes's, according to whom "deduction is a term of art corresponding to practices belonging to niches of specialists [mathematicians, scientists, philosophers], rather than a basic building block of human cognition" (Dutilh Novaes 2021, Ch. 10, section 5, para. 5). If this is so, when it comes to the identification of reliable logical intuitions, only the intuitions of these specialists should matter. Martin and Hjortland consider this alternative: It may be that logicians do indeed only take into account the judgements of perceived "reliable reasoners", whether this be logicians themselves, philosophers as a whole, or members of professions required to engage in detailed reasoning within their working lives, such as lawyers and scientists. This would certainly explain why logicians do not go in much for empirical studies (Martin and Hjortland 2020, p. 17).
If deductive reasoning is not a basic building block of human cognition, then the ability to reason deductively has to be learned. Therefore, only those people who were trained in certain practices, such as those of mathematics, logic, science, and philosophy, would have the intuitive ability to reliably identify validity.
This hypothesis is in line with Williamson's (2011) response to the critique that experimental philosophers raise against reliance on intuitions in analytic philosophy. Experimental philosophers have argued that intuitions are not a reliable source of evidence for philosophical hypotheses because it has been experimentally observed that non-philosophers' intuitions about the thought "experiments" philosophers use to elicit intuitions vary across philosophically irrelevant factors, such as one's cultural background and the order of presentation of questions (Machery 2015). In response to this, Williamson (2011) argues that professional philosophers differ from nonphilosophers in that they deploy more reliable intuitions about thought experiments due to their professional training. "[P]hilosophical training substantially reduces the influence of the distorting factors, even short of total eradication" (Williamson 2011, p. 219). Whether or not Williamson is right about the reliability of philosophers in conducting thought experiments in general, his point certainly holds (almost trivially) with respect to the superior ability of logicians and philosophers of logic to recognize valid arguments due to the specific kind of training they receive. The findings from psychology of reasoning mentioned above show that laypeople's deductive-logical intuitions are unreliable, but surely those findings are not a reason to deem philosophers' and logicians' intuitions equally unreliable.
However, particularly with regard to the logical intuitions of logicians and philosophers of logic, there is a further problem. Their intuitions are not pre-, but post-theoretical, and therefore likely to be deeply influenced by their training and their own philosophical preferences, making their intuitive judgments biased. Machery raises a similar problem against Williamson's claim that philosophical training makes philosophers' intuitions more reliable. "Some evidence suggests that at times expertise even makes experts worse than laypeople because their theoretical commitments bias their judgments" (Machery 2015, p. 196). MacFarlane raises this problem with respect to the logical intuitions of logicians and philosophers of logic themselves: The dominant methodology for addressing them [questions about validity] involves frequent appeals to our "intuitions" about logical validity. I do not think it should surprise us that this methodology leads different investigators in different directions. For our intuitions about logical validity, such as they are, are largely the products of our logical educations (MacFarlane 2004, p. 2).
MacFarlane (2004, p. 2) refers to these intuitions as "indoctrination biases." It is not difficult to see that the intuitions of a dialetheist about true contradictions will diverge starkly from the intuitions of a classicist, for example. Affected by indoctrination biases, the intuitions of logicians and philosophers of logic should not guide theory choice in logic, on pain of circularity (at least if we assume abductivism or predictivism as the method for theory choice). What is left, then, are the intuitions of mathematicians, philosophers, and scientists who were not trained in logic. But are their intuitions reliable?
The pre-theoretical logical intuitions of specialists
So far I have been speaking of intuitions, but I did not address the question of what intuitions are. In philosophy, it is not unusual to see the word 'intuition' associated with some kind of supposedly innate or a priori knowledge, whose origins are obscure. As De Cruz (2015, p. 236) observes, both in psychology and philosophy "intuitions are regarded as assessments that come about without explicit reasoning and that seem to have some prima facie credibility to those who hold them." This aura of credibility is what makes intuitions philosophically relevant, and the absence of explicit reasoning-intuitions apparently just pop up in the mind-is what makes their origins obscure. Psychological investigation has shed light on this latter aspect. Following McCauley (2011), De Cruz distinguishes between maturational and practiced intuitions. Maturational intuitions arise early in development, typically in infancy or early childhood, from practices and contents that are mastered without formal instruction, "emerging through mundane interactions between a child and her social and physical environment" (De Cruz 2015, p. 239). Some examples are the linguistic intuitions that allow one to create sentences she has never heard before, or the intuitions that enable one to interpret facial expressions. Practiced intuitions, by contrast, emerge later and only after explicit training or instruction. "The most obvious illustrations are the sorts of good judgments that experts in any field can make in a snap, whether it is an engineer knowing what building material to use in a structure, a chess master knowing what move to make in order to avoid his or her opponent's trap, or a long-term commuter knowing how the fares work on his or her local transit system" (McCauley 2011, p. 5). Practiced intuitions are a manifestation of expertise; they originate from extensive experience in a specialized domain.
In line with Williamson (2011), De Cruz (2015) suggests that the intuitions philosophers rely on to assess thought experiments are of the practiced kind. By the same token, the post-theoretical intuitions of logicians and philosophers of logic can be viewed as practiced, whereas laypeople's intuitions about "what follows from what" are likely to be maturational. Just as an engineer is able to intuit, even before "doing the math," that a certain material is not adequate for a certain kind of structure, a trained logician may be able to intuit that a certain argument is valid (or invalid) even before formalizing and probing it. Since these logical intuitions are neither innate nor a priori, but acquired by training and hence post-theoretical, they are likely to be biased by the logician's training and preferences, as argued above.
Philosophers, scientists, and perhaps some mathematicians who were not trained in logic will not have practiced post-theoretical logical intuitions, but perhaps their experience with argumentation within their own disciplines may somehow "sharpen" their reasoning skills in a way that can produce intuitions about what logicians call deductive validity. This may be so if argumentation in their disciplines instantiates key aspects of what Dutilh Novaes (2021) calls "Prover-Skeptic dialogues." According to Dutilh Novaes, dialogical practices of this kind gave rise, in historical terms, to the creation of deductive logical theories. Thus, it may be that practitioners of this kind of dialogue could somehow acquire intuitions about deductive reasoning without needing explicit instruction in a particular logical theory.
Prover-Skeptic dialogues are a specialized kind of argumentative dialogue that takes place when interlocutors are jointly seeking an indefeasible chain of inferences in order to demonstrate whether a certain conclusion follows necessarily from certain premises. Dutilh Novaes's model of these dialogues involves two characters, Prover and Skeptic. Prover wants to establish a given conclusion Q from some given set of premises Γ. Skeptic, however, is not convinced at the beginning of the dialogue that Γ entails Q. Prover's objective is to convince Skeptic of this entailment. To this end, Prover starts the dialogue by asking Skeptic to endorse the premises in Γ. If Skeptic endorses them and does not present a global counterexample to the proposed entailment, Prover proceeds by putting forward a sequence of further statements that she claims follow necessarily from the premises Skeptic has endorsed. These are the intermediate inference steps that, Prover hopes, will eventually lead to Q. At each of these intermediate steps, Skeptic can ask for further clarification (if she thinks the inference is not sufficiently perspicuous), propose a counterexample to the inference, or simply endorse the inferential move. In response to Skeptic's objections, Prover may provide further clarification or modify the inference so as to avoid the counterexample. If both Prover and Skeptic are fully successful in playing their roles, at the end of the dialogue they will have produced a chain of valid inferences showing that Γ entails Q. The contribution of Skeptic in this is fundamental, since it is her disposition to contrive counterexamples and thus identify invalid inference steps that helps Prover produce a chain of inferences wherein each link is immune to counterexamples.
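As a schematic rendering only (this is my own sketch, not Dutilh Novaes's formal model), the dynamics of one round of such a dialogue can be written as a simple procedure:

```python
# Minimal sketch: one round of a Prover-Skeptic exchange. 'chain' is Prover's
# proposed sequence of intermediate claims ending in the conclusion Q;
# 'find_counterexample' stands in for Skeptic's ability to contrive objections.
def prover_skeptic_round(premises, chain, find_counterexample):
    endorsed = list(premises)                # Skeptic endorses the premises
    for claim in chain:                      # Prover asserts the next step
        cx = find_counterexample(endorsed, claim)
        if cx is not None:                   # Skeptic produces a counterexample
            return "blocked", claim, cx      # Prover must clarify or repair the step
        endorsed.append(claim)               # Skeptic endorses the inference
    return "accepted", endorsed[-1], None    # the whole chain survived scrutiny

# e.g., a Skeptic with no counterexamples to offer accepts any chain:
print(prover_skeptic_round(["p", "p -> q"], ["q"], lambda env, c: None))
# ('accepted', 'q', None)
```

What the sketch makes visible is that "accepted" only records that this Skeptic found no counterexample, which is exactly why, as argued below, such acceptance falls short of a proof of validity.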
Dutilh Novaes's Prover-Skeptic model of deductive dialogues, outlined above, presupposes a strictly controlled interplay of giving and asking for reasons. Dialogues fitting this model are not to be found everywhere, in daily life conversations. Even so, Prover-Skeptic dialogues are still less specialized than formal proofs in a logical theory, since participants in these dialogues are not supposed to use formal tools or explicit logical rules to defend their claims. The conversation about "what follows from what" in these dialogues may be totally informal and, most important for my purposes here, this conversation does not need to be informed by any logical theory. In this sense, there may be instances of Prover-Skeptic dialogues that are completely "pre-theoretical" with respect to logical theories, i.e., dialogues in which participants do not know logical theories at all and therefore cannot ground their assessments of validity on any logical theory. For example, think of a theoretical physicist trying to show that a certain prediction follows from a given physical theory. Surely, she will use some mathematics, but her inferential steps are unlikely to be explicitly informed by any logical theory. Insofar as other physicists (acting as "Skeptics") scrutinize her inferences, ask for further clarifications, and perhaps propose some counterexamples, there will be an instance of a Prover-Skeptic dialogue going on where no logical theory plays a role. Another example may be a theologian trying to prove the existence of God from some premises, and having her argument scrutinized by agnostic or atheist Skeptics. I submit that occasions like these are likely to give rise to pre-theoretical deductive logical intuitions among practitioners of these dialogues.
It is worth considering the psychological process through which Prover-Skeptic dialogues can give rise to deductive logical intuitions. According to the results from psychology of reasoning we saw above, when presented with a claim that Q follows from Γ, people are likely to evaluate it under the influence of belief bias. Thus, since Prover is the one claiming that Γ entails Q, under the influence of belief bias-or, more precisely, myside bias-she is less likely to find counterexamples to the entailment she herself is proposing (and to intermediate inference steps). By contrast, Skeptic is initially unconvinced that Γ entails Q, and therefore the influence of myside bias predisposes her to contrive counterexamples to 'Γ entails Q' (and to intermediate inference steps) more easily, if there are any. At the same time that myside bias makes Prover oblivious to problems in her argumentation, it opens Skeptic's eyes to failures in Prover's argumentation. Now it is easy to see that experience with this kind of dialogue is likely to produce intuitions about which inference steps are indefeasible, i.e., immune to counterexamples (at least in the informal settings where these dialogues take place). Over time, the practitioner of these dialogues learns that some inference steps are indefeasible because, in her experience, Skeptics never manage to contrive a counterexample to them. These intuitions will not be maturational but practiced, since their acquisition involves mastering a specialized dialogical argumentative practice.
In this way, scientists or philosophers who do not have formal training in logic can acquire practiced intuitive knowledge about which inferential steps are "good." Their intuitions are likely to be more neutral than those of logicians and philosophers of logic, since they do not come from training in specific logical theories. But can reasoners experienced in Prover-Skeptic dialogues be the reliable judges of the validity of vernacular arguments that Martin and Hjortland (2020, p. 17, quoted above) are looking for?
This does not seem to be the case. Insofar as logical intuitions do not come from a Platonic heaven, but rather originate from experience with Prover-Skeptic dialogues as I have been arguing, they cannot be more reliable than the dialogical practices that give rise to them. A comparison with other kinds of practiced intuitions illustrates this point. Consider the examples mentioned above from McCauley (2011), of an experienced commuter who is able to intuit the fare she will pay when going to every destination in her city, or an experienced engineer who can intuit the best material for a certain kind of structure. In both cases, if they really need absolutely certain information-because, say, the commuter wants to know the exact fare down to the last cent and the engineer is designing an aircraft-they would do better by calculating, rather than intuiting. In the commuter's case, she would have to consult the price list and the regulations of the transportation company and calculate the fare (or consult the company's app); in the engineer's case, she would have to calculate the resistance of the material, following the standard practices and theories of her field. After all, experience with these operations, information, and theories was what gave them their intuitions; but their intuitions are nothing more than educated guesses. When they need more than educated guesses, they need to effectively calculate. The same goes for the post-theoretical logical intuitions of logicians. A logician can intuit at a glance that a certain argument is valid; but in order to make sure that it is really valid, she has to formalize it and use a logical theory to prove its validity.
Experienced practitioners of Prover-Skeptic dialogues are in a similar position. In these dialogues, the main "move" concerning the notion of validity is the presentation of counterexamples to an inference, showing that it is defeasible. The fact that a certain kind of inference was never challenged by a counterexample in one's experience is what gives experienced practitioners of these dialogues the intuition that that kind of inference is valid. But, naturally, the fact that no one has ever come up with a counterexample does not imply that a counterexample does not exist. Thus, despite intuitions to the contrary, someone who really wants to probe an argument would do better by exposing it to the challenges of Skeptics. Here, again, intuitions are no more than educated guesses and therefore unreliable.
Even the scrutiny by Skeptics, however, will not conclusively show that the argument is valid, since, again, the fact that no Skeptic has come up with a counterexample so far does not imply that a counterexample does not exist. Recall that the dialogues that give rise to truly pre-theoretical logical intuitions proceed without the aid of formal tools, logical theories or proof techniques. Just as logicians have to make use of a logical theory to prove conclusively that a certain argument is valid (in a given logical system), full certainty about the validity of an argument accepted by all Skeptics would demand its formalization and proof in a logical system. After all, this is the point of having a logical theory. Logicians create logical theories to, among other reasons, prove that certain kinds of arguments are valid under certain conditions. In the vernacular, we have neither clarity about what makes an argument valid nor fixed and precise definitions of implication, negation, disjunction, etc., on which we could rely to prove that an argument is valid. A logical theory defines these concepts precisely, and then proofs of validity are made possible. Of course, such proofs do not show that the given argument is absolutely valid; demonstrations of validity are always relative to the logical theory where they are made, and depend on the definitions of concepts such as implication, negation, and validity, whether truth-value gaps or truth-value gluts are accepted, and so on. This brings us back to the debate about theory choice in logic. What is clear (or so I hope) is that reliance on pre-theoretical intuitions will not help here, since they are less reliable than the theories in competition (in the sense that intuitions do not allow for proofs of validity).
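To illustrate how proofs of validity depend on a theory's definitions, here is a minimal sketch (my own illustration): the very same rule, Modus Ponens, is provably valid under classical two-valued semantics but loses validity in Priest's LP, where the middle value is also designated.

```python
# Minimal sketch: validity is relative to a semantics. An argument is valid
# when every valuation making the premises designated also makes the
# conclusion designated; changing the values or designation changes verdicts.
from itertools import product

def implies(a, b):
    return max(1 - a, b)   # material conditional as not-A or B

def modus_ponens_valid(values, designated):
    for a, b in product(values, repeat=2):
        if a in designated and implies(a, b) in designated and b not in designated:
            return False, (a, b)             # countermodel found
    return True, None

print(modus_ponens_valid([0, 1], {1}))           # (True, None): classical logic
print(modus_ponens_valid([0, 0.5, 1], {0.5, 1})) # (False, (0.5, 0)): Priest's LP
```

Relative to the classical definitions the rule is demonstrably valid; relative to LP's definitions it demonstrably is not, which is the sense in which demonstrations of validity are always theory-relative.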
In the next section I argue that logical theories do not simply capture pre-theoretical intuitions about validity; rather, they improve on these intuitions so as to prove that certain kinds of inference are valid under certain conditions and explicate why they are so.
Carnapian explication and ameliorative logical theorizing
The discussion of the previous sections casts doubt on abductivism and predictivism as methods for theory choice in logic. Both accounts presuppose the existence of some data in the form of pre-theoretical logical intuitions against which logical theories could be compared. I have been arguing that there are no pre-theoretical logical intuitions that could serve this purpose. Laypeople's intuitions about "what follows from what" are non-monotonic. Although they could be seen as favoring non-monotonic logics, it does not seem adequate to rule out deductive logics on the basis that they do not correspond to laypeople's intuitions. The intuitions of logicians and philosophers of logic are a product of indoctrination and, as such, are biased. Truly pre-theoretical logical-deductive intuitions may be found in specialists such as philosophers, scientists, and mathematicians (without extensive training in logic) who are experienced in some sort of Prover-Skeptic dialogues. But their intuitive judgments of validity are unlikely to be more reliable than proofs of validity in a logical system. All things considered, the invocation of intuitions as a reason to rule out a logical theory seems unwarranted.
To be sure, if anti-exceptionalism about logic is right, logic really could not be different from other sciences in this respect. The history of science is full of episodes where intuitive judgments have been shown to be mistaken. Some examples: the intuition that the Earth is flat; the intuition that continents are static; the intuition that light should behave either as a wave or as particles. Scientific theories are not responsible to pre-theoretical intuitions or common sense; rather, they usually come up with findings and theories that confront our intuitions radically. The same goes for mathematics. Hahn (1980) presents a number of intuitive notions that were proven wrong in geometry. After showing how several intuitive ideas about the behavior of curves turned out to be mistaken, Hahn concludes: Because intuition turned out to be deceptive in so many instances, and because propositions that had been accounted true by intuition were repeatedly proved false by logic, mathematicians became more and more sceptical of the validity of intuition. They learned that it is unsafe to accept any mathematical proposition, much less to base any mathematical discipline on intuitive convictions (Hahn 1980, p. 93).
Why should logic be different in this regard? Both the abductivist and the predictivist, insofar as they think of intuitions as the data logical theories have to account for, are exceptionalist in this sense. But if intuitions are not the data, what is the data? Is there any data that logicians should account for when answering the question 'what is validity?' Haslanger (2012, p. 367) identifies "three common ways to answer 'What is X?' questions: conceptual, descriptive, and ameliorative." Only the first two aim primarily at giving an account of some "data." A conceptual account of X aims at revealing our concept of X "and looks to a priori methods such as introspection for an answer. Taking into account intuitions about cases and principles, one hopes eventually to reach a reflective equilibrium" (Haslanger 2012, p. 367). Intuitions, in this case, constitute the relevant data. A descriptive account of X aims at giving an accurate account of the phenomenon that the concept X is supposed to refer to. The relevant data, in this case, depend on the nature of the phenomenon in question; e.g., if the phenomenon is physical, then empirical data will be relevant. Abductivists and predictivists about logic seem to approach the question 'what is validity?' either conceptually or descriptively. If logical intuitions themselves are the phenomenon they intend to account for, their approach is conceptual; if intuitions are seen as conveying information about an underlying phenomenon (be it validity abstractly conceived or the most general aspects of the world, as for Williamson), then their approach is descriptive. Either way, they think of logical theories in representational terms.
The ameliorative approach to what-is-X questions is not representational. Rather, it starts by asking "[w]hat is the point of having the concept in question … What concept (if any) would do the work best?" (Haslanger 2012, p. 367). In this approach, the objectives of conceptual analysis are, first, to identify what purposes the concept in question is supposed to serve and, second, to improve on the available concept, or to replace it by a new one, so that it can serve those purposes better. One example is what Haslanger herself does with the concepts of race and gender. Another example, I submit, is what logicians do with the concept of validity.
In the previous section we have seen that pre-theoretical logical intuitions are engendered by experience acquired in some sort of Prover-Skeptic dialogues. These intuitions are best conceived of as educated guesses. Although we may intuit that a certain argument is valid, we cannot be sure that it is really so because the fact that no one has found a counterexample to it may be our fault rather than a demonstration that there are no counterexamples. The development of a logical theory is a way of going deeper in the investigation of validity; the use of formal techniques will ultimately reveal a counterexample or allow us to prove that, under certain assumptions, there is no counterexample. This is the point of having a rigorous account of the concept of validity such as those provided by logical theories.
In this sense, logical theorizing is primarily ameliorative. Logicians do not simply want to faithfully capture pre-theoretical intuitions about validity, but rather to provide a better understanding of what validity is-one which is superior to pre-theoretical notions because it can prove that certain principles and rules of inference are valid under certain assumptions, and also because it reveals what needs to be assumed (which logical constants, how they are to be defined, etc.) if certain principles and rules are to be valid.
The idea that logical theorizing is ameliorative is not new. It can be found already in Carnap (1950), in his conception of logical and scientific theorizing as explicative. On Carnap's view, the explication of a pre-scientific concept does not aim at faithfully capturing intuitive notions about it; rather, "[t]he task of explication consists in transforming a given more or less inexact concept into an exact one or, rather, in replacing the first by the second" (Carnap 1950, p. 3). Carnap calls the vernacular notion in need of explication the explicandum, and the more exact theoretical concept that replaces it the explicatum. In logical theorizing, the explicandum is the pre-theoretical intuitive notion of "what follows from what," and the explicatum is a logical theory.
In the Carnapian approach, a mismatch between the pre-theoretical intuitive notion and the theory is exactly what is expected. If there is ambiguity in the explicandum, then in order to make it exact some of the meanings associated with it have to go. Therefore, "an explicatum that aspires to be exact will necessarily misrepresent the inexact explicandum" (Dutilh Novaes and Reck 2017, p. 202). It is not difficult to see that there is plenty of ambiguity in the intuitive notion of validity. Smith (2011, p. 27) makes this point: If you think that there is [an exact pre-theoretical 'intuitive' notion of valid consequence], start asking yourself questions like this. Is the intuitive notion of consequence constrained by considerations of relevance?-do ex falso quodlibet inferences commit a fallacy of relevance? When can you suppress necessarily true premisses and still have an inference which is intuitively valid? What about the inference 'The cup contains some water; so it contains some H₂O molecules'? That necessarily preserves truth (on Kripkean assumptions): but is it valid in the intuitive sense?-if not, just why not?
If the pre-theoretical notion of "what follows from what" allows for so many different precisifications, logical theories should not be judged according to how much of pre-theoretical intuition they account for, as if similarity to intuitions constituted evidence for a logical theory. According to Carnap, similarity is required only to the extent that it justifies the claim that the explicatum is an explication of the explicandum. In Dutilh Novaes and Reck's (2017, p. 203) view, "the issue of similarity in explication (and in formalization more generally) is partly an issue of intentionality, an issue of aboutness." That is, the final product of logical theorizing may be very far away from the intuitive notions which motivated it, provided that it is still about "what follows from what" in a suitable sense. Smith's (2011) general point is that a rigorous theoretical notion can be said to capture faithfully an informal vague notion only if some elucidation, sufficiently precise even if still informal, of the informal notion has already been provided. This initial refinement of the intuitive notion corresponds to Carnap's requirement that an explication should start with a preliminary narrowing of the meaning of the explicandum "in order to prevent the discussion of the problem from becoming entirely futile" (Carnap 1950, p. 4). Without this preliminary clarification, theorists risk talking past each other, since the polysemy of the explicandum may allow for quite different approaches.
An example of such preliminary narrowing is Tarski's condition of material adequacy for logical consequence. Tarski (2002, p. 176) notices that "the concept of following [in everyday language] is not distinguished from other concepts of everyday language by a clearer content or more precisely delimited denotation, the way it is used is unstable," and therefore the task of capturing and reconciling all the murky, sometimes contradictory intuitions connected with that concept has to be acknowledged a priori as unrealizable, and one has to reconcile oneself in advance to the fact that every precise definition of the concept under consideration will to a greater or lesser degree bear the mark of arbitrariness (Tarski 2002, p. 176).
That said, Tarski assumes as "the point of departure" for his analysis of the concept of logical consequence "certain considerations of an intuitive nature" (Tarski 2002, p. 183), namely, his conditions of material adequacy for accounts of logical consequence: necessary truth-preservation and validity-preserving schematic substitution. In doing so, he selects two intuitions about validity among the possibilities contained in the broader intuitive notion of validity. But a different preliminary clarification could select different intuitions as a point of departure. An example is the criterion of containment, according to which in a valid inference, the conclusion must be causally or epistemically contained in the premises. Precisifications of the notion of validity that include the criterion of containment were common in medieval discussions on consequence and are related to the development of relevance logics in contemporary times (Dutilh Novaes 2020b).
The claim that the pre-theoretical notion of validity can be specified in different manners has been used to defend pluralism about logic (e.g., Beall and Restall 2006;da Costa and Arenhart 2018). However, this is not my point here. My point is just that, since the pre-theoretical notion of validity allows for different precisifications, any argument to the effect that one, some, or all of these precisifications are correct with respect to the pre-theoretical notion is unconvincing. The pre-theoretical notion does not provide evidence for logical theories. It is only an inspiration, the starting point, but not a criterion for correction or theory choice. As Carnap remarks, given that in a problem of explication the datum, viz., the explicandum, is not given in exact terms … it follows that, if a solution for a problem of explication is proposed, we cannot decide in an exact way whether it [the explicatum] is right or wrong [about the explicandum]. Strictly speaking, the question whether the solution is right or wrong makes no good sense because there is no clear-cut answer. The question should rather be whether the proposed solution is satisfactory, whether it is more satisfactory than another one, and the like (Carnap 1950, p. 4-5).
Satisfaction has to do with one's goals. A certain theoretical approach may be satisfactory with regard to some goals and unsatisfactory with regard to others. According to Carnap, a goal shared by any scientific explication is fruitfulness, which he understands as the potential of the explicatum to allow for the establishment of universal laws and connections between the intended phenomenon and other phenomena, a potential that the explicandum may not show to the same degree.
In other words, Carnap's view seems to be that an explication is useful or fruitful when it delivers 'results' that could not be delivered otherwise (or with much more difficulty), i.e. with the explicandum alone. What this suggests is a conception of explication as a method for discovery … The goal is to produce new knowledge about the phenomena to which the explicandum pertains (Dutilh Novaes and Reck 2017, p. 206).
This, again, is a reason for accepting a larger degree of mismatch between explicandum and explicatum. The Carnapian view of logical theorizing is diametrically opposed to abductivism and predictivism in this regard; whereas the latter seek conformity between vernacular and theoretical concepts, the former wants fruitful theoretical concepts even at the cost of a mismatch with the vernacular. The explicatum cannot give rise to new knowledge if it remains faithful to the explicandum. "In this way, explication reveals itself as a cognitive tool leading to discoveries and new insights" (Dutilh Novaes and Reck 2017, p. 206). This happens regularly in logical theorizing, when logical theories reveal links between definitions, principles, and theorems that were not previously known nor knowable by means of pre-theoretical notions only.
Exactness and fruitfulness are not the only goals of explication. According to Brun (2016), in Carnap's later presentation of his conception of explication in Carnap (1963), Carnap "takes a decidedly more pragmatic perspective on explication." For the later Carnap, "choosing an adequate explicatum is a practical decision which has to be taken in view of the specific problems the explicatum is expected to solve and in view of the role it is expected to play in the target theory" (Brun 2016, p. 1225).
Here we can draw a closer connection between Carnap's explication and Haslanger's ameliorative approach. For Haslanger, as for the later Carnap, an explication is judged more satisfactory than others according to the ends we want the concept to serve. There is no absolutely correct answer when it comes to choosing among different explications; only when a goal is provided can we select an explication as the most satisfactory with respect to that goal. In the next section, I argue that the satisfaction of one's investigative goals is a more realistic criterion for theory choice in logic.
Goals and theory choice
For abductivists and predictivists, logicians concerned with philosophical logic (in the sense defined by Williamson 2017) want to capture the pre-theoretical notion of validity. Philosophical logical theorizing, however, is conducted with a variety of philosophical aims in mind. For example, one may want to identify which rules of inference secure necessary truth preservation when the notion of implication is relevant; or when the modal notions of necessity and possibility are taken into account; or when temporal aspects are considered; or when premises are contradictory; even if some of these aspects are not salient features of an intuitive notion of validity. There are also extra-logical aims. For example, moved by ontological concerns, one may be interested in laying down logical principles and definitions which secure necessary truth preservation when empty names are allowed in, or when only constructible objects are allowed in.
It is uncontroversial that logicians engage in logical investigation with these goals in mind and develop logical systems to meet them. Even so, the discussion about theory choice in logic is usually seen as being about which of these various logical systems are correct or true. But correct or true about what? Is there any fact of the matter as to, say, whether singular terms must denote? The discussion above should have shown that intuitions about these matters should not be relied on. Free logics can be seen as laying down the conditions under which inferences involving empty names retain the property of necessary truth preservation, rather than describing real facts or intuitions about singular terms. The same goes for other logics; modal logics investigate under which conditions inferences involving modalities retain the property of necessary truth preservation, and paraconsistent logics investigate under which conditions inferences involving contradictions do not trivialize. Putting aside Platonist accounts of logical truth (which anyway seem to be incompatible with anti-exceptionalism), and insofar as facts about vernacular argumentation do not constitute evidence for logical theories, as I have been arguing, the decisive factor for theory choice in logic seems to be one's investigative goals. The logician selects or develops the theory that best fits her investigative aims.
I submit that this is what logicians and philosophers of logic really do in their actual practice, though sometimes under the guise of "appealing to intuitions." The following passage by Resnik (2004, p. 181) provides an illustration of this point: For a case where intuitions play a major role, take the common view among logicians that no formalism should count 'There are at least two individuals' as a logical truth. Some logicians base this upon the normative intuition that our inferential practice should not in itself decide questions of existence. While others appeal to the metaphysical intuition that there could be a universe containing fewer than two individuals, and some may appeal to both intuitions.
Is this really a matter of intuitions? That there are at least two individuals is intuitively true for most people, as far as I am concerned. What is counter-intuitive is the possibility that there exist fewer than two individuals; only philosophers entertain this possibility seriously, after years of philosophical training. At any rate, it is reasonable not to count 'There are at least two individuals' as a logical truth, but the reason is primarily methodological. Logicians do not want to address such a specific ontological question-how many objects exist?-qua logicians, and therefore it is better not to make such a proposition a consequence of logical theories.
One aspect of Shapiro's (2006) and Cook's (2010) account of logical theories is in line with my claim that the logic one chooses depends on one's investigative aims. As Cook (2010, p. 500) puts it, "[d]ifferent logics, viewed as models of various linguistic phenomenon, are correct relative to different theoretical goals." In contrast with the Carnapian view I am defending here, though, Shapiro and Cook see logical theories as modeling a previously given phenomenon. In Shapiro's (2006, p. 49) words, "a formal language is a mathematical model of a natural language." Here, though, logical theories do not model a previously given phenomenon, but improve pre-theoretical notions, and therefore necessarily depart from them. When a logician is investigating vagueness, for example, she is not (or should not be) concerned with providing a faithful account of how vagueness is treated by speakers in everyday situations, but rather with providing a reliable method of making inferences in contexts where vagueness is present.
If this is so, the relevant data for theory choice are not facts about pre-theoretical notions, but facts about the very logical theories in dispute. For example, if one wants to avoid commitment to the existence of non-constructible objects in mathematics, a logic that allows proofs by reductio ad absurdum does not meet her purpose. The relevant data here, then, is whether or not a logical theory admits such proofs. More generally, the relevant data for theory choice include theorems and meta-theorems that show how the system in question works, which principles and rules of inference fail or hold in the system, how the system can be integrated with other theories, such as mathematical theories and theories of truth, and the like. These are the data that inform the choices of logicians, mathematicians and philosophers, according to their investigative aims. In addition, data about the intended context of application and about what features of a logical theory best serve the intended context are also relevant.
Notice that, in the Carnapian view I am defending here, logics are investigative tools used by specialists with specific purposes in mind. Therefore, the selection of a logical theory among many to fulfill certain investigative goals never has (or should not have) the effect of imposing a way of reasoning as the correct one for all purposes, in all situations. The choice is always instrumental, to fulfill certain investigative purposes in specific contexts. 7 For example, the intuitionistic mathematician does not choose (or should not choose) an intuitionistic logic because she thinks that it is the correct logic for every situation, but just because she wants to make sure that, when she is developing intuitionistic mathematical theories, she will not unnoticeably make claims that could commit her to non-constructible objects.
This instrumental use of logics seems to lead to a pluralist stance. But this is not necessarily so. The debate on pluralism versus monism is usually understood in representational terms. As a rule, pluralism is conceived of as either the view that there are multiple genuine representations of one and the same phenomenon (e.g., substantial logical pluralism as defined in Cook 2010) or, conversely, the view that there are multiple phenomena to be represented by logical theories (e.g., Beall and Restall's 2006 logical pluralism). The monist view opposite to these two kinds of pluralism is that there is only one logic that correctly codifies the unique relation of logical consequence in natural language. Either way, both the monist and the pluralist agree that a logical theory aims at representing some previously existing phenomenon or phenomena. 8 My suggestion here is that logical theories are not representational; they lay down definitions, axioms, and rules of inference with the purpose of securing necessary truth preservation under certain conditions. If these conditions are restrictive-for example, if one wants to restrict the intended domain to constructible objects only-then various logical theories will be needed, each one satisfactory with regard to the specific goals that imposed such restrictions. However, the monist may aim at providing a logical theory that accounts for valid inferences under no restrictions, that is, inferences that are universally valid no matter what. In this case, the goal of the investigation would be exactly the identification of these universally valid forms of inference. Insofar as Priest (2006) claims that deductive validity is necessary truth preservation in all situations, he can be seen as one pursuing such a universal logical theory. Williamson (2017) is another case in point, given his claim that logical laws are unrestricted generalizations, true of absolutely everything. Priest and Williamson, both monists and both taking logical theories to be representational, think that the "one true logic" should include only universally true laws. In the current non-representational account, there is no matter of fact about whether there are universally true logical laws. Here the cogency of monism becomes a technical matter: is it possible to provide a logical theory that secures necessary truth preservation in all situations? If the monist can survive the challenge of logical nihilism (Cotnoir 2018) and provide such a logic, even if a very weak one, she can call this the most satisfactory logic under the assumption that validity is truth-preservation in all situations.
Conclusion
We have seen that pre-theoretical logical intuitions cannot provide the kind of data that could be relevant to theory choice in logic. Contrary to what some philosophers claim, competence in deductive reasoning is not a widespread feature of human rationality. Untrained human rationality is non-monotonic, but this should not be seen as a reason to reject deductive logics as wrongheaded. Deductive logical intuitions must be acquired by training, be it in a logical theory or in some sort of Prover-Skeptic dialogues. In the first case, logical intuitions are likely to be reliable, but they should not guide theory choice on pain of biased choices. In the second case, the intuitions acquired by means of experience in Prover-Skeptic dialogues are not biased by any specific logical theory, but, on the other hand, are not reliable. What makes an experienced practitioner of such dialogues have the intuition that a certain inferential step is valid is the absence of counterexamples to that inferential step in her experience. This does not imply, however, that that inferential step is really valid, since the absence of counterexamples in her experience may be due to the fact that the Skeptics she has met have failed to contrive one. These considerations show that neither abductivism nor predictivism is a viable account of theory choice in logic, at least insofar as they take adequacy to the data as a desideratum and take pre-theoretical logical intuitions to be the relevant data.
In order to achieve full certainty about the validity of a certain kind of inference (under certain conditions), one has to investigate it by using regular logical techniques, i.e., by developing a logical theory. This logical theory, however, will not simply capture the pre-theoretical intuitions that motivated it; rather, it will represent an improvement over those intuitions. Relying on the theory, one can now be sure that her pre-theoretical intuitions were right, or else correct them. In this sense, logical theorizing is ameliorative, rather than descriptive. A general goal of logical theorizing is to prove that certain kinds of inference are valid under certain conditions and to explicate why they are so.
In light of these observations, and putting aside issues concerning the metaphysics of logic, I submit that there is no pre-theoretical data logical theories should account for. Logical theories are not representational, but ameliorative. True enough, amelioration and representation are not mutually exclusive; the natural sciences are both ameliorative (they improve our thoughts and practices) and representational. But my point here is that logic is not like science in this regard: it is ameliorative (improves our reasoning techniques) without being committed to a description of our pre-logical ways of reasoning. This does not turn logic into an exception among other human epistemic undertakings. Logic can still be seen as similar to epistemic activities such as engineering or technological development, in that both logic and engineering aim at improving human practices in response to certain needs. The choice of a logical theory, just as the choice of a technical solution, is (or should be) guided by one's goals and informed by data about the very logical theories or technical solutions in dispute. Just as one chooses the technical solution that best suits her practical aims, one chooses the logical theory that best suits her scientific, mathematical, logical, or philosophical aims.
"Philosophy"
] |
Quantum interference and imaging using intense laser fields
The interference of matter waves is one of the intriguing features of quantum mechanics that has impressed researchers and laymen since it was first suggested almost a century ago. Nowadays, attosecond science tools allow us to utilize it in order to extract valuable information from electron wave packets. Intense laser fields are routinely employed to create electron wave packets and control their motion with sub-femtosecond and sub-nanometer precision. In this perspective article, we discuss some of the peculiarities of intense light-matter interaction. We review some of the most important techniques used in attosecond imaging, namely photoelectron holography and laser-induced electron diffraction. We attempt to ask and answer a few questions that do not get asked very often. For example, if we are interested in position-space information, why are measurements carried out in momentum space? How can photoelectron spectra be accurately retrieved from the numerical solution of the time-dependent Schrödinger equation? And what causes the different coherence properties of high-harmonic generation and above-threshold ionization?
Introduction
Scientific progress has been fueled by the dream to visualize objects or phenomena that are either too small or too fast to be directly perceived by our senses. For example, femtosecond laser pulses offer the opportunity to freeze femtosecond dynamics. This capability has provided unique insights into ultrafast processes such as chemical reactions [1]. Moreover, attosecond electron dynamics have been resolved with the help of short-wavelength attosecond light sources [2,3,4]. Alternatively, the time resolution can be pushed beyond the femtosecond duration of a laser pulse by using interferometric techniques, e.g. [5,6,7].
This article is a follow-up to the Quantum Battles in Attoscience 2020 virtual workshop and focuses on quantum interference and imaging of molecular structure by means of photoelectrons. The pivotal idea is to image atomic-scale structures by utilizing electrons to overcome the ∼ 1 µm diffraction limit of infrared laser light. This is achieved by exploiting the high intensity of the laser pulses to create coherent electron wave packets (EWPs) that are driven by the laser field. With their short de Broglie wavelength, electrons allow one to push the spatial resolution to ångström scales, while maintaining the femtosecond (or even attosecond) time resolution dictated by the highly non-linear light-matter interaction. This concept is depicted in Fig. 1. Here, the laser-created electron wave packet diffracts upon recollision [8] with the parent ion, encoding information about the scattering potential.
The unique combination of ultrahigh spatial and temporal resolution makes intense laser pulses extremely attractive for time-resolved imaging. In the present article, we shall focus on techniques that rely on the direct detection of the photoelectrons. The capabilities of this approach can be extended by exploiting high-harmonic generation (HHG), which allows one to transfer the favorable properties of the laser-driven electron wave packet into a beam of high-energy photons. The high-harmonic beam can be directly analyzed or utilized in secondary experiments; both approaches have been extremely fruitful but are beyond the scope of the present paper.

Fig. 1. Quantum mechanical illustration of the creation and rescattering of a photoelectron wave packet in an intense infrared laser field. Calculated using QProp [10]. Figure and caption reproduced from Ref. [11].
The underlying process for the creation of electron wave packets using infrared light is strong-field ionization. Its hallmark feature is the appearance of a series of so-called above-threshold ionization (ATI) peaks in the photoelectron energy spectrum, spaced by the photon energy [9]. An intuitive explanation for these peaks is given in the photon picture: in order to overcome the ionization potential I_P, an atom may need to absorb n photons. Because of energy conservation, this leads to a discrete photoelectron energy, E_0 = nℏω − I_P. However, if the field contains sufficiently many photons, the atom may also absorb m (m = 0, 1, 2, 3, 4, ...) excess photons, leading to a series of discrete energies, E_m = (n + m)ℏω − I_P. The companion process of HHG can be interpreted in an analogous manner. Here, selection rules require the number of absorbed photons to be odd, such that a comb of only odd photon energies at E_m = mℏω (m = 1, 3, 5, ...) is observed.
The ATI peaks can also be understood in the wave picture as a result of quantum interference: at each field maximum an EWP is created and driven by the laser field. All wave packets will eventually interfere on the photoelectron detector. Because of the field periodicity T = 2π/ω, and because time and energy are conjugate quantities, this interference can be observed in the photoelectron energy spectrum as a modulation with periodicity ℏω. Incidentally, the HHG peaks can be interpreted as a result of quantum interference as well. In this case, the EWP is driven back to the parent ion and recombination leads to the emission of a photon burst [8]. Since equivalent recollisions occur twice per laser cycle, interference of the photon bursts, spaced in time by T/2, leads to a modulation of 2ℏω in the high-harmonic energy spectrum. Close inspection of the electron dynamics in an intense laser field shows that there exist, in fact, two instances in each laser cycle which yield the same electron drift momentum. This leads to another interference feature, the so-called intra-cycle interference, whose periodicity varies throughout the photoelectron spectrum [12,13]. Importantly, the periodicity of the intra-cycle interference fringes is always larger than ℏω, because the difference of the responsible ionization times is always smaller than T. Hence, the intra-cycle interferences create a superstructure on the ATI comb in the energy domain. This raises the questions: if the ATI comb corresponds to photons, what is the corresponding quantity of intra-cycle interferences? Do photons exist on sub-cycle time scales?
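Both mechanisms can be made concrete with a minimal numerical sketch (Python, atomic units with ℏ = 1; all parameters are illustrative and not tied to any experiment cited here): coherently summing unit-amplitude ionization bursts spaced by the full period T reproduces the ATI comb with spacing ω, while an intra-cycle partner at a delay Δt < T imprints the slower superstructure.

```python
import numpy as np

# Inter- and intra-cycle interference of ionization bursts (a.u., hbar = 1).
omega = 0.057                        # photon energy, ~800 nm
T = 2 * np.pi / omega                # optical period
E = np.linspace(0.01, 2.0, 8000)     # photoelectron energy grid

n_cycles = 8
t_inter = np.arange(n_cycles) * T    # one ionization burst per cycle
dt_intra = 0.3 * T                   # intra-cycle partner (delay < T)

# Coherent sum of unit-amplitude bursts; each accumulates a phase E * t_birth.
A_inter = np.exp(1j * np.outer(E, t_inter)).sum(axis=1)
S_inter = np.abs(A_inter) ** 2                               # bare ATI comb
S_intra = np.abs(A_inter * (1 + np.exp(1j * E * dt_intra))) ** 2

# The inter-cycle comb peaks at integer multiples of omega ...
is_peak = (S_inter[1:-1] > S_inter[:-2]) & (S_inter[1:-1] > S_inter[2:])
main = S_inter[1:-1] > 0.5 * S_inter.max()
peaks = E[1:-1][is_peak & main]
print("ATI comb spacing:", np.round(np.diff(peaks)[:3], 4), "| omega =", omega)
# ... while the intra-cycle factor 2 + 2*cos(E * dt_intra) modulates the comb
# with the slower period 2*pi/dt_intra > omega: the superstructure.
```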
In the past two decades, ATI experiments have progressed from 1D energy-domain to 3D momentum-space measurements. This has been a fruitful path, since ATI is rich in interference features in the spatial domain as well. Analogously to the arguments above, these spatial interference features will manifest in the Fourier domain, i.e., in momentum space. Before we examine such features in detail, we shall discuss in chapter 2 why measurements are, in fact, conducted in momentum space rather than position space; and, equally importantly, review some techniques used to carry out momentum-space measurements, and results obtained therewith.
While measurements are performed in momentum space, our position-space minds desire position-space results. In order to retrieve a position-space image from a momentum-space measurement via Fourier transform, one requires the phase of the momentum wave function, which cannot be directly measured. In chapter 3, we address the question of how phases can be measured in the lab through quantum interference, and discuss an example of reconstructing bound wave functions by holographic interference. By interfering an unknown signal wave with a (known) reference wave, a hologram is created. The concept of holography has been applied to strong-field photoelectron spectroscopy: electron trajectories that scatter from the nucleus (signal) may interfere with trajectories that do not scatter (reference). The resulting hologram, i.e., the interference pattern in the photoelectron momentum distribution, may encode information on the scattering potential at the time of rescattering [14,15,16,17,18,19,20].
The process of rescattering itself alters quantum interference and encodes structural information of the target. In chapter 4, we discuss laser-induced electron diffraction (LIED), where an electron wave packet scatters from a molecule to create a diffraction pattern. The resulting diffraction pattern can be described as a superposition of the signals from several point scatterers separated by the internuclear distance R. If the electron wavelength is sufficiently short, the internuclear distance may be retrieved from the diffraction pattern [21]. Moreover, exploiting the intrinsic delay between ionization and rescattering, LIED can be seen as a pump-probe experiment, which has been used to probe nuclear motion not only in diatomic [22] but also polyatomic molecules [23,24]. Finally, electron diffraction without rescattering can probe electronic structure [21] and dynamics [25].
For the meaningful interpretation of experiments, it is often essential that experiment and theory go hand in hand. The gold standard in the field of quantum dynamics is the time-dependent Schrödinger equation (TDSE), ideally in all three dimensions [26]. Various implementations of the TDSE have been realized, specifically for the problem of intense light-matter interactions, see, e.g., Ref. [27]. However, a time propagation |ψ(t_f)⟩ = U(t_i, t_f)|ψ(t_i)⟩ of the multi-dimensional wave function is only half the battle. The other half is retrieving the physical observable of interest from the final wave function. Typically, that is the modulus squared of the unbound part of the momentum-space wave function, i.e. |φ_free(t_f)|², representing the photoelectron momentum spectrum. In chapter 5, we illuminate this particular problem, which is imperative for the comparison of experimental and numerical results.
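As a toy preview of this retrieval problem, the following sketch (1D, synthetic wave function, all parameters invented) removes the bound part of a final wave function by projecting onto a toy ground state and Fourier-transforms the remainder; a realistic calculation must project out all bound states or use one of the dedicated methods discussed in chapter 5.

```python
import numpy as np

# Toy retrieval of a photoelectron spectrum from a final wave function (1D).
x = np.linspace(-100, 100, 2048)
dx = x[1] - x[0]

bound = np.exp(-np.abs(x))                         # toy bound state
bound /= np.sqrt(np.sum(np.abs(bound) ** 2) * dx)
free = np.exp(-(x - 40.0) ** 2 / 50.0 + 1.2j * x)  # outgoing packet, <p> ~ 1.2
psi_final = 0.8 * bound + 0.3 * free               # synthetic end of a "TDSE run"

# Project out the bound part, then Fourier transform the unbound remainder.
c0 = np.sum(np.conj(bound) * psi_final) * dx
psi_free = psi_final - c0 * bound
phi_free = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(psi_free))) * dx
p = np.fft.fftshift(2 * np.pi * np.fft.fftfreq(x.size, d=dx))

spectrum = np.abs(phi_free) ** 2                   # photoelectron momentum spectrum
print("spectrum peaks at p =", round(p[np.argmax(spectrum)], 2), "a.u.")
```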
In chapter 6, we shall discuss the fundamental limitation of all ultrafast imaging methods, namely decoherence. This occurs, for example, in complex physical systems where coupling to the environment can lead to the loss of coherence. It is an unspoken necessity of any attempt to resolve quantum dynamics that the dynamics be coherent with the exciting laser field. It is insightful to examine the coherence properties of laser-driven processes. For example, in ATI, the electron wave packets emitted by different atoms do not interfere with each other; i.e., interference takes place on the single-atom level. In HHG, on the other hand, all atoms in the focal volume radiate coherently. What is the underlying reason for this fundamental difference between these closely related effects?
The final chapter is dedicated to recent efforts to expand strong-field physics and related imaging techniques to the condensed phase, particularly quantum materials. These systems can exhibit pronounced coherence effects, and decoherence plays an important role. One key feature of solids, as compared to gases, is the periodicity of the binding potential. This has far-reaching consequences, leading to new quantum mechanical effects to be investigated with the ultrafast imaging toolbox.
Atomic units (a.u.; ℏ = 1, 4πε_0 = 1, e = 1, and m_e = 1) are used throughout the paper, unless otherwise stated.

The position of a bound electron in an atom (e.g. atomic hydrogen) is known to be in the vicinity of its ionic core. Thus, the average momentum of the electron relative to the ionic core must be zero, because otherwise the electron would move away from the ionic core and could not be bound. Therefore, the expectation value of the momentum is zero, ⟨p⟩ = 0. However, the electron possesses non-vanishing kinetic energy, i.e., ⟨p²⟩ > 0.
Upon ionization (e.g. because the atom is irradiated with an intense laser pulse) the electron is ejected from the atom, and the liberated electron's position coordinate relative to its parent ion changes as a function of time. The liberated electron can be modeled by a wave packet that evolves with time. Since the electron wave packet carries valuable information about the physical system and its dynamics, its characterization is at the very heart of many approaches to study light-matter interaction. This gives rise to an important question: how can the wave function of a freely propagating electron be characterized?
Before we answer this question, it is important to be aware that position and momentum are conjugate variables and that the complex-valued wave functions in position space and momentum space are linked by a Fourier transformation. This implies that a given electronic state can be fully expressed using only position-space or only momentum-space coordinates. Despite this equivalence of position and momentum space, there is a fundamental difference between them when it comes to a freely propagating electron: the momentum of a freely propagating electron is conserved, but the position of this electron changes as a function of time. Although this appears to be trivial from a theoretical perspective, it has far-reaching consequences regarding the measurement of freely propagating electrons in real experiments.
Let Φ(p, t) be the complex-valued electron wave function in momentum space, which depends on the time t and the three-dimensional momentum p. In full analogy, Ψ(x, t) is the complex-valued electron wave function in position space. We exemplify the relationship of position- and momentum-space wave functions for a free electron in Fig. 2: |Ψ(x, t)|² is time-dependent and evolves on ultrafast time scales, whereas |Φ(p, t)|² is time-independent. The position-space distribution therefore has to be characterized as a function of time. In contrast, for a freely propagating wave packet, the expression |Φ(p)|² is useful without specifying at which time it has been measured.
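This behavior is straightforward to verify numerically. The following sketch (a 1D Gaussian wave packet with illustrative parameters, atomic units) propagates a free electron exactly via the momentum-space phase factor exp(−ip²t/2) and confirms that the position-space width grows while |Φ(p)|² remains unchanged.

```python
import numpy as np

# Free 1D Gaussian wave packet (atomic units, m_e = hbar = 1).
x = np.linspace(-200, 200, 4096)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)    # conjugate momentum grid

p0, sigma = 1.0, 5.0                            # mean momentum, initial width
psi0 = np.exp(-x ** 2 / (4 * sigma ** 2) + 1j * p0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)
phi0 = np.fft.fft(psi0)                         # momentum-space wave function

for t in (0.0, 50.0, 100.0):
    # Free propagation is a pure phase in momentum space: exp(-i p^2 t / 2).
    psi_t = np.fft.ifft(phi0 * np.exp(-1j * p ** 2 * t / 2))
    prob = np.abs(psi_t) ** 2
    mean = np.sum(x * prob) * dx
    width = np.sqrt(np.sum((x - mean) ** 2 * prob) * dx)
    print(f"t = {t:6.1f}  <x> = {mean:7.2f}  position width = {width:6.2f} a.u.")

# |Phi(p)|^2 never changes: the free propagator only multiplies Phi by a phase.
change = np.abs(phi0 * np.exp(-1j * p ** 2 * 100 / 2)) ** 2 - np.abs(phi0) ** 2
print("max change in |Phi(p)|^2:", np.max(np.abs(change)))
```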
Example: Measuring 3D momentum distributions
Given these theoretical considerations, how can the absolute square of the wave function in momentum space, |Φ(p)|², be measured in real experiments? The state-of-the-art method is to use a COLd Target Recoil Ion Momentum Spectroscopy (COLTRIMS) reaction microscope [28,29], which makes use of the dispersion of the wave packet in position space that is illustrated in Fig. 2. Allowing the wave packets to evolve with time (typically for several nanoseconds instead of several attoseconds as in Fig. 2) in the presence of static external electric and magnetic fields results in a macroscopic distribution of the electron wave packet that has a size of several millimeters when the wave packet hits a time- and position-sensitive detector (see Fig. 3 for an illustration of a COLTRIMS reaction microscope). The position and the time are typically measured with a precision of several tens of micrometers and a few hundred picoseconds, respectively. The additional knowledge of the initial time (typically with a precision on the order of 100 picoseconds) and position (typically with micrometer precision) of the electron in the spectrometer allows for the reconstruction of the three-dimensional momentum distribution |Φ(p)|² with a resolution of typically 1/100 atomic units.
This mapping of macroscopic position and time information (nanoseconds and millimeters) to momenta on the atomic scale is illustrated in Fig. 4. The same conceptual idea underlies the widely used technique of velocity map imaging (VMI) [30]. While VMI is similar to COLTRIMS, it (usually) does not resolve the time-of-flight of the particles, resulting in 2D projections of the 3D momentum space. It should be noted that, in fact, both techniques measure velocities and not momenta. Moreover, both techniques can be applied to measure not only electrons but also ions. In summary, momentum spectroscopy of liberated electrons makes use of the fact that the amplitudes in momentum space do not change as a function of time, because momentum is conserved for a freely propagating particle.
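The reconstruction idea can be sketched for the simplest case of a singly charged ion in a single uniform extraction field; the electron case additionally requires undoing the cyclotron rotation caused by the magnetic field. All apparatus numbers below are invented for illustration and do not describe any of the cited instruments.

```python
import numpy as np

# COLTRIMS-style momentum reconstruction for an ion in one uniform field region.
qe = 1.602176634e-19                 # elementary charge (C)
m = 18 * 1.66053906660e-27           # ion mass (kg), e.g. an H2O+ ion
E_field = 2000.0                     # extraction field (V/m), illustrative
L = 0.10                             # reaction point -> detector distance (m)
a = qe * E_field / m                 # acceleration along the spectrometer axis z

def momentum_from_hit(t, x, y):
    """Invert time-of-flight and hit position into the initial momentum."""
    vz0 = (L - 0.5 * a * t ** 2) / t     # from L = vz0*t + a*t^2/2
    return m * np.array([x / t, y / t, vz0])

# Forward-simulate a known initial momentum, then reconstruct it.
v0 = np.array([120.0, -80.0, 300.0])                     # initial velocity (m/s)
t_hit = (-v0[2] + np.sqrt(v0[2] ** 2 + 2 * a * L)) / a   # positive TOF root
x_hit, y_hit = v0[0] * t_hit, v0[1] * t_hit              # straight transverse flight

print("reconstructed p:", momentum_from_hit(t_hit, x_hit, y_hit))
print("true p:         ", m * v0)
```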
Amplitude information in electron momentum space
What can be learned from the measured electron momentum distribution |Φ(p)|²? One famous example is the idea of the attoclock [33,34,35,36]: using an elliptically polarized single-cycle laser pulse, it is assumed that the most probable time at which the electron starts to tunnel [37] is at the maximum of the laser electric field. Further, the final electron momentum is considered to equal the integral of all laser-induced forces acting upon the electron after tunneling. Then, the final momentum of the electron can be used to retrieve the time at which the electron appeared at the exit of the tunnel. By evaluating the rotation of the final momentum distribution with respect to the polarization ellipse, the attoclock has been used to investigate the time the electron spends inside the tunnel. This interpretation has led to an ongoing debate [38], also because of conceptual difficulties regarding the bound part of the electron wave function [39], non-adiabaticity [40], and the long-range Coulomb interaction of the electron with its parent ion [41]. Recent experiments have set an upper limit of 1.8 attoseconds on the tunneling delay time measured by the attoclock upon strong-field ionization of atomic hydrogen [36]. Further examples that study light-matter interaction by interpreting amplitudes in final momentum space are found in Refs. [42,43,44,45,25].
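Under the simple-man assumptions stated above, i.e. that the final drift momentum equals the negative vector potential at the instant of tunneling, the attoclock mapping reduces to a few lines (field parameters illustrative; Coulomb attraction and non-adiabatic corrections are deliberately neglected):

```python
import numpy as np

# Simple-man attoclock mapping: an electron born at rest at time t0 acquires
# the drift momentum p = -A(t0). Atomic units; all parameters illustrative.
omega = 0.057                  # ~800 nm
E0, eps = 0.05, 0.87           # peak field and ellipticity (invented values)
A0 = E0 / omega

def A(t):
    """Vector potential of an elliptically polarized field in the x-y plane."""
    return np.array([A0 * np.cos(omega * t), A0 * eps * np.sin(omega * t)])

for t0_as in (0.0, 50.0, 100.0):           # candidate tunneling times (as)
    t0 = t0_as / 24.18884                  # attoseconds -> a.u. of time
    px, py = -A(t0)
    angle = np.degrees(np.arctan2(py, px))
    print(f"t0 = {t0_as:5.1f} as  ->  streaking angle = {angle:7.2f} deg")
# Inverting the measured angle yields the ionization time: the "attoclock".
```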
Momentum space imaging of electronic orbitals
Even in the absence of rescattering, the photoelectron momentum distribution (PMD) may encode structural information about its origin. The measured far-field photoelectron momentum distribution can be understood as a diffraction image of the source. Thus, in principle, it should be possible to retrieve structural information by analyzing the diffraction pattern. However, the source is not identical to the atomic or molecular orbital from which the electron is removed but rather corresponds to the Dyson orbital. In addition, the details of strong-field ionization have a decisive impact. The momentum distribution along the laser polarization is, of course, determined by the time-dependent laser field, leaving the perpendicular momentum distribution for potential imaging applications. This is, however, hampered by a markedly distorting filtering effect of tunnel ionization, which strongly suppresses large momenta in the direction perpendicular to the laser polarization. Specifically, the perpendicular-momentum wave function at the tunnel exit (z = z_ex) is related to the one at the tunnel entrance, i.e. the Dyson orbital (z = z_in), by [46,47]

\[ \Phi(p_\perp; z_{\rm ex}) \propto \exp\!\left( -\frac{\sqrt{2 I_P}}{2E}\, p_\perp^2 \right) \Phi(p_\perp; z_{\rm in}), \qquad (1) \]

where E is the electric field strength. The decisive influence of the tunnel filter function (1) makes it necessary to eliminate it in order to retrieve a useful orbital image. This has been achieved in Refs. [21,48] by directly comparing PMDs recorded for parallel and perpendicular alignment, respectively. The difference images reveal clear structures demonstrating that the PMDs recorded with linear laser polarization contain a filtered projection of the orbitals.
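A small numerical illustration of this filtering, assuming the Gaussian filter of Eq. (1) and a toy one-dimensional "Dyson orbital" profile with a node at p_⊥ = 0:

```python
import numpy as np

# Tunnel filtering of the perpendicular momentum (Eq. (1)); illustrative values.
Ip = 0.5                          # ionization potential (a.u.)
E_field = 0.05                    # instantaneous field strength (a.u.)
tau = np.sqrt(2 * Ip) / E_field   # width parameter of the tunnel filter

p_perp = np.linspace(-1.5, 1.5, 601)
phi_in = p_perp * np.exp(-p_perp ** 2)                 # toy orbital profile (node at 0)
phi_ex = phi_in * np.exp(-0.5 * tau * p_perp ** 2)     # filtered at the tunnel exit

def rms(P):
    prob = np.abs(P) ** 2
    return np.sqrt(np.sum(p_perp ** 2 * prob) / np.sum(prob))

print(f"rms p_perp at entrance: {rms(phi_in):.3f} a.u.")
print(f"rms p_perp at exit:     {rms(phi_ex):.3f} a.u.")
# Large transverse momenta are exponentially suppressed: the PMD carries a
# strongly filtered, not a faithful, projection of the orbital.
```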
Notably, circular polarization can also be used to map out the orbital shape in combination with molecular orientation [49,50,51]. This approach, sometimes called "laser STM" (as in scanning tunneling microscope), is similar to the attoclock. Here, however, the unique mapping between the direction of tunneling and the drift momentum is exploited to map out the angle dependence of the tunneling probability of aligned or oriented molecules. The partial Fourier transform method [46,47] explains how the perpendicular PMD and the angle-dependent tunnel ionization yields are related to the orbital shape. The laser-STM technique has been utilized to resolve angular correlations in sequential double ionization due to a spin-orbit wave packet in neon cations [52].
Direct imaging of a spin-orbit wave packet in Ar⁺ has recently been achieved using tailored laser fields [25]. The delay-dependent PMDs recorded in coincidence with doubly charged ions deliver a movie of the electron motion shown in Fig. 5. In the experiment, a few-cycle pump pulse is used to ionize neutral Ar, producing Ar⁺ in a coherent superposition of two spin-orbit states. This causes the vacancy in the Ar⁺ valence shell to oscillate between the m = 0 and |m| = 1 states with a period T_SO = 23.3 fs, modulating the spatial electron density. The momentum-space signatures of these modulations are seen in the experimental snapshots displayed in Fig. 5(a). After completion of a half-period, T_SO/2, the vacancy is in the |m| = 1 state, and the m = 0 state, aligned with the laser polarization, is occupied by two electrons. At these delay values, the measured electron density in momentum space exhibits a small spot in the center of the momentum distribution. For alignment of the vacancy in the m = 0 state, at T_SO, a ring-shaped electron density is observed. The ring shape can be understood as an image of the donut-shaped |m| = 1 orbital, while the spot in the center relates to the peanut-shaped m = 0 orbital. Shown in Fig. 5(b) are the spatial images obtained by Fourier transform of the measured momentum-space distributions, assuming a flat phase. These spatial distributions do not correspond to the actual spatial orbitals but rather to their autocorrelation signals. The discrepancy with respect to the actual orbitals is most clearly seen for the donut-shaped distribution, which should not be filled in the center. This illustrates how phase information is crucial to reconstruct the spatial orbitals.

Fig. 5 (partial caption). Position-space images obtained by Fourier transform, assuming a flat phase in momentum space. The expected circular symmetry is broken by a stretch along p_x, due to an experimental artifact. The figure has been adapted from Ref. [25].
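That a flat-phase Fourier transform yields an autocorrelation rather than the orbital itself follows from the Wiener-Khinchin theorem and can be checked with a 1D toy model; the p-orbital-like state below has a node at the origin, which the flat-phase image fills in, in analogy to the filled donut:

```python
import numpy as np

# Flat-phase Fourier transform of |Phi(p)|^2 = autocorrelation of psi(x) (1D).
x = np.linspace(-20, 20, 2048)
dx = x[1] - x[0]
psi = x * np.exp(-x ** 2)                     # p-orbital-like state, node at x = 0
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

phi = np.fft.fft(np.fft.ifftshift(psi))                  # momentum space
image = np.fft.fftshift(np.fft.ifft(np.abs(phi) ** 2))   # flat-phase "image"

i0 = x.size // 2
print("|psi|^2 at the origin       :", round(abs(psi[i0]) ** 2, 6))
print("flat-phase image at origin  :", round(abs(image[i0]), 6))
print("image maximal at the origin :", np.argmax(np.abs(image)) == i0)
# The node of |psi|^2 is filled in, just as the donut orbital appears filled.
```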
Phase information in electron momentum space
Unfortunately, phases cannot be measured directly, and experimentally only |Φ(p)|² is accessible (see chapter 3). The phase of a wave packet in final momentum space becomes relevant if this wave packet is superimposed with a second wave packet, which leads to interference. Here, the absolute phases of the two wave packets do not change the observable quantities; it is the relative phase of the two wave packets that determines whether interference is constructive or destructive. A few examples illustrating the relevance of relative phases in momentum space are briefly described below.
The interference of two electron wave packets that emerge from two different points in position space, e.g. the atoms in a diatomic molecule, can act like a double slit, which gives rise to the well-known interference pattern in momentum space for such a two-path interference [53,54]. As for a macroscopic double slit, the slit geometry defines the observed interference pattern.
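A minimal model of this molecular double slit (illustrative numbers, atomic units): coherent emission from two centers separated by R produces cos² fringes in momentum space with spacing 2π/R.

```python
import numpy as np

# Molecular double slit: two coherent point emitters separated by R.
R = 2.0                                # internuclear distance (a.u.)
p = np.linspace(-4, 4, 1601)           # momentum along the molecular axis

# Superposition of identical amplitudes from centers at +R/2 and -R/2:
signal = np.abs(np.exp(1j * p * R / 2) + np.exp(-1j * p * R / 2)) ** 2
# signal = 4 cos^2(p R / 2): maxima wherever p * R = 2 pi n.
is_max = (signal[1:-1] >= signal[:-2]) & (signal[1:-1] > signal[2:])
print("maxima at p =", np.round(p[1:-1][is_max], 2),
      "-> fringe spacing 2*pi/R =", round(2 * np.pi / R, 3))
```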
ATI with a multi-cycle laser pulse leads to discrete values of the electron energy that can be explained as a consequence of energy conservation or by inter-cycle interference [13]. The time-dependent light field acts as a grating in the time domain whose period is defined by the frequency of the photons. This gives rise to interference in the energy domain, and the spacing of the peaks in the energy spectrum is given by the photon energy.
Sub-cycle interference occurs if two wave packets, which overlap in momentum space, are released at times that differ by less than the duration of one oscillation of the light field. Conceptually, sub-cycle interference is very similar to inter-cycle interference [13,55,56]. Examples that use sub-cycle interference are two-color attoclock interferometry [57,58] and holographic angular streaking of electrons [59,60]. Interestingly, such approaches give access to changes of the Wigner time delay in strong-field ionization. The Wigner time delay is the derivative of the phase of the electron wave packet with respect to energy [61,62]. However, the modeling of sub-cycle interference by interference of electron wave packets that are born within less than one cycle of the laser field raises the question: is it possible to model sub-cycle interference (as in Refs. [13,63,56]) in the energy domain, or is the energy picture not suitable for modeling sub-cycle processes? Examples pointing towards such a description are found in Refs. [64,65]. However, in order to calculate the coherent sum of all possible pathways in the energy domain, the inclusion of all the corresponding phases and amplitudes of these pathways leads to a very high complexity.
Finally, laser-driven electron recollision leads to various types of interference and diffraction effects, which we discuss in the following chapters 3 and 4.
Quantum interference of electron wave packets
The interference of two plane waves, Ψ_{1,2} = A_{1,2} exp(iφ_{1,2}), where A_{1,2} are the amplitudes and φ_{1,2} the corresponding phases, yields the probability density

\[ |\Psi_1 + \Psi_2|^2 = A_1^2 + A_2^2 + 2 A_1 A_2 \cos(\Delta\phi), \]

which allows access to the relative quantum phase Δφ = φ_1 − φ_2, and thus to the natural space-time scales of electron dynamics (atomic scales: ångström to nanometer in space and attoseconds in time). As a next step, we review Quantum Spectral Phase Interferometry for Direct Electron wave-packet Reconstruction (QSPIDER), which was introduced in Refs. [66,67]. In the experiment [66], EWPs are created by photoionization of atoms using an attosecond pulse train with a synchronized IR laser field, which induces a momentum shear to the EWP. Due to the periodicity of HHG, the sign of the momentum shear alternates between positive and negative for adjacent pulses. The resulting interference pattern allows for the reconstruction of the EWP's phase, and operates in close analogy to Spectral Phase Interferometry for Direct Electric-field Reconstruction (SPIDER) [5] in optical metrology.
Figure 6 shows the four steps towards the recovery of a single EWP's phase and amplitude. In the first step, it is assumed that two identical EWPs are produced by replicas of the same extreme ultraviolet (XUV) attosecond pulse, with the simple difference that there is a time delay between them, as shown in Fig. 6(a). The XUV-atom interaction at low intensities (10¹⁰ − 10¹² W/cm²) and for photon energies ω_X > I_p exceeding the ionization potential, I_p, is described by perturbation theory. In the second QSPIDER step, a spectral shear is induced in the EWP by means of the weak infrared (IR) laser field.
In the third step, the final momentum-space distribution is calculated. If the attosecond pulse duration is much shorter than an optical cycle of the IR pulse, T_0, the EWP can be expressed as the product of an amplitude and, importantly, a phase factor. For an attosecond pulse centered at τ_1 with respect to the IR laser, and for momenta p along the common polarization direction of both the IR laser field and the XUV attosecond pulse, the EWP is described by an amplitude A(p, t_F, τ_1) and a phase factor containing the dipole transition matrix element d(p) = ⟨p|(−x)|Ψ_0⟩ [67]. Here, |Ψ_0⟩ is the ground state of the atomic or molecular system, |p⟩ is the scattering continuum wave, and −x is the dipole moment operator, which is proportional to the position operator x. Thus, the physical interpretation of d(p) is the complex transition from the ground state |Ψ_0⟩ to the continuum state |p⟩ mediated by the dipole operator (−x) [68]. In the case of a single attosecond EWP, the amplitude A(p, t_F, τ_1) is proportional to the real amplitude of the dipole transition matrix element, |d(p)|, and the phase factor contains the dipole phase φ_d[p+A_L(τ_1)], which will be extracted as in Ref. [67]. In general, the laser-induced chirp (LIC) generated by the variations of the IR field around the time of ionization needs to be considered. However, in the case of short attosecond pulses (<200 as) and modest intensity (I_0 < 10¹³ W/cm²), the effects of the LIC phase are negligible. This phase depends on the value of the electric field at the ionization time τ_1 and vanishes if the field is zero at that instant; it becomes relevant for streaking and interferometric measurements, as it can become larger than the phase of the dipole. The most important aspect of this third step is to recover the dipole phase difference Δφ_d = φ_d[p+A_L(τ_1)] − φ_d[p+A_L(τ_2)], which for a small shear can be approximated by the momentum derivative of the dipole phase, Δφ_d ≈ ΔA_L ∂φ_d/∂p. The last step in QSPIDER is to apply the Fourier algorithm to extract the derivative of the dipole phase and to integrate it. In the next section, we will follow these steps in He⁺.
Quantum spectral phase interferometry for direct electron wave-packet reconstruction
The validity of the QSPIDER concept has been verified by a numerical simulation presented in Ref. [67]. To this end, two delayed copies of an EWP with a relative shear between them are used to construct an interferogram, similar to the optical SPIDER technique. This can be realized by focusing an attosecond pulse train (APT) with exactly two pulses, centered at τ_1 and τ_2, onto He⁺ in the presence of a weak IR laser pulse with vector potential A_L(t). One obtains two EWPs which are delayed relative to each other by approximately one optical cycle of the IR laser. The IR laser streaks each of the EWPs, resulting in a relative streaking, ΔA_L = A_L(τ_2) − A_L(τ_1), between the two EWP copies. The streaked and delayed copies produce an interferogram in the final momentum distribution which is conceptually equivalent to the interferogram of the SPIDER technique (see Fig. 6(c)) [67].
By applying the four steps described in the previous section and the Fourier analysis (see Fig. 6(d)), we can extract the dipole matrix phase difference, which in a certain limit is the momentum derivative of the dipole phase. We can also extract the EWP amplitude |A(p)| associated with the DC term in the limit A_L(τ_1) ≈ A_L(τ_2). In the next section, we will apply the QSPIDER principles to He⁺ and demonstrate the retrieval of the dipole matrix elements.
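The Fourier extraction step can be illustrated with synthetic data. The sketch below is a generic SPIDER-type retrieval, not the actual He⁺ calculation of Ref. [67]: it builds an interferogram from two sheared, delayed copies of an EWP with an invented spectral phase, isolates one AC sideband by Fourier filtering, removes the delay carrier, and integrates the remaining phase difference.

```python
import numpy as np

# Generic SPIDER-type retrieval on synthetic data (not the He+ case of [67]).
p = np.linspace(-3, 3, 4096)
dp = p[1] - p[0]
amp = np.exp(-p ** 2 / 0.8)                 # common EWP amplitude
phi = 0.7 * p ** 3 - 1.2 * p                # "unknown" spectral phase (invented)
shear, carrier = 0.05, 40.0                 # momentum shear and delay carrier

amp_s = np.interp(p - shear, p, amp)        # sheared copy
phi_s = np.interp(p - shear, p, phi)
S = amp ** 2 + amp_s ** 2 + 2 * amp * amp_s * np.cos(phi - phi_s + carrier * p)

# Fourier-filter one AC sideband of the interferogram.
F = np.fft.fft(S)
freq = 2 * np.pi * np.fft.fftfreq(p.size, d=dp)
ac = np.where(np.abs(freq - carrier) < 15.0, F, 0.0)
dphi = np.unwrap(np.angle(np.fft.ifft(ac))) - carrier * p   # remove the carrier

# dphi ~ phi(p) - phi(p - shear) ~ shear * phi'(p); integrate to recover phi.
phi_rec = np.cumsum(dphi) * dp / shear
mask = amp > 0.3                            # trust only where there is signal
err = np.std((phi_rec - phi)[mask])
print(f"rms deviation from the true phase (up to a constant): {err:.3f} rad")
```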
Quantum spectral phase interferometry for direct electron wave-packet reconstruction in 2p states
The two XUV attosecond pulses in the presence of a weak IR laser pulse interacting with He⁺ are shown in Fig. 7. This configuration creates an ideal interferometric scenario for extracting the dipole phase derivative and the amplitude.
By applying the Fourier analysis of SPIDER to the AC component, the extraction of the dipole phase derivative is shown in Figs. 7(b) and 7(c) for negative and positive momenta p, over the spectral range that the XUV pulse allows. The EWP amplitude has a clear node at p ∼ ±1.5 a.u., as expected for 2p orbitals. The red and green dots show the QSPIDER reconstruction of the dipole matrix element derivative, which was obtained by dividing the EWP amplitude by the XUV spectral amplitude (the EWP amplitude A(p, t_F, τ_1) introduced in section 3.1). Concerning the dipole phase reconstruction, we clearly observe a Dirac-delta-like distribution, characteristic of a system in which the phase has a jump of π. This jump is shown in Figs. 7(d) and 7(e) for positive and negative momenta, and compared to the analytic dipole phase. Good agreement between the reconstruction and the expected dipole phase is found. We also performed TDSE calculations in 1D, which are detailed in Ref. [67].
The example of QSPIDER demonstrates how quantum interference of EWPs provides access to phase information of the EWPs and, by extension, of the atomic or molecular system under study. While the first EWP propagates in the continuum, the second one remains bound until the second attosecond pulse arrives, meanwhile probing the system. The information carried by the second EWP is retrieved by considering the first EWP as a known reference wave. This is the concept of holography, and it is applicable to a large number of experiments in strong-field and attosecond physics, in particular to electron rescattering in ATI.
Photoelectron holography and its limitations
When the liberated electron in ATI is driven back to the core, it scatters on the (ionic) potential. In the simplest case of a point-like scatterer, the scattered wave can be approximated as a spherical wave originating at the core. If we consider the unscattered wave as a plane wave and interfere it with the scattered spherical wave, we obtain an interference pattern similar to the one shown in Fig. 8(c). It closely resembles the well-known side lobes in the angular distribution of ATI first reported in Ref. [69], which have been known as the holographic interference pattern since the landmark papers by Spanner et al. and Huismans et al. [16,17]. The holographic interpretation of these features allows one to utilize them as a probe of the scattering potential and associated dynamics. Variations in holographic patterns have been used to probe ionization dynamics in two-color experiments [20,55,56,57,58,70], molecular dissociation [71], and bound electron and nuclear dynamics [19].
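The essence of such a hologram can be reproduced with a toy model: a plane reference wave interfering with a spherical wave scattered from a single point, evaluated on a distant detector plane (wave number and geometry are illustrative):

```python
import numpy as np

# Toy photoelectron hologram: plane reference wave + point-scattered spherical
# wave on a distant detector plane. Geometry and wave number are illustrative.
k = 1.5                                    # electron wave number (a.u.)
z_det = 500.0                              # detector distance (a.u.)
x_det = np.linspace(-300.0, 300.0, 1201)   # transverse detector coordinate

r = np.sqrt(x_det ** 2 + z_det ** 2)       # path length of the scattered wave
reference = np.exp(1j * k * z_det)                  # unscattered plane wave
scattered = 0.3 * (z_det / r) * np.exp(1j * k * r)  # outgoing spherical wave

hologram = np.abs(reference + scattered) ** 2
# Fringe phase ~ k (r - z_det) ~ k x^2 / (2 z_det): side lobes whose spacing
# encodes k and, in general, the scattering potential.
fringe = hologram - hologram.mean()
n_osc = np.count_nonzero(np.diff(np.sign(fringe)) != 0) // 2
print("number of fringe oscillations across the detector ~", n_osc)
```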
However, a word of warning comes from an important paper by Meckel et al., who studied the effect of molecular alignment on the holographic fringes in the PMD [18].
Extending their pioneering work on LIED [21], the authors carefully varied the angle between the molecular axis and the laser polarization, and found a striking off-center holographic fringe pattern for an angle of 45°, which is reproduced in Fig. 8(a). This pattern agrees well with results obtained by numerically solving the TDSE, Fig. 8(b). For the interpretation of these results, and to understand the meaning of the off-center fringe pattern, simple wave-packet scattering simulations are used. These demonstrate that it is not the tilt of the molecular axis that moves the fringes (Fig. 8(c,d)). It is rather a property of the recolliding wave packet that explains the observations. Specifically, if the wave packet is given a spatial offset relative to the molecular axis, the off-center fringe pattern is obtained [Fig. 8(f)]. This led the authors to conclude that in their "experiment, electron holography provides information about the continuum electron wave packet rather than the scattering object" [18]. This study demonstrates that it is important to know the relevant properties of the recolliding electron wave packet in order to adequately probe molecular structure.
Nevertheless, the example of QSPIDER from section 3.1 shows that recollisions are not a necessary prerequisite for holography. An example that harnesses photoelectron holography without recollision is holographic angular streaking of electrons (HASE) [59], where a co-rotating two-color laser field is used to create two electron wave packets that interfere and reveal properties of the phase of the electron wave packet in momentum space. Since for HASE the combined electric field is close to circularly polarized, the continuum wave function, the wave function at the tunnel exit, and the bound electron wave function are closely related [60]. This is in contrast to linearly polarized light, where recollision and sub-cycle interference lead to a non-trivial relationship between the continuum wave function and the wave function at the tunnel exit [72,73].
Laser-induced electron diffraction
The process of electron rescattering can lead to interference even in the absence of an unscattered reference wave. This phenomenon is known as laser-induced electron diffraction (LIED) [2,11,14,21,22,74,75,76,77,78] and is the strong-field variant of ultrafast electron diffraction (UED), whereby a molecule is tunnel-ionized to generate an EWP that is used to take a "selfie" of its molecular structure. LIED can retrieve the internuclear distances in a molecule with picometer and attosecond precision.
Notably, these properties have enabled LIED in the mid-infrared (MIR) to capture a sub-10-fs snapshot of deprotonation in dissociating C₂H₂²⁺. This was only possible with LIED's sub-optical-cycle probe of molecular structure together with its sensitivity to hydrogen scattering. Moreover, ultrafast changes on the rising edge of the LIED pulse have been shown to lead to significant structural deformation in C₆₀ [85], CS₂ [82], and OCS [88].
LIED can be well described using the laser-driven electron-recollision framework [8,90,91,92], in which the emitted EWP is (i) accelerated by the oscillating electric field of the intense laser pulse before (ii) returning and (iii) rescattering against the target ion. It is justified to consider only the dominant trajectory that leads to a given drift momentum after rescattering; in the case of LIED this is the so-called long trajectory, which is produced close to the peak of the electric field. In the quantum mechanical picture of LIED shown in Fig. 1, the emitted EWP is returned and rescattered against two scattering centers (i.e. two atoms in a molecule), leading to interference fringes in the detected electron momentum distribution. These interference fringes are described by the coherent molecular interference term, I_M, which contains structural information and can be calculated in the framework of the independent atom model (IAM) [75,93,94] as [11,95]

\[ I_M(\mathbf{q}) = \sum_i \sum_{j \neq i} f_i(q)\, f_j^*(q)\, e^{i \mathbf{q} \cdot \mathbf{R}_{ij}}, \]

where f_i is the electron scattering amplitude of atom i. Specifically, the phase factor of I_M contains the internuclear distance between two atoms i and j, R_ij. I_M is a function of the momentum transfer q = 2 k_r sin(θ_r/2) between the incoming EWP and the target following scattering. For randomly oriented molecules, I_M is detected as a sinusoidal signal [11,95],

\[ I_M(q) = \sum_i \sum_{j \neq i} f_i(q)\, f_j^*(q)\, \frac{\sin(q R_{ij})}{q R_{ij}}, \]

which typically appears as oscillations in the detected momentum distribution of high-energy electrons. The full three-dimensional momentum distribution of rescattered electrons, as shown in Fig. 9(a), can be detected, for example, with a COLTRIMS reaction microscope (ReMi) [28,29,96,97] (see also section 2.2). Importantly, the ReMi can simultaneously detect electrons and ions in kinematic coincidence to select the electron-ion fragmentation channel generated during the intense laser-matter interaction, in a two-step process. First, the ion of interest (e.g. H₂O⁺) [86] is identified by selecting its corresponding ion time-of-flight (ToF) range from the ion ToF spectrum, see Fig. 9(b). Then the two-dimensional momentum distribution, parallel (p_∥) and perpendicular (p_⊥) to the laser polarization, of the electrons generated together with the ion of interest can be constructed, see Fig. 9(c). Here, the return momentum, k_r, at the time of scattering, t_r, is obtained by subtracting the vector potential, A(t_r), of the laser field from the detected rescattered momentum, k_resc. The differential cross-section (DCS; i.e. the number of electrons scattered into a specific solid angle) is extracted by integrating the block arc (yellow) area in Fig. 9(c) at various k_r. Fig. 9(d) shows the measured electron yield from ionization of H₂O [86] for all electrons (blue dashed) and for electrons detected in coincidence with H₂O⁺ (black solid) as a function of kinetic energy in units of the ponderomotive energy (U_p; i.e. the cycle-averaged kinetic energy of a free electron oscillating in the electric field of a laser pulse). Two regions are clearly distinguishable in Fig. 9(d): electrons that rescatter (do not rescatter) against the target ion are detected with typical kinetic energies of 2−10 U_p (0−2 U_p) and are referred to as "rescattered" ("direct") electrons, as indicated by the orange (grey) shaded regions in Fig. 9(d). Thus, as a second step, only the 2−10 U_p rescattered region is considered for LIED imaging.
In this region, one can see that the sinusoidal signal of $I_M$ is more pronounced in the $H_2O^+$-coincident electron data as compared to the all-electron data. This demonstrates the capability of electron-ion coincidence detection with a ReMi to provide a more sensitive probe of $I_M$, which would otherwise be washed out by background signal without coincidence selection.
In fact, highly energetic electrons with detected kinetic energies in the hundreds-of-eV range (i.e. $U_p \gtrsim 10$ eV, see Fig. 10(a)) are required to achieve an appreciable momentum transfer, $q$, to penetrate beyond the valence electron cloud and scatter against the inner-most core electron shell close to the nuclei. To reach such high electron kinetic energies, long-wavelength driver sources (i.e. $\lambda > 2$ µm, in the mid-infrared range) are needed to drive LIED experiments. Extracting the molecular interference signal $I_M$ from the total interference signal $I_T$ requires the subtraction of the background atomic $I_A$ signal. This can be achieved either by calculating the $I_A$ signal using the IAM or by applying an empirical background fit to the detected DCS signal, the latter of which is shown in Fig. 10(a). Doing so allows the molecular signal $I_M$ to be contrasted against the atomic signal $I_A$ through the molecular contrast factor, MCF [11,95]. Fig. 10(b) shows the MCF as a function of momentum transfer, which provides a unique fingerprint of the molecular structure through the sinusoidal signal that is related to $I_M$. Fourier-transforming (FT) the MCF signal provides the one-dimensional radial distribution of internuclear distances that are present in the molecule. In this case, for the LIED imaging of $H_2O$, two FT peaks are clearly present that correspond, by comparison with literature values, to the O-H and H-H internuclear distances at 1.14 and 1.92 Å [86].

There exist in fact two variants of LIED, as shown in Fig. 11(a): (i) FT-LIED [80,11], also called fixed-angle broadband laser-driven electron scattering (FABLES) [78], and (ii) LIED based on the quantitative rescattering (QRS) model [98,75], referred to as QRS-LIED. In FT-LIED, only the energy dependence of back-rescattered electrons is considered (i.e. $k_r$ is varied at fixed $\theta_r \approx 180°$). Here, the far-field detected electron momentum distribution can be related to the near-field image of the molecular structure through an FT relation. In QRS-LIED, only the angular dependence of rescattering is considered (i.e. $\theta_r$ is varied at various fixed $k_r$), enabling the measurement of the doubly differential cross-section (DDCS) of elastic scattering. Fig. 11(b) shows the angular dependence of scattering in $N_2$ measured with LIED (blue squares) and with field-free conventional electron diffraction (CED; red line) [22]. The very good agreement between LIED and CED demonstrates LIED's ability to extract field-free DCSs from field-dressed measurements which are comparable to those measured with CED. Moreover, LIED's sensitivity to hydrogen scattering is demonstrated by its structural retrieval of many hydrogen-containing molecules, such as in the deprotonation of $C_2H_2^{2+}$ (see Fig. 11(c)), as well as in $C_2H_2$, $H_2O$, $NH_3$, $C_6H_6$ and more [11]. This sensitivity is particularly pronounced at scattering angles away from forward scattering (i.e. $\theta_r > 10°$), where the hydrogen scattering amplitude is within an order of magnitude of the carbon one in LIED, due to the low kinetic energies of the rescattering LIED electron (see Fig. 11(d)). In UED, by contrast, the hydrogen scattering amplitude at $\theta_r > 10°$ is orders of magnitude lower than the carbon one, owing to the significantly higher electron kinetic energies used in UED.
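Returning to the MCF-based retrieval described above, the following minimal Python sketch synthesizes an MCF for a water-like geometry from the randomly-oriented IAM form (taking $\mathrm{MCF}(q) = I_M(q)/I_A(q)$) and sine-transforms it to recover the radial peaks. The flat scattering amplitudes, grids and windowing are illustrative assumptions of ours; real analyses extract the MCF from measured DCSs and use tabulated $q$-dependent amplitudes $f_i(q)$.

```python
import numpy as np
from scipy.signal import find_peaks

# Illustrative geometry (angstrom), mimicking the stretched distances
# retrieved for H2O+ in the text: O-H = 1.14, H-H = 1.92.
pairs = [("O-H", 1.14, 2), ("H-H", 1.92, 1)]   # (label, R_ij, # of unordered pairs)

# Toy, q-independent scattering amplitudes (real analyses use tabulated f_i(q)).
f_O, f_H = 1.0, 0.5
amp = {"O-H": f_O * f_H, "H-H": f_H * f_H}

q = np.linspace(0.5, 10.0, 400)                # momentum transfer (1/angstrom)

# Randomly oriented IAM: I_M = sum_{i!=j} f_i f_j sin(q R_ij)/(q R_ij).
I_M = sum(2 * n * amp[lbl] * np.sinc(q * R / np.pi) for lbl, R, n in pairs)
I_A = f_O**2 + 2 * f_H**2                      # atomic background, sum_i |f_i|^2
mcf = I_M / I_A                                # molecular contrast factor

# Windowed sine transform of the MCF -> radial distribution; peaks at R_ij.
r = np.linspace(0.5, 3.0, 600)
w = np.hanning(q.size)                         # taper to suppress window ringing
radial = np.abs(np.trapz(w * mcf * q * np.sin(np.outer(r, q)), q, axis=1))

peaks, _ = find_peaks(radial, height=0.1 * radial.max())
print("retrieved distances (angstrom):", np.round(r[peaks], 2))  # ~[1.14 1.92]
```

The retrieval logic is the same as in the experiments discussed above: the sinusoidal MCF oscillations encode the pair distances, and the sine transform converts them into peaks of a one-dimensional radial distribution.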
Although UED is limited to forward scattering only, time-resolved UED studies have proven to be a very sensitive and powerful probe of molecular structure and photoinduced molecular dynamics using sub-150-fs MeV UED electron pulses [11,99,100,101]. Field-dressed LIED and field-free UED measurements are complementary in many respects, with many future opportunities to study a variety of gas-phase molecular structures and associated dynamics [11].
How to extract the photoelectron momentum distribution from ab initio calculations?
Over the last three decades, numerical solutions of the time-dependent Schrödinger equation within the single-active-electron (SAE) approximation have emerged as one of the main theoretical tools used to study photoionization and strong-field phenomena. Due to the "black-box" nature of the TDSE, however, the underlying physics is often interpreted using alternative theoretical models [26] and approaches such as the strong-field approximation (SFA; for a review see Refs. [27,102]). Other approaches, which are often variants of the SFA, include quantum-orbit theory [103,104,105,106], the Coulomb-corrected SFA [107,108], the Coulomb quantum-orbit strong-field approximation [109,110,111], the semiclassical two-step model [112,113,114], the classical-trajectory Monte Carlo method [115,116,117], the quantum-trajectory Monte Carlo method [118] and many more.
The majority of these methods have one aspect in common: they use a trajectory-based picture to describe the field-induced ionization process and the associated electron motion. Importantly, there may exist many different pathways for an electron to reach the detector with the same final linear momentum. Trajectory-based methods allow us to explain features in the photoelectron momentum distribution as quantum interference between different pathways, yielding an intuitive physical interpretation of the photoionization process that may not be readily available from a TDSE solution. Another advantage of these models is that they are often computationally much simpler than the numerical solution of the TDSE.
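To make the trajectory picture concrete, the following minimal Python sketch implements the simplest "simple-man" version of such trajectory methods (1D, no Coulomb force, zero initial velocity, quasi-static tunneling weights); all parameters are illustrative assumptions of ours rather than values from the cited works.

```python
import numpy as np

# "Simple-man" trajectory sketch: an electron born at time t0 with zero
# velocity ends up with the drift momentum p = -A(t0); births are weighted
# by a quasi-static tunneling rate exp(-2(2 Ip)^(3/2) / (3|E(t0)|)).
Ip = 0.5                      # ionization potential (a.u.)
E0, omega = 0.05, 0.057       # field amplitude and frequency (a.u., ~800 nm)

rng = np.random.default_rng(0)
t0 = rng.uniform(0.0, 2.0 * np.pi / omega, 200_000)    # birth times, one cycle
E_t0 = E0 * np.cos(omega * t0)                          # E(t) = E0 cos(wt)
weight = np.exp(-2.0 * (2.0 * Ip) ** 1.5 / (3.0 * np.abs(E_t0) + 1e-12))

# With A(t) = -(E0/omega) sin(wt), the final drift momentum is p = -A(t0).
p_drift = (E0 / omega) * np.sin(omega * t0)

hist, edges = np.histogram(p_drift, bins=101, weights=weight, density=True)
peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"spectrum peaks near p = {peak:.3f} a.u. (births at the field crest)")
print(f"classical cutoff |p| <= E0/omega = {E0/omega:.3f} a.u. (2 Up in energy)")
```

Rescattered electrons (the $2-10\,U_p$ plateau exploited by LIED) require adding the return step to the parent ion; quantitative classical- or quantum-trajectory Monte Carlo implementations additionally sample transverse velocities and tunnel-exit points and carry a phase along each trajectory.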
The advantage of solving the TDSE, on the other hand, is that it is the most rigorous tool that theorists use to predict and to validate experimental results (see for example Refs. [17,119,120,121,122,123]) and to compare with the predictions of the above-mentioned methods (see for example Refs. [109,110,111,124,125,126,127]). In this sense, numerical solutions of the TDSE are often used as a benchmark.
In this section we give a brief introduction to the numerical method for solving the TDSE within the SAE and dipole approximations (for more details see Ref. [128]) and give guidelines on how to extract the PMD from time-dependent wave-packet calculations.
The initial state used as a starting point in the TDSE calculations is obtained by solving the stationary Schrödinger equation for an arbitrary spherically symmetric binding potential $V(\mathbf{r}) = V(r)$ in spherical coordinates. The solution can be written as
$$\psi_{n\ell m}(\mathbf{r}) = \frac{u_{n\ell}(r)}{r}\, Y_{\ell m}(\Omega)\,,$$
where the $Y_{\ell m}(\Omega)$ are spherical harmonics. The radial function $u_{n\ell}(r)$ is a solution of the radial Schrödinger equation
$$\left[-\frac{1}{2}\frac{d^2}{dr^2} + \frac{\ell(\ell+1)}{2r^2} + V(r)\right] u_{n\ell}(r) = E_{n\ell}\, u_{n\ell}(r)\,,$$
where $n$ is the principal quantum number and $\ell$ is the orbital quantum number. The initial wave function $\psi_{n\ell m}(\mathbf{r})$ is propagated under the influence of an intense laser field as described by the TDSE
$$i\,\frac{\partial}{\partial t}\,|\Psi(t)\rangle = \left[H_0 + V_I(t)\right]|\Psi(t)\rangle\,,$$
where $V_I(t) = -iA(t)\partial_z$ is the interaction operator in the dipole approximation and velocity gauge. We assume that the laser field is linearly polarized along the $z$ axis, so that the vector potential $A(t)$ determines the electric field of the laser pulse via $E(t) = -\partial A(t)/\partial t$. Here $E_0$ is the electric-field amplitude, $\omega = 2\pi/T$ is the laser-field frequency and $T_p = N_c T$ is the pulse duration, with $N_c$ the number of optical cycles. At the end of the laser-atom interaction, $t = T_p$, we obtain the time-dependent wave function $|\Psi(T_p)\rangle$, which contains all relevant information about the simulated process. The question is how to extract this information from the final wave function $|\Psi(T_p)\rangle$.

The formally exact PMD can be extracted from $|\Psi(T_p)\rangle$ by projecting it onto the continuum states of the field-free Hamiltonian $H_0$ having the linear momentum $\mathbf{k} = (k, \Omega_k)$, $\Omega_k \equiv (\theta_k, \varphi_k)$. We call this method the PCS (Projection onto Continuum States) method. In a typical photoionization experiment a photoelectron ends up in a quantum state with linear momentum $\mathbf{k}$, so that the corresponding continuum states must be localized in momentum space. Continuum states describing such a quantum state obey the so-called incoming boundary condition and can be written as the partial-wave expansion (15) reproduced below, where $\Delta_\ell$ is the scattering phase shift of the $\ell$-th partial wave [129,130]; the continuum states (15) merge with a plane wave as $t \to +\infty$. The probability $P(E_k, \theta_k)$ of detecting the electron with kinetic energy $E_k = k^2/2$ emitted in the direction $\theta_k$ is given by
$$P(E_k, \theta_k) = \big|\langle \psi_{\mathbf{k}}^{(-)} \,|\, \Psi(T_p) \rangle\big|^2\,,$$
where $\mathbf{k} = (k_x, k_z) = (k\sin\theta_k, k\cos\theta_k)$. It is worth noting that in some cases it can be cumbersome to obtain the continuum states, since they are known in analytical form only for the pure Coulomb potential. If that is the case, an approximate PMD can be obtained by what we call the PPW (Projection onto Plane Waves) method. This approach for obtaining the PMD from time-dependent wave-packet calculations has been introduced and discussed in detail in Ref. [131].
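The incoming-wave continuum states take the following form in one common convention (prefactors and phase conventions vary between references, so this reconstruction should be checked against Refs. [129,130]):

$$\psi_{\mathbf{k}}^{(-)}(\mathbf{r}) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} i^{\ell}\, e^{-i\Delta_\ell}\, \frac{u_{k\ell}(r)}{kr}\, Y_{\ell m}^{*}(\Omega_k)\, Y_{\ell m}(\Omega) \qquad (15)$$

where the $u_{k\ell}(r)$ are the radial continuum solutions of $H_0$ at energy $E_k = k^2/2$.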
After the laser field has been turned off, the wave function $|\Psi(T_p)\rangle$ is propagated for a time $\tau$ under the influence of the field-free Hamiltonian $H_0$. The time interval $\tau$ has to be large enough that even the slowest part of the wave function $|\Psi(T_p+\tau)\rangle$ has reached the asymptotic region $r > R$, where we can neglect the atomic potential, $V(r) \approx 0$. Excluding the bound part of the wave function $|\Psi(T_p+\tau)\rangle$, which we assume is spatially localized in the region $r < R$, we obtain the PMD by projecting the continuum part of the wave function onto a plane wave:
$$P(E_k, \theta_k) = \big|\langle \Phi_{\mathbf{k}} \,|\, \Psi'(T_p+\tau) \rangle\big|^2\,,$$
where the plane wave $\Phi_{\mathbf{k}}(\mathbf{r}) \propto e^{i\mathbf{k}\cdot\mathbf{r}}$ is given by the partial-wave expansion
$$e^{i\mathbf{k}\cdot\mathbf{r}} = 4\pi \sum_{\ell m} i^{\ell}\, j_{\ell}(kr)\, Y_{\ell m}^{*}(\Omega_k)\, Y_{\ell m}(\Omega)\,,$$
with $j_{\ell}(kr)$ the spherical Bessel function of order $\ell$. The prime on the time-dependent wave function indicates that we take only the part of $|\Psi(T_p+\tau)\rangle$ that has reached beyond the border of the asymptotic region, $r > R$. In all presented calculations we have set $R = 40$ a.u.

Let us now compare these two approaches for obtaining the PMD. As an example we use the fluorine negative ion F$^-$. This choice is motivated by the fact that, after electron emission, there is no long-range (i.e. Coulombic) potential, and hence the applicability of the PPW method is clear. Within the SAE approximation we model the corresponding potential by the Green-Sellin-Zachor potential with a polarization correction included [132], with parameters $Z = 9$, $D = 0.6708$, $H = 1.6011$, $\alpha = 2.002$, and $r_p = 1.5906$. The 2p ground state of F$^-$ has an electron affinity of $I_p = 3.404$ eV.
In Fig. 12 we show the ionization probability in the direction $\theta_k = 0°$ for the laser-field intensity $I = 1.3 \times 10^{13}$ W/cm$^2$, wavelength $\lambda = 1800$ nm, and a pulse duration of four optical cycles, $N_c = 4$. The solid black line represents the ionization probability obtained by the PCS method. The red line represents the ionization probability obtained by the PPW method with a post-pulse propagation time of $\tau = 0$ a.u. We can see that the low-energy part of the photoelectron spectrum agrees quite well with the exact result. On the other hand, the high-energy part of the spectrum exhibits oscillations which are absent in the PCS results. These oscillations can be smoothed out by increasing the time of the post-pulse propagation up to $\tau = 1500$ a.u.; the results for $\tau = 1500$ a.u. are depicted by the blue solid line. Our experience suggests, as a general rule of thumb, that as we increase the post-pulse propagation time, the agreement between these two methods improves in the high-energy region, although this implies that we have to use a larger spatial grid on which the TDSE is numerically solved. Therefore, one has to compromise between the consumption of computing resources and obtaining a fully converged photoelectron spectrum. In Fig. 13 we show the full PMD obtained by the PCS (left panel) and PPW methods with $\tau = 1500$ a.u. (right panel). The laser-field parameters are the same as in Fig. 12. As we can see, these two methods produce identical PMDs.
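To illustrate the PPW projection step in isolation, the following minimal Python sketch projects a toy single-$\ell$ radial wavepacket (assumed to have already left the interaction region) onto the $\ell$-component of the plane-wave expansion above; the wavepacket shape, grids and parameter values are illustrative assumptions of ours, not the actual F$^-$ calculation.

```python
import numpy as np
from scipy.special import spherical_jn

# Toy outgoing l = 1 radial wavepacket u(r) = r*psi(r), centered beyond R.
r = np.linspace(1e-3, 400.0, 8000)          # radial grid (a.u.)
R, k0, l = 40.0, 0.8, 1                     # asymptotic border, mean momentum
u = np.exp(-((r - 150.0) / 25.0) ** 2) * np.exp(1j * k0 * r)

# PPW: discard the (assumed bound) part at r < R ...
u_out = np.where(r > R, u, 0.0)

# ... and project onto plane waves, P(k) ~ |integral j_l(kr) u(r) r dr|^2.
k = np.linspace(0.2, 1.6, 300)
P = np.array([abs(np.trapz(spherical_jn(l, kk * r) * u_out * r, r)) ** 2
              for kk in k])

print(f"PPW spectrum peaks at k = {k[np.argmax(P)]:.2f} (expected ~ {k0})")
```

In a production code, $u(r)$ would be the $\ell$-resolved wavepacket after post-pulse propagation for $\tau$, and summing the $\ell$-channels with the $i^\ell$ phases of the expansion reproduces the full angle-resolved PMD; increasing $\tau$ pushes more of the slow components beyond $R$, which is the smoothing effect seen in Fig. 12.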
In the case of atomic photoionization, where the liberated electron moves in a modified Coulomb potential, the natural choice would be to use Coulomb waves as a good approximation to the true continuum states. Our calculations show, however, that even in the case of atomic photoionization the plane waves must be used as the final states of the detected electron, since only plane waves are eigenstates of the momentum operator. Analogous studies for atomic ionization are currently in preparation and will be published elsewhere.
Coherence, decoherence and incoherence
Coherence is defined as the capability of waves to interfere. The extension to waves of different frequencies is the foundation of mode locking in femtosecond lasers. Interestingly, we encounter coherence in both ATI and HHG, which are characterized by frequency combs of photoelectron or photon energies, respectively. The time-domain flip side of the frequency comb is the "pulse train" of continuum EWPs created around the peaks of the laser electric field. The sub-cycle EWPs themselves can be understood as multi-mode interference of coherent electron waves originating from the same laser half-cycle. In the present case of continuum wave packets, the frequency (momentum) spectrum of the wave packets is wide and continuous.
Ultrafast science is particularly interested in tracking the evolution of bound wave packets, as they allow microscopic insights into the dynamics occurring inside atoms, molecules or solids. Typically, bound wave packets have a discrete spectrum, implying a periodicity in time. Strong-field ionization has been shown to allow the preparation of coherences between different electronic states [133]. Well-known examples of bound electron wave packets that have been tracked with ultrahigh time resolution include spin-orbit wave packets in rare-gas ions [25,52,134] and charge migration in polyatomic molecules [135,136]. Furthermore, periodic vibrational [137] and rotational wave packets [138,139,140] in molecules have been tracked.
Contrary to the periodic examples considered above, processes such as chemical reactions are typically non-periodic. In this context, we can consider a process as coherent if its time evolution clearly depends on some initiating event, usually the interaction with a pump laser pulse. This is the prerequisite for studying ultrafast dynamics, for example using a pump-probe scheme.
Some processes can lead to the apparent loss of coherence, often referred to as decoherence. This may occur, for example, in the vicinity of conical intersections, where the electronic and nuclear degrees of freedom are strongly coupled. Hence, energy stored in electronic degrees of freedom can be transferred to vibrational motion, and the relationship between the pump event and the ensuing dynamics is lost. A particularly interesting example is the charge-migration process studied by Callegari et al. [135]. They observed an oscillating ion yield due to electronic coherence immediately following XUV photoionization of phenylalanine [135]. After a few tens of femtoseconds, the delay-dependent oscillations disappear and the measured signal becomes static. This is a clear signature of decoherence.
Generally, coherence is lost if a coherent system is coupled to a "bath", and it is interesting to scale the size of the bath. For example, consider electron emission from a pair of identical atoms at a fixed internuclear distance $R$. Emission could originate from either one of the two atoms, leading to an interference term similar to the one in LIED, $\exp(i\mathbf{k}\cdot\mathbf{R})$. In this example, the nuclei represent the bath that may be coupled to the electron motion. Kunitski et al. realized such an experiment using neon dimers exposed to an intense circularly polarized laser field [54]. They found that the sought-after interference can be observed when one keeps track of the bath, i.e. measures the nuclei in coincidence with the electrons. Specifically, an interference structure only appears if one selects the parity of the ionic state on which $Ne_2^+$ dissociates [54]. Experiments on the dissociative multiphoton ionization of $H_2$ [141] can be interpreted in a similar manner. In the photoelectron spectrum, no clear ATI peaks are seen. However, when the energy transferred to the nuclei is taken into account, the coherent peak structure of ATI is restored. Again, the energy absorbed by the nuclei only seemingly leads to decoherence; coherence is maintained if the measurement includes all relevant observables of the bath.
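Schematically (a simplified illustration in generic notation, not the actual analysis of Ref. [54]), conditioning on the parity of the ionic state restores the two-center interference:

$$P_{\pm}(\mathbf{k}) \;\propto\; \big| e^{i\mathbf{k}\cdot\mathbf{R}_A} \pm e^{i\mathbf{k}\cdot\mathbf{R}_B} \big|^2 \;=\; 2\big[ 1 \pm \cos(\mathbf{k}\cdot\mathbf{R}) \big]\,, \qquad \mathbf{R} = \mathbf{R}_A - \mathbf{R}_B\,,$$

where $|\pm\rangle \propto |A^+B\rangle \pm |AB^+\rangle$ are the gerade/ungerade ionic states; summing over both parities instead cancels the $\cos(\mathbf{k}\cdot\mathbf{R})$ term and the fringes disappear.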
With increasing complexity, it becomes increasingly difficult to keep full track of the bath. For example, in polyatomic molecules a plethora of nuclear degrees of freedom exists, such that it is extremely challenging to measure all of them once nuclear motion sets in, which happens a few femtoseconds after the pump pulse. This challenge increases even more if one allows interaction with the environment. An illustrative example is the work of Hackermüller et al., who studied the decoherence of matter waves due to thermal emission of radiation [142]: as the temperature is increased, decoherence becomes stronger. Similarly, lasing or superfluorescence is impeded by spontaneous emission or nonradiative transitions [143], while decoherence of molecular rotations occurs through collisions with the environment [144,145]. These decoherence phenomena relate to ongoing efforts to test the limits of quantum mechanics by studying interference phenomena in mesoscopic systems [146].
Let us return to the prototypical examples of ATI and HHG of gaseous atoms. Despite the absence of internal degrees of freedom that could lead to decoherence, ATI of different atoms is incoherent, i.e. all interference phenomena in ATI take place on the single-atom level. This is entirely different from the case of HHG, where all atoms in the focal volume radiate harmonics coherently. As a consequence, phase matching is an important issue for HHG [147], but irrelevant for ATI. Moreover, this "macroscopic coherence" of the HHG process is the reason for the sharp HHG peaks, while the contrast between ATI peaks is much lower. Given their close relationship, this difference between HHG and ATI is remarkable. But what is the underlying reason? An important difference between the two processes is the fact that ATI leads to the production of an ion, while the atom has returned to its ground state after HHG. In other words, it is possible to tell which atom has undergone ATI, but not which one has undergone HHG. This information is equivalent to measuring through which slit the particle passes in Young's double-slit experiment. This picture agrees well with the neon dimer experiment discussed above: if we know the ionic state, the interference is restored.
If the created ion destroys the coherence of ATI from multiple sources, this raises the question of whether we can make ATI coherent by studying solid-state systems, where no ion is created. Suitable candidates would be extended systems in the condensed phase, perhaps systems of nanostructures [148].
Toward nonlinear ultrafast spectroscopy of quantum materials
Recently, HHG in solids has been attracting the attention of condensed-matter physics; see [149] for a recent review. The effect can be observed at quite moderate intensities below the ionization threshold. As such, it allows probing solid-state samples without inflicting optical damage. HHG may be useful for the study of several charge- and spin-transport properties, not only in semiconductors but also in materials which exhibit unique and novel topological effects, such as 2D and 3D topological insulators [150,151,152,153].
The character of physical laws on the atomic scale (angstrom and nanometer) is dramatically different in solids than in gases [153,154,155,156,157,158]. Specifically, in the SFA the energy of the ground state is assumed constant with respect to the momentum p, while the energy spectra of the continuum states, at large distance from the parent ion, can be considered parabolic in p [154]. In condensed-matter terms, the former corresponds to an infinite hole mass. Thus, in gases the ground state is localized, while the free continuum electron moves along a trajectory driven by the laser field, and recombination may take place at the initial position of the ground state. In solids, on the contrary, the situation is rather different due to the periodicity of the static potential. The possibility of electron-electron correlation effects [159,160], electron-phonon effects, spin-orbit couplings [161,162,163] and spin-order effects [164] offers a new and extremely attractive research field for ultrafast spectroscopy and laser control. For instance, the hole can be driven by the laser as well, since the energy dispersion of the valence states is not zero [154,155]. This causes novel ultrafast dynamics, which differ from those in the gas phase and are present not only in ordinary semiconductors but also in quantum materials [157,158,165,166].
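In the standard semiclassical picture (a textbook-level summary, not a formula taken from Refs. [154,155]), both carriers are driven through the band structure according to the acceleration theorem, and the inter-band emission energy is set by the vertical band gap at the recombination momentum:

$$\hbar\,\dot{\mathbf{k}}(t) = -e\,\mathbf{E}(t)\,, \qquad \hbar\,\omega_{\mathrm{emit}} = \varepsilon_c\big(\mathbf{k}(t_r)\big) - \varepsilon_v\big(\mathbf{k}(t_r)\big)\,,$$

so that, unlike in gases, a nontrivial valence-band dispersion $\varepsilon_v(\mathbf{k})$ lets the hole move as well.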
Specifically, nonlinear spectroscopy has made it possible to extract information on the band structure of semiconductors using a pump-probe scheme and by associating the HHG spectra with the inter-band mechanism [153]. Furthermore, ultrafast metrology at THz frequencies has allowed for the observation of electron-hole recombination and Bloch oscillations at special valley points in $MoS_2$ or $WS_2$ [167,165]. Additionally, attosecond transient-absorption experiments for core-valence electrons in solids are nowadays in the focus of attosecond condensed-matter physics [168,167]. The rapidly emerging field of nonlinear spectroscopy in quantum materials [168,169,170], i.e. topological materials, Weyl semimetals, etc. [161,166,169,170], is attracting the attention of several experimental and theoretical research groups around the world. These materials are extremely important, since their special features, i.e. topological conducting and insulating bands protected by fundamental symmetries, are robust against energy dissipation and material perturbations [157,158]. These unique features promise interesting applications of topological insulators (TIs) [171,172] in the optimization of electronic devices, more precisely of transistors and the logical operations defined in electronic devices [165,173,174].
Attosecond science will expand its frontiers to new challenges and research fields, and may open up new options to control transport and optical features in quantum materials [165]. For example, HHG or other nonlinear optical techniques may provide access to the electronic and dynamical properties of quantum materials. The topological invariant defines whether or not a material is topological. It is directly linked to the electron wave function arising from the crystalline structure, and it characterizes the transversal current with respect to a longitudinally applied voltage, as exhibited in quantum anomalous Hall effects. The open question which still remains a challenge is how the topological invariants might be associated with the HHG spectrum.
We give here an example from the area of topological ultrafast nonlinear spectroscopy, specifically for the interaction with an ultrashort and intense MIR laser of wavelength 3.2 µm and intensity $10^{11}$ W/cm$^2$. The Haldane model (HM) was the first to predict quantum Hall effects (QHEs) without Landau levels, or more precisely without magnetic fields [175]. This model belongs to the class of so-called Chern insulators, which simply means that the Chern number,
$$\nu = \frac{1}{2\pi} \int_{\mathrm{BZ}} d^2k \;\Omega(\mathbf{k})\,,$$
with $\Omega(\mathbf{k})$ the Berry curvature of the occupied band, can take the values $\nu = -1, 0, +1$. The HM thus has three topologically different phases: for $\nu = \pm 1$ the system shows QHEs (or transversal quantized conductivity), while for $\nu = 0$ there are no QHEs. In Ref. [166] a full revision of the HM is carried out in the context of how HHG can encode the topological invariant $\nu$ by means of circular dichroism, i.e. the asymmetry of the photon emission yield produced by right- and left-circularly polarized driving lasers; in parallel work, Silva et al. [169] showed that HHG can be used to track topological phases and transitions in the same model, using as an observable the helicity of the HHG produced by linearly polarized driving lasers.
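To make the topological invariant concrete, the following self-contained Python sketch evaluates $\nu$ for the Haldane model on a discretized Brillouin zone using the Fukui-Hatsugai-Suzuki lattice method. The helper names and all parameter values ($t_1$, $t_2$, $\varphi$, $M$) are illustrative choices of ours, not values taken from Refs. [166,169,175], and the sign of $\nu$ depends on orientation conventions.

```python
import numpy as np

# Bravais lattice vectors of the honeycomb lattice (NN distance = 1).
A = np.array([[np.sqrt(3.0), 0.0],
              [np.sqrt(3.0) / 2.0, 1.5]])
G = 2.0 * np.pi * np.linalg.inv(A).T          # reciprocal vectors: G[i].A[j] = 2 pi delta_ij
NNN = np.array([A[0], A[1] - A[0], -A[1]])    # next-nearest-neighbor vectors

def haldane_bloch(k, t1=1.0, t2=0.15, phi=np.pi / 2, M=0.2):
    """Bloch Hamiltonian of the Haldane model in a BZ-periodic gauge."""
    h12 = t1 * (1.0 + np.exp(-1j * k @ A[0]) + np.exp(-1j * k @ A[1]))
    d0 = 2.0 * t2 * np.cos(phi) * np.sum(np.cos(NNN @ k))
    dz = M + 2.0 * t2 * np.sin(phi) * np.sum(np.sin(NNN @ k))
    return np.array([[d0 + dz, h12], [np.conj(h12), d0 - dz]])

def chern_number(N=60, **params):
    """Fukui-Hatsugai-Suzuki discretization of nu = (1/2pi) int d2k Omega(k)."""
    u = np.empty((N, N, 2), dtype=complex)    # lower-band eigenvectors on the k-grid
    for i in range(N):
        for j in range(N):
            k = (i / N) * G[0] + (j / N) * G[1]
            _, vecs = np.linalg.eigh(haldane_bloch(k, **params))
            u[i, j] = vecs[:, 0]
    # U(1) link variables and plaquette Berry flux on the discrete BZ torus.
    U1 = np.einsum("ijc,ijc->ij", u.conj(), np.roll(u, -1, axis=0))
    U2 = np.einsum("ijc,ijc->ij", u.conj(), np.roll(u, -1, axis=1))
    F = np.angle(U1 * np.roll(U2, -1, axis=0) / (np.roll(U1, -1, axis=1) * U2))
    return int(np.rint(F.sum() / (2.0 * np.pi)))

print("nu =", chern_number())        # |nu| = 1: Chern-insulating phase
print("nu =", chern_number(M=1.0))   # |M| > 3*sqrt(3)*t2*|sin(phi)|: trivial, nu = 0
```

For $|M| < 3\sqrt{3}\, t_2 |\sin\varphi|$ the model is in a Chern-insulating phase ($|\nu| = 1$), otherwise $\nu = 0$, which the two printed cases illustrate.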
Recently, Baykusheva et al. presented numerical results for HHG from $Bi_2Se_3$, a prototypical 3D topological insulator (TI) [163]. In this work, an interesting anomalous enhancement of the nonlinear optical response of $Bi_2Se_3$ was observed as the driving-laser polarization is varied from linear to circular [166]. A theoretical method was developed which separates the contributions of the topological surface states from those of the bulk states, indicating that the mechanism responsible for this enhancement is the spin-orbit coupling of the surface states.
In another example, in Fig. 14 we present the final crystal-momentum distribution in the first Brillouin zone for two different topological phases, $\nu = 0$ and $\nu = +1$, respectively. The green and white points denote the K' and K points, respectively. The final interferogram patterns differ noticeably at the K points, with thinner fringes for the topological phase than for the trivial one.
This shows that interesting topological features can be encoded in the final momentum distribution of the conduction bands. Note, however, that the questions of how to measure this distribution inside the material and how to extract the topological invariant $\nu$ are still open for ultrafast science and also for condensed-matter physics. This holds especially, and in a broad sense, for quantum materials [165] such as Dirac and Weyl semimetals [176], in which two different and opposite Chern numbers lead to the generation of Fermi arcs, pseudo-magnetic monopoles, or Weyl fermions with chiral features [176]. How nonlinear ultrafast spectroscopy can extract this information remains a wide-open research field for both theoretical and experimental science.
Authors' contributions
All authors contributed to the preparation of the manuscript. All authors have read and approved the final manuscript.
Algebraic quantum field theory on spacetimes with timelike boundary
We analyze quantum field theories on spacetimes $M$ with timelike boundary from a model-independent perspective. We construct an adjunction which describes a universal extension to the whole spacetime $M$ of theories defined only on the interior $\mathrm{int}M$. The unit of this adjunction is a natural isomorphism, which implies that our universal extension satisfies Kay's F-locality property. Our main result is the following characterization theorem: Every quantum field theory on $M$ that is additive from the interior (i.e.\ generated by observables localized in the interior) admits a presentation by a quantum field theory on the interior $\mathrm{int}M$ and an ideal of its universal extension that is trivial on the interior. We shall illustrate our constructions by applying them to the free Klein-Gordon field.
Introduction and summary
Algebraic quantum field theory is a powerful and well-developed framework to address model-independent aspects of quantum field theories on Minkowski spacetime [HK64] and more generally on globally hyperbolic spacetimes [BFV03]. In addition to establishing the axiomatic foundations for quantum field theory, the algebraic approach has provided a variety of mathematically rigorous constructions of non-interacting models, see e.g. the reviews [BD15,BDH13,BGP07], and more interestingly also perturbatively interacting quantum field theories, see e.g. the recent monograph [Rej16]. It is worth emphasizing that many of the techniques involved in such constructions, e.g. existence and uniqueness of Green's operators and the singular structure of propagators, crucially rely on the hypothesis that the spacetime is globally hyperbolic and has empty boundary.
Even though globally hyperbolic spacetimes have plenty of applications to physics, there exist also important and interesting situations which require non-globally hyperbolic spacetimes, possibly with a non-trivial boundary. On the one hand, recent developments in high energy physics and string theory are strongly focused on anti-de Sitter spacetime, which is not globally hyperbolic and has a (conformal) timelike boundary. On the other hand, experimental setups for studying the Casimir effect confine quantum field theories between several metal plates (or other shapes), which may be modeled theoretically by introducing timelike boundaries to the system. This immediately prompts the question of whether the rigorous framework of algebraic quantum field theory admits a generalization to cover such scenarios.
Most existing works on algebraic quantum field theory on spacetimes with a timelike boundary focus on the construction of concrete examples, such as the free Klein-Gordon field on simple classes of spacetimes. The basic strategy employed in such constructions is to analyze the initial value problem on a given spacetime with timelike boundary, which has to be supplemented by suitable boundary conditions. Different choices of boundary conditions lead to different Green's operators for the equation of motion, which is in sharp contrast to the well-known existence and uniqueness results on globally hyperbolic spacetimes with empty boundary. Recent works addressing this problem are [Zah15] and [IW03,IW04], the latter extending the analysis of [Wal80]. For specific choices of boundary conditions, there exist successful constructions of algebraic quantum field theories on spacetimes with timelike boundary, see e.g. [DNP16,DF16,DF18,BDFK17]. The main message of these works is that the algebraic approach is versatile enough to account also for these models, although some key structures, such as for example the notion of Hadamard states [DF18,Wro17], should be modified accordingly.
Unfortunately, model-independent results on algebraic quantum field theory on spacetimes with timelike boundary are more scarce. There are, however, some notable and very interesting works in this direction: On the one hand, Rehren's proposal for algebraic holography [Reh00] initiated the rigorous study of quantum field theories on the anti-de Sitter spacetime. This has been further elaborated in [DR03] and extended to asymptotically AdS spacetimes in [Rib07]. On the other hand, inspired by Fredenhagen's universal algebra [Fre90,Fre93,FRS92], a very interesting construction and analysis of global algebras of observables on spacetimes with timelike boundaries has been performed in [Som06]. The most notable outcome is the existence of a relationship between maximal ideals of this algebra and boundary conditions, a result which has been an inspiration for this work.
In the present paper we shall analyze quantum field theories on spacetimes with timelike boundary from a model-independent perspective. We are mainly interested in understanding and proving structural results for whole categories of quantum field theories, in contrast to focusing on particular theories. Such questions can be naturally addressed by using techniques from the recently developed operadic approach to algebraic quantum field theory [BSW17]. Let us describe rather informally the basic idea of our construction and its implications: Given a spacetime M with timelike boundary, an algebraic quantum field theory on M is a functor B : R M → Alg assigning algebras of observables to suitable regions U ⊆ M (possibly intersecting the boundary), which satisfies the causality and time-slice axioms. We denote by QFT(M ) the category of algebraic quantum field theories on M . Denoting the full subcategory of regions in the interior of M by R int M ⊆ R M , we may restrict any theory B ∈ QFT(M ) to a theory res B ∈ QFT(int M ) defined only on the interior regions. Notice that it is in practice much easier to analyze and construct theories on int M as opposed to theories on the whole spacetime M . This is because the former are postulated to be insensitive to the boundary by Kay's F-locality principle [Kay92]. As a first result we shall construct a left adjoint of the restriction functor res : QFT(M ) → QFT(int M ), which we call the universal extension functor ext : QFT(int M ) → QFT(M ). This means that given any theory A ∈ QFT(int M ) that is defined only on the interior regions in M , we obtain a universal extension ext A ∈ QFT(M ) to all regions in M , including those that intersect the boundary. It is worth emphasizing that the adjective universal above refers to the categorical concept of universal properties. Below we explain in which sense ext is also "universal" in a more physical meaning of the word.
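For orientation, the defining property of this ext ⊣ res adjunction can be stated as a natural bijection of morphism sets (this is just the standard formulation of an adjunction, written in the notation above):

$$\mathrm{Hom}_{QFT(M)}\big(\mathrm{ext}\,A,\; B\big) \;\cong\; \mathrm{Hom}_{QFT(\mathrm{int}M)}\big(A,\; \mathrm{res}\,B\big)\,,$$

naturally in $A \in QFT(\mathrm{int}M)$ and $B \in QFT(M)$. In particular, every morphism from A into the restriction of a theory B on M extends uniquely to a morphism ext A → B, which makes precise the universal property referred to above.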
It is crucial to emphasize that our universal extension ext A ∈ QFT(M ) is always a bona fide algebraic quantum field theory in the sense that it satisfies the causality and time-slice axioms. This is granted by the operadic approach to algebraic quantum field theory of [BSW17]. In particular, the ext ⊣ res adjunction investigated in the present paper is one concrete instance of a whole family of adjunctions between categories of algebraic quantum field theories that naturally arise within the theory of colored operads and algebras over them.
A far-reaching implication of the above-mentioned ext ⊣ res adjunction is a characterization theorem that we shall establish for quantum field theories on spacetimes with timelike boundary. Given any theory B ∈ QFT(M) on a spacetime M with timelike boundary, we can restrict and universally extend it to obtain another such theory ext res B ∈ QFT(M). The adjunction also provides us with a natural comparison map between these theories, namely the counit $\epsilon_B : \mathrm{ext\,res}\,B \to B$ of the adjunction. Our result in Theorem 5.6 and Corollary 5.7 is that $\epsilon_B$ induces an isomorphism $\mathrm{ext\,res}\,B/\ker\epsilon_B \cong B$ of quantum field theories if and only if B is additive from the interior, as formalized in Definition 5.5. The latter condition axiomatises the heuristic idea that the theory B has no degrees of freedom that are localized on the boundary of M, i.e. all its observables may be generated by observables supported in the interior of M. Notice that the results in Theorem 5.6 and Corollary 5.7 also give the adjective universal a physical meaning, in the sense that the extensions are sufficiently large that any additive theory can be recovered as a quotient. We strengthen this result in Theorem 5.10 by constructing an equivalence between the category of additive quantum field theories on M and a category of pairs (A, I) consisting of a theory A ∈ QFT(int M) on the interior and an ideal I ⊆ ext A of the universal extension that is trivial on the interior. More concretely, this means that every additive theory B ∈ QFT(M) may be naturally decomposed into two distinct pieces of data: (1) a theory A ∈ QFT(int M) on the interior, which is insensitive to the boundary as postulated by F-locality, and (2) an ideal I ⊆ ext A of its universal extension that is trivial on the interior, i.e. that is only sensitive to the boundary. Specific examples of such ideals arise from imposing boundary conditions. We shall illustrate this fact by using the free Klein-Gordon theory as an example. Thus, our results also provide a bridge between the ideas of [Som06] and the concrete constructions in [DNP16,DF16,DF18,BDFK17].
The remainder of this paper is structured as follows: In Section 2 we recall some basic definitions and results about the causal structure of spacetimes with timelike boundaries, see also [CGS09,Sol06]. In Section 3 we provide a precise definition of the categories QFT(M ) and QFT(int M ) by using the ideas of [BSW17]. Our universal boundary extension is developed in Section 4, where we also provide an explicit model in terms of left Kan extension. Our main results on the characterization of additive quantum field theories on M are proven in Section 5. Section 6 illustrates our construction by focusing on the simple example of the free Klein-Gordon theory, where more explicit formulas can be developed. It is in this context that we provide examples of ideals implementing boundary conditions and relate to analytic results, e.g. [DNP16]. We included Appendix A to state some basic definitions and results of category theory which will be used in our work.
Spacetimes with timelike boundary
We collect some basic facts about spacetimes with timelike boundary, following [Sol06, Section 3.1] and [CGS09, Section 2.2]. For a general introduction to Lorentzian geometry we refer to [BEE96,ONe83], see also [BGP07, Sections 1.3 and A.5] for a concise presentation.
We use the term manifold with boundary to refer to a Hausdorff, second countable, m-dimensional smooth manifold M with boundary, see e.g. [Lee13]. This definition subsumes ordinary manifolds as manifolds with empty boundary ∂M = ∅. We denote by int M ⊆ M the submanifold without the boundary. Every open subset U ⊆ M carries the structure of a manifold with (possibly empty) boundary and one has int U = U ∩ int M.
Definition 2.1. A Lorentzian manifold with boundary is a manifold with boundary that is equipped with a Lorentzian metric.
Definition 2.2. Let M be a time-oriented Lorentzian manifold with boundary. The Cauchy development D(S) ⊆ M of a subset S ⊆ M is the set of points p ∈ M such that every inextensible (piecewise smooth) future directed causal curve stemming from p meets S.
The following properties follow easily from the definition of Cauchy development.
Proposition 2.3. Let S, S′ ⊆ M be subsets of a time-oriented Lorentzian manifold M with boundary. Then the following holds true: among other properties, S ⊆ D(S) and D(D(S)) = D(S), D is monotone (S ⊆ S′ implies D(S) ⊆ D(S′)), and D(S) ⊆ int M whenever S ⊆ int M.

We denote by $J_M^\pm(S) \subseteq M$ the causal future/past of a subset S ⊆ M, i.e. the set of points that can be reached by a future/past directed causal curve stemming from S. Furthermore, we denote by $I_M^\pm(S) \subseteq M$ the chronological future/past of a subset S ⊆ M, i.e. the set of points that can be reached by a future/past directed timelike curve stemming from S. The following two definitions play an essential role in our work.
Definition 2.6. A spacetime with timelike boundary is an oriented and time-oriented Lorentzian manifold M with boundary, such that the pullback of the Lorentzian metric along the boundary inclusion ∂M ↪ M defines a Lorentzian metric on the boundary ∂M.
Definition 2.7. Let M be a spacetime with timelike boundary.
(i) $R_M$ denotes the category whose objects are causally convex open subsets U ⊆ M and whose morphisms i : U → U′ are inclusions U ⊆ U′ ⊆ M. We call it the category of regions in M.

(ii) $C_M$ denotes the set of morphisms i : U → U′ in $R_M$ such that U′ ⊆ D(U); we call these Cauchy morphisms.

(iii) $R_{int M} \subseteq R_M$ is the full subcategory whose objects are contained in the interior int M. We denote by $C_{int M} \subseteq C_M$ the Cauchy morphisms between objects of $R_{int M}$.

(c): We show that if D(S) contains a boundary point, so does S. Suppose p ∈ D(S) belongs to the boundary of M. By Definition 2.6, the boundary ∂M of M can be regarded as a time-oriented Lorentzian manifold with empty boundary, hence we can consider a future directed inextensible causal curve γ in ∂M stemming from p. Since ∂M is a closed subset of M, γ must be inextensible also as a causal curve in M, hence γ meets S because it stems from p ∈ D(S). Since γ lies in ∂M by construction, we conclude that S contains a boundary point of M.

Relative to the fixed convex cover, the construction of quasi-limits allows us to obtain from {α_n} an inextensible causal curve λ through p ∈ D(U). Hence, λ meets U, say in q. By the construction of a quasi-limit, q lies on a causal geodesic segment between p_k and p_{k+1}, two successive limit points for {α_n} contained in some element of the fixed convex cover. It follows that either p_k or p_{k+1} belongs to $J_I^+(U) \cap J_I^-(U)$, which is contained in U by causal convexity. Hence, we found a subsequence {α_{n_j}} of {α_n} and a sequence of parameters {s_j} such that {α_{n_j}(s_j)} converges to a point of U (either p_k or p_{k+1}). By construction the sequence {α_{n_j}(s_j)} is contained in I \ U, whereas its limit lies in U. This contradicts the hypothesis that U is open in I.
The causal structure of a spacetime M with timelike boundary can be affected by several pathologies, such as the presence of closed future directed causal curves. It is crucial to avoid these issues in order to obtain concrete examples of our constructions in Section 6.

Definition 3.1. An algebraic quantum field theory on M is a functor $A : R_M \to Alg$ with values in the category Alg of associative and unital *-algebras over $\mathbb{C}$, which satisfies the following properties:

(i) Causality axiom: For all causally disjoint inclusions $i_1 : U_1 \to U \leftarrow U_2 : i_2$, the induced commutators vanish, i.e. $[A(i_1)(a_1), A(i_2)(a_2)] = 0$ for all $a_1 \in A(U_1)$ and $a_2 \in A(U_2)$.
(ii) Time-slice axiom: For all Cauchy morphisms $i : U \to U'$, the map $A(i) : A(U) \to A(U')$ is an Alg-isomorphism.

We denote by $qft(M) \subseteq Alg^{R_M}$ the full subcategory of the category of functors from $R_M$ to Alg whose objects are all algebraic quantum field theories on M, i.e. functors fulfilling the causality and time-slice axioms. (Morphisms in this category are all natural transformations.) We shall now show that there exists an alternative, but equivalent, description of the category qft(M) which will be more convenient for the technical constructions in our paper. Concretely, we model the localized category $R_M[C_M^{-1}]$ by the full subcategory of $R_M$ of all regions that are stable under Cauchy development, i.e. D(V) = V, and define the functor $D : R_M \to R_M[C_M^{-1}]$ by setting $U \mapsto D(U)$ on objects and, on morphisms, sending the inclusion U ⊆ U′ to the inclusion D(U) ⊆ D(U′). Furthermore, let us write $I : R_M[C_M^{-1}] \to R_M$ for the full subcategory embedding.
Lemma 3.2. D and I form an adjunction (cf. Definition A.1)
whose counit is a natural isomorphism (in fact, the identity), hence $R_M[C_M^{-1}]$ is a full reflective subcategory of $R_M$. Furthermore, the components of the unit are Cauchy morphisms.

Proof. The component $\eta_U : U \to ID(U)$ of the unit is given by the inclusion U ⊆ D(U) of U into its Cauchy development, which is a Cauchy morphism, see Proposition 2.3 (b) and Definition 2.7. The component $\epsilon_V : DI(V) \to V$ of the counit is given by the identity of the object D(V) = V. The triangle identities hold trivially. Injectivity of the map (3.9) follows from a commutative diagram in which the vertical arrows are natural isomorphisms because the counit ǫ is an isomorphism.

It remains to prove that (3.9) is also surjective. Let χ : GD → HD be any natural transformation. Using Lemma 3.2, we obtain a commutative diagram in which the vertical arrows are natural isomorphisms, because the components of the unit η are Cauchy morphisms and D assigns isomorphisms to them. We then define a natural transformation ξ : G → H by a commutative diagram, using that ǫ is a natural isomorphism (cf. Lemma 3.2). Combining the last two diagrams, one easily computes that ξD = χ, using also the triangle identities of the adjunction D ⊣ I. Hence, the map (3.9) is surjective.
We note that there exist two (a priori different) options to define an orthogonality relation on the localized category $R_M[C_M^{-1}]$, induced by I and by D, respectively, and one shows that they agree.

Proof. For I this holds trivially, while for D see Proposition 2.5 (a).
With these preparations we may now define our alternative description of the category of algebraic quantum field theories.
Definition 3.5. An algebraic quantum field theory on the localized category is a functor $A : R_M[C_M^{-1}] \to Alg$ such that, for all causally disjoint inclusions, the induced commutator is zero. We denote the resulting category by QFT(M); the category QFT(int M) is defined analogously in terms of $R_{int M}[C_{int M}^{-1}]$.
Theorem 3.6. By pullback, the adjunction D ⊣ I of Lemma 3.2 induces an adjoint equivalence (cf. Definition A.2) between the corresponding categories of quantum field theories (3.14).

Next, we have to prove that this adjunction restricts to the claimed source and target categories in (3.14). Using Lemma 3.2, we obtain that the counit ǫ of the restricted adjunction (3.14) is an isomorphism. Furthermore, all components of η are Cauchy morphisms, hence $\eta_A = A\eta$ is an isomorphism for all A ∈ qft(M), i.e. the unit η is an isomorphism. This completes the proof that (3.14) is an adjoint equivalence.
Universal boundary extension
The goal of this section is to develop a universal construction to extend quantum field theories from the interior of a spacetime M with timelike boundary to the whole spacetime. (Again, we do not have to assume that M is globally hyperbolic in the sense of Definition 2.9.) Loosely speaking, our extended quantum field theory will have the following pleasant properties: (1) It describes precisely those observables that are generated from the original theory on the interior, (2) it does not require a choice of boundary conditions, (3) specific choices of boundary conditions correspond to ideals of our extended quantum field theory. We also refer to Section 5 for more details on the properties (1) and (3).
The starting point for this construction is the full subcategory inclusion $R_{int M} \subseteq R_M$ defined by selecting only the regions of $R_M$ that lie in the interior of M (cf. Definition 2.7). We denote the corresponding embedding functor by $j : R_{int M} \to R_M$ and notice that j preserves (and also detects) causally disjoint inclusions, i.e. j is a full orthogonal subcategory embedding in the terminology of [BSW17]. Making use of Proposition 3.3, Lemma 3.2 and Remark 3.8, we define a functor $J : R_{int M}[C_{int M}^{-1}] \to R_M[C_M^{-1}]$. Notice that J is simply an embedding functor, which acts on objects and morphisms by regarding interior regions and their inclusions as regions and inclusions in M. From this explicit description it is clear that J preserves (and also detects) causally disjoint inclusions, i.e. it is a full orthogonal subcategory embedding. The constructions in [BSW17, Section 5.3] (see also [BSW18] for details on how to treat *-algebras) then imply that J induces an adjunction
$$\mathrm{ext} \,:\, QFT(\mathrm{int}\,M) \;\rightleftarrows\; QFT(M) \,:\, \mathrm{res} \qquad (4.3)$$
whose left adjoint we call the universal extension functor.

An important structural result, whose physical relevance is explained in Remark 4.2 below, is the following proposition. Its proof considers the Cauchy development $D(V_1 \sqcup V_2)$, which also provides the inclusion $V_k \subseteq V_1 \sqcup V_2 \subseteq D(V_1 \sqcup V_2)$ inducing $j_k$, for k = 1, 2. Consider now the chain of inclusions $V_k \subseteq V_1 \sqcup V_2 \subseteq V$ corresponding to $i_k$, for k = 1, 2. From the stability of $V_1$, $V_2$ and V under Cauchy development, we obtain also the chain of inclusions $V_k \subseteq D(V_1 \sqcup V_2) \subseteq V$, for k = 1, 2, which exhibits the desired factorization.

We now describe the universal extension in detail. The direct sum (of vector spaces) in (4.9) runs over all tuples $\underline{i} : \underline{V} \to V$ of morphisms whose sources $V_1, \ldots, V_n \in R_{int M}[C_{int M}^{-1}]$ are interior regions. (Notice that the regions $V_k$ are not assumed to be causally disjoint and that the empty tuple, i.e. n = 0, is also allowed.) The vector space $A(\underline{V})$ is defined as the tensor product $A(V_1) \otimes \cdots \otimes A(V_n)$ of the vector spaces underlying the algebras assigned by A; hence, elements of (4.11) are given by pairs consisting of a tuple of morphisms $\underline{i} : \underline{V} \to V$ (with all $V_k$ in the interior) and an element $a \in A(\underline{V})$ of the corresponding tensor product vector space (4.10). The product on (4.11) is given on homogeneous elements by concatenation,
$$(\underline{i}, a)\,(\underline{i}{}', a') = \big((\underline{i}, \underline{i}{}'),\, a \otimes a'\big) \qquad (4.12)$$
where $(\underline{i}, \underline{i}{}') = (i_1, \ldots, i_n, i'_1, \ldots, i'_m)$ is the concatenation of tuples. The unit element in (4.11) is $\mathbb{1} := (\emptyset, 1)$, where ∅ is the empty tuple and 1 ∈ C, and the *-involution is defined by
$$\big((i_1, \ldots, i_n),\, a_1 \otimes \cdots \otimes a_n\big)^{*} := \big((i_n, \ldots, i_1),\, a_n^{*} \otimes \cdots \otimes a_1^{*}\big) \qquad (4.13)$$
and C-antilinear extension. Finally, the quotient in (4.9) is by the two-sided *-ideal of the algebra (4.11) that is generated by
$$\big(\underline{i}\,(\underline{i}_1, \ldots, \underline{i}_n),\, a_1 \otimes \cdots \otimes a_n\big) \,-\, \big(\underline{i},\, A(\underline{i}_1)\,a_1 \otimes \cdots \otimes A(\underline{i}_n)\,a_n\big) \qquad (4.14)$$
for all tuples $\underline{i} : \underline{V} \to V$, all tuples of $R_{int M}[C_{int M}^{-1}]$-morphisms $\underline{i}_k$ (possibly of length zero), for k = 1, …, n, and all $a_k \in A(V_k)$, for k = 1, …, n. The tuple in the first term of (4.14) is defined by the composition
$$\underline{i}\,(\underline{i}_1, \ldots, \underline{i}_n) := \big(i_1 i_{11}, \ldots, i_1 i_{1|\underline{V}_1|}, \ldots, i_n i_{n1}, \ldots, i_n i_{n|\underline{V}_n|}\big) \qquad (4.15a)$$
and the expressions $A(\underline{i}_k)\,a_k$ in the second term are determined by applying the functor A and multiplying, $a_1 \otimes \cdots \otimes a_n \longmapsto A(i_1)\,a_1 \cdots A(i_n)\,a_n$, where we use square brackets to indicate equivalence classes in (4.9).
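In compact form, the construction just described (Eqs. (4.9)-(4.14)) may be summarized as follows; this is a paraphrase in the same notation:

$$\mathrm{ext}\,A(V) \,=\, \Big( \bigoplus_{\underline{i}\,:\,\underline{V} \to V} A(V_1) \otimes \cdots \otimes A(V_n) \Big) \Big/ \mathcal{I}_{\mathrm{rel}}\,, \qquad (\underline{i}, a)\cdot(\underline{i}{}', a') = \big((\underline{i}, \underline{i}{}'),\, a \otimes a'\big)\,,$$

where the direct sum runs over all tuples of interior regions mapping into V and $\mathcal{I}_{\mathrm{rel}}$ is the *-ideal generated by (4.14), which identifies a formal product of pushforwards with the pushforward of the actual product whenever the relevant sub-tuple factorizes through a common interior region.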
Remark 4.4. The construction of the algebra ext A(V) above admits an intuitive graphical interpretation: we visualize the (homogeneous) elements $(\underline{i}, a)$ in (4.11) by decorated trees (4.17), with root V and one input for each $a_k \in A(V_k)$, the element of the algebra associated to the interior region $V_k \subseteq V$, for k = 1, …, n. We interpret such a decorated tree as a formal product of the formal pushforwards along $\underline{i} : \underline{V} \to V$ of the family of observables $a_k \in A(V_k)$. The product (4.12) is given by concatenation of the inputs of the individual decorated trees, i.e.
the resulting decorated tree has n + m inputs. The *-involution (4.13) may be visualized by reversing the input profile and applying * to each element $a_k \in A(V_k)$. Finally, the *-ideal in (4.14) implements the following relations: assume that $(\underline{i}, a)$ is such that the sub-family of embeddings $(i_k, i_{k+1}, \ldots, i_l) : (V_k, V_{k+1}, \ldots, V_l) \to V$ factorizes through some common interior region, say $V' \subseteq V$. Using the original functor A ∈ QFT(int M), we may form the product $A(i_k)(a_k) \cdots A(i_l)(a_l)$ in the algebra $A(V')$, which we denote for simplicity by $a_k \cdots a_l \in A(V')$. We then have the relation (4.19) identifying the corresponding decorated trees, which we interpret as follows: whenever $(i_k, i_{k+1}, \ldots, i_l) : (V_k, V_{k+1}, \ldots, V_l) \to V$ is a sub-family of embeddings that factorizes through a common interior region $V' \subseteq V$, the formal product of the formal pushforwards of observables is identified with the formal pushforward of the actual product of observables on $V'$. △
Characterization of boundary quantum field theories
In the previous section we established a universal construction that allows us to extend quantum field theories A ∈ QFT(int M), which are defined only on the interior int M of a spacetime M with timelike boundary, to the whole spacetime. The extension ext A ∈ QFT(M) is characterized abstractly by the ext ⊣ res adjunction in (4.3). We shall now reverse the question and ask which quantum field theories B ∈ QFT(M) on M admit a description in terms of (quotients of) our universal extensions.
Given any quantum field theory B ∈ QFT(M) on the whole spacetime M, we can use the right adjoint to restrict it to the interior and the left adjoint to extend it back, which yields the counit morphism displayed in (5.1a); a concrete description is obtained from our model for the extension functor given in (4.9) (and the formulas following this equation). In order to establish positive comparison results, we have to introduce the concept of ideals of quantum field theories.

Proof. The fact that ker κ defines a functor follows from the naturality of κ. Property (ii) in Definition 5.1 holds true by construction. Property (i) is a consequence of the fact that kernels of unital *-algebra morphisms $\kappa_V : B(V) \to B(V')$ are two-sided *-ideals.
Remark 5.4. Using the concept of ideals, we may canonically factorize (5.1) according to a commutative diagram in which both the projection $\pi_B$ and the inclusion $\lambda_B$ are QFT(M)-morphisms. △

As a last ingredient for our comparison result, we have to introduce a suitable notion of additivity for quantum field theories on spacetimes with timelike boundary. We refer to [Few13, Definition 2.3] for a notion of additivity on globally hyperbolic spacetimes. We can now prove our first characterization theorem for boundary quantum field theories; for each region V, the following two statements are equivalent:

(1) The V-component of the canonical inclusion in (5.2) is an Alg-isomorphism.
(2) B is additive at the object V.

Proof. Let $i_{int} : V_{int} \to V$ be a morphism whose source $V_{int}$ is in the interior int M of M. Using our model for the extension functor given in (4.9) (and the formulas following this equation), we obtain an Alg-morphism (5.4). Composing this morphism with the V-component of $\epsilon_B$ given in (5.1), we obtain a commutative diagram involving ext res B(V). Next we observe that the images of the Alg-morphisms (5.4), for all $i_{int} : V_{int} \to V$ with $V_{int}$ in the interior, generate ext res B(V). Combining this property with (5.5), we conclude that $(\epsilon_B)_V$ is a surjective map if and only if B is additive at V. Hence, $(\lambda_B)_V$ given by (5.2), which is injective by construction, is an Alg-isomorphism if and only if B is additive at V.

We shall now refine this characterization theorem by showing that $QFT^{add}(M)$ is equivalent, as a category, to a category describing quantum field theories on the interior of M together with suitable ideals of their universal extensions. The precise definitions are as follows. Using the explicit expression for $\epsilon_{ext A/I}$ given in (5.1) and the explicit expression for $\eta_A$, one verifies this directly. We shall illustrate this assertion in Section 6 below using the explicit example given by the free Klein-Gordon field.
Let us also note that there is a reason why our universal extension captures only the class of additive quantum field theories on M. Recall that ext A ∈ QFT(M) takes as input a quantum field theory A ∈ QFT(int M) on the interior int M of M. As a consequence, the extension ext A can only have knowledge of the 'degrees of freedom' that are generated in some way from the interior regions. Additive theories in the sense of Definition 5.5 are precisely the theories whose 'degrees of freedom' are generated by those localized in the interior regions. △
Example: Free Klein-Gordon theory
In order to illustrate our abstract constructions developed in the previous sections and to make them more explicit, we shall consider the simple example given by the free Klein-Gordon field. From now on, M will be a globally hyperbolic spacetime with timelike boundary, see Definition 2.9. This assumption implies that all interior regions in $R_{int M}$ are globally hyperbolic spacetimes with empty boundary, see Proposition 2.11. This allows us to use the standard techniques of [BGP07, Section 3] on such regions.
Let M be a globally hyperbolic spacetime with timelike boundary, see Definition 2.9. The free Klein-Gordon theory on $R_{int M}[C_{int M}^{-1}]$ is given by the following standard construction, see e.g. [BD15,BDH13] for expository reviews. On the interior int M, we consider the Klein-Gordon operator $P = \Box - m^2$ (in our sign conventions), where $\Box$ is the d'Alembert operator and m ≥ 0 is a mass parameter. When restricting P to regions $V \in R_{int M}[C_{int M}^{-1}]$, we shall write $P_V$. It follows from [BGP07] that there exists a unique retarded/advanced Green's operator $G_V^\pm$ for $P_V$ on every such region. To any object $V \in R_{int M}[C_{int M}^{-1}]$ the theory assigns the associative and unital *-algebra K(V) that is freely generated by $\Phi_V(f)$, for all $f \in C_c^\infty(V)$, modulo the two-sided *-ideal generated by the following relations: linearity of $f \mapsto \Phi_V(f)$, the equation of motion $\Phi_V(P_V f) = 0$, hermiticity $\Phi_V(f)^* = \Phi_V(f)$, and the canonical commutation relations (CCR) $[\Phi_V(f), \Phi_V(g)] = i\,\tau_V(f,g)\,\mathbb{1}$, where $\tau_V(f,g) := \int_V f\, G_V(g)\, \mathrm{vol}_V$ is defined via the causal propagator $G_V := G_V^+ - G_V^-$. To a morphism $i : V \to V'$, the functor $K : R_{int M}[C_{int M}^{-1}] \to Alg$ assigns the algebra map that is specified on the generators by pushforward along i (which we shall suppress), $\Phi_V(f) \mapsto \Phi_{V'}(f)$ (6.5). The naturality of τ (i.e. naturality of the causal propagator, cf. e.g. [BGP07, Section 4.3]) entails that the assignment K defines a quantum field theory in the sense of Definition 3.5.
Universal extension:
Using the techniques developed in Section 4, we may now extend the Klein-Gordon theory K ∈ QFT(int M) from the interior int M to the whole spacetime M. In particular, using (4.9) (and the formulas following this equation), one could directly compute the universal extension ext K ∈ QFT(M). The resulting expressions, however, can be considerably simplified. We therefore prefer to provide a more convenient model for the universal extension ext K ∈ QFT(M) by adopting the following strategy: we first make an 'educated guess' for a theory $K^{ext}$ ∈ QFT(M) which we expect to be the universal extension of K ∈ QFT(int M). (This was inspired by partially simplifying the direct computation of the universal extension.) After this we shall prove that $K^{ext}$ ∈ QFT(M) satisfies the universal property that characterizes ext K ∈ QFT(M). Hence, there exists a (unique) isomorphism ext K ≅ $K^{ext}$ in QFT(M), which means that our $K^{ext}$ ∈ QFT(M) is a model for the universal extension ext K.
Let us define the functor $K^{ext} : R_M[C_M^{-1}] \to Alg$ by the following assignment: to any region $V \in R_M[C_M^{-1}]$, which may intersect the boundary, we assign the associative and unital *-algebra $K^{ext}(V)$ that is freely generated by $\Phi_V(f)$, for all $f \in C_c^\infty(\mathrm{int}\,V)$ in the interior int V of V, modulo the two-sided ideal generated by the following relations: linearity, the equation of motion $\Phi_V(P_V f) = 0$, hermiticity $\Phi_V(f)^* = \Phi_V(f)$, and the partially-defined CCR $[\Phi_V(f), \Phi_V(g)] = i\,\tau_{V_{int}}(f,g)\,\mathbb{1}$, imposed whenever there exists an interior region $V_{int} \subseteq V$ containing the supports of both f and g.

Remark 6.1. We note that our partially-defined CCR are consistent in the following sense: suppose that supp(f) ∪ supp(g) is contained in two interior regions $V_{int}$ and $V'_{int}$. Using the partially-defined CCR for both $V_{int}$ and $V'_{int}$, we obtain the same commutator. This holds true due to the following argument: it follows by construction that supp(f) ∪ supp(g) ⊆ $V_{int} \cap V'_{int}$ and hence, due to naturality of the τ's, we obtain $\tau_{V_{int}}(f,g) = \tau_{V'_{int}}(f,g)$. Hence, for any fixed pair $f, g \in C_c^\infty(\mathrm{int}\,V)$, the partially-defined CCR are independent of the choice of $V_{int}$ (if one exists). △

To a morphism $i : V \to V'$, the functor $K^{ext} : R_M[C_M^{-1}] \to Alg$ assigns the algebra map that is specified on the generators by pushforward along i (which we shall suppress), $\Phi_V(f) \mapsto \Phi_{V'}(f)$ (6.7). Compatibility of the map (6.7) with the relations in $K^{ext}$ is a straightforward check.
Recalling the embedding functor $J : R_{int M}[C_{int M}^{-1}] \to R_M[C_M^{-1}]$ given in (4.2), we observe that the diagram of functors commutes via the natural transformation $\gamma : K \to K^{ext} J$ with components specified on the generators by the identity maps. Notice that γ is a natural isomorphism because $\mathrm{int}\,V_{int} = V_{int}$ and the partially-defined CCR on any interior region $V_{int}$ coincide with the CCR.
Regarding the partially-defined CCR, let $f, g \in C_c^\infty(\mathrm{int}\,V)$ be such that their supports are contained in a common interior region. We may choose the cover given by the single region $i : V_{int} \to V$ together with its partition of unity, and we obtain for the commutator the expression (6.16), which implies that the partially-defined CCR are preserved.
Naturality of the components (6.13) is easily verified. Uniqueness of the resulting natural transformation $\zeta : K^{ext} \to B$ is a consequence of the uniqueness of ζJ and of the fact that the algebras $K^{ext}(V)$ are generated by elements coming from interior regions. This completes the proof.
Given any region V ∈ R_M[C^{-1}_M] in M, which may intersect the boundary, we use the canonical inclusion i : V → M to define local retarded/advanced Green's operators G^±_V := i^* G^± i_*, (6.18) where i_* denotes the pushforward of compactly supported functions (i.e. extension by zero) and i^* the pullback of functions (i.e. restriction). Since V ⊆ M is causally convex, it follows that J^±_M(p) ∩ V = J^±_V(p) for all p ∈ V. Therefore G^±_V satisfies the axioms of a retarded/advanced Green's operator for P_V on V. (Here we regard V as a globally hyperbolic spacetime with timelike boundary, see Proposition 2.11. J^±_V(p) denotes the causal future/past of p in the spacetime V.) In particular, for all interior regions V_int ∈ R_{int M}[C^{-1}_{int M}] in M, by combining Proposition 2.11 and [BGP07, Corollary 3.4.3] we obtain that G^±_{V_int} as defined in (6.18) is the unique retarded/advanced Green's operator for the restricted Klein-Gordon operator P_{V_int} on the globally hyperbolic spacetime V_int with empty boundary.
Consider any adjoint-related pair (G^+, G^−) of Green's operators for P on M. For all V ∈ R_M[C^{-1}_M], we set I_{G^±}(V) ⊆ K^ext(V) to be the two-sided *-ideal generated by the relations [Φ_V(f), Φ_V(g)] = i τ_V(f,g) 1, for all f, g ∈ C^∞_c(int V), where τ_V(f,g) := ∫_V f G_V(g) vol_V, with G_V := G^+_V − G^−_V the causal propagator and vol_V the canonical volume form on V.

The fact that the pair (G^+, G^−) is adjoint-related (cf. Definition 6.3) implies that for all V ∈ R_M[C^{-1}_M] the causal propagator G_V is formally skew-adjoint, hence τ_V is antisymmetric.
Proposition 6.5. I_{G^±} ⊆ K^ext is an ideal that is trivial on the interior (cf. Definition 5.8).

Proof. Functoriality of I_{G^±} : R_M[C^{-1}_M] → Vec is a consequence of (6.18), hence I_{G^±} ⊆ K^ext is an ideal in the sense of Definition 5.1. It is trivial on the interior because for all interior regions V_int ∈ R_{int M}[C^{-1}_{int M}], the Green's operators defined by (6.18) are the unique retarded/advanced Green's operators for P_{V_int}, and hence the relations imposed by I_{G^±}(V_int) automatically hold true in K^ext(V_int) on account of the (partially-defined) CCR.
Remark 6.6. We note that the results of this section still hold true if we slightly weaken the hypotheses of Definition 2.9 by assuming the strong causality and the compact double-cones property only for points in the interior int M of M. In fact, int M can still be covered by causally convex open subsets, and any causally convex open subset U ⊆ int M becomes a globally hyperbolic spacetime with empty boundary once equipped with the induced metric, orientation and time-orientation. △

Example 6.7. Consider the sub-spacetime M := R^{m−1} × [0, π] ⊆ R^m of m-dimensional Minkowski spacetime, which has a timelike boundary ∂M = R^{m−1} × {0, π}. The constructions in [DNP16] define an adjoint-related pair (G^+, G^−) of Green's operators for P on M that corresponds to Dirichlet boundary conditions. Using this as an input for our construction above, we obtain a quantum field theory K^ext/I_{G^±} ∈ QFT(M) that may be interpreted as the Klein-Gordon theory on M with Dirichlet boundary conditions. It is worth emphasizing that our theory in general does not coincide with the one constructed in [DNP16]. To provide a simple argument, let us focus on the case of m = 2 dimensions, i.e. M = R × [0, π], and compare our global algebra A_BDS(M) := (K^ext/I_{G^±})(M) with the global algebra A_DNP(M) constructed in [DNP16]. Both algebras are CCR-algebras, however the underlying symplectic vector spaces differ: The symplectic vector space underlying our global algebra A_BDS(M) is C^∞_c(int M)/P C^∞_c(int M) with the symplectic structure (6.19). Using that the spatial slices of M = R × [0, π] are compact, we observe that the symplectic vector space underlying A_DNP(M) is given by the space Sol_Dir(M) of all solutions with Dirichlet boundary condition on M (equipped with the usual symplectic structure). The causal propagator defines a symplectic map G : C^∞_c(int M)/P C^∞_c(int M) → Sol_Dir(M), which however is not surjective for the following reason: Any φ ∈ C^∞_c(int M) has by definition compact support in the interior of M, hence the support of Gφ ∈ Sol_Dir(M) is contained in the union of the causal future and past of supp(φ), as depicted schematically in (6.20). The usual mode functions Φ_k(t, x) = cos(√(k² + m²) t) sin(kx) ∈ Sol_Dir(M), for k ≥ 1, are clearly not of this form, hence G : C^∞_c(int M)/P C^∞_c(int M) → Sol_Dir(M) cannot be surjective. As a consequence, the models constructed in [DNP16] are in general not additive from the interior and our construction K^ext/I_{G^±} should be interpreted as the maximal additive subtheory of these examples. It is interesting to note that there exists a case where both constructions coincide: Consider the sub-spacetime M := R^{m−1} × [0, ∞) ⊆ R^m of Minkowski spacetime with m ≥ 4 even and take a massless real scalar field with Dirichlet boundary conditions. Using Huygens' principle and the support properties of the Green's operators one may show that our algebra A_BDS(M) is isomorphic to the construction in [DNP16].

Definition A.1. An adjunction consists of a pair of functors F : C → D and G : D → C, together with natural transformations η : id_C → GF (called unit) and ε : FG → id_D (called counit) that satisfy the triangle identities. We call F the left adjoint of G and G the right adjoint of F, and write F ⊣ G.
Definition A.2. An adjoint equivalence is an adjunction for which both the unit η and the counit ε are natural isomorphisms. Existence of an adjoint equivalence in particular implies that C ≃ D, i.e. that C and D are equivalent as categories.
Localizations:
Localizations of categories are treated for example in [KS06, Section 7.1]. In our paper we restrict ourselves to small categories.
"Mathematics"
] |
Development of a high data-throughput ADC board for the PROMETEO portable test-bench for the upgraded front-end electronics of the ATLAS TileCal
The Large Hadron Collider (LHC) is preparing for a major Phase-II upgrade scheduled for 2022 [1]. The upgrade will require a complete redesign of both the on- and off-detector electronics systems in the ATLAS Tile hadron Calorimeter (TileCal) [2]. The PROMETEO (A Portable ReadOut ModulE for Tilecal ElectrOnics) stand-alone test-bench system is currently in development and will be used for the certification and quality checks of the new front-end electronics. The Prometeo is designed to read in digitized samples from 12 channels simultaneously at the bunch crossing frequency while assessing the quality of the information in real time. The main board used for the design is a Xilinx VC707 evaluation board with a dual QSFP+ FMC (FPGA Mezzanine Card) module for read-out and control of the front-end electronics. All other functions are provided by a HV board, a LED board and a 16-channel ADC daughter board. The paper describes the development and testing of the ADC board that will be used in the new Prometeo system.
The ATLAS Tile Calorimeter
The ATLAS (A Toroidal LHC ApparatuS) experiment is a general-purpose particle detector at the LHC at CERN. The experiment is designed to record proton-proton collisions occurring at a bunch crossing frequency of 40 MHz. At such high frequencies, large amounts of raw data are generated, leading to data flows in the Tbps range. The ATLAS experiment comprises several sub-detector systems, each aimed at recording different aspects of a collision.
The TileCal is the central hadronic calorimeter sub-detector in ATLAS (Figure 1a) [3]. TileCal comprises four barrel sections, two central barrels (LBA and LBC) and two extended barrels (EBA and EBC). Each section is divided into 64 azimuthal slices, with each slice being further segmented by a lattice of alternating steel plates and plastic scintillating tiles (Figure 1b). When particles travel through TileCal they collide with the steel plates, causing them to break up into showers of charged and neutral particles. As these particles travel through the scintillating tiles, small amounts of energy are deposited, which generates photons. The photons are picked up by wavelength-shifting optical fibres which carry the signal to the front-end electronics found within the outermost part of the sub-detector. Photomultiplier tubes (PMTs) are used to convert these faint light pulses into analog electrical signals. These signals are then amplified, digitised and stored in temporary pipeline buffers while they wait for trigger selection. Only samples of the events that are selected by the L1 trigger are sent to the back-end electronics for further processing. After three levels of trigger selection, the data flow is reduced to a few hundred Mbps, which is stored for off-line analysis.
Phase II upgrade
The Phase II upgrade will increase the design luminosity of the LHC by a factor of 5 to 7 [1]. The ATLAS experiment will need to undergo some substantial changes if it is to meet these demands. A complete redesign of the detector electronics will be needed in order to meet increased radiation tolerances and higher data-processing requirements [4]. The current read-out electronics uses pipeline memories in the front-end to store digitised samples while they wait for trigger selection. After the upgrade, the read-out system will transmit all data directly to the back-end using gigabit optical links during every bunch crossing. This will provide digitally calibrated information with enhanced precision and granularity to the first-level trigger in order to improve both trigger latency and resolution.
The current read-out hardware is located in 2 m long "superdrawers" within each module of the TileCal [5]. The read-out electronics systems in each superdrawer are daisy-chained together, forcing them to share the same data connection to the back-end and the same power supply. A new front-end architecture has been proposed that breaks each superdrawer down into 4 "minidrawers". This architecture will merge several of the old boards into a more compact 3-board system that will include a high level of redundancy to increase the reliability of the system. Each minidrawer will be completely independent of the others, having its own power supply and optical links to the back-end.
Prometeo
The new front-end electronics being developed for the Phase II upgrade will need a compatible test-bench system. The PROMETEO (Portable ReadOut ModulE for Tilecal ElectrOnics) is being developed to replace the existing MobiDICK [6] test-bench, which has been in service since 2003. Prometeo (Figure 2a) is a stand-alone test-bench system that will be used for the full certification of the new front-end electronics of the ATLAS TileCal. The TileCal has 256 modules, each of which will need to be independently serviced and evaluated using the Prometeo portable test-bench. All tests must be completed during LHC shut-downs to make sure the sub-detector is ready for data-taking periods. The Prometeo is a high data-throughput system, required to process all the information produced by one minidrawer in real time. A test setup needs to be able to analyse data from 12 PMTs at the LHC bunch crossing frequency, assessing the quality of the data in real time and diagnosing malfunctions [7].
Hardware Design
When designing the system, it was important that it be easy to upgrade when components become obsolete while still remaining relatively cheap to build [7]. The motherboard used in Prometeo is the Xilinx VC707 evaluation board, which contains a Virtex-7 FPGA. The board was chosen for its high IO capabilities and its ability to process large amounts of data in real time. Several other boards and modules are able to connect to the board through multiple connections. A high-speed QSFP+ module attaches to one of the two FMC connectors. It is used for the communication between the minidrawers and the test-bench. The other FMC connector supports a custom ADC board which was developed to digitise the trigger signals coming from the front-end adder boards.
Additionally, there is a HV board to provide power to the PMTs, a LED driver board to inject light pulses into the PMTs during tests, a router used to communicate with the users and a power supply unit to power the test-bench. The VC707 is programmed with the IPbus protocol and is capable of storing trigger data samples, which can be retrieved upon a trigger request.
ADC Daughter Board
The ADC daughter board's function (Figure 2b) is to digitise the analog trigger signals coming from the front-end electronics adder-cards [7]. The board communicates with the FPGA located on the VC707 through a high-speed FMC connection. The board has two high-performance ADC571 chips, each with 8 differential input channels used for data sampling. The ADCs have a 12-bit resolution and are capable of sampling data at a rate of 50 megasamples per second (MSPS). In the Prometeo system, the FPGA sends a 40 MHz clock to a clock buffer chip on the ADC board, which cleans the signal and sends it to both ADCs. This clock is used by the ADCs as a reference for data sampling. Several different voltage regulators are used on the board to improve signal integrity and remove signal-coupling problems. Indicator LEDs on the board inform the user that all the voltages are at the correct level. Analog data being sent to the ADC board is first conditioned and filtered before being passed to the ADCs for digitisation. The ADCs then digitise and serialise the data from each channel and send it to the FPGA. A 240 MHz bit clock is also created by the ADCs and sent alongside the data. This clock is used for the de-serialisation of the samples. Once the serial data arrives in the FPGA it is de-serialised and stored in a temporary buffer in the VC707 RAM. If the data are seen to be interesting they will be called from the buffers and further processed; otherwise they will be overwritten. Each differential channel coming from the ADCs transports data at 480 Mbps. There are 16 channels in total coming from the ADC board, leading to a total data transfer rate of 7680 Mbps.
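As a quick consistency check of the quoted figures, the per-channel and aggregate data rates follow directly from the sampling parameters above; the short Python sketch below uses only values stated in the text, plus the assumption that the 240 MHz bit clock transfers two bits per cycle (double data rate):

```python
# Throughput check for the ADC board, using the figures quoted above.
BITS_PER_SAMPLE = 12      # ADC resolution
SAMPLE_RATE_MHZ = 40      # sampling clock distributed by the FPGA
CHANNELS = 16             # 2 ADCs x 8 differential channels

per_channel_mbps = BITS_PER_SAMPLE * SAMPLE_RATE_MHZ   # 12 * 40 = 480 Mbps
total_mbps = per_channel_mbps * CHANNELS               # 480 * 16 = 7680 Mbps

# Consistent with the 240 MHz bit clock if two bits are transferred per
# clock cycle (double data rate, an assumption here): 240 MHz * 2 = 480 Mbps.
assert per_channel_mbps == 240 * 2
print(per_channel_mbps, total_mbps)  # 480 7680
```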
Firmware
Custom firmware was developed for the FPGA so that it can control and manage the data read-out of the ADC board. The firmware was written in the VHSIC Hardware Description Language (VHDL) using the Xilinx ISE design software. The firmware for the Prometeo is highly modular, which enables different sections to be tested and used independently. As the same ADC chips were used in both MobiDICK and Prometeo, similar algorithms could be used for the data read-out.

Figure 3: Data signals between an ADC and the FPGA

Figure 3 shows all the data connections needed between a single ADC and the FPGA. Each one of these signals needs to be implemented and simulated within the firmware. Initial testing of the ADC board involved configuring the ADCs to output the test pattern 10101010 (Deskew) from each channel. To do this, serialised commands are sent from the FPGA through the control lines, using a set of strict timing requirements, in order to configure the ADC to send the pattern. These patterns tell us whether the ADCs are receiving our commands, whether they are sending data at the correct speeds and whether the 8 data lines are synchronised correctly. Figure 4a shows the basic order of events when doing this test.
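To make the deskew check concrete, the snippet below models it in plain Python (the firmware itself is written in VHDL; the function and variable names here are illustrative only): each deserialised word is compared against the fixed 10101010 pattern, and any mismatch indicates a mis-aligned data line.

```python
# Software model of the firmware's deskew test: every ADC channel should
# return the fixed pattern 0b10101010 once configured; a rotated word
# reveals a bit-alignment problem on that data line.
DESKEW_PATTERN = 0b10101010

def channels_deskewed(words):
    """True if every channel's deserialised word matches the pattern."""
    return all(w == DESKEW_PATTERN for w in words)

captured = [DESKEW_PATTERN] * 16
captured[5] = 0b01010101            # channel 5 mis-aligned by one bit
print(channels_deskewed(captured))  # False -> adjust deserialiser phase
```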
PCB
The first prototype of the new ADC board was manufactured in 2014 in South Africa. The board went through several tests to see whether it performed as expected. Chipscope IP cores were used in the firmware to allow viewing of the data signals between the FPGA and the ADC board during operation. This kind of testing allows the developer to confirm that the firmware and the hardware are interacting as designed. Figure 3b shows actual commands being sent to the ADC board to configure it to send the Deskew pattern. The top 8 lines show the 4 command lines of each of the two ADCs, while the bottom 6 show the high-speed data being sent back.
After testing the two prototype ADC boards at the WITS High Throughput Electronics Laboratory, several design faults were identified. The circuit design and PCB layout of the board could then be modified and improved for the next generation of the ADC board. Significant changes included adjusting the power regulators and reducing noise by repositioning components and wires.
Conclusion
The Prometeo is in the process of being designed for the certification of the front-end electronics for the Phase-II upgrade. The high-throughput system will be capable of performing all the testing needed during shut-down periods. The ADC board is still undergoing further testing before it will be ready for use on the Prometeo system. The next generation of the board has been designed and is currently being manufactured in South Africa. Once the new board is produced, final firmware testing can be completed and the board will be certified for use.
"Physics"
] |
An approach to prospective consequential life cycle assessment and net energy analysis of distributed electricity generation
Increasing distributed renewable electricity generation is one of a number of technology pathways available to policy makers to meet environmental and other sustainability goals. Determining the efficacy of such a pathway for a national electricity system implies evaluating whole system change in future scenarios. Life cycle assessment (LCA) and net energy analysis (NEA) are two methodologies suitable for prospective and consequential analysis of energy performance and associated impacts. This paper discusses the benefits and limitations of prospective and consequential LCA and NEA analysis of distributed generation. It concludes that a combined LCA and NEA approach is a valuable tool for decision makers if a number of recommendations are addressed. Static and dynamic temporal allocation are both needed for a fair comparison of distributed renewables with thermal power stations to account for their different impact profiles over time. The trade-offs between comprehensiveness and uncertainty in consequential analysis should be acknowledged, with system boundary expansion and system simulation models limited to those clearly justified by the research goal. The results of this approach are explorative, rather than for accounting purposes; this interpretive remit, and the assumptions in scenarios and system models on which results are contingent, must be clear to end users. © 2016 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Introduction
The challenges posed by pressing environmental concerns, such as climate change, often prompt long term goals and targets for stakeholders in large systems such as a national energy infrastructure. As the ultimate concern in these circumstances is an overall change in the performance of a system, commensurate with regional, national or supranational targets, understanding future, system-wide impacts of an intervention is a priority for decision makers.
A shift to distributed renewable electricity generation is considered to be one pathway to meeting environmental objectives and social goals, including resilience to supply disruption (Barnham et al., 2013; Ruiz-Romero et al., 2013). The principal distributed generation technologies considered for the decarbonisation of electricity generation in developed countries are grid-connected solar photovoltaics (PV) and small-scale or micro wind generators (Nugent and Sovacool, 2014). Distributed generation may be integrated with a building (i.e. installed on a rooftop or mounted nearby and connected to a building's electricity supply), or deployed in relatively small arrays (typically < 50 MW) connected to the electricity distribution network. While these technologies cause negligible environmental impact in their use phase, other phases of their life cycles, particularly manufacturing, do entail environmental burdens. Furthermore, increasing distributed generation leads to a change in the utilisation of electricity networks, and additional power flows on local networks may require modifications to this infrastructure. Increasing intermittent renewable electricity generation has consequential impacts on the use of centralised thermal generation and back-up capacity, which may offset some environmental benefits from a grid-level perspective (Pehnt et al., 2008; Turconi et al., 2014). A switch to distributed renewables therefore implies a shifting of resource use and environmental impacts both spatially and temporally (e.g. GHG emissions arising 'upfront' in the country of product manufacture, rather than during the operational life in the country of deployment), and potential reconfiguration throughout the electricity system. These dynamics pose a challenge for the accounting of change in the system in relation to environmental goals when distributed renewables replace incumbent generation.
This paper considers two methodological traditions that can be used for prospective whole system analysis and can therefore be applied to exploring the implications of increased distributed generation uptake: life cycle assessment (LCA) and net energy analysis (NEA). Both approaches share similar procedural features, but have important conceptual differences that provide distinct and complementary results (Arvesen and Hertwich, 2015; Raugei et al., 2015). Integration of NEA and LCA has been argued for in the recent literature (Leccisi et al., 2016; Raugei and Leccisi, 2016), and, specifically, the International Energy Agency has made an effort to standardise and homogenise the parallel application of the two methods when applied to photovoltaics (Frischknecht et al., 2016; Raugei et al., 2016). However, applying NEA and LCA jointly in a prospective whole system level study has not been fully realised so far, and therefore this paper provides a detailed conceptual approach to doing so.
The overarching aim of an LCA is to provide information on the environmental impacts of a product or system for a number of impact categories (Klöpffer, 2014) and, in the case of a comparative analysis, to inform on the relative environmental benefits and detriments of the analysed alternatives. LCA may therefore be used to provide a long-term perspective on whether scenarios of distributed renewable electricity generation deployment or alternative grid development pathways minimise: (a) the overall depletion of non-renewable primary energy reserves, as measured by the non-renewable cumulative energy demand (nr-CED) indicator (Frischknecht et al., 1998, 2015); and (b) the cumulative emission of climate-altering greenhouse gases, as measured by the global warming potential (GWP100) indicator (IPCC, 2013; Soimakallio et al., 2011).
NEA, by contrast, was developed with the aim of evaluating the extent to which an energy supply system is able to provide a net energy gain to society by transforming and upgrading a 'raw' energy flow harvested from a primary energy source (PES) into a usable energy carrier (EC), after accounting for all the energy 'investments' that are required in order to carry out the required chain of processes (i.e. extraction, delivery, refining, etc.) (Chambers et al., 1979; Cleveland, 1992; Herendeen, 1988; Herendeen, 2004; Leach, 1975; Slesser, 1974). The principal indicator of NEA is the energy return on energy investment (EROI), defined as the ratio of the gross EC output (in this case, electricity) to the sum total of the aforementioned energy investments (expressed in terms of equivalent primary energy). Notably, the perspective of NEA is intrinsically short-term, since EROI measures the effectiveness of the energy exploitation chain without consideration for the ultimate sustainability of the PES that is being exploited.
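To make the EROI definition concrete, the toy calculation below computes the ratio for a hypothetical rooftop PV system; all numbers are invented for illustration and are not drawn from the cited studies:

```python
# Illustrative EROI calculation following the definition above: the ratio
# of gross electricity output to the primary-energy equivalent of all
# energy investments over the life cycle.
def eroi(gross_output_mj, energy_investments_mj):
    """EROI = gross energy carrier output / sum of energy investments."""
    return gross_output_mj / sum(energy_investments_mj)

# Hypothetical rooftop PV system: lifetime output vs. embodied energy of
# manufacturing, transport, installation and end-of-life treatment.
lifetime_output = 90_000.0                      # MJ of electricity delivered
investments = [7_000.0, 400.0, 600.0, 300.0]    # MJ primary-energy equivalent
print(f"EROI = {eroi(lifetime_output, investments):.1f}")  # ~10.8
```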
LCA and NEA thus seek answers to different questions, and as a result often end up being unnecessarily siloed in the literature. However, their common methodological structure means that they can be implemented in tandem to provide a valuable broader perspective on system change. This is particularly significant for understanding the short-and long-term implications of a potentially rapid shift to distributed renewables, where there are concerns about resource management and overall efficacy in decarbonisation at a system level. Decision makers can gain a more nuanced understanding of the potential environmental and sustainability implications of change within a system by being presented with co-derived EROI and life cycle environmental impact metrics.
This paper proposes a combined LCA and NEA methodological approach to the consequential assessment of distributed generation uptake in an electricity system. The existing literature on LCA and NEA is reviewed to establish salient methodological and conceptual considerations for a consequential approach to change within a system. These considerations are then applied to provide a common framework for consequential assessment of high levels of distributed renewable generation. Recommendations are made about system boundary, scenario development, the modelling of relationships between system components and the allocation of environmental burdens. The paper concludes with a discussion of the challenges and benefits of a combined LCA and NEA approach and future research objectives.
Lessons from consequential life cycle assessment
A LCA consists of four main stages: goal and scope definition; life cycle inventory (LCI); life cycle impact assessment (LCIA); and interpretation (ISO, 2006a, 2006b). There are two types of LCA discussed widely in the literature, namely attributional LCA (ALCA) and consequential LCA (CLCA). An ALCA attributes a defined allocation of environmental impacts to a product or process unit (Brander et al., 2009; Klöpffer, 2012). For example, for a solar panel the environmental impacts from the mining, refining, manufacturing, distribution, operation and disposal stages are attributed accordingly. Studies such as Searchinger et al. (2008) and Slade et al. (2009) have, however, demonstrated the value of expanding LCA approaches beyond an ALCA, in order to consider wider system effects of change. Approaches to LCA that focus on changes within a system are most frequently referred to as CLCA (Earles and Halog, 2011; Ekvall, 2002; Zamagni, 2015; Zamagni et al., 2012). Brander et al. (2009) define CLCA as distinct from standard ALCA in four ways: CLCA expands the scope of LCA to the total change in a system (however that system is defined) arising from the product or process being investigated. This means the system boundary in a CLCA is potentially very broad, depending on what impacts are considered significant. It has been likened by Ekvall and Weidema (2004) to observing the ripples in a pool of water after throwing a stone, in that all the associated disruptions 'radiating' from the product or process should be of interest to the study.
Unlike an ALCA, a CLCA will overlap with the boundaries of other LCAs, meaning there would be double counting if multiple CLCAs were added together.
CLCA uses marginal data rather than average data to quantify the impacts of change.¹

¹ Marginal data are those pertaining to the technologies which are assumed to be directly (or indirectly) affected by the change(s) in the analysed system. For instance, one MWp of additional PV capacity may be assumed to replace the same nominal capacity of combined cycle gas turbines (CCGT); accordingly, the impact of each kWh of generated PV electricity may be algebraically added to the impact of the corresponding kWh of CCGT electricity that is displaced. Average data, on the other hand, are representative of the full mix of technologies currently deployed in the country or region of interest to produce the same output (i.e. the average grid mix).
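A minimal sketch of this marginal accounting is given below; the emission factors are placeholder assumptions for illustration, not values from the cited literature:

```python
# Sketch of the marginal (consequential) accounting described in the
# footnote: each kWh of PV generation is credited against the kWh of
# CCGT generation it is assumed to displace.
PV_KG_CO2E_PER_KWH = 0.045    # life cycle emissions of PV (assumed)
CCGT_KG_CO2E_PER_KWH = 0.490  # life cycle emissions of CCGT (assumed)

def marginal_change_kg_co2e(pv_kwh):
    """Net system-level change in GHG emissions when PV displaces CCGT."""
    return pv_kwh * (PV_KG_CO2E_PER_KWH - CCGT_KG_CO2E_PER_KWH)

# 1 MWh of PV output displacing CCGT yields a net reduction:
print(f"{marginal_change_kg_co2e(1000):.0f} kg CO2e")  # -445 kg CO2e
```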
"Engineering"
] |
KLF4 down‐regulation resulting from TLR4 promotion of ERK1/2 phosphorylation underpins inflammatory response in sepsis
Abstract Sepsis is a systemic inflammatory response to invading pathogens, leading to high mortality rates in intensive care units worldwide. Krüppel‐like factor 4 (KLF4) is an important anti‐inflammatory transcription factor. In this study, we investigate the anti‐inflammatory role of KLF4 in caecal ligation and puncture (CLP)‐induced septic mice and lipopolysaccharide (LPS)‐induced RAW264.7 cells and its potential mechanism. We found that KLF4 was down‐regulated in CLP‐induced septic mice and in LPS‐induced RAW264.7 cells, and that its overexpression led to increased survival rates of septic mice along with inhibited inflammatory response in vivo and in vitro. ITGA2B was up‐regulated in the setting of sepsis and was inhibited by KLF4 overexpression. ITGA2B knock‐down mimicked the effects of KLF4 overexpression on septic mice and LPS‐induced RAW264.7 cells. TLR4 promoted the phosphorylation of ERK1/2 and then up‐regulated the ubiquitination and the degradation of KLF4, thereby elevating the expression of ITGA2B. Moreover, TLR4 knock‐down or treatment with PD98059 (a MEK inhibitor) inhibited inflammatory response in the setting of sepsis in vivo and in vitro. Furthermore, this effect of PD98059 treatment was lost upon KLF4 knock‐down. Collectively, these results explain the down‐regulation of KLF4 in sepsis, namely via TLR4 promotion of ERK1/2 phosphorylation, and identify ITGA2B as the downstream gene of KLF4, thus highlighting the anti‐inflammatory role of KLF4 in sepsis.
| INTRODUCTION
Sepsis and septic shock, accompanied by ensuing multi-organ failure, are considered to be among the principal causes of death of adults in intensive care units. 1 Based on data from high-income nations, it is tentatively extrapolated that 31.5 million cases of sepsis are diagnosed annually, of which 19.4 million are severe cases, leading to death in as many as 5.3 million individuals per year. 2 Despite advancements in management achieved in the past few years, patients who have survived sepsis often suffer from persistent physical, physiologic and psychological sequelae, including neurocognitive and functional decline, and have an increased overall risk of mortality, 3,4 highlighting the need to identify novel therapeutic targets and strategies against sepsis.
Given the involvement of innate and adaptive immune responses in sepsis, investigations focusing on the regulatory mechanisms underlying immune responses are expected to yield better understanding and improvement of the long-term clinical outcomes of patients with sepsis. 5 Krüppel-like factor 4 (KLF4), an evolutionarily conserved transcription factor containing zinc fingers, is known for its mediatory role in various cellular processes, including proliferation, differentiation and inflammation. 6 In addition, KLF4 has been reported to be expressed in overt inflammatory conditions. 7 Upon exposure to lipopolysaccharide (LPS), RAW264.7 cells show a suppression of KLF4 expression along with enhanced release of pro-inflammatory cytokines, 8 implying that KLF4 may be engaged in the immune response in sepsis.
RNA-sequencing analysis performed by Xiong et al 9 revealed downstream genes targeted by KLF4, including the ITGA2B gene. Its protein transcript, integrin Alpha 2B (ITGA2B), was increased in circulating platelets and associated with higher mortality of patients and mice with sepsis. 10 Accordingly, ITGA2B merits study as a downstream target of KLF4 in sepsis. Besides, phosphorylation of KLF4 has been indicated to be mediated by extracellular regulated kinase 1/2 (ERK1/2) in embryonic stem cells through ubiquitination and degradation. 11 The activation of ERK1/2 has been elucidated to participate in the regulatory role of insulin-like growth factor binding protein 7 in LPS-induced HK-2 cells and in the mouse sepsis model entailing caecal ligation and puncture (CLP). 12 Furthermore, the phosphorylation of ERK1/2 can be diminished by blocking toll-like receptor 4 (TLR4) in myeloid-derived suppressor cells, which are enriched in chronic inflammatory conditions, thus implicating ERK1/2 phosphorylation in disorders related to inflammation. 13 Also, activation of TLR4 has been detected as part of the inflammatory response in patients with liver injury induced by sepsis. 14 Herein, we investigate the hypothesis that the TLR4/ERK1/2/KLF4/ITGA2B axis mediates the inflammatory response of sepsis.
To test this hypothesis, we established CLP-induced septic mice and LPS-induced RAW264.7 cells and characterized the role of the TLR4/ERK1/2/KLF4/ITGA2B axis in the setting of sepsis in vivo and in vitro.
| Ethics statement
The study was approved by the ethics committee of the Experimental Animal Centre of Chongqing Medical University. All experiments were in accordance with the Declaration of Helsinki.
| Establishment of sepsis mouse model
In total, 240 male C57BL/6 mice (age: 6-8 weeks, weight: 20-25 g) were fasted for 12 hours and then anaesthetized through intraperitoneal injection of 2.5% pentobarbital sodium (2 mL/kg) prior to surgery. Of these, 20 mice served as controls, undergoing surgical laparotomy to isolate the distal pole of the caecum and mesentery, followed by abdominal closure. The remaining 220 mice were subjected to CLP surgery for induction of high-grade sepsis. In brief, the abdomen was conventionally disinfected and opened by creating a 2-cm incision in the middle to expose the caecum. Next, the distal pole of the caecum was separated from the mesentery to avoid damaging the mesenteric vessels.
Subsequently, sterile No. 4 thread was used to ligate the caecum at 3/4 of the way from the distal pole. Then, the caecum was perforated by a single through-and-through puncture midway between the ligation and the tip of the caecum using a sterile No. 7 pipette. Finally, the abdomen was closed with layer-by-layer suturing. Lentiviruses harboring KLF4 overexpression plasmid (oe-KLF4), ITGA2B overexpression plasmid (oe-ITGA2B), short hairpin RNA (shRNA) against ITGA2B (sh-ITGA2B), shRNA against TLR4 (sh-TLR4), shRNA against KLF4 (sh-KLF4), or the corresponding negative controls (NCs) (oe-NC and sh-NC) (1.5 × 10^9 TU) were injected into the CLP-induced septic mice via a tail vein at 4 hours after the surgery. 15,16 Dimethyl sulphoxide (DMSO) or PD98059 (ERK1/2 inhibitor; Sigma-Aldrich) dissolved in DMSO was intraperitoneally injected into the CLP-induced septic mice (10 mg/kg), while control mice were treated with administration of DMSO alone. At 48 hours after surgery, the mice were killed with a sodium pentobarbital overdose (100 mg/kg; P3761, Sigma-Aldrich).
| Real-time quantitative polymerase chain reaction
Total RNA was extracted using TRIZOL reagent (Invitrogen).
The primer sequences of TLR4, KLF4 and ITGA2B, which were designed and synthesized by Invitrogen, are shown in Table 1. The obtained total RNA was reverse transcribed to cDNA following the instructions of the High-Capacity cDNA Reverse Transcription Kit (4368813; Thermo Fisher Scientific). Real-time quantitative polymerase chain reaction (RT-qPCR) was carried out using the SYBR® Premix Ex Taq™ (Tli RNaseH Plus) kit (RR820A, TaKaRa) on an ABI 7500 qPCR instrument (Thermo Fisher Scientific). The expression of each target gene relative to GAPDH was analysed by the 2^(−ΔΔCt) method. 18
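For clarity, the 2^(−ΔΔCt) calculation can be illustrated with a short worked example; the Ct values below are hypothetical and serve only to show the arithmetic:

```python
# Worked example of the 2^(-ddCt) relative-expression method used above.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target gene vs. control by the 2^(-ddCt) method."""
    d_ct_sample = ct_target - ct_ref              # normalise to GAPDH
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: KLF4 Ct rises from 24.0 to 26.5 in septic tissue while GAPDH
# stays at 18.0, giving a fold change of ~0.18 (~5.7-fold down-regulation).
print(relative_expression(26.5, 18.0, 24.0, 18.0))  # ~0.177
```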
The immunoblots were visualized with a secondary goat anti-rabbit immunoglobulin G antibody (ab150077, 1:1000; Abcam) and enhanced chemiluminescence, and analysed using ImageJ software. The relative expression of each protein was expressed as the ratio of the grey value of the target band to that of the internal reference (GAPDH).
| Protein stability assay
To detect the degradation of KLF4, tunicamycin (654380; Sigma-Aldrich) was applied to treat the RAW264.7 cells.
| Immunoprecipitation
After 48 hours of transfection, cells were washed with pre-cooled phosphate-buffered saline (PBS) and incubated with immunoprecipitation (IP) lysis buffer. After lysis on ice for 30 minutes, the supernatant was collected after centrifugation at 7446 g and 4°C for 20 minutes.
Next, the protein concentration was measured by the BCA method. 1 mg of protein was incubated with the corresponding primary antibody at 4°C overnight. The next day, the protein was incubated for another 2 hours after the addition of 20 μL of Protein A + G beads. Elution was performed with IP lysis buffer, with centrifugation at 1258 g and 4°C for 5 minutes, 5 times in total. With 20 μL of 2× loading buffer in each well, samples were denatured by placement in a 100°C bath for 5 minutes. The IP samples were subjected to Western blot analysis using antibodies to KLF4 (ab129473, 1:1000, Rabbit; Abcam) and ubiquitin (ab7780, 1:1000; Abcam).
| Bacterial load test
At 48 hours after the operation, orbital blood samples were collected. Then, peritoneal lavage fluid (PLF) was harvested using 3.5 mL of sterile normal saline. After serial dilution, the blood and PLF were plated on trypsin blood agar plates and cultured at 37°C. The number of bacterial colonies was counted 24 hours later.
| Haematoxylin-eosin staining
The freshly collected liver and lung tissues were fixed with 4% paraformaldehyde at room temperature for at least 16 hours, embedded with paraffin and sliced into 3-µm sections. The sections were treated with xylene I for 20 minutes, xylene II for 20 minutes, absolute alcohol I for 15 minutes, absolute alcohol II for 5 minutes, and 75% alcohol for 5 minutes. The sections were then stained with haematoxylin for 3-5 minutes, followed by differentiation. After being blued, sections were dehydrated by 85% and then 95% alcohol for 5 minutes each. The sections were then stained with eosin for 5 minutes, followed by clearing with absolute alcohol I, absolute alcohol II, absolute alcohol III, xylene I and xylene II (5 minutes each).
The mounted sections were then observed using a microscope. Histological injury was scored as described in a previous study, with absent inflammation scored as 0 points, neutrophil infiltration as 1 point, oedema as 2 points, cellular disorder as 3 points and bleeding as 4 points. 19
| Enzyme-linked immunosorbent assay
After the different treatments, RAW264.7 cells were cultured for 48 hours and the supernatant was collected, which was then subjected to enzyme-linked immunosorbent assay (ELISA).
| Statistical analysis
Data analysis was performed using SPSS 21.0 software (IBM). A P value of <.05 was considered statistically significant.
| KLF4 overexpression protected mice against CLP-induced sepsis
The mouse model of sepsis was established to explore the functional role of KLF4. At the same time, lentiviruses harboring oe-KLF4 were injected into the CLP-induced septic mice. As shown in Figure 1A, all of the CLP-induced septic mice eventually died. Comparatively, the overexpression of KLF4 improved their survival rate to more than 50%. In addition, results of RT-qPCR and Western blot analysis confirmed the reduced expression of KLF4 in the septic mice (Figure 1I). Taken together, KLF4 was expressed at a low level in mice with sepsis, but elevating KLF4 expression brought considerable alleviation of sepsis.
| KLF4 overexpression protected mice against CLP-induced sepsis by down-regulating ITGA2B
The focus was then shifted to the expression pattern of ITGA2B in CLP-induced septic mice. Results of RT-qPCR and Western blot analysis showed that ITGA2B expression significantly increased in the liver and lung tissues of CLP-induced septic mice, while treatment with oe-KLF4 down-regulated the levels of ITGA2B (Figure 2A,B). As shown in Figure 2C, sh-ITGA2B-1 and sh-ITGA2B-2 both significantly knocked down ITGA2B expression.
After overexpressing KLF4 or the knock-down of ITGA2B, the survival rate of CLP-induced septic mice was significantly increased to more than 50%, while subsequent treatment to impose ITGA2B overexpression reversed the benefits of KLF4 overexpression on the survival rate ( Figure 2D). The protein levels of ITGA2B were
| KLF4 overexpression alleviated LPS-induced inflammatory response by down-regulating ITGA2B in RAW264.7 cells
Next, the role of KLF4 in inflammatory response was explored in LPS-induced RAW264.7 cells. The expression of KLF4 and ITGA2B was measured by RT-qPCR and Western blot analysis, which showed that after the induction using LPS, KLF4 expression was reduced, while ITGA2B expression was elevated ( Figure 3A,B). According to ELISA analysis, the higher levels of TNF-α, IL-1β and IL-6 induced by LPS tended to decline once KLF4 was overexpressed or when ITGA2B was knocked down ( Figure 3C). What is more, the cotreatment of oe-KLF4 and oe-ITGA2B efficiently neutralized the effects of overexpressing KLF4 alone. These findings revealed that LPS-induced inflammatory response could be alleviated by KLF4 via inhibition of ITGA2B.
| TLR4 promotion of phosphorylation of ERK1/2 negated the inhibitory effects of KLF4 on ITGA2B in RAW264.7 cells
According to the results of Western blot analysis, the induction of LPS significantly elevated the extent of ERK1/2 phosphorylation, but the protein expression of p-ERK1/2 declined following the addition of the pharmacological inhibitor PD98059 (Figure 4A).
Next, results of IP assay showed that KLF4 could interact with β-transducin repeat-containing protein 1 (βTrCP1) and ERK1/2 in RAW264.7 cells, while the interaction was promoted by LPS induction and weakened by PD98059 treatment (Figure 4B,C).
Additionally, the addition of PD98059 to the medium suppressed
| The TLR4/ERK1/2/KLF4/ITGA2B axis mediated inflammatory response in CLP-induced septic mice
As illustrated in Figure 6A, the survival rate of mice following CLP surgery was improved by TLR4 knock-down or PD98059 treatment. Further, TLR4 knock-down or PD98059 treatment resulted in reduced protein expression of ITGA2B and a reduced extent of ERK1/2 phosphorylation, but elevated KLF4 protein expression (Figure 6B). Furthermore, TLR4 knock-down or PD98059 treatment triggered lower levels of TNF-α, IL-1β and IL-6 in the serum and in
| DISCUSSION
It is estimated that more than one million deaths occur annually in China due to sepsis, with a higher burden of mortality associated with male sex, ageing and comorbidity, together posing a great burden to the public. 20 Treatments addressing the immunomodulatory mechanisms underlying sepsis are believed to improve the clinical outcomes of patients over a long period of time. 5 During the current investigation, we aimed to prove the hypothesis that the TLR4/ERK1/2/KLF4/ITGA2B axis is capable of mediating the inflammatory response of sepsis.

FIGURE 4 TLR4 promotion of phosphorylation of ERK1/2 negated the inhibitory effects of KLF4 on ITGA2B in RAW264.7 cells. A, The protein expression of ERK1/2 and the extent of ERK1/2 phosphorylation in RAW264.7 cells determined by Western blot analysis, normalized to GAPDH. B, The binding between βTrCP1 or ERK1/2 and KLF4 in RAW264.7 cells after induction of LPS detected by IP assay. C, The binding between βTrCP1 or ERK1/2 and KLF4 in RAW264.7 cells after addition of PD98059 detected by IP assay. D, The deubiquitination of KLF4 in RAW264.7 cells in the presence of PD98059 detected by Western blot analysis. E, The KLF4 protein stability in RAW264.7 cells in the presence of PD98059 and CHX. F, The silencing efficacy of TLR4 in RAW264.7 cells determined by RT-qPCR. G, The protein expression of TLR4 and ERK1/2 and the extent of ERK1/2 phosphorylation in RAW264.7 cells determined by Western blot analysis, normalized to GAPDH. H, The deubiquitination of KLF4 in RAW264.7 cells when TLR4 is silenced detected by Western blot analysis. I, The KLF4 protein stability in RAW264.7 cells when TLR4 is silenced in the presence of CHX. J, The protein expression of KLF4 and ITGA2B in RAW264.7 cells when TLR4 is silenced determined by Western blot analysis, normalized to GAPDH. In panels E and I, *P < .05 vs the RAW264.7 cells treated with LPS + DMSO. In panel F, *P < .05 vs the RAW264.7 cells treated with LPS + sh-NC. In panel G, *P < .05 vs the RAW264.7 cells without any treatment; #P < .05 vs the RAW264.7 cells treated with LPS + DMSO. Data were expressed as mean ± standard deviation. Data among multiple groups were compared by one-way ANOVA, followed by Tukey's post hoc test. The experiments were conducted 3 times independently.
A primary finding of our study was that KLF4 was down-regulated in LPS-induced RAW264.7 cells as well as in the liver and lung tissues of CLP-induced septic mice, accompanied by activated release of pro-inflammatory cytokines (TNF-α, IL-1β and IL-6) and unfavourable histological changes. Lung injury is known to occur frequently as a secondary clinical event to sepsis and can be responsible for higher mortality and morbidity. 21 In addition, liver injury either before or after sepsis is recognized to be of great significance, such that any rescue of liver injury and restoration of liver function might help to reduce mortality and morbidity. 22 KLF4 is a crucial mediator in the biology of neutrophils, which are key players in innate immune response, as well as a regulator of pro-inflammatory signalling in macrophages. 23,24 The production of TNF-α has been demonstrated to be indicative of sepsis in mice following LPS administration and in HL-1 cells following LPS treatment. 25 Also, the reduction in abundance of IL-1β has been observed to play a role in the suppression of inflammation associated with sepsis in mice. 26 As a promising biomarker for sepsis, higher IL-6 concentration correlates with the severity of sepsis. 27,28 LPS stimulation yielded similar results in bone marrow-derived macrophages and murine RAW264.7 cells, provoking inhibited KLF4 expression accompanied by increased levels of pro-inflammatory cytokines (TNF-α, IL-1β and IL-6), while opposite results are seen when KLF4 is overexpressed by treatment with pG-MLV-KLF4. 8 Consistent with that report, we found that transduction of lentivirus harboring overexpressed KLF4 in mice or delivery of overexpressed KLF4 in RAW264.7 cells tended to block the effects of CLP surgery in vivo or LPS in vitro.
Further mechanistic investigations showed that the action of KLF4 depended on the down-regulation of highly expressed ITGA2B. The expression of ITGA2B was elevated approximately ninefold in platelets from patients with sepsis, but this aberrant expression resolves in patients who survive sepsis. 10 Moreover, LPS induction and CLP surgery significantly promoted TLR4 expression and the extent of ERK1/2 phosphorylation. Additionally, the inhibitory effects of KLF4 on ITGA2B were revealed to be repressed by TLR4 through promotion of the extent of ERK1/2 phosphorylation. TLR4 signalling plays an important part in autophagy of RAW264.7 cells induced by LPS, the effects of which are abrogated by TLR4 knock-down. 29 Notably, TLR4 activity has the potential to exert suppressive effects on immune dysfunction in sepsis. 30 Downregulation of TLR4 has been proposed as an indicator for amelioration of lung injury in CLP-induced septic mice after treatment with hesperidin. 31 Besides, TLR4 knock-down has been shown to inhibit the expression of KLF4 in human vascular smooth muscle cells. 32 In neutrophils, deficient KLF4 has been documented to impair TLR4 signalling, thus underscoring the significance of KLF4/TLR4 in inflammatory reactions. 23 Following induction of sepsis, the extent of ERK1/2 phosphorylation has been found to be elevated, but declined upon alleviation of liver injury. 12 Under inflammatory conditions, blocking TLR4 in myeloid-derived suppressor cells has been verified to diminish the phosphorylation of ERK1/2, 13 thus supporting the present results.
Collectively, the results in our investigation shed light on the anti-inflammatory role of KLF4 in CLP-induced septic mice and LPS-induced RAW264.7 cells, and give insight into the potential mechanism of these effects (Figure 7). KLF4 down-regulation resulting from promotion by TLR4 of ERK1/2 phosphorylation leads to elevated ITGA2B expression, which underpins the inflammatory response in sepsis. Nevertheless, additional efforts are warranted to determine the translational potential of these findings into the clinical setting.
ACKNOWLEDGEMENTS
The authors would like to acknowledge the helpful comments on this paper received from the reviewers.
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
DATA AVAILABILITY STATEMENT
Research data are not shared.
FIGURE 7
The mechanism diagram depicting that TLR4 activates ERK1/2 to promote the βTrCP1-mediated ubiquitination of KLF4, which down-regulates the expression of KLF4. Down-regulation of KLF4 permits increased expression of ITGA2B, thereby increasing the expression of TNF-α, IL-1β and IL-6, which finally aggravates sepsis. Ub, Ubiquitination
"Medicine",
"Biology"
] |
Automated Machine Learning System for Defect Detection on Cylindrical Metal Surfaces
Metal workpieces are indispensable in the manufacturing industry. Surface defects affect the appearance and efficiency of a workpiece and reduce the safety of manufactured products. Therefore, products must be inspected for surface defects, such as scratches, dirt, and chips. The traditional manual inspection method is time-consuming and labor-intensive, and human error is unavoidable when thousands of products require inspection. Therefore, an automated optical inspection method is often adopted. Traditional automated optical inspection algorithms are insufficient in the detection of defects on metal surfaces, but a convolutional neural network (CNN) may aid in the inspection. However, considerable time is required to select the optimal hyperparameters for a CNN through training and testing. First, we compared the ability of three CNNs, namely VGG-16, ResNet-50, and MobileNet v1, to detect defects on metal surfaces. These models were hypothetically implemented for transfer learning (TL). However, in deploying TL, the phenomenon of apparent convergence in prediction accuracy, followed by divergence in validation accuracy, may create a problem when the image pattern is not known in advance. Second, our developed automated machine-learning (AutoML) model was trained through a random search with the core layers of the network architecture of the three TL models. We developed a retraining criterion for scenarios in which the model exhibited poor training results such that a new neural network architecture and new hyperparameters could be selected for retraining when the defect accuracy criterion in the first TL was not met. Third, we used AutoKeras to execute AutoML and identify a model suitable for a metal-surface-defect dataset. The performance of TL, AutoKeras, and our designed AutoML model was compared. The results of this study were obtained using a small number of metal defect samples. Based on TL, the detection accuracy of VGG-16, ResNet-50, and MobileNet v1 was 91%, 59.00%, and 50%, respectively. Moreover, the AutoKeras model exhibited the highest accuracy of 99.83%. The accuracy of the self-designed AutoML model reached 95.50% when using a core layer module, obtained by combining the modules of VGG-16, ResNet-50, and MobileNet v1. The designed AutoML model effectively and accurately recognized defective and low-quality samples despite low training costs. The defect accuracy of the developed model was close to that of the existing AutoKeras model and thus can contribute to the development of new diagnostic technologies for smart manufacturing.
Introduction
The application of computer-aided design and analysis in certain domains, such as signal processing and simulation, has gradually increased. Manual product inspections require considerable labor, and inaccurate testing results might be obtained because of human error, which can affect the quality of manufactured products. Therefore, the use of automated optical inspection has increased. With advancements in hardware and software, deep learning models have been combined with optical inspection systems to relieve the bottleneck of defect detection in manufacturing. Technology used to detect metal surface defects has surpassed the limits of the human eye. Image classification through deep learning can improve the accuracy of image detection [1,2]. In addition, with the advancement of graphics processing units, the computing power of hardware has considerably increased. The You Only Look Once algorithm [3] and deep learning frameworks, such as TensorFlow [4] and PyTorch [5], have been used for defect detection. Synergistic development using a kernel filter, pooling, or activation function in image classification has promoted advances in deep learning technology. Many studies have employed convolutional neural networks (CNNs) to classify images [6][7][8]. CNNs have deep learning structures and can be easily trained [9,10]. Such networks have been used to effectively inspect products and detect defects in images [11].
Theoretically, the number of hidden layers of an NN strongly influences network performance. With more layers, a network can extract more complex feature patterns and therefore achieve superior results. However, the accuracy of a network peaks at a certain number of layers and even decreases thereafter. ResNet [12] uses residual learning to resolve this problem through shortcut connections. Therefore, ResNet can suppress the accuracy drop caused by multiple layers in deep networks. When a large kernel is used for feature extraction in convolution operations, numerous parameters are required. MobileNet [13] uses depthwise separable convolution to divide the convolution kernel into single channels. It can convolve each channel without changing the depth of the input features, and can then produce output feature maps with the same number of channels as the input feature maps. This model can increase or reduce the dimensionality of feature maps to reduce computational complexity and accelerate calculation while maintaining high accuracy.
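As an illustration of why the depthwise separable decomposition is cheaper, the hedged Keras sketch below (shapes and layer sizes are arbitrary choices, not from the cited paper) compares the parameter counts of a standard 3 × 3 convolution and its separable counterpart:

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.
import tensorflow as tf

inputs = tf.keras.Input(shape=(64, 64, 32))
standard = tf.keras.layers.Conv2D(64, 3, padding="same")(inputs)
separable = tf.keras.layers.SeparableConv2D(64, 3, padding="same")(inputs)

m_std = tf.keras.Model(inputs, standard)
m_sep = tf.keras.Model(inputs, separable)
print(m_std.count_params())  # 18,496 = 3*3*32*64 + 64
print(m_sep.count_params())  # 2,400  = 3*3*32 (depthwise) + 32*64 + 64
```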
A deep learning approach was developed for an optical inspection system for surface defects on extruded aluminum [14]. A simple camera records extruded profiles during production, and an NN distinguishes immaculate surfaces from surfaces with various common defects. Metal defects can vary in size, shape, and texture, and the defects detected by an NN can be highly similar. In [15], an automatic segmentation and quantification method using customized deep learning architecture was proposed to detect defects in images of titanium-coated metal surfaces. In [16], a U-Net convolutional network was developed to segment biomedical images through appropriate preprocessing and postprocessing steps; specifically, the network applied a median filter to input images to eliminate impulse noise. Standard benchmarks were used to evaluate the detection and segmentation performance of the developed model, which achieved an accuracy of 93.46%.
In [17], a 26-layer CNN was developed to detect surface defects on the components of roller bearings, and the performance of this network was compared with that of MobileNet, VGG-19 [18], and ResNet-50. VGG-19 achieved a mean average precision (mAP) of 83.86%; however, its processing time was long (i.e., 83.3 ms). MobileNet exhibited the shortest processing speed but the lowest mAP because of the small number of parameters and necessary calculations. The 26-layer CNN achieved a better balance between mAP and processing efficiency than the other three models, with the mAP of this network nearly equal to the highest mAP of ResNet-50. Moreover, the 26-layer CNN required less time for detection than ResNet-50 or VGG-19. In [19], an entropy calculation method was used in a self-designed DarkNet-53 NN model, and the most suitable kernel size was selected for the convolutional layer. The model was highly accurate in recognizing components and required only a short training time.
In [20], two types of residual fully connected NNs (RFCNNs) were developed: RFCN-ResNet and RFCN-DenseNet. The performance of these networks in the classification of 24 types of tumors was compared with that of the XGBoost and AutoKeras automated machine-learning (AutoML) methods. RFCN-ResNet and RFCN-DenseNet enhance feature propagation and encourage the reuse of features within the RFCN architecture; they achieved accuracies of 95.9% and 95.7%, respectively, outperforming XGBoost and AutoKeras by 4.8% and 4.9%, respectively. In another comparison, RFCN-ResNet and RFCN-DenseNet achieved respective accuracies of 95.9% and 96.5% and outperformed XGBoost and AutoKeras by 6.1% and 5.5%, respectively, indicating that RFCN-ResNet and RFCN-DenseNet considerably outperform XGBoost and AutoKeras in modeling genomic data.
In [21], AutoKeras and a self-designed model were used to analyze water quality. Compared to that of AutoKeras, the accuracy of the developed model was 1.8% and 1% higher in the classification of two-class and multiclass water data, respectively. However, the AutoKeras model exhibited higher efficiency than the developed model and required no manual effort.
The authors of [22] proposed that random trials are more efficient than trials based on a grid search for optimizing hyperparameters. In Gaussian process analysis, different hyperparameters are crucial for different datasets. Thus, a grid search is a poor choice for the configuration of algorithms for new datasets.
In smart manufacturing, quickly adapting to new complex manufacturing processes and designing appropriate and efficient optimization networks have become crucial. In the present study, industrial machine vision and deep learning were combined to construct an AutoML model to detect defects on metal surfaces to reduce costs in smart manufacturing. The proposed model can be used to develop highly adaptable visual inspection techniques to overcome the bottlenecks caused by current image-processing techniques and thereby advance smart manufacturing.
VGGNet
VGGNet was developed by the Visual Geometry Group of Oxford University and placed second in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2014. VGGNet contains more layers than AlexNet, the 2012 ILSVRC winner. The VGG block architecture used by VGGNet contains a repeated 3 × 3 kernel size for the convolutional layers and a 2 × 2 kernel size for the max-pooling layer. Four varieties of VGGNet with different numbers of layers exist: VGG-11; VGG-13; VGG-16; and VGG-19. Among these, VGG-16 and VGG-19 exhibit excellent results. In the present study, the VGG-16 model, which has fewer parameters, was used for training. VGG-16 contains five VGG blocks, as presented in Table 1. Two convolutional layers are used in each of the first two VGG blocks, and three convolutional layers are used in each of the last three VGG blocks. VGG-16 consists of 3 fully connected layers in addition to the 13 convolutional layers.
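As an illustration of this block structure, the following Keras sketch assembles the five VGG-16 blocks; the 224 × 224 input and 1000-class head follow the original ImageNet setting and are not the configuration used later in this study:

```python
import tensorflow as tf

def vgg_block(x, filters, convs):
    # One VGG block: `convs` stacked 3 x 3 convolutions, then 2 x 2 max pooling.
    for _ in range(convs):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return tf.keras.layers.MaxPooling2D(pool_size=2, strides=2)(x)

inputs = tf.keras.Input(shape=(224, 224, 3))
x = inputs
# VGG-16: two blocks of two conv layers, then three blocks of three conv layers.
for filters, convs in [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]:
    x = vgg_block(x, filters, convs)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(4096, activation="relu")(x)
x = tf.keras.layers.Dense(4096, activation="relu")(x)
outputs = tf.keras.layers.Dense(1000, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)  # 13 conv + 3 fully connected layers
```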
ResNet
ResNet was developed by Microsoft Research in 2015 and won the ILSVRC that year. Beyond a certain point, the accuracy of NNs does not increase with the number of layers. As displayed in Figure 1, the training error of a 56-layer NN is higher than that of a 20-layer NN. This eventual decrease in training accuracy with network depth is the degradation problem of NNs. When the depth of an NN increases, gradient vanishing during backpropagation becomes more likely; thus, certain gradients cannot be transmitted to the next node to update the weights, resulting in a decrease in training accuracy.
To solve the degradation problem, ResNet employs a residual learning structure. As displayed in Figure 2, the input x is passed through two branches. In one branch (right), the input x is passed across the network layers through a shortcut, and no operation is performed. In the other branch (middle), the output F(x) is obtained after an operation is performed on x. The final output is the sum of the outputs of the two branches, namely F(x) + x. This method can prevent gradient vanishing during convolution operations because the shortcut allows gradients to be passed to the next layer to update the weights.
As displayed in Table 2, the input of ResNet is first passed through a 7 × 7 × 64 convolutional layer and then passed through four residual connection blocks in sequence. The higher the number of layers, the higher the training cost; therefore, a 1 × 1 convolutional layer is placed before a 3 × 3 convolutional layer. The 1 × 1 convolutional layer reduces the number of channels, which reduces the number of parameters required during training.
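A minimal Keras sketch of such a residual bottleneck block follows; batch normalization is omitted for brevity, and the filter counts are illustrative:

```python
import tensorflow as tf

def bottleneck_block(x, filters, stride=1):
    """Residual bottleneck: 1x1 reduce -> 3x3 -> 1x1 expand, plus a shortcut."""
    shortcut = x
    y = tf.keras.layers.Conv2D(filters, 1, strides=stride, activation="relu")(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(y)
    y = tf.keras.layers.Conv2D(4 * filters, 1)(y)  # 1x1 expansion of channels
    # Project the shortcut when the shape changes so the addition is valid.
    if stride != 1 or shortcut.shape[-1] != 4 * filters:
        shortcut = tf.keras.layers.Conv2D(4 * filters, 1, strides=stride)(x)
    out = tf.keras.layers.Add()([y, shortcut])  # F(x) + x
    return tf.keras.layers.Activation("relu")(out)
```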
MobileNet
Since the introduction of CNNs, the depth of networks has increased. Numerous layers incur a high computational cost, limiting the application of NNs. In 2017, Google developed MobileNet, a lightweight NN for mobile devices. The architecture of MobileNet is presented in Table 3. Moreover, depthwise separable convolution is used in place of conventional convolution. The difference between depthwise convolution and general convolution is that depthwise convolution involves splitting the convolution kernel into single-channel forms. Depthwise separable convolution splits kernels to undergo depthwise convolution and pointwise convolution. Convolving each channel separately produces output feature maps with the same number of channels as the input feature maps. Pointwise convolution is a 1 × 1 convolution that can increase or reduce the dimensionality of a feature map (Figure 3).
Table 3. Architecture of MobileNet v1 [13].
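The following sketch shows one MobileNet v1-style depthwise separable block in Keras, assuming the standard ordering of depthwise convolution, batch normalization, ReLU, and 1 × 1 pointwise convolution:

```python
import tensorflow as tf

def depthwise_separable(x, pointwise_filters, stride=1):
    """MobileNet-style block: per-channel 3x3 depthwise conv, then 1x1 pointwise conv."""
    x = tf.keras.layers.DepthwiseConv2D(3, strides=stride, padding="same")(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    # Pointwise 1x1 convolution mixes channels and sets the output dimensionality.
    x = tf.keras.layers.Conv2D(pointwise_filters, 1)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)
```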
AutoML
The training steps involved in traditional ML are displayed in Figure 4. A training dataset is created, a suitable model is selected for training, and finally, the hyperparameters are adjusted after the training results have been evaluated. This process is repeated with many models and parameters until the most effective ones are identified. During model selection, aspects such as the dataset size and type, as well as hardware limitations, must be considered. Model evaluation requires an adjustment of the hyperparameters, such as the learning rate and optimizer, as well as the parameters related to the model architecture, such as the number of layers and the operation of each layer. In general ML, the aforementioned parameters must be set manually. If the training results are not ideal, transfer learning (TL) can be applied. TL reduces the time required for parameter adjustment. Modeling requires considerable time. Hyperparameter adjustment is a process of searching for combinations, and this process can be automated. AutoML involves using artificial intelligence algorithms to conduct ML automatically.
AutoML enables developers familiar or unfamiliar with NNs, or lacking relevant domain knowledge, to use ML and deep learning techniques. Many tools are available for AutoML. In this study, AutoKeras was used to implement AutoML. AutoKeras is an AutoML system based on the Keras deep learning framework and uses an efficient neural architecture search (ENAS) [23] for automated modeling. AutoKeras employs three common methods to optimize hyperparameters: grid search; random search; and Bayesian search (Figure 5). Grid search is a brute-force method used to check all possible combinations of the range of hyperparameters provided by network designers. For example, if the learning rate is 0.01 or 0.1 and the batch size is 10 or 20, the four possible parameter combinations are produced in sequence for training. Random search is similar to grid search, but the combinations of hyperparameters are produced in a random order. Bayesian search involves searching for hyperparameters on the basis of Bayes' theorem [24], and only parameter combinations that maximize a certain probability function are considered.
In 2016, Google developed the neural architecture search (NAS) with reinforcement learning [25]. The NAS system consists of three main components, as displayed in Figure 6. This system includes different types of network layers, including convolutional and fully connected layers. These layers are connected to form a network architecture, which generally requires a manual design. In Google's NAS, various candidate network architectures are tested, and the optimal architecture is selected on the basis of various evaluation indicators. NAS can be used to evaluate chosen strategies based on different models and tests in addition to candidate models.
From target data, NASNet [26] constructs a high-accuracy, high-complexity, multilayered NN model for image classification. When the dataset is large, the network consumes considerable computation resources. Therefore, it begins by searching a small dataset for suitable network layer units and then searches a larger dataset (Figure 7). NASNet judges whether the NN architecture can produce suitable gradient-descent results and modifies the probability of selecting that network architecture according to its judgment. It then selects the optimally performing network model. ENAS is an improved variant of NASNet that allows a parent model to share weights with its submodels; thus, training need not be restarted from scratch for the submodels.
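For concreteness, a minimal AutoKeras usage sketch for a two-class (OK/NG) image task is given below; the placeholder arrays and the max_trials value are illustrative assumptions rather than the settings used in the experiments that follow:

```python
import numpy as np
import autokeras as ak

# Placeholder data (assumption): small random images with binary OK/NG labels.
x_train = np.random.rand(20, 186, 189, 3).astype("float32")
y_train = np.random.randint(0, 2, size=20)

# max_trials bounds how many candidate architectures the search evaluates.
clf = ak.ImageClassifier(max_trials=3, overwrite=True)
clf.fit(x_train, y_train, epochs=5)
best_model = clf.export_model()  # best Keras model found by the search
```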
Workpiece for Experimental Detection
An experiment was performed to detect defects on a workpiece made of 304 stainless steel parts (Figure 8). A burr generated around a chamfered hole (blue square in Figure 8) during machining with computer numerical control (CNC) was the detection target. Because wires would eventually pass through such a hole, defects must be detected to prevent wire scratching. The experimental system comprised a 1280 × 1024-pixel camera (Basler acA1280-60gm GigE, CMOS, Ahrensburg, Germany) with a 50 + 15 mm extension ring lens. We used a shadowless ring light to provide illumination from different angles. The surface of the tested workpiece was composed of opaque reflective material. Using a common light source makes the incident angle equal to the reflection angle, thus producing a reflection. Using a ring light source can prevent reflection and highlight defects, thereby effectively solving the reflection problem caused by direct illumination. The optimal light source position and camera inclination angle (α) can be determined through iterative adjustment and experimentation. The angular positioning of the camera relative to the light source in the present study is illustrated in Figure 9.
Dataset
The original image size was 1280 × 550 pixels (Figure 10). The focuses of okay (OK) and not good (NG) images are shown in the green and red boxes, respectively, in Figure 11. After obtaining 300 OK and 300 NG images, we cropped the images to 186 × 189 pixels and used them for training. We defined rough surfaces (e.g., the irregular white parts in the red boxes in Figure 11) as defects.
Figure 11. Cropped OK image (left) and NG defect (right) training images.
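A minimal sketch of the cropping step is given below; the crop coordinates and directory layout are hypothetical, and only the 186 × 189 output size comes from the text:

```python
from pathlib import Path
from PIL import Image

# Hypothetical (left, upper, right, lower) box yielding a 186 x 189 crop.
CROP_BOX = (520, 180, 706, 369)

def crop_dataset(src_dir: str, dst_dir: str) -> None:
    """Crop every captured 1280 x 550 frame down to the region of interest."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        with Image.open(path) as img:
            img.crop(CROP_BOX).save(out / path.name)

crop_dataset("raw/OK", "train/OK")
crop_dataset("raw/NG", "train/NG")
```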
Experimental Architecture
Three experiments were conducted using Python on Google Colaboratory. In Experiment 1, the results of training different models through TL were evaluated. In each training process, the following hyperparameters were used: the optimizer was Adam; the batch size was 10; the number of epochs was 5; and the learning rate was 0.0001. Datasets with two training-to-validation distribution ratios, 5:5 and 8:2, were tested, and the ratio achieving superior results was used for Experiments 2 and 3. Experiment 2 involved the AutoML model designed in this research. Experiment 3 involved using the commercial AutoKeras software to import the same dataset for training and comparing the resulting model architecture and results with those of our designed AutoML model.
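A sketch of this fixed TL configuration follows, assuming a VGG-16 backbone with frozen ImageNet weights and a directory-based dataset layout (both assumptions; only the optimizer, batch size, epoch count, learning rate, and 5:5 split come from the text):

```python
import tensorflow as tf

IMG_SIZE = (186, 189)  # cropped training image size from the Dataset section

train_ds = tf.keras.utils.image_dataset_from_directory(
    "train", validation_split=0.5, subset="training", seed=1,
    image_size=IMG_SIZE, batch_size=10)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "train", validation_split=0.5, subset="validation", seed=1,
    image_size=IMG_SIZE, batch_size=10)

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False  # transfer learning: reuse pretrained features

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),  # OK vs. NG
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```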
Design of AutoML Model
We extracted feature modules from the three models used for TL in Experiment 1, namely VGG-16, ResNet, and MobileNet, denoted as V, R, and M, respectively. In Figure 12, the VGG block, the residual connection block of ResNet, and the depthwise separable convolution block of MobileNet (i.e., the extracted feature modules) are denoted by V, R, and M, respectively. The designed AutoML model contained a network of the aforementioned blocks. Adam, SGD (stochastic gradient descent), and Adagrad (adaptive gradient) were selected as the optimizers for this model, and the learning rates were 0.01, 0.001, and 0.0001. In [22], random search was more effective than grid search for the selection of hyperparameters. Therefore, we used random search to select the model architecture, optimizer, and learning rate. If the model accuracy was insufficient, a new architecture and new hyperparameters were used for retraining. In Experiment 1, VGG-16 exhibited an accuracy of 91% when TL was used; thus, 91% was established as the standard for retraining the designed AutoML model. The operation of the designed AutoML model is illustrated in Figure 12.
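The retraining loop described above can be sketched as follows; build_and_train is a hypothetical stand-in for assembling the chosen V/R/M blocks and training them, and here it returns a simulated accuracy so the sketch runs:

```python
import random

MODULES = ["V", "R", "M"]              # VGG, ResNet, and MobileNet feature blocks
OPTIMIZERS = ["adam", "sgd", "adagrad"]
LEARNING_RATES = [0.01, 0.001, 0.0001]
BASELINE = 0.91                        # VGG-16 TL accuracy from Experiment 1

def build_and_train(architecture, optimizer, learning_rate):
    # Hypothetical stand-in for assembling the chosen blocks and training;
    # it returns a simulated validation accuracy so the sketch is runnable.
    return random.uniform(0.6, 0.99)

best = {"accuracy": 0.0}
while best["accuracy"] < BASELINE:     # retrain until the baseline is beaten
    trial = {
        "architecture": random.sample(MODULES, k=3),  # e.g. ["M", "R", "V"]
        "optimizer": random.choice(OPTIMIZERS),
        "learning_rate": random.choice(LEARNING_RATES),
    }
    trial["accuracy"] = build_and_train(
        trial["architecture"], trial["optimizer"], trial["learning_rate"])
    if trial["accuracy"] > best["accuracy"]:
        best = trial
print(best)
```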
Experimental Results and Discussion
The following text describes the training accuracy, validation accuracy, loss charts, and confusion matrices obtained for each model with the number of epochs being 5. The confusion matrix comprises true positives (TPs), true negatives (TNs), false positives (FPs), and false negatives (FNs). In Experiment 1, the TPs were OK images correctly identified as OK, the TNs were NG images correctly identified as NG, the FPs were NG images incorrectly identified as OK, and the FNs were OK images incorrectly identified as NG. Figure 13 displays the accuracy and loss of VGG-16, ResNet-50, and MobileNet v1 in training and validation. The training and validation accuracy of VGG-16 approached 1 when the number of epochs was 3 and decreased marginally when the number of epochs was 4. When the number of epochs was 5, the accuracy was higher than with 4 epochs and lower than with 3 epochs. The training loss was approximately 0.6 with 4 epochs. The training and validation accuracy of ResNet-50 approached 1 when the number of epochs was 5 and 4, respectively. The validation accuracy increased to 0.6 when the number of epochs was 3. When the number of epochs was 5, the validation loss remained at 0.6, which is not favorable. The training accuracy of MobileNet v1 increased from 0.5 at the beginning to 0.8. The validation accuracy initially increased with training accuracy; however, when the number of epochs was 3, the validation accuracy was only approximately 0.4. The validation and training losses of MobileNet v1 were close to 0.8 and above 0.4, respectively, when the number of epochs was 5. Thus, with TL, apparent convergence in prediction accuracy followed by divergence in validation accuracy can occur, especially when the selected network model and the content of the image profile are not known in advance. The training results are summarized in Table 4. In terms of accuracy and loss, the VGG-16 model exhibited optimal performance; however, it required a high number of parameters (i.e., 134,268,738) and a long training time. The MobileNet v1 model required the shortest training time of only 77 s but achieved low accuracy. Table 5 presents the training results obtained with VGG-16, ResNet-50, and MobileNet v1 under two distribution ratios. The highest values for the 5:5 and 8:2 data distributions were 100 and 20, respectively. The accuracy achieved on the 5:5 dataset was higher than that achieved on the 8:2 dataset; therefore, the 5:5 dataset distribution was used in Experiments 2 and 3. A possible reason for these results is that the defects in the original images had uncomplicated profiles, consisting simply of line edges or curves.
Overfitting is likely when the traditional 8:2 or 7:3 ratio is used.
Results of Experiment 2
As previously mentioned, apparent convergence in prediction accuracy followed by divergence in validation accuracy can complicate the selection of an appropriate network model when using TL. As presented in Table 5, among the compared models, VGG-16 exhibited the highest accuracy of 0.91 in Experiment 1. Therefore, in Experiment 2, we used this accuracy as the criterion for evaluating the necessity of retraining. Table 6 presents the results obtained with the designed AutoML model during training as well as the hyperparameters, model architecture, training time, accuracy, loss, and confusion matrix for different iterations. Table 6 lists the model architecture and parameters randomly selected by the designed AutoML model in each of the six training iterations. The RMV model (named for the order in which the R, M, and V modules are stacked) was selected three times, and different optimizers were used for training. However, the training results were poor: the model achieved an accuracy lower than that achieved in Experiment 1. The third training iteration required the highest number of parameters (>2 million) and the longest training time (>1 h) but achieved the second highest accuracy (i.e., 90%). In the sixth iteration (Table 7), the MRV model was used with a learning rate of 0.001, Adam was selected as the optimizer, and a considerably lower number of parameters was required than in the first five iterations. The developed AutoML model required 13,242 s (nearly 4 h) to train six models and achieved a final accuracy of 95.5%. The MRV architecture selected by the designed AutoML model is illustrated in Figure 14.
Results of Experiment 3
AutoKeras has three preset model architectures, namely ResNet-50, EfficientNet B7 [27], and a CNN composed of two convolutional layers. In the present study, AutoKeras selected the CNN model for training. The details of the architecture of the CNN model are presented in Table 8. The dataset input to the CNN model was normalized and then passed through the following layers in sequence: two 3 × 3 convolutional layers; a max-pooling layer; a dropout layer with a dropout rate of 0.25; a flat layer; a dropout layer with a dropout rate of 0.5; and a fully connected layer (named dense), which provided the output. The Adam optimizer and a learning rate of 0.001 were used. The training results of the AutoKeras model are displayed in Figure 15. This model was trained twice, and no validation dataset was used in the second training iteration. Because AutoKeras merged the training and validation datasets for the final training iteration, Figure 15 displays only a training chart and no validation chart. The trained model weights were applied to the test dataset, and a confusion matrix was obtained. The test dataset comprised 100 OK and 100 NG images, and all images were identified correctly. Because the AutoKeras model was trained twice, it required a long training time (i.e., 1388 s). The final accuracy and loss of the aforementioned model were 0.9983 and 0.0063, respectively.
Figure 15. Accuracy, loss, and confusion matrix for AutoKeras model.
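A hedged Keras reconstruction of this two-convolution CNN, following the layer sequence listed above, is shown below; the filter counts and the rescaling stand-in for the normalization step are assumptions, since Table 8 is not reproduced here:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(186, 189, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),         # stand-in for normalization
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # assumed filter count
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # assumed filter count
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(2, activation="softmax"),     # OK vs. NG output
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```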
Conclusions
In this study, we detected a burr around the chamfered hole of 304 stainless steel parts produced through CNC machining. To prevent the scratching of the wires that pass through this hole, defects should be detected through imaging; thus, images of the hole were used as the training data in this study. Three approaches were evaluated on this dataset: CNNs trained through TL, a self-designed AutoML model, and the AutoKeras model.
The TL model was trained with the following fixed hyperparameters: the Adam optimizer; batch size = 10; number of epochs = 5; and learning rate = 0.0001. This model used VGG-16, ResNet-50, or MobileNet v1. VGG-16 had the highest accuracy among these on the dataset used in this study, and its training and validation accuracies were both high. Although the training accuracy of ResNet-50 eventually reached 1, its validation accuracy was low. Moreover, large fluctuations in prediction accuracy were observed when the number of epochs was 3. Presumably, deeper networks should yield apparently stable prediction convergence; however, as the number of epochs increased, unstable prediction accuracy was actually observed because the defects to be detected were not complicated. The training and validation accuracies of MobileNet v1 were considerably lower than those of the other tested models. The designed AutoML model used random search to obtain a combination of modules to construct the optimal model architecture. It then obtained the hyperparameters after training and established a retraining mechanism so that a new architecture could be selected and retrained if the accuracy of the training results was low. The designed AutoML model trained six models and achieved a final training accuracy of 95.5%. The AutoKeras model required a longer training time but constructed the neural architecture search model in a shorter time. The accuracy and two-layer architecture of the convolutional model selected by AutoKeras indicate that the dataset used was simple and did not require a complex model.
The VGG-16, ResNet-50, and MobileNet v1 models each exhibit architectural advantages, such as compact design or shortcut connections that prevent gradient vanishing. In the spirit of TL, deploying these models can provide preliminary prediction accuracy within a few trials. Our designed AutoML model, whose core layers are obtained by combining the modules of VGG-16, ResNet-50, and MobileNet v1, can effectively improve defect detection and reduce the associated training costs. Our model has considerable advantages for deploying proofs of concept in defect detection by selecting a better CNN candidate. The results of this study can act as a reference for the development of new diagnostic technology for cutting-edge smart manufacturing.
"Materials Science",
"Computer Science"
] |
Chromosomal Differentiation in Genetically Isolated Populations of the Marsh-Specialist Crocidura suaveolens (Mammalia: Soricidae)
The genus Crocidura represents a remarkable model for the study of chromosome evolution. This is the case of the lesser white-toothed shrew (Crocidura suaveolens), a representative of the Palearctic group. Although continuously distributed from Siberia to Central Europe, C. suaveolens is a rare, habitat-specialist species in the southwesternmost limit of its distributional range, in the Gulf of Cádiz (Iberian Peninsula). In this area, C. suaveolens is restricted to genetically isolated populations associated with the tidal marshes of five rivers (Guadiana, Piedras, Odiel, Tinto and Guadalquivir). This particular distributional range provides a unique opportunity to investigate whether genetic differentiation and habitat specialization were accompanied by chromosomal variation. In this context, the main objective of this study was to determine the chromosomal characteristics of the habitat-specialist C. suaveolens in Southwestern Iberia, as a way to understand the evolutionary history of this species in the Iberian Peninsula. A total of 41 individuals from six different populations across the Gulf of Cádiz were collected and cytogenetically characterized. We detected four different karyotypes, with diploid numbers (2n) ranging from 2n = 40 to 2n = 43. Two of them (2n = 41 and 2n = 43) were characterized by the presence of B-chromosomes. The analysis of karyotype distribution across lineages and populations revealed an association between mtDNA population divergence and chromosomal differentiation. C. suaveolens populations in the Gulf of Cádiz provide a rare example of true karyotypic polymorphism potentially associated with genetic isolation and habitat specialization in which to investigate the evolutionary significance of chromosomal variation in mammals and its contribution to phenotypic and ecological divergence.
Introduction
Large-scale chromosomal changes, such as inversions, translocations, fusions and fissions, contribute to the reshuffling of genomes, thus providing new chromosomal forms on which natural selection can work. In this context, genome reshuffling has important evolutionary and ecological implications, since gene flow can be reduced within the reorganized regions in the heterokaryotype, thus affecting co-adapted genes locked within the rearrangement that, if advantageous, increase in frequency in natural populations (reviewed in [1]). In fact, evidence on the role of large-scale chromosomal changes in adaptation and diversification has been reported, especially in the case of inversions [2][3][4][5]. As for chromosomal fusions, however, empirical studies are limited to the house mouse (Mus musculus domesticus) and shrews (Sorex araneus), two mammalian systems where the presence of chromosomal fusions (either fixed or in polymorphic state within populations) are widespread [6][7][8][9][10][11][12][13]. Understanding the genetic and mechanistic basis of these processes will provide insights into how biodiversity originates and is maintained.
Shrews (family Soricidae) represent a clear example of chromosomal diversification within mammals, with diploid numbers ranging from 2n = 19 (Blarina hylophaga) to 2n = 68 (Crocidura yankariensis), showing both inter- and intra-specific chromosomal diversity [14]. Within Soricidae, the genus Crocidura represents a remarkable model where karyotypic variation correlates with phylogenetic relationships within the group. Initial studies based on nuclear and mitochondrial sequences (mtDNA) suggested a common ancestry of all Crocidura species [15,16], with a clear dichotomy between Afrotropical and Palearctic taxa, which display contrasting patterns of chromosomal differentiation [17]. With the exception of C. luna (2n = 28 or 36 [18]), Afrotropical species are characterized by high diploid numbers (from 2n = 42 to 2n = 68), 2n = 50 being the most common chromosomal form. Palearctic species, on the other hand, present a tendency for low diploid numbers (from 2n = 22 to 2n = 42 [17]), with 2n = 40 as the predominant karyotype. It has been suggested that this bimodal distribution of diploid numbers within the genus Crocidura is probably the result of two contrasting tendencies in chromosomal evolution: (i) chromosomal fissions in Afrotropical species, and (ii) chromosomal fusions (tandem fusions, centric fusions, and/or whole-arm translocations) in Palearctic species [19].
Among Palearctic species, the lesser white-toothed shrew (C. suaveolens) is one of the most widely distributed shrews in Eurasia and presents the characteristic 2n = 40 chromosomal form [20,21]. Chromosomal polymorphisms, mainly involving pericentric inversions and heterochromatin distribution, have been reported as rare for white-toothed shrews [21]. Although continuously distributed from Siberia to Central Europe [22], the lesser white-toothed shrew is less common and has a more fragmented distribution in Western Europe, including the Iberian Peninsula [23]. In the southwesternmost limit of its distributional range, in the Gulf of Cádiz, C. suaveolens is rare, and only occurs in restricted, isolated populations associated with the tidal marshes of five rivers (Guadiana, Piedras, Odiel, Tinto and Guadalquivir) [24] (see Figure 1). It has been proposed that its restricted distribution, the strict tidal marsh association and, partly, the genetic isolation of the populations could be the consequence of competitive exclusion by the greater white-toothed shrew (C. russula, 2n = 42) [25,26]. In fact, recent studies based on mtDNA revealed the presence of two differentiated sub-lineages in Southwestern Iberia that diverged from other Iberian lineages around 140 Ka (50-240 Ka), and between themselves around 110 Ka. One of these sub-lineages occupies four river mouths closely located to one another (from 1 to 12 km), in the province of Huelva (Guadiana, Piedras, Odiel and Tinto; sub-lineage C3 [25]), whereas a second sub-lineage was present in the Guadalquivir river mouth (sub-lineage C4 [25]). Subsequent studies using microsatellites revealed that C. suaveolens populations from the Guadiana, Piedras, Odiel and Tinto rivers showed low differentiation among them, but high differentiation with both the distant Guadalquivir population and the closely located Estero Domingo Rubio (EDR) population [24]. Overall, genetic data suggest that the observed genetic patterns are the result of both historical isolation and current restrictions to gene flow imposed by an adverse landscape matrix [24]. How chromosomal reorganizations have contributed (if so) to this genetic divergence is currently unknown. Despite their relevance, chromosomal data have often been neglected in phylogeographical studies (reviewed in [27]). The particular distributional range of the habitat-specialist C. suaveolens in Southwestern Iberia provides a unique opportunity to investigate whether the formation of genetically isolated populations and habitat specialization was accompanied by chromosomal differentiation. Given this context, in the present study, we aimed to do the following: (i) identify the extent of chromosomal variation in C. suaveolens populations in Southwestern Iberia, and (ii) determine how chromosomal variation is distributed within and among genetic and phylogeographic groups. In this paper, we discuss the relative contribution of genetic drift and natural selection and the potential evolutionary significance of observed chromosomal differences.
Sampling
A total of 41 individuals from six different populations across the Gulf of Cádiz were collected in 2015 and 2016. This included different localities along the banks of the Guadalquivir, Guadiana, Odiel, Piedras and Tinto rivers (see Figure 1 and Table 1). The sample size ranged from two to six individuals per population. When possible, samples were obtained from both river banks, in order to evaluate whether rivers were acting as barriers to gene flow. Two additional individuals from the Northwest Iberian lineage (lineage B [24]) were also included in the study for comparison (individuals from the Cáceres and Zamora populations, Table 1). Tissue samples (ear and tail) were obtained from each specimen on site and transferred to the cell culture laboratory in transport medium (Dulbecco's Modified Eagle Medium (DMEM) supplemented with 2 mM L-glutamine, 10% fetal bovine serum, 1% 100× Penicillin/Streptomycin/Amphotericin B solution and 0.07 mg/mL Gentamicin). Animals were immediately released after sample collection. Captures were performed with official permits issued by the corresponding nature conservation institutions, and research was conducted with the approval of the bioethics committees of the University of Huelva and Universitat Autònoma de Barcelona.
Cell Culture and Chromosomal Harvest
Tissue samples were mechanically and enzymatically disaggregated. Briefly, tissue was washed with 5 mL of DPBS (Dulbecco's Phosphate-Buffered Saline) solution (DPBS with 1% 100× Penicillin/Streptomycin/Amphotericin B solution and 1 mg/mL Gentamicin) for 10 min, at 37 °C, in an orbital shaker, at 200 rpm. Biopsies were then shredded into small pieces, using a scalpel, in a Petri dish with 1 mL of DMEM without supplements, and incubated for 45 min in DMEM with 0.25% Collagenase Type II at 37 °C and at 200 rpm. The cell suspension was then centrifuged for 10 min at 300 g. The remaining cells were resuspended in 5 mL of complete growth medium (DMEM supplemented with 20% fetal bovine serum and 2 mM L-glutamine), seeded in 25 cm² T-flasks and cultured at standard conditions (37 °C, 10% CO₂) for four weeks. Cultivated cells proliferated as an adherent monolayer. Subcultures of the adherent cells at early passages (3rd and 4th) were used to obtain chromosomes.
Chromosomal harvest was conducted as previously described [28]. In order to enhance the dispersion of chromosomes, cells were incubated in a hypotonic solution (0.075 M KCl) for 20 min at 37 °C, inverting every 5 min. Subsequently, the cells were centrifuged (5 min at 300 g) and transferred into 15 mL tubes. The cell pellet was washed twice by adding 5 mL of fixative solution (methanol:acetic acid, 3:1, freshly prepared) and centrifuged (5 min at 300 g). The cells were centrifuged again, resuspended in 1 mL of fixative solution and stored at −20 °C until use.
Chromosomal Characterization
Chromosomal spreads were obtained by dropping 15 µL of cell suspension onto a clean dry slide. Slides were baked at 65 °C for one hour and kept at −20 °C until use. Metaphases were stained homogeneously with Giemsa solution for the analysis of the modal karyotype and then G-banded for karyotyping, as previously described [28].
An optical microscope (Zeiss Axioskop) equipped with a charge-coupled device camera (ProgRes CS10Plus, Jenoptik Optical Systems, Jena, Germany) was used for the microscope analysis.
A minimum of 25 good-quality metaphases were captured per specimen with the program Progress Capture 2.7.7 and analyzed in order to obtain the modal karyotype. To construct a representative karyotype for each specimen, chromosomes were ordered by morphology and decreasing size.
Karyotype Distribution across Lineages and Populations
Several mitochondrial lineages in Iberia were defined previously by Biedma and collaborators [25]; two of them (sub-lineages C3 and C4) are present in the study area. Populations were defined as genetic clusters, each corresponding to one of the disjunct marshes associated with each of the five main rivers in the region (Guadalquivir, Tinto, Odiel, Guadiana and Piedras), plus a sixth genetically differentiated population in EDR. Samples were grouped by mitochondrial lineage or geographical population for population cytogenetic analyses.
For the analysis of cytogenetic diversity and differentiation, the A-chromosome diploid number (autosomes and sex chromosomes) and the presence/absence of B (supernumerary) chromosomes were treated as separate diploid and haploid traits, respectively. GENEPOP on the web (https://genepop.curtin.edu.au; [29]) was used to estimate allele (karyotype) frequencies and observed and expected heterozygosities, and to test for departure from Hardy-Weinberg (HW) expectations. We also tested for genotypic linkage disequilibrium between the two traits (A-chromosome number and presence/absence of B-chromosomes), using the log likelihood ratio statistic. Differentiation between populations was assessed by exact G or Fisher's tests and by estimating Wright's FST index.
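For illustration, the core quantities behind these analyses (karyotype frequencies, expected heterozygosity, and an FST estimate of the form (HT − HS)/HT) can be computed as in the following Python sketch; the actual analyses were run in GENEPOP, which uses more refined estimators, and the example treats composite karyotypes as alleles for simplicity, with counts taken from the Results:

```python
from collections import Counter

def freqs(karyotypes):
    """Karyotype (allele) frequencies in one population sample."""
    counts = Counter(karyotypes)
    n = sum(counts.values())
    return {k: c / n for k, c in counts.items()}

def expected_het(p):
    """Expected heterozygosity: He = 1 - sum(p_i^2)."""
    return 1.0 - sum(f * f for f in p.values())

def fst(pop_a, pop_b):
    """FST as (HT - HS) / HT, with HS the mean within-population
    heterozygosity and HT the heterozygosity of the pooled sample."""
    hs = (expected_het(freqs(pop_a)) + expected_het(freqs(pop_b))) / 2
    ht = expected_het(freqs(pop_a + pop_b))
    return (ht - hs) / ht if ht > 0 else 0.0

# Composite karyotypes per individual, as reported for Odiel and Tinto.
odiel = ["2n=40+B"] * 5 + ["2n=42"] * 3
tinto = ["2n=42+B"] * 2 + ["2n=42"] * 4
print(round(fst(odiel, tinto), 3))
```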
Chromosomal Diversity in C. suaveolens in the Gulf of Cádiz
Four karyotype variants were detected in the populations sampled: 2n = 40, 2n = 41 (i.e., 2n = 40 + B), 2n = 42 and 2n = 43 (i.e., 2n = 42 + B; Table 1). All nine individuals from the banks of the Guadalquivir River (sub-lineage C4) were characterized by presenting 2n = 40 (autosomal fundamental number, FNa = 46) (see Figure 2A), the same pattern found in the two individuals from the Northwest Iberian lineage (Cáceres and Zamora). The autosomes consisted of 15 pairs of acrocentric chromosomes and four pairs of bi-armed chromosomes, of which one pair was metacentric and three pairs sub-metacentric. The X-chromosome was a large sub-metacentric (see Figure 2A).
Specimens from three populations (Guadiana, Piedras and EDR) from sub-lineage C3 presented the same karyotype, consisting of 2n = 42 (FNa = 50) (see Figure 2B), with no polymorphic karyomorphs among the 18 individuals analyzed (see Table 1). The autosomes were 15 pairs of acrocentric chromosomes and five pairs of bi-armed chromosomes, of which one pair was metacentric and four pairs were sub-metacentric. The X-chromosome was a large sub-metacentric (see Figure 2B).
Interestingly, chromosomal polymorphisms were detected on both Odiel and Tinto river banks (Table 1), both populations also belonging to sub-lineage C3. In the case of the Odiel population, two distinct karyotypes were found in different proportions: 2n = 42 and 2n = 41. Three out of eight (37.5%) specimens presented the same 2n = 42 karyotype found in Guadiana, Piedras and EDR, whereas five individuals (62.5%) presented a karyotype consisting of 2n = 41 (FNa = 48) chromosomes (see Figure 2C). The 2n = 42 karyotype (FNa = 50) corresponded to the one found in Guadiana, Piedras and EDR, characterized by 15 pairs of acrocentric chromosomes and five pairs of bi-armed chromosomes, of which one pair was metacentric and four pairs were sub-metacentric. The 2n = 41 karyotype, however, corresponded to 15 pairs of acrocentric chromosomes and four pairs of bi-armed chromosomes, of which one pair was metacentric and three pairs were sub-metacentric, plus a single B-chromosome present in all these individuals. Since the main difference with the 2n = 40 karyotype found in the Guadalquivir River was the presence of a single B-chromosome, we refer to the 2n = 41 karyotype from Odiel as 2n = 40 + B.
We also found two distinct karyotypes in the Tinto River population. Two out of six (33.3%) individuals surveyed in Tinto presented 2n = 43 (FNa = 52), whereas the rest (66.6%) were characterized by 2n = 42 (the same karyotype formula found in Guadiana, Piedras and EDR). The 2n = 43 karyotype was characterized by 15 pairs of acrocentric chromosomes and five pairs of bi-armed chromosomes, of which one pair was metacentric and four pairs were sub-metacentric, plus the presence of a B-chromosome. The X-chromosome was a large sub-metacentric (see Figure 2D). Due to chromosomal G-banding homologies and the presence of a single B-chromosome, we refer to the 2n = 43 karyotype from Tinto as 2n = 42 + B.
G-banding comparison between 2n = 40 and 2n = 42 karyotypes suggests the presence of chromosomal fusion/fission events between bi-armed chromosomes.
Karyotype Distribution and Diversity
The four karyotypes observed were unevenly distributed across lineages and populations in the study area. The 2n = 40 A karyotype was the only one observed in the Guadalquivir population, where sub-lineage C4 occurs, and in the Zamora and Cáceres samples, which are representative of the B lineage. In contrast, the 2n = 42 A karyotype was the most frequent in C3 populations and the only one detected in EDR, Piedras and Guadiana (see Figure 3). The presence of B-chromosomes was restricted to the C3 lineage populations, where it occurred in the context of both the 2n = 40 (five out of five occurrences) and the 2n = 42 A karyotypes (two out of six occurrences). As a result, chromosomal differentiation between mtDNA subclades was highly significant across both traits (Fisher's exact test, χ² > 26.46, d.f. = 4, p < 0.00002), and it was high and significant for A-chromosome number (Figure 4).

Table 2. Distribution of karyotypes across populations and karyotypic diversity. The frequency distribution of karyotypes for the composite karyotype and the frequency and diversity statistics for the A-chromosome number and B-chromosome presence are shown for each population, mtDNA lineage and for the pool of samples from the Gulf of Cádiz. Diversity is measured as expected heterozygosity and haplotype diversity for A-chromosome number and B-chromosome, respectively.

Overall, karyotypic differentiation among populations was extremely high (Exact G test, N = 41, p < 3.98×10; Table 3). Slightly different patterns were observed for the B-chromosome, with the highest differentiation between Guadalquivir and Odiel (FST = 0.591, N = 17, p = 0.0093) and the lowest between Odiel and Tinto (FST = 0.013, N = 14, p = 0.591; Table 3).

Table 3. Karyotypic differentiation among pairs of populations. FST values are shown for A-chromosome number (above the diagonal) and B-chromosome (below the diagonal). Asterisks indicate the significance of exact G tests (* p < 0.01; ** p < 0.001). Pairwise comparisons indicated by "-" could not be estimated due to lack of variation in the pair.
Overview of Chromosomal Evolution in Crocidura
Understanding how genomes are organized and which types of chromosomal rearrangements are implicated in macroevolutionary events is fundamental to understanding the dynamics and emergence of new species [30]. Because Crocidura is the largest and one of the most karyotypically diverse genera of the family Soricidae [19,31], it offers a unique opportunity to test the role of chromosomal reorganization in species diversification and habitat specialization.
It has long been assumed that the widespread and monophyletic Palearctic C. suaveolens group is characterized by a karyotype of 2n = 40 [15,16]. Exceptions to this rule were initially reported in isolated populations from the Czech Republic (2n = 41) and Switzerland (2n = 42) [21]. Here we extend these initial observations and describe previously unreported chromosomal variation at the southwesternmost limit of the species' distributional range, in the Gulf of Cádiz, emphasizing the uniqueness of these genetically isolated, marsh-specialist populations. Karyotypic variability was reflected in the presence of four different karyotypes (2n = 40, 2n = 40 + B, 2n = 42 and 2n = 42 + B), the last of which (2n = 42 + B) is reported here for the first time for C. suaveolens. As 2n = 40 is considered the ancestral chromosomal form for C. suaveolens, the G-banding comparisons between karyotypes suggest that chromosomal fissions and/or inversions (changes in centromeric position), together with the emergence of supernumerary (B) chromosomes, have generated the chromosomal variability detected in our study. Most notably, our results provide a rare example of a true intrapopulation chromosomal polymorphism in C. suaveolens.
Remarkably, two of the karyotypes detected (2n = 40 + B and 2n = 42 + B) were characterized by the presence of B-chromosomes. Although previous studies recorded the occurrence of B-chromosomes in C. crossei [17], C. cf. malayana [32], C. poensis [33] and C. suaveolens itself [21], the present study is the first report focusing on the southwesternmost limit of the distributional range of C. suaveolens. Despite their widespread distribution in wild populations of several animal, plant and fungal species, the evolutionary origin and function of B-chromosomes are largely unknown. These dispensable chromosomes show a particular transmission behavior and do not follow Mendelian segregation laws [34]. Most B-chromosomes are mainly or entirely heterochromatic (i.e., largely non-coding), although in some cases B-chromosomes can provide a positive adaptive advantage, as suggested by associations with particular habitats [35] or with increases in crossing-over and recombination frequencies [36-38]. Interestingly, the presence of B-chromosomes in our study was associated with the C3 mtDNA clade, particularly in the Tinto and Odiel populations. This, together with their general rarity in the rest of the distribution, suggests a recent, derived origin of the supernumerary chromosomes. Pending further functional and genomic studies, we can only speculate on the evolutionary implications of B-chromosomes in the C. suaveolens populations of Southwestern Iberia.
Karyotypic Diversity and Differentiation of C. suaveolens Populations in the Gulf of Cádiz
The karyotypic diversity detected in the C. suaveolens populations of the Gulf of Cádiz reveals the potential of chromosomal differentiation in the isolation of genetically distinct populations of this marsh-specialist shrew (Mammalia: Soricidae). In fact, we found an association between the two differentiated mtDNA sub-lineages (C3 and C4) and diploid numbers. All specimens from the Guadalquivir River's mouth (sub-lineage C4) were characterized by the ancestral chromosomal form for C. suaveolens (2n = 40), which was also present in the individuals sampled from the Northwest Iberian lineage (lineage B [24]). Remarkably, this ancestral karyotype was not detected in the sub-lineage C3 populations (i.e., Guadiana, Odiel, Piedras, Tinto and EDR), which presented diploid numbers ranging from 2n = 41 to 2n = 43. Since the two mtDNA sub-lineages are estimated to have diverged around 110 ka [25], the association between mtDNA divergence and chromosomal reorganization adds support for the long-term isolation of the C3 and C4 lineages.
Within the populations of the C3 sub-lineage, the chromosomal form 2n = 42 was the most widespread, suggesting a common (and recent) origin in this lineage, most probably derived by a chromosomal fission from the ancestral form 2n = 40. The presence of 2n = 40 in these populations (always observed in combination with a B-chromosome) is thus most parsimoniously explained as the retention of ancestral variation, especially since secondary contact with 2n = 40 populations in the Guadalquivir River is considered unlikely [24,25]. The chromosomal variation in the Odiel and Tinto populations may thus constitute a transient "floating" polymorphism, as defined by [27], whose persistence may have been favored by several factors, such as its relatively recent origin (the divergence of the C3 sub-lineage is dated to ca. 110 ka [24]), relatively large population sizes, and the spatial structure within the Odiel-Tinto marsh complex. On the other hand, the lack of detected polymorphisms in the westernmost populations may be the consequence of a gradual loss of diversity during the westward colonization of more recent marshes, due to serial founder events [25].
Although chromosomal rearrangements have traditionally been considered to have a strong underdominance effect (e.g., [39]), there is increasing evidence that the small effect of certain types of rearrangements favors their persistence within populations as polymorphisms [12]. This is the case, for example, for centric fusions/fissions (e.g., the physical joining of two acrocentric chromosomes at their centromeric regions, and the reverse). Chromosomal fusions are particularly widespread in nature (reviewed in [27]), occurring in taxa as diverse as mammals, reptiles, insects and mollusks [40,41]. Weak or null selection against centric fusion/fission heterokaryotypes, due to mild meiotic pairing dysgenesis or a reduction in meiotic recombination, might also favor their presence in many lineages, as has previously been suggested for primates [42-44], Cetartiodactyla [45], the house mouse [12,46] and shrews [47]. Although the limited sample size of our study calls for caution, the absence of heterokaryotypes, despite balanced frequencies of the two karyotypes detected in Odiel, could indicate underdominance in C. suaveolens populations, a possibility that deserves further research.
Conclusions
Our observations on C. suaveolens provide an example of the persistence of chromosomal polymorphisms in mammals. The concurrence of chromosomal variation, recently diverged mitochondrial sub-lineages and habitat specialization in the C. suaveolens populations of the Gulf of Cádiz provides a promising scenario for testing the evolutionary significance of chromosomal variation in mammals and assessing its contribution to phenotypic and ecological divergence. Future work should address the currently unresolved questions of possible fitness differences among karyotypes, their role in postzygotic reproductive isolation and their association with morphological or ecological variation. | 6,120.8 | 2020-03-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Matricellular Signal Transduction Involving Calmodulin in the Social Amoebozoan Dictyostelium
The social amoebozoan Dictyostelium discoideum undergoes a developmental sequence wherein an extracellular matrix (ECM) sheath surrounds a group of differentiating cells. This sheath is comprised of proteins and carbohydrates, like the ECM of mammalian tissues. One of the characterized ECM proteins is the cysteine-rich, EGF-like (EGFL) repeat-containing, calmodulin (CaM)-binding protein (CaMBP) CyrA. The first EGFL repeat of CyrA increases the rate of random cell motility and cyclic AMP-mediated chemotaxis. Processing of full-length CyrA (~63 kDa) releases two major EGFL repeat-containing fragments (~45 kDa and ~40 kDa) in an event that is developmentally regulated. Evidence for an EGFL repeat receptor also exists and downstream intracellular signaling pathways involving CaM, Ras, protein kinase A and vinculin B phosphorylation have been characterized. In total, these results identify CyrA as a true matricellular protein comparable in function to tenascin C and other matricellular proteins from mammalian cells. Insight into the regulation and processing of CyrA has also been revealed. CyrA is the first identified extracellular CaMBP in this eukaryotic microbe. In keeping with this, extracellular CaM (extCaM) has been shown to be present in the ECM sheath where it binds to CyrA and inhibits its cleavage to release the 45 kDa and 40 kDa EGFL repeat-containing fragments. The presence of extCaM and its role in regulating a matricellular protein during morphogenesis extends our understanding of CaM-mediated signal transduction in eukaryotes.
Matricellular Protein-Mediated Signal Transduction
The matricellular protein component of the extracellular matrix (ECM) functions as a modulator and mediator of cell-matrix interactions [1-3]. Unlike collagens and laminins, matricellular proteins do not contribute to the physical properties or organization of extracellular structures. These proteins share several attributes: they function as both soluble and insoluble ECM components, they associate with extracellular proteases and growth factors, and they are expressed at high levels during developmental events. Among other characteristics, matricellular proteins modulate cellular processes by binding to cell surface receptors and initiating intracellular signal transduction. The best characterized matricellular proteins are SPARC (secreted protein acidic and rich in cysteine), tenascin cytotactin (tenascin C) and thrombospondins (TSP) 1 and 2 [2,4,5]. Metalloproteases of the ADAMTS (a disintegrin and metalloproteinase with thrombospondin motifs) superfamily cleave matricellular proteins, including thrombospondins, resulting in the extracellular release of small signaling polypeptides and peptides [6].
Epidermal growth factor-like (EGFL) repeats are a common feature of cysteine-rich matricellular proteins [2]. EGFL repeats, which can be present as single entities or as multiple tandem repeats, are a widespread but highly variable domain whose conserved cysteine residues reflect the positions of those residues in EGF [7,8]. While they have been well studied in certain human proteins, they are also present in lower eukaryotes, including the model organisms Drosophila melanogaster and Dictyostelium discoideum [9-11].
Tenascin C, TSP-1, and laminin-5 are the best studied EGFL repeat-containing mammalian ECM proteins. The EGFL repeats of these proteins initiate intracellular signal transduction events that modulate cell movement. For example, the EGFL repeats of tenascin C (esp. Ten14) increase the rate of cell motility by binding to the EGFR and activating EGFR-dependent signaling [12,13]. Ten14 functions at micromolar concentrations but, unlike EGF, binding of Ten14 is transient and does not lead to internalization [13,14]. Due to its transient binding, Ten14 can mediate continuous activation of the receptor and thus sustain the increased cell motility induced by the EGFL repeat. The EGFL repeats of TSP-1 also increase the rate of cell movement by activating intracellular signaling events [15]. TSP-1 EGFL repeats do induce autophosphorylation of the EGFR but not by binding to the receptor, suggesting that not all EGFL repeats bind to the EGFR to enhance cell movement [15]. While not a true matricellular protein, laminin-5 does possess EGFL repeats which increase the rate of cell movement by binding to the EGFR [16]. When cleaved by matrix metalloproteinase 2 (MMP2), the resulting EGFL repeat-containing cleavage products activate the EGFR and downstream signaling pathways. Overall, studies on these proteins strongly suggest that a primary function of cysteine-rich, EGFL repeats present within ECM proteins is to regulate cell movement. The results for Dictyostelium suggest this function may be evolutionarily conserved.
ECM Proteins of Dictyostelium discoideum
Dictyostelium, a eukaryotic social amoebozoan, is a widely used model system for studying cellular and developmental processes [17]. When single cells are starved, they enter a developmental program that begins with cell aggregation to produce multicellular, tissue-like aggregates. The aggregation process is driven by cyclic AMP (cAMP)-mediated chemotaxis, an event that has been extensively studied and reviewed [18-20]. The resulting aggregates develop into multicellular pseudoplasmodia, or slugs, that exhibit a defined pattern and polarity of prespore and prestalk cells. The cells of the slug are surrounded by an ECM historically referred to as a slime sheath. This sheath, which is continuously synthesized from the tip (front) of the slug, was considered to serve as protection against desiccation [21]. Under appropriate conditions, the slugs culminate into fruiting bodies comprised of dead stalk cells supporting a mass of viable spores [22].
The Dictyostelium sheath shares similarity in structure and composition to the ECMs of both animals and plants. It is made up of cellulose and other polysaccharides embedded in a matrix of structural and non-structural proteins. Glycoproteins called sheathins (i.e., EcmC, EcmD, EcmE) co-localize with cellulose and are involved in regulating slug migration [23]. EcmA is a well-studied sheath protein that has been shown to be an integral structural protein distributed throughout the ECM. However, EcmA gene knockout (ecmA-) cells still form slugs that migrate normally in spite of a weakening of their ECM [24]. Other work has identified a group of soluble, mobile glycoproteins that localize within the sheath but their identity has not been determined [25,26]. More recently, the protein CyrA has been identified as another ECM component in Dictyostelium [27,28].
CyrA is a CaM-binding EGFL Repeat-Containing ECM Protein
A cDNA encoding CyrA, a novel, cysteine-rich, putative calmodulin (CaM)-binding protein (CaMBP), was isolated using the CaM-binding overlay technique [27,29]. CyrA is characterized by the presence of a signal sequence (i.e., a target for secretion), a CaM-binding domain (CaMBD) and four tandem EGFL repeats (EGFL1-4) that comprise the C-terminal region of the protein (Figure 1, A). These repeats, especially EGFL1, show strong sequence similarity to Ten14 [27]. CyrA, which binds to CaM both in the presence and absence of calcium ions (Ca2+), is secreted during growth and development [27]. In keeping with it being a secreted protein, both CyrA immunolocalization and CyrA-GFP localization showed that the protein localizes to the endoplasmic reticulum, particularly its perinuclear component [28]. Western blot analyses revealed that the intracellular expression of full-length CyrA (~63 kDa) peaks between 12 and 16 hours of development, the time when multicellular slug formation occurs (Figure 1, B) [28]. At this time, CyrA is secreted at very high levels and localizes to the ECM (i.e., sheath) of the migrating slug. Immunolocalization of CyrA revealed that it is predominantly localized at the tip of the slug, with decreasing abundance towards the back [27].
In various experiments, cells were treated with pharmacological agents: LY294002 (PI3K inhibitor), quinacrine (PLA2 inhibitor), W7 (CaM inhibitor), and TMB-8 (intracellular calcium release inhibitor). Pharmacological intervention yielded insight into the regulation of CyrA secretion by showing that it is dependent on intracellular Ca2+ release as well as on active CaM, phosphatidylinositol 3 kinase (PI3K), and phospholipase A2 (PLA2) function [28]. During development, CyrA is secreted and proteolytically cleaved to release two major extracellular C-terminal fragments of approximately 45 kDa and 40 kDa (Figure 1, B) [27]. PEST sequences are rich in proline (P), glutamic acid (E), serine (S), and threonine (T) and are considered to act as intramolecular signals for rapid protein degradation. The size of these fragments and the location of the putative PEST sequence (residues 331-388) indicate that they would contain all four of the EGFL repeats. Dictyostelium secretes a large number of proteases, but the CyrA-cleaving protease has not yet been identified [30]. Like the EGFL repeat-containing cleavage products from mammalian matricellular proteins, EGFL1 of CyrA enhances the rate of cell motility in Dictyostelium [31,32].
EGFL1 Peptide Increases Cell Motility and Chemotaxis
Treatment of Dictyostelium cells with a peptide identical in sequence to the first 18 amino acids of EGFL1 (DdEGFL1) results in a 2-6-fold increase in random cell motility and an 85% increase in cAMP-mediated chemotaxis, depending on the strain used [31,32]. The over-expression of CyrA also increases the rate of cAMP-mediated chemotaxis, providing in vivo support for the role of CyrA as a normal mediator of cell movement and cAMP chemotaxis in Dictyostelium [28]. EGFL1 is not a chemoattractant for Dictyostelium cells; however, it does activate signaling pathways that function in a supportive role to increase the rate of both random cell movement and cAMP chemotaxis during development [33]. Consistent with this, starvation increases the response of cells to EGFL1 [27]. The localization of CyrA to the ECM during multicellular development supports the concept that EGFL1 is involved in regulating the movement of cells within the slug during its migration and during morphogenesis [27,28,34].
EGFL1-Mediated Signal Transduction
A model of the major signaling events mediated by EGFL1 is presented in Figure 2. EGFL1 increases the rate of Dictyostelium random cell movement via a novel signaling pathway that does not require signaling mediated by either of the two cAMP receptors that are active during early development (carA and carC) [32]. The increase in random cell movement induced by EGFL1 requires signaling involving CaM and intracellular Ca 2+ release and leads to increases in polymeric actin and myosin II heavy chain (MHC) in the cytoskeleton [32]. The cytoskeletal proteins talin B (TalB) and paxillin B (PaxB), which are homologues of mammalian talin and paxillin, respectively, are also involved in translating the EGFL1 signal into an increase in random cell motility [35]. While the activities of both PI3K and PLA2, two signaling proteins that mediate the chemotaxis of Dictyostelium amoebae in parallel compensatory pathways, are required for EGFL1-enhanced random cell motility, PLA2 appears to be the more dominant regulator [31,33].
Both Ca2+ and Ras signaling are required for EGF-induced cell movement in normal mammalian and cancer cells [36,37]. In Dictyostelium, RasC and RasG have been shown to regulate chemotaxis towards cAMP, and EGFL1-increased movement partially requires the activity of RasG, but not RasC [32]. The cAMP-dependent serine/threonine kinase, protein kinase A (PKA), which is involved in regulating cell movement and cAMP chemotaxis in Dictyostelium, is also required for the increased rate of cell movement induced by EGFL1, showing that PKA kinase activity is required for EGFL1 signal transduction [35,38]. In keeping with this, EGFL1-induced phosphorylation of a 90 kDa phosphotyrosine protein during cell starvation requires PKA activity [35]. Other, as yet unidentified, kinases are also involved in EGFL1 signaling, since two unidentified phosphotyrosine proteins appear in DdEGFL1 pull-down assays [35].
Early work on cell motility in Dictyostelium revealed that random movement occurs in the absence of cAMP signaling via a cAMP-independent pathway, while during aggregation a cAMP-dependent pathway takes over from the cAMP-independent one. The work done on EGFL1 signal transduction reveals that activation of the cAMP-independent signaling pathway can be induced by EGFL repeat-containing proteins, providing more understanding of how cell motility is regulated in this social amoebozoan [32]. As discussed in the next section, insight into the role of one cytoskeletal component is an example of this.

Figure 2. The primary signal transduction events mediating the increase in cell motility and chemotaxis induced by EGFL1. Secretion of full-length CyrA (CyrA FL) is Ca2+ and calmodulin (CaM) dependent. Extracellularly, CyrA FL becomes part of the extracellular matrix, where it is processed to release two smaller EGFL repeat-containing C-terminal fragments of ~45 kDa (CyrA-C45) and ~40 kDa (CyrA-C40). This proteolytic cleavage is inhibited by extracellular CaM (extCaM). At least one EGFL repeat (i.e., EGFL1) within the 45 kDa and 40 kDa CyrA fragments binds to an unidentified receptor to activate intracellular signaling events, which ultimately increase both random cell motility and chemotaxis (rounded box). As detailed in the main text, these signaling events involve kinase signaling (e.g., PKA), Ca2+, CaM, and RasG, which oversee actin polymerization and myosin heavy chain (MHC) assembly. For cells responding chemotactically to cAMP via carA or carC, stimulation of the PI3K/PLA2 signaling pathways increases MHC expression, MHC assembly and actin polymerization. MHC expression is also regulated by Ca2+ and intracellular CaM (iCaM). Ca2+, iCaM, and RasG, plus an unidentified kinase, are also involved in the sustained phosphorylation of vinculin B-related protein (VinB) induced by EGFL1. Phosphorylated VinB (pVinB), working in conjunction with the binding proteins talin, α-actinin and paxillin along with the actin cytoskeleton, leads to increased cell motility and chemotaxis. The details of each step in the figure are covered in the main text of this review.
Vin B Phosphorylation is Regulated by EGFL1 Signal Transduction
During the starvation of Dictyostelium cells a 210 kDa protein is dephosphorylated [31]. Addition of EGFL1 peptide (i.e., DdEGFL1) to starved cells sustains the threonine phosphorylation of this protein. To identify the protein, immunoprecipitation coupled with an LC/MS/MS analysis was carried out revealing it to be vinculin B (VinB) [35]. Since VinB shares sequence similarity with mammalian vinculin, a protein that links the actin cytoskeleton to the plasma membrane, its potential co-localization with the cytoskeleton was investigated. Threonine phosphorylated VinB (pVinB) as well as VinB-GFP both localized to the cytoplasm of Dictyostelium amoebae with specific localization to the cytoskeleton [35]. Furthermore, VinB-GFP undergoes threonine phosphorylation and co-immunoprecipitates with the known vinculin-binding cytoskeletal proteins MHC, actin, alpha-actinin, and talin [35].
Mutant analysis further revealed that EGFL1-increased cell movement requires the cytoskeleton-associated proteins TalB and PaxB [35]. The threonine phosphorylation of VinB is independent of PI3K/PLA2 signaling and PKA kinase activity [35]. In addition to revealing aspects of the function of VinB, this work also provided insight into the signaling pathways involved in EGFL1-regulated cell movement. A model for the involvement of vinculin B in EGFL repeat-enhanced cell movement is presented in Figure 3.

Figure 3. Involvement of vinculin B in EGFL repeat-enhanced cell movement. EGFL1 binds to an uncharacterized receptor to initiate intracellular signal transduction involving Ca2+, CaM, RasG and PKA. These events underlie the increased expression and assembly of myosin II heavy chain (MHC) as well as the polymerization of actin. Concomitant with this, vinculin B (VinB) is threonine phosphorylated (pVinB) by signal transduction involving an uncharacterized kinase, leading to its association with talin, actin, and MHC to mediate the increased cell movement induced by EGFL1 binding.
Extracellular CaM and its Multiple Functions
The identification of CyrA as an extracellular CaMBP at first seemed enigmatic. While the literature abounds with data on the myriad functions of intracellular CaM, the literature on extracellular CaM (extCaM) has been sporadic and diverse, resulting in considerable skepticism about whether CaM is truly present outside of cells and what its function there might be. The evidence for extCaM is strongest in plants, where it regulates several functions including cell wall regeneration, gene regulation, germination and proliferation in various species [41,42]. In animals, extCaM mediates DNA synthesis and cell proliferation in a number of species [43,44]. Individual studies have also implicated it in limb regeneration and vasodilation [45,46]. However, no reports of extCaM existed for any eukaryotic microbe.
A proteomic analysis of extracellular proteins from growing and developing Dictyostelium cells indicated the presence of CaM in the extracellular medium [30]. The existence of extCaM in Dictyostelium was validated in subsequent studies on CyrA. Western blotting revealed that CaM is present in the extracellular medium during growth, and treatment of growth-phase cells with exogenous CaM inhibits cell proliferation [27,47]. While the presence of extCaM during growth was suspected, our analyses unexpectedly revealed that high and constant levels of extCaM are present during growth as well as during starvation, aggregation (i.e., cAMP-mediated chemotaxis), multicellular tissue formation (i.e., slug formation) and later developmental stages [27,47]. The absence of major cytoplasmic proteins (e.g., tubulin) in the extracellular medium verified that this extCaM is due to secretion and not cell death [27]. These events and the multiple functions of extCaM are summarized in Figure 4. When cells starve, as a prelude to multicellular development, extCaM comes into play in the developmental regulation of the cAMP chemotaxis that drives cell aggregation [47]. Earlier work showed that antagonizing CaM inhibited cAMP chemotaxis, with several CaMBPs directly linked to the event [48]. These results were supported and extended when exogenous CaM was shown to enhance cAMP chemotaxis [47]. Ongoing investigations into the CaMBP CyrA and its EGFL repeats have reinforced the role of extCaM and extracellular CaMBPs in this event.
During the later stages of development, extCaM is specifically localized in the ECM [28]. The multicellular slug, which is motile, is covered in an ECM (or sheath) that is synthesized at the slug tip, forming a tube of proteins, glycoproteins and carbohydrates through which the cells comprising the slug move [21]. The sheath continues to be synthesized as the slug moves, leaving behind a relatively cell-free trail of ECM that can be isolated and analyzed. Purified sheath contains CaM, full-length CyrA, and the 45 kDa and 40 kDa CyrA cleavage products [28]. Co-immunoprecipitation studies showed that extCaM binds to CyrA as well as to the 45 kDa and 40 kDa fragments in the ECM [27,28]. The binding of extCaM to CyrA represents another extracellular function for extCaM: binding to extracellular CaMBPs. Exactly how these proteins interact and function together remains to be elucidated.
ExtCaM shows a diffuse, graded distribution in the front and middle of the slug but forms punctate deposits in increasing numbers further back in the slug and into the ECM trail [47]. In contrast, CyrA is most concentrated in the sheath at the tip of the slug, diminishing in amount towards the rear [27]. The localizations of extCaM and CyrA in the ECM surrounding the slug are both compelling and enigmatic, because they could be important in the regulation of slug motility and morphogenesis during multicellular development. While the details remain to be elucidated, the binding of extCaM to CyrA has been suggested to have a developmental function in regulating the movement of cells within the slug [47].
As discussed above, the EGFL1 domain of CyrA increases cell motility and chemotaxis. In contrast, extCaM binds to CyrA, resulting in decreased proteolytic processing, revealing another function for extCaM: the regulation of CaMBP proteolysis. It is possible that the binding of extCaM to CyrA controls the release of the EGFL1-containing 45 kDa and 40 kDa fragments to regulate the localized rates of cell movement in the slug. However, until the specific role of each of the EGFL1-containing components (i.e., CyrA, C45 and C40) is elucidated, it is not possible to further clarify their importance in the movement of cells in the slug or their potential function in morphogenesis [47].

Figure 4. When cells are starved, they embark on multicellular development, which begins with cAMP-mediated chemotaxis. ExtCaM alone mediates cAMP chemotaxis. In addition, as development progresses, extCaM and secreted CyrA become part of the extracellular matrix (ECM) surrounding the multicellular slug. At this time, EGFL1-containing fragments of CyrA increase the rate of cAMP chemotaxis. The binding of extCaM to CyrA inhibits its proteolytic processing, reducing the rate of EGFL1-fragment production.
Conclusions
The study of CyrA has added new insight into the evolution of matricellular proteins and the function of EGFL repeats. It has also provided data revealing that CaM is a valid extracellular protein with diverse, stage-specific functions. The cysteine-rich CyrA is not homologous to any mammalian protein, but it does share certain defining characteristics with ECM proteins designated as matricellular proteins. The characterization of the function of one of its four tandem EGFL repeats (i.e., EGFL1) has not only provided the first evidence for EGFL repeat function in a lower eukaryote; its role as a regulator of cell motility and its use of similar signaling mechanisms are also strongly reminiscent of matricellular protein function in mammals. CyrA is unique in another way, in that it is the first identified matricellular CaMBP. In keeping with this, extCaM is present in the ECM, where it not only binds to CyrA but also controls the release of C-terminal EGFL repeat-containing fragments from the protein.
In total, the data show not only that extCaM is present in the eukaryotic amoebozoan Dictyostelium but also that it mediates several processes during both growth and development. Coupled with the research carried out on plants and animals, these results reveal that extCaM may be as functionally critical and evolutionarily ubiquitous as intracellular CaM. Further research in a diversity of species will enhance our understanding of the true value of extCaM in signal transduction and cell function. | 4,809 | 2013-02-15T00:00:00.000 | [
"Biology"
] |
Integrative Analysis for Identifying Co-Modules of Microbe-Disease Data by Matrix Tri-Factorization With Phylogenetic Information
Microbe-disease association mining is drawing more and more attention due to its potential for capturing disease-related microbes. Hence, it is essential to develop new tools and algorithms to study the complex pathogenic mechanisms of microbe-related diseases. However, previous research mainly focused on the paradigm of “one disease, one microbe” and rarely investigated the cooperation and associations between microbes, diseases, or microbe-disease co-modules at the system level. In this study, we propose a novel two-level module identification algorithm (MDNMF) based on nonnegative matrix tri-factorization, which integrates two similarity matrices (disease and microbe similarity matrices) and one microbe-disease association matrix into the objective of MDNMF. MDNMF can identify modules at different levels and reveal the connections between these modules. In order to improve the efficiency and effectiveness of MDNMF, we also introduce the human symptom-disease network and microbial phylogenetic distance into the model. Furthermore, we applied it to the HMDAD dataset and compared it with two NMF-based methods to demonstrate its effectiveness. The experimental results show that MDNMF obtains better performance in terms of enrichment index (EI) and the number of significantly enriched taxon sets. This demonstrates the potential of MDNMF in capturing microbial modules that have significant biological function implications.
INTRODUCTION
With the development of high-throughput sequencing technologies such as 16S ribosomal RNA (16S rRNA) sequencing, more and more microbes have been identified. Nearly 10^14 bacterial cells exist in the human gut and provide a wide variety of gene products that drive diverse metabolic activities (Micah et al., 2007; Shah et al., 2016). The dynamic balance of the human microbiome composition is essential to maintaining good health. Once this balance is broken, many closely related human diseases and disorders may arise (Medzhitov, 2007; Thiele et al., 2013), such as colorectal cancer (CRC) (Boleij et al., 2014), obesity (Turnbaugh et al., 2009), inflammatory bowel disease (IBD) (Qin et al., 2010), bacterial vaginosis (Fredricks et al., 2005), and so on. For example, Jorth et al. have reported that gene expression profiles of periodontitis-related microbial communities show highly conserved changes relative to healthy samples (Jorth et al., 2014). This means that microbiome composition changes in the oral cavity could be associated with the pathogenesis of periodontitis. Furthermore, Socransky et al. found that subgingival plaque is connected with several major microbial taxa, including Fusobacterium, Prevotella, and so on (Socransky et al., 1998). Chen et al. observed that colonization with Helicobacter pylori is negatively correlated with allergy symptoms (to pollens and molds), especially in childhood (Chen and Blaser, 2007; Blaser, 2014). All of these findings reveal potential associations between pathogenic microorganisms and complex human diseases.
Considering the key role of microbes in health, many important projects, including the Human Microbiome Project (HMP) (Gevers et al., 2012), the Earth Microbiome Project (EMP) (Gilbert et al., 2010), and Metagenomics of the Human Intestinal Tract (MetaHIT) (Ehrlich and Consortium, 2011), were launched to investigate the relationships between microbiota and diseases. Moreover, related databases and tools have been developed to analyze the growing information on disease-related microbes. A human microbe-disease association database, called HMDAD (Ma et al., 2016a), manually collected 483 microbe-disease association entries from previously published literature. These databases make microbe-disease association prediction by computational approaches possible. Zhang et al. proposed a bidirectional similarity integration method (BDSILP) for predicting microbe-disease associations by integrating disease-disease semantic similarity and microbe-microbe functional similarity. Wang et al. proposed a semi-supervised computational model called LRLSHMDA to predict large-scale microbe-disease associations (Wang et al., 2017). Huang et al. combined neighbor-based collaborative filtering and a graph-based model into a unified objective function to predict microbe-disease relationships. He et al. integrated a symptom-based disease similarity network into graph-regularized nonnegative matrix factorization (GRNMF), while utilizing neighbor information to boost the performance of GRNMF (He et al., 2018). Zhang et al. utilized the advantages of ensemble learning to improve the performance of association prediction, providing a new way to mine microbe-disease relationships (Zhang et al., 2018a; Zhang et al., 2019). All these efforts pave the way for further understanding the complex regulatory mechanisms in which disease-related microbiota are involved. However, cellular systems are organized in a complex way and biological functions are mainly performed in a highly modular manner (Barabasi and Oltvai, 2004; Chen and Zhang, 2018). In microbial ecosystems, microbes often cooperate with each other to carry out biochemical activities. For example, ammonifiers decompose nitrogen-containing organic compounds to release ammonia. Nitrous acid bacteria (also known as ammonia-oxidizing bacteria) oxidize ammonia to nitrous acid. Then, nitric acid bacteria (also known as nitrous acid-oxidizing bacteria) oxidize nitrous acid to nitric acid. These two types of bacteria obtain the energy needed for growth from the above oxidation processes. Therefore, the mutualism among ammonifiers, nitrous acid bacteria, and nitric acid bacteria forces them to form a tight biological community. Guo et al. studied the contributions of high-order metabolic interactions to the activity of a four-species microbial community and demonstrated that interactions between pairwise species play an important role in predicting complex cellular network behavior (Guo and Boedicker, 2016). Although knowledge about microbe-disease associations can provide helpful insights into understanding complex disease mechanisms (He et al., 2018), such "one disease, many microbes" models ignore interactions within microbial communities composed of several species.
Recently, multilayer interactions and modular organization have attracted more and more attention. Several studies proposed co-module discovery methods to identify combinatorial patterns using paired gene expression and drug response data (Kutalik et al., 2008; Chen and Zhang, 2016). In addition, Chen et al. proposed a new method based on nonnegative matrix factorization (NMF) to reveal drug-gene module connections across different molecular levels (Chen and Zhang, 2018). Cai et al. proposed a network-guided sparse binary matching model to jointly analyze the gene-drug patterns hidden in pharmacological and genomic datasets with additional prior information on genes and drugs. Chen et al. also proposed higher-order graph matching with multiple network constraints (gene network and drug network) to identify co-modules from multiple data sources.
All of these works have made great progress in studying the coordinated regulatory mechanisms between two or more biological molecular networks from a systematic view. However, as far as we know, little work has focused on discovering microbe-disease co-modules. Previous studies mainly aimed at microbe-disease association prediction and did not reveal within-module interactions (microbe-microbe, disease-disease) at the same level or cross-module interactions (microbe-disease) across multiple molecular levels.
To this end, we design a new NMF-based algorithm (MDNMF) that constructs a two-level microbe-disease module network using Gaussian interaction profile kernel similarity. In order to improve the efficiency and effectiveness of the proposed algorithm, we introduce the human symptom-disease network (Zhou et al., 2014) and microbial phylogenetic distance into the model, which makes functionally similar microbes (diseases with similar symptoms) tend to appear in the same microbial module (disease module). We applied MDNMF to the HMDAD dataset and compared it with two classical NMF methods to demonstrate its effectiveness. The experimental results show that the majority of identified microbial modules have significant functional implications [i.e., they are significantly enriched in taxon sets, which refer to groups of microbes that have something in common (Dhariwal et al., 2017)]. Figure 1 gives an illustrative example of MDNMF.
The contributions of this paper are as follows: (1) an efficient two-level module discovery algorithm (MDNMF) is proposed to reveal microbe-microbe, disease-disease and microbe-disease module associations; (2) the phylogenetic distance of disease-related microbes is introduced into the proposed MDNMF model so that phylogenetically close microbes tend to intertwine in the development of similar diseases (to our knowledge, this is the first attempt to link microbial phylogenetic relatedness to NMF-based module identification);
(3) the proposed MDNMF algorithm is easily extended to other multi-level molecular network applications, such as virus-host or microbe-drug co-module discovery. The rest of this paper is organized as follows: the next section gives a brief overview of NMF and the MDNMF model; the experimental results and discussion follow; and conclusions are provided in the last section.
Dataset
The dataset was downloaded from the Human Microbe-Disease Association Database (HMDAD, http://www.cuilab.cn/hmdad) (Ma et al., 2016a). It contains 483 microbe-disease associations, covering 292 microbes and 39 diseases. Because of the 16S rRNA sequencing techniques used, most microbe names were recorded at the genus level. Based on these known microbe-disease relations, an adjacency matrix X ∈ R^{292×39} can be constructed, where X_ij = 1 if microbe i is associated with disease j and X_ij = 0 otherwise.
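As a minimal sketch of this construction (in Python, assuming the HMDAD entries have already been parsed into (microbe index, disease index) pairs; the pairs shown are illustrative placeholders, not real HMDAD records):

```python
import numpy as np

# Illustrative placeholder pairs; the real HMDAD dump has 483 such entries.
associations = [(0, 3), (1, 3), (2, 17)]

n_microbes, n_diseases = 292, 39
X = np.zeros((n_microbes, n_diseases))
for i, j in associations:
    X[i, j] = 1.0  # X[i, j] = 1 if microbe i is associated with disease j
```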
The NMF Model
NMF and its variants have been widely applied in various fields, including bioinformatics (Ma et al., 2016b; Ma et al., 2017; Chen and Zhang, 2018). In NMF, given an original data matrix X ∈ R^{n×m}, we seek two low-rank matrices W ∈ R^{n×k} (also called the basis matrix) and H ∈ R^{k×m} (the coefficient matrix) that approximate X, such that X ≈ WH, where k << min(m,n). Here, the data X are represented as nonnegative linear combinations of the basis vectors. Such a decomposition can be obtained by solving the following least squares problem:

$$\min_{W \ge 0,\, H \ge 0} \left\| X - WH \right\|_F^2$$

where ‖·‖_F denotes the Frobenius norm.
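As an illustration of how such a factorization is typically computed, here is a minimal sketch of the classical Lee-Seung multiplicative update scheme for this objective (not the authors' implementation):

```python
import numpy as np

def nmf(X, k, n_iter=500, eps=1e-10, seed=0):
    """Lee-Seung multiplicative updates for min ||X - WH||_F^2 with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update coefficient matrix
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update basis matrix
    return W, H
```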
Gaussian Interaction Profile Kernel Similarity for Microbes
Based on the hypothesis that functionally similar microbes tend to be associated with more common human diseases, Gaussian interaction profile kernels can be used to calculate an inferred microbe similarity (Wang et al., 2017; He et al., 2018). Given the microbe-disease association matrix X, the ith row of X represents the interaction profile between microbe m_i and all the diseases. For any two microbes m_i and m_j, their similarity can be computed as follows:

$$MS(m_i, m_j) = \exp\left(-g_m \left\| X_{i,*} - X_{j,*} \right\|^2\right)$$

Figure 1. Illustrative example of MDNMF. First, based on the Gaussian kernel function, microbe and disease similarity matrices are obtained from the original microbe-disease association matrix. Then, these three matrices serve as the input of MDNMF. Simultaneously, in order to improve the accuracy of module finding and the biological interpretability of the modules identified by MDNMF, the human symptom-disease network and microbial phylogenetic distance are also introduced into the model. At last, microbe-disease co-modules from different levels can be obtained.
where X_{i,*} denotes the ith row of matrix X and g_m is a bandwidth parameter, normalized from a base bandwidth parameter g′_m by the average squared norm of the interaction profiles (the rows of X):

$$g_m = g'_m \Big/ \left(\frac{1}{n_m} \sum_{i=1}^{n_m} \left\| X_{i,*} \right\|^2\right)$$

Here, n_m is the number of microbes related to all diseases (here, n_m = 292). g′_m was set to 1 following a previous study (Wang et al., 2017). In this way, the microbe similarity matrix MS can be constructed, whose elements give the similarity score between any two microbes.
Gaussian Interaction Profile Kernel Similarity for Diseases
Similarly, a Gaussian kernel-based disease similarity matrix can be inferred as follows:

$$DS(d_i, d_j) = \exp\left(-g_d \left\| X_{*,i} - X_{*,j} \right\|^2\right), \qquad g_d = g'_d \Big/ \left(\frac{1}{n_d} \sum_{i=1}^{n_d} \left\| X_{*,i} \right\|^2\right)$$

where X_{*,i} denotes the ith column of X and n_d is the number of diseases related to all microbes (n_d = 39); g′_d was also set to 1.
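A sketch of this computation in Python, assuming the Gaussian interaction profile kernel reconstructed above:

```python
import numpy as np

def gip_kernel(profiles, gamma_prime=1.0):
    """Gaussian interaction profile kernel over the rows of `profiles`.

    The bandwidth is gamma_prime divided by the mean squared norm of the
    interaction profiles, as in the normalization described above.
    """
    norms_sq = (profiles ** 2).sum(axis=1)
    gamma = gamma_prime / norms_sq.mean()
    # Squared Euclidean distances between all pairs of rows.
    sq_dists = norms_sq[:, None] + norms_sq[None, :] - 2 * profiles @ profiles.T
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

# MS uses rows of X (microbe profiles); DS uses columns (disease profiles):
# MS = gip_kernel(X)
# DS = gip_kernel(X.T)
```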
Phylogenetic Distance for Disease-Related Microbes
Gaussian interaction profile kernel similarity reflects the intertwining between microbes in terms of their microbe-disease association relationships. However, functional similarity cannot be explained by disease relatedness alone; homology and phylogenetic correlation should be considered as side information so that microbes connected in the microbe-disease association matrix are likely to be placed in the same co-modules. We searched 91 nucleotide sequences of disease-related microbes from NCBI and imported them into MEGA to compute the phylogenetic distance between pairwise sequences with the Kimura 2-parameter model; other parameters were left at their defaults. In this way, we obtained the final microbial phylogenetic distance matrix M_phy, which is used to encourage microbe members within identified modules to be phylogenetically close.
In order to demonstrate the role of phylogenetic information in identifying disease-related microbe modules, we extracted the microbe pairs with the 10 largest and 10 smallest phylogenetic distances as illustrative examples to further analyze whether closely related taxa tend to be associated with the same disease or with similar diseases. For each microbe-microbe pair, we computed the Jaccard coefficient (JC) between the two microbes' disease profiles (rows of the microbe-disease association matrix). The results show that the 10 most closely related microbe pairs have the largest JCs in terms of disease profile similarity. Similarly, we computed the disease profile similarities between phylogenetically distant microbes and found that 9 of the 10 pairs have the smallest JCs. This suggests that closely related taxa tend to be associated with the same disease or similar diseases, while phylogenetically distant taxa usually have distinct disease profiles.
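A minimal sketch of this Jaccard comparison over rows of the binary association matrix X:

```python
import numpy as np

def jaccard(profile_a, profile_b):
    """Jaccard coefficient between two binary disease profiles (rows of X)."""
    a, b = profile_a.astype(bool), profile_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

# e.g., similarity of the disease profiles of microbes 0 and 1:
# jc = jaccard(X[0], X[1])
```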
The MDNMF Algorithm
Besides the typical NMF described above, tri-factor NMF (tri-NMF, X ≈ FSG) is also an important matrix factorization method for clustering (Ding et al., 2006). In tri-NMF, the factorized matrices F and G provide a way to bicluster X. The factor matrix S not only provides an additional degree of freedom that keeps the reconstruction error small, but also implicitly encodes the relationships between clusters (Ding et al., 2005). In particular, given a symmetric similarity matrix A, we can decompose it as A ≈ HSH^T. The similarity matrix reflects the intrinsic connection patterns within its original data matrix (Van Dam et al., 2017). In this paper, we propose a novel algorithm, MDNMF, to simultaneously factorize two similarity matrices (the microbe similarity matrix MS and the disease similarity matrix DS) and one microbe-disease association matrix X. The objective function is formulated as follows:

$$\min_{H_1, H_2, S_1, S_2 \ge 0} \; \lambda_1 \left\| MS - H_1 S_1 H_1^T \right\|_F^2 + \left\| X - H_1 H_2^T \right\|_F^2 + \lambda_2 \left\| DS - H_2 S_2 H_2^T \right\|_F^2 \qquad (6)$$

where MS ∈ R^{n_m×n_m} and DS ∈ R^{n_d×n_d} are the microbe-microbe and disease-disease similarity matrices, respectively; H_1 ∈ R^{n_m×k} and H_2 ∈ R^{n_d×k} are cluster indication matrices; and S_1 ∈ R^{k×k} and S_2 ∈ R^{k×k} are symmetric matrices. Here, k is the number of clusters, and λ_1, λ_2 are parameters balancing the weights of the three terms in Eq. 6. The second term, ‖X − H_1 H_2^T‖_F^2, establishes one-to-one relationships between identified microbe modules and disease modules. Moreover, it can be regarded as a tri-NMF term ‖X − H_1 I H_2^T‖_F^2, where I is the identity matrix, which forces the ith module identified by the microbe cluster indication matrix H_1 to be bound only to the ith module identified by H_2. The other two terms identify one type of module at each individual level and reveal the module associations within them via S_1 and S_2.
In order to further improve the performance of the proposed algorithm, we introduce the symptom-based disease similarity network and microbial phylogenetic distance into MDNMF. The symptom-based disease similarity was previously derived from the co-occurrence of disease/symptom terms (Zhou et al., 2014). Here, we use DS_sym to denote the symptom-based disease similarity matrix. The objective function of MDNMF (Eq. 6) can then be rewritten as follows:

$$\min_{H_1, H_2, S_1, S_2 \ge 0} \; \lambda_1 \left\| MS - H_1 S_1 H_1^T \right\|_F^2 + \left\| X - H_1 H_2^T \right\|_F^2 + \lambda_2 \left\| DS - H_2 S_2 H_2^T \right\|_F^2 + \mu \left( \mathrm{tr}\!\left(H_1^T L_1 H_1\right) + \mathrm{tr}\!\left(H_2^T L_2 H_2\right) \right) \qquad (7)$$

where L_1 = D_1 − MS_phy and L_2 = D_2 − DS_sym are graph Laplacian matrices (D_1 and D_2 being the corresponding degree matrices), MS_phy = 1 − M_phy, and µ is the regularization parameter; the last term in Eq. 7 penalizes violations of the prior knowledge about microbial phylogeny and disease phenotype associations. Note that the disease symptom dataset collected from PubMed literature contains disease and symptom terms. The associations between symptoms and diseases are quantified using term co-occurrence (just as in information retrieval: if a document and a keyword appear together, the corresponding position of the word-document matrix is set to the co-occurrence frequency). Each disease can then be represented by a vector of symptoms, and the cosine similarity function is used to quantify the similarity between two diseases; the link weight between two diseases quantifies the similarity of their respective symptoms. These two disease similarities, based on microbes and on human symptoms, are thus essentially different: the HMDAD dataset describes binary relationships between microbes and diseases, whereas the disease symptom dataset describes co-occurrence relationships between symptoms and diseases. Integrating both into the objective of MDNMF simultaneously takes account of the diffusion and propagation of information from different sources.
We use multiplicative update rules to solve the MDNMF problem and find a local minimum by alternately updating the matrices H_1, H_2, S_1, and S_2.
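The update rules themselves are not reproduced here, but as an illustration of the general scheme, the following is a minimal sketch of multiplicative updates for one building block of the objective, the symmetric tri-factorization A ≈ HSH^T (in the style of Ding et al., 2006); it deliberately omits the association term, the coupling between H_1 and H_2, and the Laplacian regularization of the full MDNMF objective:

```python
import numpy as np

def symmetric_tri_nmf(A, k, n_iter=500, eps=1e-10, seed=0):
    """Multiplicative updates for min ||A - H S H^T||_F^2 with H, S >= 0.

    A sketch of one block of the MDNMF objective; the full algorithm also
    couples H to the association matrix and adds graph regularization.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.random((n, k))
    S = rng.random((k, k))
    S = (S + S.T) / 2  # keep S symmetric
    for _ in range(n_iter):
        HS = H @ S
        H *= (A @ HS) / (HS @ (H.T @ HS) + eps)     # gradient-based H update
        HtH = H.T @ H
        S *= (H.T @ A @ H) / (HtH @ S @ HtH + eps)  # gradient-based S update
    return H, S
```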
Determination of Modules
In fact, the same microbe may play different roles in the development of different diseases. Therefore, the idea of soft clustering is more suitable to model the functional associations among microbes. The factorized matrices H_1 and H_2 can be used to identify the two types of modules, respectively: the elements with relatively large values in each column of H_1 (H_2) are assigned as members of the corresponding module. For each feature f (i.e., each column h_{*,f} of H_1 or H_2), we calculate a threshold

$$Th(f) = m(f) + t \cdot \sigma(f), \qquad m(f) = \frac{1}{n} \sum_{i} h_{i f},$$

where m(f) and σ(f) are the mean and standard deviation of the entries of feature f, and t is a given threshold multiplier. Based on this rule, element i is assigned to the fth module if h_{if} > Th(f). In the Experimental Results and Discussion section, we set t = 1.5 for both clustering indication matrices H_1 and H_2 to identify modules with a proper resolution.
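A sketch of this membership rule, assuming the mean-plus-t-standard-deviations threshold reconstructed above:

```python
import numpy as np

def module_members(H, t=1.5):
    """Members of each module: rows whose weight in column f exceeds
    the column mean plus t standard deviations (Th(f) = m(f) + t*sigma(f))."""
    thresholds = H.mean(axis=0) + t * H.std(axis=0)
    return [np.flatnonzero(H[:, f] > thresholds[f]) for f in range(H.shape[1])]

# microbe_modules = module_members(H1, t=1.5)
# disease_modules = module_members(H2, t=1.5)
```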
Determination of Module Links
Given the symmetric similarity matrix A, tri-NMF factorizes it as

$$A \approx H S H^T = \sum_{i} \sum_{j} s_{ij}\, h_i h_j^T,$$

where h_i denotes the ith column of H and s_ij is the corresponding element of S. The latent cluster indication vectors h_i reconstruct the original similarity matrix A, and s_ij can be viewed as the weight of h_i h_j^T. This means that the larger s_ij is, the stronger the connection between the modules identified by h_i and h_j. Therefore, the diagonal elements of S can be used to evaluate the quality of clustering, and the off-diagonal elements can be used to establish possible connections between different modules.
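As an illustration, one simple way to read module links off S is to flag off-diagonal entries that are large relative to the corresponding within-module weights; the ratio cutoff below is an illustrative choice, not taken from the paper:

```python
import numpy as np

def module_links(S, ratio=0.5):
    """Pairs of modules whose off-diagonal weight in S is large relative
    to the geometric mean of their diagonal (within-module) weights."""
    k = S.shape[0]
    links = []
    for i in range(k):
        for j in range(i + 1, k):
            if S[i, j] > ratio * np.sqrt(S[i, i] * S[j, j]):
                links.append((i, j, S[i, j]))
    return links
```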
Functional Enrichment Analysis for Co-Modules
We used the MicrobiomeAnalyst tools (Dhariwal et al., 2017) to conduct functional enrichment analysis for the microbe modules and selected significantly enriched taxon set terms with P-value < 0.005 and FDR < 0.05 (hypergeometric tests); MicrobiomeAnalyst provides 229 taxon sets associated with host-intrinsic factors such as diseases. For microbe-disease co-modules, we define an enrichment index between the significantly enriched taxon set terms and the diseases within the same co-module to evaluate the performance of the different algorithms. The enrichment index (EI) is formulated as follows:

$$EI = \frac{\left| \{\text{significantly enriched taxon sets}\} \right|}{\left| \{\text{diseases}\} \right|},$$

where |{significantly enriched taxon sets}| denotes the number of significantly enriched taxon sets and |{diseases}| denotes the number of diseases related to the microbes within the same co-module. Generally speaking, a higher EI indicates better clustering quality of the identified co-modules.
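Under the definition reconstructed above, computing EI for a co-module is a one-liner; a hedged sketch:

```python
def enrichment_index(n_enriched_taxon_sets, n_diseases):
    """EI = |{significantly enriched taxon sets}| / |{diseases}| for one
    co-module (following the definition reconstructed above)."""
    return n_enriched_taxon_sets / n_diseases if n_diseases else 0.0

# A co-module with 6 significantly enriched taxon sets and 3 diseases:
# enrichment_index(6, 3)  # -> 2.0
```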
Results and Comparison
We compared MDNMF with typical NMF and with NetNMF (Chen and Zhang, 2018) (the latter two without microbial phylogenetic information or symptom-based disease similarity) by applying them to the HMDAD dataset. Since NMF-based algorithms cannot guarantee a globally optimal solution, we ran each algorithm 50 times with different initializations and selected the factorization with the minimal objective function value for downstream analysis. We adopted EI (as described in Functional Enrichment Analysis for Co-Modules) and the number of significantly enriched microbe taxon sets (TS_sig) as metrics to evaluate the performance of the different algorithms. Other taxon sets (OTS = |{significantly enriched taxon sets}| − |{identified disease-related taxon sets}|) denote the significantly enriched taxon sets that are not counted by EI. To some extent, the number of other taxon sets reflects the ability of the different methods to discover potential microbial function modules. Extensive comparison experiments were conducted and the results are shown in Table 1.
As shown in Table 1, compared with the other two NMF-based algorithms, MDNMF achieves the best performance in terms of EI and TS_sig, indicating that MDNMF can potentially discover as many meaningful function modules as possible by introducing the symptom-based disease network and microbial phylogenetic distance.
Comparison of All the Significantly Enriched Taxon Sets of Modules Identified by MDNMF, NMF, and NetNMF
To demonstrate the effectiveness of MDNMF, we compared the microbe modules identified by the three approaches in terms of biological functional enrichment. We performed microbe taxon set enrichment analysis for the three groups of modules and retained the taxon set (TS) terms (FDR < 0.05, hypergeometric test) significantly enriched by two modules derived from MDNMF and NetNMF (or NMF). Then, for each TS term, we calculated enrichment scores (−log10(p-value)) and took the highest score among all modules as the final score of that TS for each method. Note that the co-modules identified by MDNMF cover about 20 microbes and 3 diseases on average; only one co-module contains no diseases. This is consistent with the average size of each microbe or disease module (see Parameter Analysis). Applying MDNMF to the HMDAD dataset, many TS terms lie above the diagonal line (see Figure 2). Specifically, the enriched TS terms obtained by MDNMF have more significant Q-values (FDR < 0.05) than those of NMF and NetNMF. For microbe modules, 58.33% (MDNMF versus NMF, P < 0.005 and FDR < 0.05, hypergeometric test) and 47.06% (MDNMF versus NetNMF, P < 0.005 and FDR < 0.05, hypergeometric test) of TS terms lie above the central diagonal line, respectively.
As Figure 2 shows, compared with NetNMF, the microbe modules identified by MDNMF had lower significance for 52.94% of the modules. One possible reason is that when selecting microbes, NetNMF only considers the relationships among microbes from the original microbe-disease association matrix, whereas MDNMF also has to take their phylogenetic relationships into account. These extra constraints in MDNMF might affect the selected microbe subsets and their enriched functions. Despite this, MDNMF still identified more significantly enriched taxon sets than NetNMF (62 vs. 49, Table 1).
Parameter Analysis
In MDNMF, there are three parameters: λ1, λ2, and µ. Following the previous study (Chen and Zhang, 2018), we set λ1 = n_m/n_d and λ2 = n_m^2/n_d^2, where n_m and n_d are the numbers of microbes and diseases. When applying the three NMF-based algorithms to the HMDAD data, the reduced dimension k must be predetermined. Here, we selected k = 15 from the candidate set {10, 15, 20} and µ = 0.001 from {0.001, 0.01, 0.1}. Under this setting, the number of identified microbe modules with significantly enriched taxon set terms is highest (hypergeometric tests, P-value < 0.005 and FDR < 0.05). Model selection is illustrated in Figure 3.
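A minimal sketch of this grid search (the scoring function is a stand-in for the full factorization-plus-enrichment pipeline, whose details are omitted here):

```python
from itertools import product

def select_parameters(X, score_fn, ks=(10, 15, 20), mus=(0.001, 0.01, 0.1)):
    """Pick (k, mu) maximizing the number of microbe modules with
    significantly enriched taxon sets. score_fn(X, k, mu) encapsulates
    running MDNMF and counting enriched modules."""
    best = max(product(ks, mus), key=lambda km: score_fn(X, k=km[0], mu=km[1]))
    return best  # e.g., (15, 0.001) on the HMDAD data
```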
Case Studies
To further validate the performance of MDNMF, we selected several microbe-disease co-modules identified by MDNMF and analyzed their biological functions and inner connections. In total, 60% of the microbe modules are enriched in at least one TS term. In these identified microbe-disease co-modules, the diseases caused by microbes also exist in their matched disease modules. Tables 2 and 3 show two of the identified microbe-disease co-modules and the associations between different disease (microbe) modules (according to S2). As described in The MDNMF Algorithm, in the tri-factor NMF X ≈ H S H^T, the matrix S has a special meaning. To see this, assume that H^T H = I. Setting the derivative ∂‖X − H S H^T‖² / ∂S to zero, we obtain S = H^T X H, or S_lk = h_l^T X h_k = Σ_{o_i ∈ C_l} Σ_{o_j ∈ C_k} x_ij / √(n_l n_k). Thus S gives the properly normalized within-cluster sum of weights (l = k) and between-cluster sum of weights (l ≠ k), and therefore provides a good representation of the clustering quality: if the clusters are well separated, the diagonal elements of S will be much larger than the off-diagonal elements. In extensive experiments we found that some off-diagonal elements are large, for example between co-modules 4 and 9. According to Eq. 14, this may reflect a close connection between these two modules. Such connections can provide insights for further understanding the relationships between microbe and disease, disease and disease, and microbe and microbe.
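A small numerical sketch of S = H^T X H and the diagonal-dominance check (names and the threshold are ours):

```python
import numpy as np

def connection_matrix(X, H):
    """S = H^T X H for a tri-factor NMF X ~ H S H^T with (approximately)
    orthonormal H. A large off-diagonal S[l, k] suggests a close
    connection between modules l and k."""
    return H.T @ X @ H

def strongly_linked_pairs(S, threshold=2.0):
    links = []
    for l in range(S.shape[0]):
        for k in range(l + 1, S.shape[1]):
            if S[l, k] > threshold:
                links.append((l, k, S[l, k]))
    return links  # e.g., a pair like (4, 7, 2.72) as reported in the text
```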
As shown in Table 2, in co-module 9, 5 of the 8 diseases (62.5%; the same color in the disease module and taxon sets columns indicates a matched or associated disease) are in accord with significantly enriched microbe TS terms (FDR < 0.05). In addition, several TS terms with no matched diseases, such as "Chronic Obstructive Pulmonary Disease," "Asthma," "Colorectal Carcinoma," and "Resistance to Immune Checkpoint Inhibitors (increase)," are also identified. These could point to potential associations among diseases or microbes. Figure 4 shows the top biological terms enriched in microbe module 9.
To demonstrate that MDNMF can indeed cluster similar diseases into the same co-module, we retrieved each disease in co-module 9 from the MeSH website (https://meshb.nlm.nih.gov) and found that most of the diseases belong to the same MeSH disease category. For example, ileal Crohn's disease (CD), irritable bowel syndrome (IBS), liver cirrhosis, and necrotizing enterocolitis are clustered together, and all fall into the same MeSH disease category, C06 (Digestive System Diseases). Interestingly, Clostridium infections and bacterial vaginosis, which belong to C01 (Bacterial Infections and Mycoses), are also assigned to this co-module. A detailed analysis of these related diseases may yield novel insights into the increasingly recognized associations between microbes and human diseases.
Based on the factorized matrix S2, we identified connections between microbe module 9 and modules 4, 7, and 10. For example, microbe modules 9 and 4 share the "Crohn's Disease" and "Head and neck squamous cell carcinoma" microbe sets, but focus on opposite aspects: the enriched microbe TS term "Crohn's Disease" is decreased in microbe module 9 but increased in module 4. These two microbe modules may afford an opportunity to further investigate the complicated pathogenic mechanisms at the system level.
Without loss of generality, we also analyzed another microbe-disease co-module, co-module 4; the detailed information is shown in Table 3.
From Table 3, we can see that 7 of the 10 diseases (70%; the same color in the "disease module" and "taxon sets" columns indicates a matched or associated disease) are in accord with significantly enriched microbe TS terms (FDR < 0.05). In particular, for the enriched microbe TS term "Atopic dermatitis," three diseases in the matched disease module ("Allergic sensitization," "Eczema," and "Psoriasis") are associated with it. This demonstrates the ability of the proposed MDNMF algorithm to find correlations among diseases and microbes. Figure 5 shows the top biological terms enriched in microbe module 4.
Similarly, we retrieved each disease member in co-module 4 from the MeSH website and found that several similar diseases belong to the same MeSH disease category. For example, eczema, psoriasis, rheumatoid arthritis, and new-onset untreated rheumatoid arthritis are all from the same MeSH disease category, C17 (Skin and Connective Tissue Diseases). In addition, chronic obstructive pulmonary disease (COPD), cystic fibrosis, allergic sensitization, and intestinal diseases (IBS, irritable bowel disease, and ulcerative colitis) are also clustered together. Several diseases belong to two or more MeSH categories, which points to pathological connections between human genetic susceptibility to infectious diseases and inflammatory diseases.
Based on the factorized matrix S2, we find that co-module 4 has stronger links to co-module 7 (s_{4,7} = 2.72). The matched disease modules 4 and 7 contain similar disease members, such as "Allergic sensitization" (from module 4) and "Asthma" (from module 7), both related to "Atopic dermatitis." In addition, the two corresponding microbe modules 4 and 7 share the TS term "Aging." Note that in Tables 2 and 3 some related diseases and microbes are divided into different co-modules. One possible reason is that the connection weight between these co-modules is large; MDNMF, as a soft clustering approach, cannot cleanly separate these related microbes or diseases. In the future, we will design a more robust threshold-selection method to assign each disease or microbe to the correct module.
In summary, among the module pairs identified by MDNMF, and especially among the microbe modules, some share a few biological functions (TS) while retaining their own specific roles. At the same time, MDNMF can also detect associations between microbe modules and between disease modules.
CONCLUSIONS
The association between microbes and human diseases has been supported by a growing body of research. However, previous studies mainly focused on detecting "one microbe, one disease" relationships and rarely analyzed the pathogenesis of microbe-related complex diseases from a modular perspective. In this paper, we propose a novel microbe-disease co-module detection algorithm, MDNMF, which constructs a two-level module network by integrating two similarity matrices (the microbe-microbe and disease-disease similarity matrices) and one microbe-disease bipartite network. Using the identified individual modules at the different levels (microbe and disease) and their links, we are able to find disease-related microbes (taxon sets), which provides an opportunity to further understand higher-order relationships among microbes and their potential functions.
Meanwhile, to improve the accuracy of module detection and the biological interpretability of the modules identified by MDNMF, we introduce the human symptoms-disease network and microbial phylogenetic distances into the model. Compared with the other two NMF-based approaches, MDNMF achieves better performance in terms of EI and the number of significantly enriched taxon sets. The proposed MDNMF is also easily extended to other multi-level molecular network applications, such as discovering virus-host or microbe-drug co-modules.
DATA AVAILABILITY STATEMENT
The data and MDNMF codes analyzed during the study are available in the GitHub repository, https://github.com/chonghua-1983/MDNMF.
"Biology",
"Computer Science",
"Environmental Science"
] |
Cholesteryl ester transfer protein and its inhibitors
Most of the cholesterol in plasma is in an esterified form that is generated in potentially cardioprotective HDLs. Cholesteryl ester transfer protein (CETP) mediates bidirectional transfers of cholesteryl esters (CEs) and triglycerides (TGs) between plasma lipoproteins. Because CE originates in HDLs and TG enters the plasma as a component of VLDLs, activity of CETP results in a net mass transfer of CE from HDLs to VLDLs and LDLs, and of TG from VLDLs to LDLs and HDLs. As inhibition of CETP activity increases the concentration of HDL-cholesterol and decreases the concentration of VLDL- and LDL-cholesterol, it has the potential to reduce atherosclerotic CVD. This has led to the development of anti-CETP neutralizing monoclonal antibodies, vaccines, and antisense oligonucleotides. Small molecule inhibitors of CETP have also been developed and four of them have been studied in large scale cardiovascular clinical outcome trials. This review describes the structure of CETP and its mechanism of action. Details of its regulation and nonlipid transporting functions are discussed, and the results of the large scale clinical outcome trials of small molecule CETP inhibitors are summarized.
Four small molecule CETP inhibitors (torcetrapib, dalcetrapib, evacetrapib, and anacetrapib) have been evaluated in large-scale randomized cardiovascular clinical outcome trials. While the trials with torcetrapib, dalcetrapib, and evacetrapib failed to show any cardiovascular benefit of CETP inhibition, treatment with anacetrapib significantly decreased major coronary events (14). However, as the manufacturers of anacetrapib recently decided to suspend development of the drug, the future of CETP inhibition as a potential therapeutic option for reducing major cardiovascular events is currently uncertain.
This review is concerned with the structure, function, and regulation of CETP and its inhibitors. It also outlines functions of CETP that are distinct from its lipid transfer activities, summarizes preclinical studies of CETP inhibition in animal models, and presents details of the outcomes of the randomized clinical outcome trials of the aforementioned small molecule CETP inhibitors.
STRUCTURE OF CETP
The structure of CETP has been the focus of numerous investigations. The LTP/LBP gene family, of which CETP is a member, includes several proteins, such as LBP, bactericidal permeability-increasing protein (BPI), and phospholipid transfer protein (PLTP), all of which have a high degree of structural similarity (15). Early structural models of CETP that were based on the crystal structure of BPI (2) identified CETP as a boomerang-shaped molecule with a hydrophobic lipid binding pocket at each end of the concave side (16,17). CETP is a highly flexible molecule that undergoes a twisting motion when it binds to neutral lipids. This rotating motion enables CETP to bind to the surface of lipoproteins that vary widely in size and surface curvature and is a very important aspect of its mechanism of action.
The assumption of a "boomerang" structure for CETP was confirmed by Qiu et al. (18), who reported the first crystal structure of CETP at 3.5 Å resolution. That study identified the presence of a continuous central tunnel within the CETP molecule, which is unique among members of the LTP/LBP gene family. Two lipid binding pockets in the N- and C-terminal domains, and an amphipathic helix, helix X, located in the C-terminal domain of CETP, have also been reported (18,19). The central CETP tunnel can accommodate two CE molecules, one CE and one TG molecule, or two TG molecules (18). These structural features of CETP have been confirmed in atomistic and coarse-grained simulation studies (20), and by cryo-electron microscopy (21). Evidence that structural integrity of the central CETP tunnel is essential for the transfer activity of CETP was established by mutating selected polar amino acid residues located in the tunnel into hydrophobic residues. This altered the tunnel architecture and reduced the transfer activity of CETP (18).
MECHANISM OF ACTION OF CETP
CETP transfers CE and TG between different lipoproteins by two mechanisms (Fig. 2). The first mechanism is a "shuttle" process ( Fig. 2A) that involves random collisions of CETP with HDLs, LDLs, and VLDLs. This leads to the formation of complexes that facilitate bidirectional exchanges of CE and TG between each of the lipoproteins and CETP. The complexes subsequently dissociate from the lipoproteins where they were generated, and remain in the circulation until they randomly collide with another lipoprotein and participate in a further round of CE and TG exchanges. This process is repeated multiple times (3,4). The crystal structure of CETP supports the shuttle mechanism and is consistent with the interaction of CETP with only one lipoprotein particle at a time (18).
The second mechanism of action of CETP involves the formation of a bridge between CETP and two lipoprotein particles to form a ternary complex (Fig. 2B) (21,22). Neutral lipids move in both directions between the two lipoproteins through the tunnel in CETP. Evidence consistent with ternary complex formation comes from cryo-electron microscopy studies with anti-CETP polyclonal antibodies and atomistic molecular dynamics simulations (21). The results of these studies support the penetration of the N-terminal domain of CETP into the surface of an HDL particle together with a concomitant interaction of the C-terminal domain of CETP with an LDL or VLDL particle. Additional analyses have indicated that the transfer of CEs between HDLs and LDLs, or HDLs and VLDLs, by this mechanism is dependent on conformational changes in the N- and C-terminal domains of CETP that increase tunnel continuity and improve neutral lipid accessibility (19,21).
These observations are not, however, supported by other electron microscopy studies in which HDLs were shown to bind to the N- as well as the C-terminal domain of CETP (23). In that study, no interactions of CETP with LDLs, or formation of HDL-CETP-LDL complexes, were observed, and monoclonal antibodies targeted toward the N- and C-terminal domains of CETP did not prevent the penetration of CETP into the HDL surface or affect CETP activity (23). Taken together, these findings do not support the formation of a ternary complex as a major mechanism of action of CETP. There are, by contrast, multiple reports of anti-CETP antibodies inhibiting CETP-mediated transfers of CE and TG between HDLs and other lipoproteins (5,24). These discrepant findings highlight a potential dependency of the observed effects on the antibodies used to target the N- and C-terminal domains of the CETP molecule. For example, Zhang et al. (21) used polyclonal antibodies that recognized a large area of the CETP molecule, whereas the more recent studies of Lauer et al. (23) were undertaken with monoclonal antibodies that recognize specific epitopes within the protein.
CETP gene transcription
Transcription of the CETP gene is under the control of extrinsic and intrinsic factors. For example, dietary cholesterol upregulates CETP expression in mice transgenic for human CETP (25)(26)(27). Plasma cholesterol levels also correlate with CETP mass in human plasma (28). Studies of transgenic mice have established that induction of human CETP gene expression in response to cholesterol is a consequence of transactivation of a nuclear receptor binding site in the promoter region of the gene by the transcription factors, liver X receptor (LXR) and retinoid X receptor (29,30). These results are supported by studies of LXR agonists that increase CETP expression in mice transgenic for human CETP, and in mice with LXR deficiency in which CETP expression is not increased by administration of an LXR agonist (31). The human CETP gene is also regulated by SREBP-1, a transcription factor that transactivates sterol regulatory-like elements in the promoter region of the gene (32).
Lifestyle factors
Light to moderate, but not heavy, alcohol consumption is generally considered to decrease CETP mass and activity, increase HDL-C levels, and decrease CVD risk. However, investigations into this relationship have produced conflicting results. Some investigators have confirmed the association (33), while others have found that the alcoholmediated increase in HDL-C levels is independent of CETP activity (34,35) and unrelated to effects on genes that regulate HDL levels (36).
Physical activity in the form of endurance exercise also increases HDL-C levels, decreases plasma CETP levels, and reduces CVD risk in humans (37). However, aerobic exercise has been reported not to affect CETP activity in mice transgenic for the human CETP gene (38) or plasma CETP levels in humans (39,40).
Loss-of-function mutations in the CETP gene (CETP deficiency)
The first report of a loss-of-function mutation in the CETP gene was in a Japanese population with a G-to-A substitution in the 5′-splice donor site of intron 14 (Int 14A) (41). Homozygosity for this mutation is associated with very low or undetectable CETP activity, markedly elevated plasma HDL-C, apoA-I, and apoE levels, a moderate reduction in VLDL-cholesterol, LDL-cholesterol (LDL-C), and apoB levels, a low incidence of atherosclerosis, and increased life span compared with unaffected family members (41,42). People homozygous for this mutation, as well as compound heterozygotes, also have HDLs that are larger than the HDLs of unaffected individuals (41,43). In addition, people with CETP deficiency have LDLs that are small and polydisperse relative to people with a normal level of CETP activity (44).
Several other mutations associated with CETP deficiency have been reported (45)(46)(47). A missense mutation of Asp to Gly at codon 442 in exon 15 of the CETP gene (Asp-442Gly) that is associated with abnormally high levels of HDL-C has been reported in the Japanese population and in Japanese Americans (48,49). People homozygous for a nonsense mutation in the CETP gene at codon 309 in exon 10 and a G-to-T substitution at codon 181 of exon 6 (G181X) have elevated plasma concentrations of HDL-C and apoA-I (45,46). A nonsense T-to-G mutation at codon 57 of exon 2 that is associated with high HDL-C levels has also been reported (47).
Human CETP gene polymorphisms
Results from small studies of CETP gene polymorphisms in humans have not been conclusive. The results of larger genetic studies are, however, more consistent and have led to the conclusion that CETP is pro-atherogenic and that its inhibition is potentially anti-atherogenic.
In a large meta-analysis of 92 studies involving 113,833 participants, it was concluded that CETP gene polymorphisms that are associated with decreased CETP activity and mass are associated with high HDL-C levels, low LDL-C levels, and a significantly decreased risk of having a coronary event (50). A similar conclusion emerged from a study of 18,245 healthy Americans in the Women's Genome Health Study, where 20 SNPs in the CETP gene that had genome-wide effects on HDL-C levels were identified (51). In particular, the Taq1B polymorphism at rs708272 in the CETP gene was associated with a per-allele increase in HDL-C levels of 3.1 mg/dl and a 24% lower risk of future myocardial infarction (51). This conclusion was further supported by another meta-analysis in which a common variant in the CETP gene was accompanied by increased HDL-C levels, decreased LDL-C levels, and a reduced risk of myocardial infarction comparable to that reported in the earlier meta-analysis (52).
Perhaps the most compelling genetic evidence in favor of CETP activity being pro-atherogenic comes from the Copenhagen City Heart Study (53) and from a study that examined the effect of protein-truncating variants of the CETP gene (54). In the Copenhagen City Heart Study, 10,261 people were followed for up to 34 years (53). More than 3,000 of these people had a cardiovascular event and 3,807 died. In this study, two common CETP gene polymorphisms known to be associated with low CETP activity were also associated with significant reductions in the risk of ischemic heart disease, myocardial infarction, ischemic cerebrovascular disease, and ischemic stroke. People with these polymorphisms also had increased longevity, with no evidence of adverse effects.
In a study of protein-truncating variants in the CETP gene, it was found that the HDL-C level was 22.6 mg/dl higher and the LDL-C level was 12.2 mg/dl lower than in those without the variants (54). These lipoprotein changes were accompanied by a significant 30% lower risk of having a coronary event (54). Collectively, these human genetic studies support the proposition that CETP inhibition is potentially anti-atherogenic.
CETP AND ANIMAL MODELS OF ATHEROSCLEROSIS
CETP exists in the plasma of only a few species, including humans, rabbits, and hamsters, but not in mice and rats, which have a low susceptibility to atherosclerotic lesion development (1). As mice are naturally deficient in CETP, making them transgenic for the human CETP gene means that studies can be undertaken in the absence of the confounding effects of endogenous CETP activity. However, the results from mice transgenic for CETP are model-dependent, with some studies suggesting that CETP is pro-atherogenic (55)(56)(57). Other studies in mice transgenic for human CETP and LCAT, by contrast, have indicated that CETP is anti-atherogenic (58). An anti-atherogenic role for CETP has also been suggested in the db/db mouse model of type 2 diabetes made transgenic for human CETP (59), and in hypertriglyceridemic CETP transgenic mice (60). In contrast to mice, rabbits have approximately twice as much CETP activity in their plasma as humans (1) and are very susceptible to diet-induced atherosclerosis (61), which is reduced by inhibiting CETP (12).
CETP and innate immunity
From the available evidence, CETP appears to have a beneficial role in reducing the inflammatory response to bacterial endotoxins through interaction with the innate immune system and the sequestration of pro-inflammatory lipopolysaccharide (LPS). The innate immune system detects highly conserved components of micro-organisms, termed pathogen-associated molecular patterns (PAMPs), by pattern recognition receptors (PRRs), including members of the toll-like receptor (TLR) family. Detection of PAMPs by PRRs is the first line of defense against a nonself pathogen, which leads to activation of the innate immune system and a cascade of inflammatory responses (62,63). If not effectively regulated, the inflammatory responses that are activated when PRRs detect PAMPs can result in sepsis with shock, end-organ damage, and, ultimately, death of the host. Beyond antimicrobial therapy, treatments for septic shock are limited (64).
LPS is a component of the gram-negative bacterial cell wall and the ligand for TLR4 (63,65). It potently stimulates the innate immune system and is largely responsible for the septic response to gram-negative bacteremia (66). Effective sequestration and excretion of LPS is required to curtail the inflammatory response. LPS binds to circulating HDLs, LDLs, and VLDLs, making it unavailable for stimulation of the innate immune system (67)(68)(69). Following the early cessation of the ILLUMINATE trial of the CETP inhibitor torcetrapib, the role of CETP in the immune response came under scrutiny because of an excess of deaths related to infection (70,71). This was not an issue in cardiovascular clinical outcome trials of other CETP inhibitors.
Although CETP has an intrinsically weak ability to bind LPS compared with LBP or BPI (72), it is associated with resilience to sepsis. Mice transgenic for human CETP have improved survival following LPS administration compared with wild-type mice (73,74). This is likely due, at least in part, to an increase in LPS sequestration by HDLs and LDLs and increased uptake of LPS by the liver (73). Conversely, PLTP knockout mice have increased endotoxin-associated mortality, delayed uptake of LPS by lipoproteins, and decreased LPS clearance (75).
The mechanism of LPS clearance is not well-understood. CETP facilitates the transfer of LPS from HDLs to LDLs (73), and LDL receptor-mediated uptake of LDL-associated LPS by the liver has been reported (76). Hepatic uptake of HDL CEs by scavenger receptor B1 has also been implicated in LPS clearance (77).
In addition to facilitating LPS sequestration and excretion, mice expressing CETP that are subjected to the cecal ligation/puncture model of polymicrobial sepsis have reduced production of the pro-inflammatory cytokines TNF-α and interleukin (IL)-6 in response to LPS administration (73,74). Furthermore, LPS-stimulated TNF-α production is decreased in macrophages from mice transgenic for human CETP relative to macrophages from wild-type mice (73). Incubation of RAW 264.7 murine macrophages with LPS and human CETP also induces a dose-dependent decrease in TNF-α production (73), possibly due to reduced expression of TLR4. TLR4 expression and IL-6 secretion are both reduced, and survival is improved, in mice transgenic for human CETP compared with wild-type mice following LPS administration (74). LPS-stimulated peritoneal macrophages from mice transgenic for human CETP also have reduced TLR4 expression, LPS uptake, nuclear factor (NF)-κB activation, and IL-6 production compared with peritoneal macrophages from wild-type mice (74). The reduced IL-6 production and increased resistance to sepsis in human CETP transgenic mice are consistent with evidence that IL-6 levels correlate with the risk of death and that this risk is ameliorated by anti-IL-6 monoclonal antibody treatment in humans (78).
Reverse cholesterol transport, the process whereby excess cholesterol from peripheral tissues is transported to the liver for excretion, is also inhibited in sepsis (79), and CETP protein and mRNA levels are both decreased in hamsters and human CETP transgenic mice in response to LPS (80,81). This is in line with a small study in humans reporting that mortality in hospitalized septic patients increased with the magnitude of the reduction in CETP (82).
Targeted inhibition of CETP with neutralizing antibodies
Treatment of New Zealand White rabbits with neutralizing monoclonal antibodies to CETP increases HDL-C levels and reduces atherosclerosis (8). In another study of mice transgenic for human CETP, suppression of plasma CETP activity with an anti-CETP monoclonal antibody increased liver CETP mRNA levels, an unexpected finding that was presumably due to increased plasma cholesterol levels in these mice (83). Anti-CETP monoclonal antibodies have also been shown to inhibit CETP activity by 70-80% and increase HDL-C levels by 33% in chow- and cholesterol-fed male Golden Syrian hamsters (84).
Targeted inhibition of CETP with antisense oligonucleotides
Antisense oligonucleotides to CETP that target and degrade CETP mRNA levels and decrease hepatic CETP protein levels have been reported to increase HDL-C levels by 32% and reduce atherosclerosis in cholesterol-fed Japanese White rabbits (9). Similarly, administration of antisense oligonucleotides to LDL receptor-deficient mice transgenic for human CETP inhibits CETP activity by 81% and increases plasma HDL-C levels by 38% (85). Enhanced macrophage reverse cholesterol transport and decreased accumulation of aortic cholesterol have also been reported in these mice relative to mice treated with a control antisense oligonucleotide (85).
Targeted inhibition of CETP with vaccines
Human CETP contains a hydrophobic 26 amino acid residue sequence in the C-terminal domain that is essential for neutral lipid transfer (5,24). Immunization of cholesterol-fed New Zealand White rabbits with a peptide that targets this region of CETP generates neutralizing antibodies that inhibit CETP activity, increase plasma HDL-C levels, and decrease atherosclerotic lesion area (10). In another study, immunization of high-fat, high-cholesterol-fed New Zealand White rabbits with a vaccine in which rabbit IgG-Fc was conjugated to a 26 amino acid C-terminal epitope of CETP increased plasma HDL-C and apoA-I levels. This vaccine also decreased the plasma level of oxidized LDL, as well as atherosclerosis and nonalcoholic hepatic steatosis, in New Zealand White rabbits (86,87). A similar increase in plasma HDL-C levels and a reduction in atherosclerotic lesion area were observed in New Zealand White rabbits immunized subcutaneously with a heat shock protein-65-CETP vaccine (88). Intranasal administration of the combined heat shock protein-65-CETP vaccine also decreased atherosclerotic lesion area and serum total cholesterol and LDL-C levels, but did not affect TG or HDL-C levels in rabbits (89). Vaccination of cholesterol-fed New Zealand White rabbits with a tetanus toxoid-CETP peptide, by contrast, was associated with only a modest reduction in CETP activity, a modest increase in HDL-C levels, and no effect on atherosclerosis (10,90). To date only one vaccine, CETi-1, has progressed to a phase I human clinical trial (11). Repeated administration of this vaccine in healthy adults generated variable anti-CETP antibody titers, and did not significantly inhibit CETP activity or increase plasma HDL-C levels (11).
Targeted inhibition of CETP with small molecule inhibitors
Use of small molecule inhibitors of CETP activity mimics the lipid profile changes that occur in humans and animals with CETP deficiency (12,91). In most cases these inhibitors bind to and inactivate the CETP that is associated with HDLs (92). In doing so they prevent neutral lipid transfers between HDLs and TG-rich lipoproteins, including VLDLs. This results in the retention of CEs in HDLs (93). As discussed in detail below, four small molecule inhibitors that were developed to pharmacologically inhibit CETP activity have been tested in large cardiovascular clinical outcome trials.
Torcetrapib
Torcetrapib is a small lipophilic tetrahydroquinoline derivative (Fig. 3A) that forms a tight complex between HDLs and CETP. This impedes the exchange of CE and TG between HDLs and other lipoproteins (94). Torcetrapib increases plasma HDL-C levels 3-fold and reduces atherosclerosis by 60% in cholesterol-fed rabbits (12). Treatment with torcetrapib also increases plasma HDL-C levels in hamsters, which, in turn, increases macrophage cholesterol efflux (95).
Evacetrapib
Evacetrapib is a benzazepine-based CETP inhibitor (Fig. 3C) that dose dependently inhibits CETP activity and increases HDL-C levels by up to 130% in mice transgenic for human CETP and apoA-I (98,99). It also increases macrophage-to-feces reverse cholesterol transport in CETP transgenic mice and improves the net efflux of cholesterol from macrophages to HDLs (100).
Anacetrapib
Anacetrapib is structurally similar to torcetrapib (Fig. 3D) (99). It also increases HDL-C levels, which leads to enhanced macrophage cholesterol efflux and reverse cholesterol transport in dyslipidemic hamsters (101). In a dose escalation study with APOE*3 Leiden.CETP transgenic mice, anacetrapib, either as a monotherapy or in combination with atorvastatin, promoted a dose-dependent increase in HDL-C levels, improved lesion stability, and reduced atherosclerosis (102).
Anacetrapib has a number of additional cardioprotective functions. In normocholesterolemic New Zealand White rabbits, treatment with the anacetrapib analog desfluoro-anacetrapib, which has one less fluorine atom than the parent compound, improves endothelial repair and endothelial function after endothelial denudation of the abdominal aorta (103), increases angiogenesis in rabbits with hind limb ischemia (104), and reduces neointimal hyperplasia in rabbits with endothelial denudation of the iliac artery and stent deployment (105).
ILLUMINATE trial with torcetrapib
The ILLUMINATE trial (Investigation of Lipid Level Management to Understand its Impact in Atherosclerotic Events) (ClinicalTrials.gov number NCT00134264) was performed in 15,067 high-risk statin-treated people randomized in a double-blind design to receive torcetrapib or placebo (70). The primary endpoint was the time to first occurrence of a major cardiovascular event, a composite that included four components: death from coronary heart disease (defined as fatal myocardial infarction excluding procedure-related events, fatal heart failure, sudden cardiac death, or other cardiac death), nonfatal myocardial infarction (excluding procedure-related events), stroke, and hospitalization for unstable angina. Treatment with torcetrapib increased HDL-C levels by 72% and decreased LDL-C levels by 25%. This trial was terminated after 18 months because of a statistically significant excess of deaths (93 vs. 59) in those treated with torcetrapib. There was also a statistically significant 25% increase in ASCVD events in the participants that received torcetrapib.
The explanation for the harm caused by torcetrapib is not known with certainty, but it may have been the consequence of serious off-target adverse effects of the drug (70), including increased blood pressure, increased synthesis and secretion of aldosterone, and an increase in endothelin-1 levels in the artery wall. Given these off-target effects of torcetrapib that are unrelated to CETP inhibition, it was not possible to draw conclusions from the ILLUMINATE trial regarding the potential cardiovascular benefits of CETP inhibition.
The dal-OUTCOMES trial with dalcetrapib
The dal-OUTCOMES trial (ClinicalTrials.gov number NCT00658515) included 15,871 participants recruited soon after an acute coronary syndrome (ACS) event. All patients were treated with a statin and were randomized in a double-blind design to receive either dalcetrapib or placebo (106). The primary end point was a composite of death from coronary heart disease, nonfatal myocardial infarction, ischemic stroke, unstable angina, or cardiac arrest with resuscitation. Treatment with dalcetrapib increased the concentration of HDL-C by about 30%, but its effect on LDL-C and apoB levels was minimal. Treatment with dalcetrapib did not reduce ASCVD events (106).
The absence of benefit in the dal-OUTCOMES trial may have been because dalcetrapib did not reduce the level of LDL-C. However, it may also have been because this trial was conducted in patients soon after an ACS event, at a time when HDL function is likely to be compromised. This explanation is supported by the observation that, in the placebo group of the dal-OUTCOMES trial, the concentration of HDL-C was unrelated to the risk of having an ASCVD event (106). This is in contrast to what occurs in people with stable ASCVD, where there is an inverse relationship between HDL-C levels and the risk of having an ASCVD event. As was the case in the ILLUMINATE trial, dal-OUTCOMES did not test the hypothesis that CETP inhibition may reduce ASCVD events in people with stable coronary artery disease. As loss of the cardioprotective functions of HDLs after an ACS is likely to be temporary, it is possible that a meaningful reduction of cardiovascular events may have been observed in this trial if the median intervention period had been extended beyond 31 months.
ACCELERATE trial with evacetrapib
The ACCELERATE (Assessment of Clinical Effects of Cholesteryl Ester Transfer Protein Inhibition with Evacetrapib in Patients at a High-Risk for Vascular Outcomes) trial (ClinicalTrials.gov number NCT01687998) included approximately 12,500 high-risk statin-treated patients randomized in a double-blind design to receive evacetrapib or placebo (107). The primary endpoint was the first occurrence of any component of the composite endpoint of cardiovascular death, myocardial infarction, stroke, hospitalization for unstable angina, or coronary revascularization. The planned follow-up was 3 years. Evacetrapib reduced the level of LDL-C by 37% and increased HDL-C levels by 132% compared with placebo. This trial was terminated after just over 2 years when it became apparent that there would not be a positive outcome if it continued to its planned 3 year follow-up. There was no evidence that evacetrapib caused harm. The reason for the failure of evacetrapib to impact on the primary endpoint is not known, but it is possible that the trial was too short to detect benefit. If the REVEAL trial (see below) had stopped at the same time as ACCELERATE, a similar lack of efficacy would have been found, thus emphasizing the fact that cardiovascular events are unlikely to be decreased in the short term by interventions that increase plasma HDL-C levels and lower plasma LDL-C levels.
DEFINE trial with anacetrapib
The DEFINE (Determining the Efficacy and Tolerability of CETP Inhibition with Anacetrapib) trial (ClinicalTrials. gov number NCT00685776) was an 18 month intervention designed to assess the lipid efficacy and safety of anacetrapib. The trial included 1,623 high-risk statin-treated patients who were randomized in a double-blind design to receive anacetrapib or placebo (108). Anacetrapib decreased the concentration of non-HDL-C by 32% and increased HDL-C levels by 138%. Anacetrapib had no effect on blood pressure or on plasma electrolyte or aldosterone levels. DEFINE showed that treatment with anacetrapib had favorable effects on plasma lipid levels by decreasing LDL-C levels and increasing HDL-C levels. It also showed that anacetrapib had an acceptable side-effect profile and, within the limits of the power of the study, did not have any of the adverse effects that were observed with torcetrapib. In a long-term follow-up of participants in the DEFINE trial, it was found that anacetrapib accumulated in adipose tissue and remained detectable in the body for two or more years after the last dose of the drug (109). There was no evidence, however, that retention of anacetrapib was associated with adverse effects. Despite the tendency for anacetrapib to be retained in the body, a decision was made to proceed with the REVEAL trial.
REVEAL trial with anacetrapib
The REVEAL (Randomized Evaluation of the Effects of Anacetrapib through Lipid Modification) trial (Clinical-Trials.gov number NCT01252953) included more than 30,000 high-risk statin-treated people who were randomized to receive anacetrapib or placebo. The planned follow-up was 4 years (14). The primary endpoint was the first major coronary event, a composite of coronary death, myocardial infarction, or coronary revascularization.
The participants in REVEAL, who were treated intensively with atorvastatin prior to randomization, had a low baseline mean LDL-C level of 61 mg/dl, a mean non-HDL-C level of 92 mg/dl, and a mean HDL-C level of 40 mg/dl. Treatment with anacetrapib increased the level of HDL-C by 104% and decreased non-HDL-C levels by 18%. During the median 4.1 years of follow-up, the primary outcome was reduced from 11.8% in the placebo group to 10.8% in those treated with anacetrapib (rate ratio, 0.91; 95% confidence interval, 0.85-0.97; P = 0.004). The magnitude of this benefit was consistent with that observed for comparable reductions in non-HDL-C levels in statin trials (14). Participants with a baseline LDL-C level in the upper tertile (>66 mg/dl) had a statistically significant reduction in the primary endpoint of 13%. In those with a baseline non-HDL-C level in the upper tertile (>101 mg/dl), the reduction in the primary endpoint was a statistically significant 17%. There were no significant between-group differences in the risk of death, cancer, or other serious adverse events in this trial.
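For orientation, a minimal arithmetic sketch (ours, not from the trial) of how the reported event rates relate to the published effect size; the published rate ratio comes from a time-to-event analysis, so this simple ratio only approximates it:

```python
# Unadjusted risk ratio implied by the reported REVEAL primary-outcome rates.
anacetrapib_rate = 0.108  # primary outcome incidence, anacetrapib arm
placebo_rate = 0.118      # primary outcome incidence, placebo arm

risk_ratio = anacetrapib_rate / placebo_rate
relative_risk_reduction = 1.0 - risk_ratio
print(f"risk ratio ~ {risk_ratio:.2f}, RRR ~ {relative_risk_reduction:.1%}")
# -> risk ratio ~ 0.92, RRR ~ 8.5% (close to the published rate ratio of 0.91)
```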
The reduction in coronary events in those treated with anacetrapib did not become apparent until after 2 years of treatment (14). During the third year, however, the reduction in coronary events was 13%, while coronary events occurring beyond 4 years of treatment were reduced by a statistically significant 17%. This delay in benefit highlights the possibility that the failure of evacetrapib to reduce cardiovascular events in ACCELERATE may have been related to termination of the trial after only 2 years of follow-up.
Treatment with anacetrapib in the REVEAL trial also reduced the risk of developing diabetes from 6.0% in those on statin alone to 5.3% in those treated with a statin plus anacetrapib. A positive effect of CETP inhibition on glycemic control (a reduction in plasma glucose levels and HbA1c) was also observed in participants in the torcetrapib arm of the ILLUMINATE trial (110) and in the evacetrapib arm of the ACCELERATE trial (111).
The mechanism by which CETP inhibition improves glycemic control and reduces the risk of new onset diabetes is uncertain, but may be related to the increased levels of HDL-C and apoA-I. Both HDLs and apoA-I increase the synthesis and secretion of insulin by pancreatic β-cells (112,113). They also enhance glucose uptake by skeletal muscle (114)(115)(116) and thus improve insulin sensitivity. An increase in either, or both, of these HDL functions in people treated with a CETP inhibitor could explain the improvement in glycemic control and the decreased risk of developing diabetes observed in these studies.
NEW CETP INHIBITORS
TA-8995 is another novel tetrahydroquinoline derivative CETP inhibitor (Fig. 3E) (117). A phase I dose-escalating study of healthy subjects treated with single or multiple doses of TA-8995 (30-150 mg daily) or placebo confirmed that TA-8995 is well-tolerated and does not adversely affect blood pressure, aldosterone levels, or serum electrolyte concentrations (117). In that study, TA-8995 inhibited CETP activity by 92-99%, increased HDL-C levels by 140%, and decreased LDL-C levels by 53% at the 10 mg/day dose (117).
In the 12 week randomized double-blind parallel-group phase II TULIP (TA-8995 in Patients with Mild Dyslipidaemia) trial (ClinicalTrials.gov number NCT01970215), subjects with mild dyslipidemia were randomly assigned to receive placebo, TA-8995 as monotherapy (1-10 mg/day), 10 mg/day TA-8995 with atorvastatin (20 mg) or rosuvastatin (10 mg), or statin alone (118). TA-8995 dose-dependently inhibited CETP activity, increased HDL-C and apoA-I levels by up to 179% and 63%, respectively, and decreased LDL-C and apoB levels by 45% and 34%, respectively (118). In combination with atorvastatin, TA-8995 increased HDL-C levels by 152% and reduced LDL-C levels by 68%, while in combination with rosuvastatin, HDL-C levels increased by 157% and LDL-C levels decreased by 63% (118). This makes TA-8995 one of the most potent CETP inhibitors available to date (118,119). Although TA-8995 did not affect plasma TG and total cholesterol levels, it did increase the ability of HDLs to promote cholesterol efflux at the 10 mg/day dose (111).
CONCLUSIONS
Our understanding of the structure and function of CETP has progressed rapidly in recent years. There is clear evidence from the REVEAL trial that inhibition of CETP significantly reduces the risk of having a coronary event in statin-treated patients. There is also evidence that CETP inhibition improves glycemic control and reduces new onset diabetes, an effect that has the potential to counteract the increase in new onset diabetes associated with statin treatment. There is thus a compelling case for using the combination of a statin plus a CETP inhibitor in people at high cardiovascular risk who are treated with a statin and are at risk of developing diabetes. This would not only reduce the risk of having a coronary event beyond that achieved by a statin alone, but would also counteract the statin-induced development of diabetes. Whether new CETP inhibitors, such as TA-8995, will be investigated in outcome trials in this population remains to be seen. Further investigations are awaited with interest.
"Biology"
] |
A CHANGE OF SCALE FORMULA FOR WIENER INTEGRALS ON ABSTRACT WIENER SPACES
In this paper we obtain a change of scale formula for Wiener integrals on abstract Wiener spaces. This formula is shown to hold for many classes of functions of interest in Feynman integration theory and quantum mechanics.
Introduction
It has long been known that Wiener measure and Wiener measurability behave badly under the change of scale transformation [3] and under translations [2]. However, Cameron and Storvick [5] expressed the analytic Feynman integral of a rather large class of functionals as a limit of Wiener integrals. In doing so, they discovered a rather nice change of scale formula for Wiener integrals on classical Wiener space [6]. In [20,22,23], Yoo, Yoon and Skoug extended these results to Yeh-Wiener space and to an abstract Wiener space.
This paper continues the study of a change of scale formula for Wiener integrals on an abstract Wiener space previously given in [22].
Motivated by the work of Kallianpur and Bromley [17], we establish a change of scale formula for Wiener integrals, for a larger class than the Fresnel class studied in [22], on an abstract Wiener space. Results in [5,6,20,22,23] will then be corollaries of our results.
Definitions and preliminaries
Let H be a real separable infinite-dimensional Hilbert space with inner product (·,·) and norm ‖·‖. Let |||·||| be a measurable norm on H with respect to the Gaussian cylinder set measure σ on H. Let B denote the completion of H with respect to |||·|||, and let ι denote the natural injection from H into B. The adjoint operator ι* of ι is one-to-one and maps B* continuously onto a dense subset of H*. By identifying H with H* and B* with ι*B*, we have a triple B* ⊂ H ≅ H* ⊂ B, where (·,·) also denotes the natural dual pairing between B and B*. By a well-known result of Gross [13], σ ∘ ι⁻¹ has a unique countably additive extension m to the Borel σ-algebra B(B) of B. The triple (H, B, m) is called an abstract Wiener space. For more details, see [12,17,18,19].
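A compact restatement of this construction in display form (our reconstruction of the garbled triple, under the identifications just described):

```latex
\[
  B^{*} \subset H^{*} \cong H \subset B , \qquad
  m \;=\; \text{the countably additive extension of }
  \sigma \circ \iota^{-1} \text{ to } \mathcal{B}(B),
\]
% where $\sigma$ is the Gaussian cylinder set measure on $H$ and the
% existence of the extension $m$ is guaranteed by Gross's theorem [13].
```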
Change of scale formulas
We begin this section with a key lemma for the Wiener integral on an abstract Wiener space (H, B, m).
In the following theorem, for F ∈ F_{A1,A2}, we express the analytic Feynman integral of F over B × B as the limit of a sequence of Wiener integrals.
Next, using the bounded convergence theorem, the corresponding equation is obtained. Proof. This follows from the fact that for F ∈ F(B) the sequential Feynman integral of F is equal to its analytic Feynman integral [18].
Our main result, namely a change of scale formula for Wiener integrals on a product abstract Wiener space, now follows from Theorem 3.2 above. The Banach algebra F_{A1,A2} is not closed with respect to pointwise or even uniform convergence [16, p. 2], and thus its closure F̄_{A1,A2} with respect to uniform convergence s-a.e. is a larger space than F_{A1,A2}. Next we show that equation (3.4) holds for F ∈ F̄_{A1,A2}. Then, using Theorem 3.2, the iterated limit theorem, and the dominated convergence theorem, we obtain the desired formula; in addition, equation (3.10) holds for F ∈ F(B)ᵘ.
Proof. Apply Theorems 3.5 and 3.6 after making the following choices: A1 = the identity, A2 = 0, and p1 = p. With these choices and Lemma 3.1, the corollary follows easily.
Finally, we end this section by showing that the class of functions for which Theorem 3.5 (Theorem 3.6) and Corollary 3.7 hold is more extensive than F̄_{A1,A2} and F(B)ᵘ, respectively. To evaluate the integral on the right side of (3.10), we apply the technique used in the proof of Lemma 3.1 to (B_N, B(B_N)), so that (H_N, B_N, m_N) is again an abstract Wiener space. The following corollaries show that the class of functionals for which the above corollaries hold is more extensive than the Fresnel class studied in [22].
"Mathematics"
] |
Asynchronous Rate Chaos in Spiking Neuronal Circuits
The brain exhibits temporally complex patterns of activity with features similar to those of chaotic systems. Theoretical studies over the last twenty years have described various computational advantages for such regimes in neuronal systems. Nevertheless, it still remains unclear whether chaos requires specific cellular properties or network architectures, or whether it is a generic property of neuronal circuits. We investigate the dynamics of networks of excitatory-inhibitory (EI) spiking neurons with random sparse connectivity operating in the regime of balance of excitation and inhibition. Combining Dynamical Mean-Field Theory with numerical simulations, we show that chaotic, asynchronous firing rate fluctuations emerge generically for sufficiently strong synapses. Two different mechanisms can lead to these chaotic fluctuations. One mechanism relies on slow I-I inhibition which gives rise to slow subthreshold voltage and rate fluctuations. The decorrelation time of these fluctuations is proportional to the time constant of the inhibition. The second mechanism relies on the recurrent E-I-E feedback loop. It requires slow excitation but the inhibition can be fast. In the corresponding dynamical regime all neurons exhibit rate fluctuations on the time scale of the excitation. Another feature of this regime is that the population-averaged firing rate is substantially smaller in the excitatory population than in the inhibitory population. This is not necessarily the case in the I-I mechanism. Finally, we discuss the neurophysiological and computational significance of our results.
Author Summary
Cortical circuits exhibit complex temporal patterns of spiking and are exquisitely sensitive to small perturbations in their ongoing activity. These features are all suggestive of an underlying chaotic dynamics. Theoretical works have indicated that a rich dynamical reservoir can endow neuronal circuits with remarkable computational capabilities. Nevertheless, the mechanisms underlying chaos in circuits of spiking neurons remain unknown. We combine analytical calculations and numerical simulations to investigate this fundamental issue. Our key result is that chaotic firing rate fluctuations on the time scales of the synaptic dynamics emerge generically from the network collective dynamics.

Introduction

To what extent does chaos in neuronal circuits depend on single neuron properties? How do excitation and inhibition contribute to the emergence of these states? To what extent do these chaotic dynamics share similarities with those exhibited by the SCS model? We first study these questions in one population of inhibitory neurons receiving feedforward excitation. We then address them in networks of two populations, one inhibitory and the other excitatory, connected by a recurrent feedback loop. A major portion of the results presented here constitutes the core of the Ph.D. thesis of one of the authors (O.H.) [27].
One population of inhibitory neurons: General theory
We consider N randomly connected inhibitory spiking neurons receiving a homogeneous and constant input, I. The voltage of each neuron has nonlinear dynamics, as e.g. in the leaky integrate-and-fire (LIF) model (see Materials and Methods) or in conductance-based models [20].
The connection between two neurons is J_ij = J C_ij (i, j = 1, 2, ..., N), with J < 0, and C_ij = 1 with probability K/N and 0 otherwise. The outgoing synapses of neuron j obey

τ_syn dS_j(t)/dt = −S_j(t) + Σ_{t_j^s} δ(t − t_j^s),   (1)

where S_j(t) is the synaptic current at time t and τ_syn the synaptic time constant; when neuron j fires a spike (at time t_j^s), S_j is incremented (by 1/τ_syn under this normalization). Thus, the total input to neuron i, h_i(t) = I + Σ_j J_ij S_j(t), satisfies

τ_syn dh_i(t)/dt = −h_i(t) + I + Σ_j J_ij Σ_{t_j^s} δ(t − t_j^s).   (2)

We assume K ≫ 1, hence the number of recurrent inputs per neuron is K ± O(√K). Scaling J and I as J = −J₀/√K and I = √K I₀, the time-averaged synaptic inputs are O(√K) and their spatial (quenched) and temporal fluctuations are O(1) [28,29]. Finite neuronal activity requires that excitation and inhibition cancel to the leading order in K. In this balanced state, the mean and the fluctuations of the net inputs are O(1) [28,29]. The properties of the balanced state are well understood if the synapses are much faster than all the typical time constants of the intrinsic neuronal dynamics [30]. Temporally irregular asynchronous firing of spikes is a hallmark of this regime [13,28,29,31,32]. However, this stochasticity does not always correspond to a true chaotic state [28,29,33-36]. In fact, this depends on the spike initiation dynamics of the neurons [37]. The opposite situation, in which some of the synapses are slower than the single neuron dynamics, remains poorly understood. This paper mostly focuses on that situation.
When the synaptic dynamics are sufficiently slow compared to the single neuron dynamics, the network dynamics can be reduced to the set of non-linear first-order differential equations

τ_syn dh_i(t)/dt = −h_i(t) + I + Σ_j J_ij r_j(t),   (3)

r_i(t) = g(h_i(t)),   (4)

where r_i(t) is the instantaneous firing rate of neuron i and g(h) is the neuronal input-output transfer function [20]. These are the equations of a rate model [20,38] in which the activity variables correspond to the net synaptic inputs of the neurons. Eqs (3)-(4) differ from those of the SCS model in several respects: they have a well defined interpretation in terms of spiking dynamics; the time constant has a well defined physiological meaning, namely the synaptic time constant; the transfer function quantifies the spiking response of the neurons and is thus positive; the interactions satisfy Dale's law; and the neuronal connectivity is partial. Dynamical mean-field theory (DMFT). We build on a DMFT [19] to investigate the dynamics, Eqs (3)-(4), in the limit 1 ≪ K ≪ N. Applying this approach, we rewrite the last two terms on the right hand side of Eq (3) as a Gaussian noise whose statistics need to be self-consistent with the dynamics. This yields a set of self-consistency conditions which determine the statistics of the fluctuations, from which the net synaptic inputs and the firing rates of the neurons can be calculated. This approach is described in detail in the Materials and Methods section.
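For concreteness, a minimal forward-Euler sketch of the rate dynamics, Eqs (3)-(4), with a generic sigmoidal transfer function (the parameter values and the specific sigmoid are illustrative, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 2000, 200            # network size, mean in-degree
J0, I0 = 6.0, 1.0           # coupling and input parameters
tau, dt, T = 1.0, 0.05, 200.0

# Sparse inhibitory connectivity: J_ij = -J0/sqrt(K) with probability K/N
C = (rng.random((N, N)) < K / N).astype(float)
J = -(J0 / np.sqrt(K)) * C
I = np.sqrt(K) * I0

def g(h):                    # generic sigmoid, stands in for the paper's phi
    return 1.0 / (1.0 + np.exp(-h))

h = rng.normal(0.0, 1.0, N)  # random initial net inputs
for _ in range(int(T / dt)):
    r = g(h)                                 # Eq (4)
    h += (dt / tau) * (-h + I + J @ r)       # Eq (3), forward Euler
```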
The DMFT shows that, for a given transfer function and depending on the parameters J₀ and I₀, the dynamics either converge to a fixed point state or remain in an asynchronous, time-dependent state. In the fixed point state, the net inputs to the neurons, h_i⁰ (i = 1...N), are constant. Their distribution across the population is Gaussian with mean μ and variance J₀²q. The DMFT yields equations for μ and q, as well as for the distribution of firing rates r_i⁰ (i = 1...N) (Eqs (24)-(25) and (36)). In the time-dependent state, the h_i(t) exhibit Gaussian temporal fluctuations, which are characterized by a mean, μ = [⟨h(t)⟩], and a population-averaged autocovariance (PAC) function, σ(τ) = [⟨h(t)h(t+τ)⟩] − μ², where [·] and ⟨·⟩ denote averages over the population and over time, respectively. Solving the set of self-consistent equations which determine σ(τ) and μ (Eqs (25), (27) and (37)-(38)) indicates that σ(τ) decreases monotonically along the flow of the deterministic dynamics, thus suggesting that the latter are chaotic. To confirm that this is indeed the case, one has to calculate the maximum Lyapunov exponent of the dynamics (which characterizes the sensitivity of the dynamics to initial conditions [39]) and verify that it is positive. This can be performed analytically in the framework of DMFT [19]; however, it is beyond the scope of the present paper. Therefore, in the specific examples analyzed below we rely on numerical simulations to verify the chaoticity of the dynamics.
For sufficiently small J₀, the fixed point state is the only solution of the dynamics. When J₀ increases beyond some critical value, J_c, the chaotic solution appears. We show in the Materials and Methods section that J_c is determined by

J_c² ∫ Dz [g′(μ + J_c √q z)]² = 1,   (5)

where q and μ are computed at the fixed point state and Dz = e^{−z²/2} dz / √(2π). On the stability of the fixed point state. The N×N matrix characterizing the stability of the fixed point is D = M/√N − I, with I the N×N identity matrix and

M_ij = √N J_ij g′(h_j⁰),   (6)

where h_j⁰ is the total input to neuron j at the fixed point. This is a sparse random matrix with, on average, K non-zero elements per row or column. In the limit N → ∞, these elements are uncorrelated and have a mean of order −J₀√(K/N), which vanishes for large N, so that the second moment of the matrix elements is equal to their variance. Interestingly, Eq (5) means that the SD of the elements of M crosses 1 (from below) at J_c. As J₀ increases, the fixed point becomes unstable when the real part of one of the eigenvalues of M/√N crosses 1. Note that for large K, D always has a negative eigenvalue, which is O(√K).
In the specific examples we investigate below, simulations show that when the chaotic state appears the fixed point becomes unstable. This implies that for J₀ < J_c given by Eq (5) the real parts of all the eigenvalues of M/√N are smaller than 1, and that for J₀ = J_c the real part of the eigenvalue with maximum real part crosses 1. This suggests the more general conjecture that in the limit 1 ≪ K ≪ N the eigenvalue with the largest real part of M/√N is

λ_max = J₀ ( ∫ Dz [g′(μ + J₀ √q z)]² )^{1/2}.   (7)

Below we compare this formula to results from numerical diagonalization of M/√N.
One population of inhibitory neurons: Examples
The above considerations show that when synapses are slow, the dynamics of inhibitory networks are completely determined by the transfer function of the neurons. Therefore, to gain insight into the way dynamics become chaotic in such systems, we proceed by investigating various spiking models that differ in the shape of their transfer functions.
Sigmoidal transfer functions. Neurons in a strongly noisy environment can be active even if their net inputs are on average far below their noiseless threshold, whereas when these inputs are large the activity saturates. The transfer function of the neurons can therefore be well approximated by a sigmoid. Specifically, here we consider the dynamics described in Eqs (3)-(4) with a sigmoidal transfer function g(x) = ϕ(x); this form of the sigmoid function makes analytical calculations more tractable. Fig 1A shows that for J_0 = 4, I_0 = 1, the simulated network dynamics converge to a fixed point. This is not the case for J_0 = 6 and J_0 = 15 (Fig 1B, 1C): in these cases the activities of the neurons keep fluctuating at large times. Note also that the mean level of activity differs across the three neurons shown. This is a consequence of the heterogeneity in the number of inputs the neurons receive.
The differences in the network dynamics for these three values of J_0 are consistent with the full phase diagram of the DMFT in the I_0 − J_0 parameter space. Fig 2A depicts the results obtained by solving numerically the self-consistent equations that define chaos onset with g(x) = ϕ(x) (Eqs (17)-(18) in S2 Text). In the region above the line a chaotic solution exists, whereas below it it does not. Simulations indicate that in the region above the line, very small perturbations from the fixed-point state drive the network toward the time-dependent state. In other words, the fixed-point solution is unstable above the line: the bifurcation to the time-dependent state is thus supercritical.
The instability of the fixed point on this line is also confirmed by direct diagonalization of the matrix M/√N (see Eq (6)). To this end, we solved numerically the mean-field equations for different values of J_0 to obtain μ and q, randomly sampled h_i^0 values from the distribution defined by μ and q to generate the random matrix M/√N, and then computed the spectrum of the matrix numerically (for N = 10000). Examples of the results are plotted in Fig 3A for two values of J_0, one below and one above the critical value J_c. In both cases, the bulk of the spectrum is homogeneously distributed in the disk of radius λ_max centered at the origin. Fig 3B plots λ_max computed numerically (dots) and compares the results to our conjecture, Eq (7) (solid line). The agreement is excellent. The instability of the fixed point corresponds to λ_max crossing 1.
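A sketch of this diagonalization test, with placeholder values of μ and q standing in for the mean-field solution, a generic sigmoid in place of ϕ, and N reduced from 10000 for speed:

```python
# Sketch: sample h0_i from the Gaussian fixed-point distribution, build the
# sparse matrix M/sqrt(N), and compare its spectral radius to Eq (7).
import numpy as np

rng = np.random.default_rng(1)
N, K, J0 = 2000, 100, 6.0
mu, q = -2.0, 0.05                       # placeholders for the DMFT solution
sig = lambda h: 1.0 / (1.0 + np.exp(-h))
dsig = lambda h: sig(h) * (1.0 - sig(h))

h0 = mu + J0 * np.sqrt(q) * rng.normal(0.0, 1.0, N)
C = (rng.random((N, N)) < K / N).astype(float)
A = -(J0 / np.sqrt(K)) * C * dsig(h0)[None, :]   # A = M/sqrt(N); column j ~ g'(h0_j)

eigs = np.linalg.eigvals(A)
print("numerical lambda_max:", eigs.real.max())
print("Eq (7), sample estimate:", J0 * np.sqrt(np.mean(dsig(h0) ** 2)))
```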
To verify the chaoticity of the time-dependent state predicted by the DMFT in the region above the bifurcation line, we simulated the dynamics and computed numerically the largest Lyapunov exponent, Λ, for different values of I_0 and J_0 (see Materials and Methods for details). The results plotted in Fig 2A (red dots and inset) show that Λ crosses zero near the DMFT bifurcation line and is positive above it. Therefore, the dynamics observed in simulations are chaotic in the parameter region above this line, as predicted by the DMFT. We also solved numerically the parametric self-consistent differential equation which determines the PAC, σ(τ) (Eqs (25), (29) and (37)-(38)), for different values of J_0 and I_0. An example of the results is plotted in Fig 2B. It shows that numerical simulations and DMFT predictions are in very good agreement. Moreover, simulations with increasing values of N and K indicate that the small deviations from the DMFT predictions are due to finite-N and finite-K effects; a detailed study of these effects is reported in S1 Text. Fig 4A shows the bifurcation diagram of the PAC amplitude, σ_0 − σ_∞ (σ_∞ denotes the large-lag limit of σ(τ)). For J_0 below the bifurcation point (BP) the PAC amplitude is zero, which corresponds to the fixed-point state (solid blue line). At the bifurcation the fixed point loses stability (dashed blue line) and a chaotic state with a strictly positive PAC amplitude emerges (black line).
We studied analytically the critical behavior of the dynamics at the onset of chaos. We solved perturbatively the DMFT equations for 0 < δ = J_0 − J_c ≪ 1, as outlined in the Materials and Methods section and in S2 Text. This yields (σ(τ) − σ_∞) ∝ δ^α / cosh²(τ/τ_dec), with α = 1 and a decorrelation time scaling like τ_dec ∝ δ^β with β = −1/2. Therefore, at the onset of chaos the PAC amplitude vanishes and the decorrelation time diverges. We show in the Materials and Methods section that this critical behavior, with exponents α = 1 and β = −1/2, is in fact a general property of the model, Eqs (3)-(4), whenever g(h) is twice differentiable. It should be noted that in the SCS model the PAC also vanishes linearly at chaos onset; however, the critical exponent of the decorrelation time is different (β = −1) [19].
The inset in Fig 4A compares the PAC amplitude obtained by numerically solving Eq (27) (black line) with the corresponding perturbative result (red line) for small δ. The agreement is excellent. In fact, the perturbative calculation provides a good estimate of the PAC even if δ is as large as 0.2 J_c (Fig 4A, main panel, and Fig 4B). More generally, the PAC can be well fitted with the function (σ_0 − σ_∞)·cosh^{−2}(τ/τ_dec) (Fig 4C, inset), providing an estimate of the decorrelation time, τ_dec, for all values of J_0. Fig 4C plots τ_dec vs. σ_0 − σ_∞ for I_0 = 1. It shows that the formula τ_dec ∝ 1/√(σ_0 − σ_∞), which we derived perturbatively for small δ, provides a good approximation of the relationship between the PAC amplitude and the decorrelation time even far above the bifurcation.

[Fig 3 caption: Spectrum of the stability matrix for the inhibitory population rate model with g(x) = ϕ(x). The matrix was diagonalized numerically for N = 10000, K = 400, I_0 = 1 and different values of J_0. A: The bulk of the spectrum for J_0 = 6 (blue) and for J_0 = 1.12 (red). Left: The imaginary parts of the eigenvalues are plotted vs. their real parts for one realization of M; the support of the spectrum is a disk of radius λ_max. Right: Histograms of N_eig/R (one realization of M), where N_eig is the number of eigenvalues with modulus between R and R+ΔR (ΔR = 0.0428 (top), 0.0093 (bottom)), for J_0 = 6 (top) and J_0 = 1.12 (bottom). The distribution of eigenvalues is uniform throughout the support of the spectrum. B: The largest real part of the eigenvalues (black dots), λ_max, compared with the conjecture, Eq (7) (solid line). The fixed point loses stability when λ_max crosses 1.]

Threshold power-law transfer function. We next consider the dynamics of the network (Eqs (3)-(4)) with a transfer function

g(x) = x^γ H(x)

where γ > 0 and H(x) = 1 for x > 0 and 0 otherwise. Non-leaky integrate-and-fire neurons [40] (see also S3 Text) and θ-neurons [41][42][43][44] correspond to γ = 1 and γ = 1/2, respectively. The transfer functions of cortical neurons in vivo can be well fitted by a power-law transfer function with an exponent γ ≈ 2 [45,46]. For γ > 1/2, the critical coupling J_c is given by Eq (5), as we show analytically in S4 Text. For γ < 1/2, the integral on the right-hand side of Eq (5) diverges; equivalently, the elements of the stability matrix have infinite variance. Therefore, the DMFT predicts chaotic dynamics as soon as J_0 > 0.
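A sketch of the fitting procedure used here (and again below for the power-law and LIF cases) to extract the decorrelation time from a computed PAC; taus and sigma are assumed to be arrays holding the lag grid and the PAC values.

```python
# Sketch: fit sigma(tau) - sigma_inf to A / cosh^2(tau / tau_dec).
import numpy as np
from scipy.optimize import curve_fit

def fit_tau_dec(taus, sigma):
    s_inf = sigma[-1]                                   # plateau at large lag
    model = lambda t, A, t_dec: A / np.cosh(t / t_dec) ** 2
    (A, t_dec), _ = curve_fit(model, taus, sigma - s_inf,
                              p0=[sigma[0] - s_inf, taus[-1] / 4.0])
    return A, t_dec
```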
To compare these predictions with numerical simulations, we simulated different realizations of the network (N = 32000, K = 400, I_0 = 1) for various values of J_0. For each value of J_0 and γ we determined whether the dynamics converge to a fixed point or to a time-dependent state, as explained in the Materials and Methods section. This allowed us to compute the fraction of networks for which the dynamics converge to a fixed point. The solid red line plotted in Fig 5B corresponds to a fraction of 50%, whereas the dotted red lines correspond to fractions of 5% (upper line) and 95% (lower line). We also estimated the Lyapunov exponent, Λ, for each value of J_0 and γ. The blue line in Fig 5B corresponds to the location where Λ changes sign according to our estimates (see Materials and Methods for details).

[Fig 4 caption, partial: ... compared with the perturbative result (Eq (11) in S2 Text) in the limit δ → 0. C: Blue dots: decorrelation time, τ_dec, vs. PAC amplitude. The PAC, σ(τ) − σ_∞, was obtained by solving the DMFT equations numerically, and τ_dec was estimated by fitting the result to the function A/cosh²(τ/τ_dec). Red: In the whole range J_0 ∈ [5,7] considered, τ_dec can be well approximated by τ_dec = 4.97/√(σ_0 − σ_∞). This relation becomes exact in the limit σ_0 − σ_∞ → 0. Inset: Numerical solution of the DMFT equations for J_0 = 6.65 (blue dots) and the fit to A/cosh²(τ/τ_dec) (red). The fit is very good even though this is far from the bifurcation.]

For γ ≳ 0.6, the fraction of networks with an unstable fixed point varies sharply from 0 to 100% in the vicinity of the bifurcation line predicted by the DMFT. Moreover, for these values of γ, the spectrum of the matrix M/√N is homogeneously distributed in the disk of radius λ_max centered at the origin, and the values of λ_max agree with Eq (7). However, as γ → (1/2)^+, the discrepancies between DMFT and simulations become more pronounced. Very close to γ = (1/2)^+ there is a whole range of values of J_0 for which the DMFT predicts chaos whereas in numerical simulations the dynamics always converge to a fixed point. This discrepancy can be understood by observing that the integral over the Gaussian measure in Eq (5) corresponds to a population average over neurons. When γ → (1/2)^+, the region where z is just above −μ/(J_0√q) dominates the integral; in other words, the neurons with positive, close-to-threshold net inputs are those that make the largest contribution to the destabilization of the fixed point. On the other hand, the DMFT shows that these neurons become extremely rare as γ → (1/2)^+: in that limit μ_c increases sharply, shifting the center of the Gaussian distribution to very large positive values. Therefore, we would need to simulate prohibitively large networks to obtain quantitative agreement with the DMFT predictions for the location of the bifurcation to chaos. Similar arguments explain why, for γ < 1/2, we find a transition from fixed point to chaos in numerical simulations for J_0 ≲ 0.9, even though according to the DMFT the fixed point is always unstable, since the integral in Eq (5) diverges.
Numerical diagonalization of M/√N shows that when γ ≲ 0.6: (i) the eigenvalues in the bulk of the spectrum are distributed in a disk centered at the origin, and this distribution becomes less and less homogeneous as γ → (1/2)^+; (ii) the eigenvalue λ_max governing the instability exhibits substantial deviations from Eq (7), especially for large J_0 (Fig 6C); (iii) λ_max exhibits large sample-to-sample fluctuations (results not shown). We conjecture that these features are due to large finite-N and finite-K effects and stem from the fact that the SD of the elements of M/√N diverges as γ → (1/2)^+. We studied the dynamics in detail for γ = 1. The DMFT predicts that J_c = √2 for all I_0 and K (K large). As already mentioned, the simulations agree well with this result (Fig 5B). We studied analytically the dynamics for J_0 close to this transition (Fig 7A-7C). To this end, we solved the self-consistent DMFT equations in the limit δ = J_0 − J_c → 0^+. The perturbative calculation, explained in S4 Text, is less straightforward than in the case of a sigmoid transfer function. This stems from the fact that at the threshold, the threshold-linear transfer function is only differentiable once. It yields σ − σ_∞ ≈ δ^α σ_s(τ/δ^β), with α = 2, β = −1/2, where the function σ_s(x) has to be determined numerically. The function σ_s is plotted in Fig 7B. It can be well fitted by the function A[cosh(x/x_dec)]^{−1} with A = 12.11 and x_dec = 2.84 (see Fig 7B, inset). In particular, for small δ, the amplitude and the decorrelation time of the PAC are related by τ_dec ∝ 1/(σ_0 − σ_∞)^{1/4}. Note that the amplitude of the PAC vanishes more rapidly (α = 2) than for sigmoidal transfer functions (α = 1), whereas the decorrelation time diverges with the same critical exponent (β = −1/2) in the two cases. Comparison with the numerical solution of Eq (27) shows that, unlike what we found for the sigmoid transfer function, δ must be very small (δ ≲ 0.03 J_c) to achieve good quantitative agreement. It should be noted, however, that the quality of the fit of σ − σ_∞ to A[cosh(τ/τ_dec)]^{−1} does not deteriorate much even far from the bifurcation. The agreement between the DMFT and simulations is reasonably good but not perfect. We show in S1 Text that the discrepancy between the two decreases as the network size increases, but that finite-size effects are stronger here than in the rate model with a sigmoid transfer function.
Leaky integrate-and-fire (LIF) inhibitory networks. Our objective here is to obtain further insights into the relevance of the chaotic behavior exhibited by the rate dynamics, Eqs (3)-(4), for understanding spiking network dynamics. The dynamics of one population of LIF spiking neurons reduce to Eqs (3)-(4), with the LIF transfer function given in the Materials and Methods, in the limit where the synapses are much slower than the cell membrane time constant, τ_m. Our goal is twofold: 1) to study the emergence of chaos in this LIF rate model, and 2) to compare it to the full spiking dynamics and characterize the range of values of the synaptic time constant for which the two dynamics are quantitatively, or at least qualitatively, similar.

[Fig 7 caption, partial: ... Inset: The function (σ(τ) − σ_∞)/δ² (black) can be well fitted by A/cosh(x/x_dec) (red dots, A = 12.11, x_dec = 2.84). C: Decorrelation time, τ_dec, vs. PAC amplitude (blue). The function σ(τ) − σ_∞ was obtained by integrating Eq (29) numerically, and τ_dec was estimated by fitting this function to A/cosh(τ/τ_dec). Red: In the whole range of J_0 considered (J_0 ∈ [1.4, 1.9]), the relation between τ_dec and σ_0 − σ_∞ can be well approximated by y = 5.29/√(σ_0 − σ_∞).]

Figs 8 and 9 depict typical patterns of neuronal activity in simulations of the inhibitory spiking LIF model. For strong and fast synapses (τ_syn = 3 ms, Fig 8A), neurons fire spikes irregularly and asynchronously. Fig 8B shows that when τ_syn = 100 ms the population-averaged firing rate remains essentially the same (≈ 14.1 Hz) and the network state stays asynchronous. The spiking patterns, however, change dramatically: with large τ_syn, neurons fire irregular bursts driven by slowly decorrelating input fluctuations (Fig 9A, blue). Fig 9B shows that reducing J_0 increases the firing rate, reduces the amplitude of the fluctuations (Fig 9B, inset) and slows down their temporal decorrelation. Eventually, for small enough J_0, σ(τ) becomes flat and the fluctuations are negligible. Fig 10 compares the dynamics of the rate model to those of the spiking LIF network. Panels A and B show that for J_0 = 2, I_0 = 0.3 and τ_syn = 100 ms, σ(τ) and the distributions of the time-averaged neuronal firing rates and net inputs, ⟨r_i⟩ and ⟨h_i⟩, are essentially the same in the simulations of the two networks. When reducing τ_syn down to τ_syn ≳ 15 ms, the function σ(τ/τ_syn) measured in the spiking network simulations changes only slightly. In fact, this function is remarkably similar to the corresponding function obtained from the DMFT and from simulations of the LIF rate model (Fig 11A). Fitting σ(τ) with the function B + A[cosh(τ/τ_dec)]^{−1} yields τ_dec ≈ 2.45 τ_syn.
How small can τ_syn be for the two models to still behave in a quantitatively similar manner? Simulations show that this value increases with the mean activity of the network (see examples in Fig 11), but that for reasonable firing rates, up to several tens of Hz, the fluctuations have similar properties in the two models even for τ_syn ≈ 20 ms. The firing rates are ≈ 15 Hz in Fig 11A and 11C, ≈ 10 Hz in Fig 11B, and ≈ 30 Hz in Fig 11D, in good agreement with the prediction from the balance condition ([⟨r⟩] = 100 I_0/J_0 Hz); the corresponding results for the rate model are also plotted (black). As the population firing rate increases, a larger τ_syn is needed for good agreement between the spiking and the rate model. We conducted extensive numerical simulations of the inhibitory LIF rate and spiking models (N = 40000, K = 800) to compute their phase diagrams in the I_0 − J_0 parameter space. The results for the rate model are plotted in Fig 12. For sufficiently small J_0 the dynamics always converge to a fixed point, whereas for sufficiently large J_0 the network always settles in a state in which the activity of the neurons keeps fluctuating at large times. We show in S5 Text that in this regime the maximum Lyapunov exponent is strictly positive; therefore the dynamics are chaotic. Between these two regimes, whether the dynamics converge to a fixed point or to a chaotic state depends on the specific realization of the connectivity matrix. The fraction of networks for which the convergence is to a fixed point depends on J_0. The range of J_0 over which this fraction varies from 95% to 5% is notably large, as shown in Fig 12. Additional simulation results on this issue are given in S5 Text. The counterpart of this behavior in the spiking network is that when J_0 is small, neurons fire regular spikes tonically, whereas for sufficiently large J_0 they fire highly irregular bursts. The transition between the two regimes occurs at similar values of J_0 in the rate and in the spiking networks. In both networks this transition is driven by the neurons with low firing rates, i.e., those with larger numbers of recurrent inputs. These neurons are the first to become bursty as J_0 increases (see S6 Text).
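A minimal sketch of the fixed-point criterion behind these phase diagrams, mirroring the variance test described in the numerical-simulations section (threshold 10^{-9}); h_traj is assumed to be an (n_steps, N) array of inputs recorded after discarding the transient.

```python
# Sketch: classify a network realization as "fixed point" when the
# population-averaged temporal variance of its inputs is negligible.
import numpy as np

def is_fixed_point(h_traj, tol=1e-9):
    rho = np.mean(np.var(h_traj, axis=0))   # temporal variance, neuron-averaged
    return rho < tol
```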
In Fig 13A we plot the bifurcation diagram of the model as obtained from the numerical solution of the DMFT equations (black line) and as computed in simulations of the rate model (blue dots) and of the spiking network with τ_syn = 25 ms (red ×'s) and τ_syn = 7.5 ms (green ×'s). The rate model simulations are in good agreement with the DMFT for 0.8 ≲ J_0 ≲ 2. For larger J_0 the discrepancy becomes significant and increases with J_0. This is because of finite-K effects that grow stronger as J_0 increases, as shown in the right inset of Fig 13A for J_0 = 3 (blue) and J_0 = 4 (red). Fig 13A also shows that, as discussed above, the amplitudes of the PACs obtained in simulations of the LIF rate and spiking networks are barely different provided the synaptic time constant is sufficiently large. Finally, according to the DMFT the fixed point should always be unstable, since for the LIF transfer function the elements of the stability matrix always have infinite variance or, equivalently, the integral in Eq (5) always diverges. This can be seen in the close-up in the left inset of Fig 13A, which indicates that the PAC amplitude is non-zero for small J_0 and approaches 0 very slowly as J_0 decreases. By contrast, in numerical simulations in the same range of J_0, the dynamics are not chaotic for most realizations of the network: they converge to a fixed point, as shown in Fig 12. The explanation for this difference is the same as for the rate model with threshold power-law transfer function with γ < 1/2 (see above).
Two asynchronous chaos mechanisms in excitatory-inhibitory recurrent networks
We now consider EI spiking networks with recurrent feedback interactions between the two populations. The synaptic strengths are J_αβ = J^0_αβ/√K, with synaptic time constants τ_αβ (α, β ∈ {E, I}). Assuming slow synapses, the dynamics can be reduced to four sets of equations for the four types of synaptic inputs, h_i^αβ(t) (Materials and Methods, Eq (17)). The DMFT yields self-consistent equations for the statistics of these inputs. These equations can be analyzed straightforwardly for the fixed-point state. In contrast to purely inhibitory networks, where the fixed point loses stability only via a bifurcation to chaos, it can now also lose stability via a Hopf bifurcation, depending on the synaptic time constants. When this happens the network develops synchronous oscillations which break the balance of excitation and inhibition (the oscillation amplitude diverges for large K).
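A sketch of the reduced EI dynamics with one synaptic-input field per connection type (cf. Eq (17) in the Materials and Methods). All parameter values are illustrative assumptions; the transfer function is threshold-linear, as in the example analyzed next.

```python
# Sketch: Euler integration of the four synaptic-input fields h^{ab} of the
# EI rate model. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N, K, dt = 1000, 100, 0.05
J0  = {"EE": 0.0, "EI": 0.8, "IE": 3.0, "II": 4.0}
tau = {"EE": 1.0, "EI": 1.0, "IE": 1.0, "II": 1.0}
I_ext = {"E": 1.0, "I": 1.0}
sgn = {"E": +1.0, "I": -1.0}            # sign of presynaptic population (Dale)

C = {ab: (rng.random((N, N)) < K / N).astype(float) for ab in J0}
h = {ab: np.zeros(N) for ab in J0}
g = lambda x: np.maximum(x, 0.0)        # threshold-linear transfer function

def total_input(a):                     # h^a = sqrt(K) I_a + h^{aE} + h^{aI}
    return np.sqrt(K) * I_ext[a] + h[a + "E"] + h[a + "I"]

for _ in range(2000):
    r = {b: g(total_input(b)) for b in "EI"}
    for ab in h:
        a, b = ab
        drive = sgn[b] * (J0[ab] / np.sqrt(K)) * (C[ab] @ r[b])
        h[ab] += (dt / tau[ab]) * (-h[ab] + drive)
```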
We focus here on the instabilities which lead to chaos. Their locations in the six-dimensional parameter space (four synaptic strengths, two external inputs) of the model can be derived for a general transfer function (Eqs (54)-(55)). Differential equations for the PAC functions, σ_αβ(τ), can also be derived in the chaotic regime. However, a general analytical characterization of their solutions is particularly difficult. Leaving such a study for future work, we mostly rely below on numerical simulations. Our key result is that in EI networks asynchronous chaos emerges in two ways: one driven by I-I interactions (the II mechanism) and the other by the EIE loop (the EIE mechanism).
EI network with threshold-linear transfer function. We first study an EI network in which all the neuronal transfer functions are threshold-linear. Fig 14 plots, for different K, the phase diagram of the DMFT of this model in the J^0_IE − J^0_II parameter space, for J^0_EE = 0, I_E = I_I = 1 and J^0_EI = 0.8. (The phase diagram for a non-zero value of J^0_EE, J^0_EE = 1.5, is plotted and briefly discussed in S7 Text.) On the lines, the dynamics bifurcate from fixed point (below the lines) to chaos (above). As J^0_II decreases, the lines go to infinity. Numerical simulations indicate that the maximum Lyapunov exponent changes sign very close to these lines (compare red line and red dots), in good agreement with the DMFT. For any finite K, the instability line exhibits a reentrance, crossing the J^0_II-axis at J^0_II = √2, where the instability occurs in a purely inhibitory network; in this respect, the limit K → ∞ is singular. Solving the self-consistent equations for the average firing rates, r_E and r_I, one finds that the two populations can have comparable firing rates for large J^0_II when J^0_IE is not too large. As J^0_II becomes small, the activity in the E population becomes much lower than in the I population. In fact, for K → ∞, r_E vanishes on the line J^0_II = (I_I/I_E) J^0_EI = 0.8 and is zero for J^0_II < (I_I/I_E) J^0_EI (white region in Fig 14). In other words, in the latter case inhibition is not balanced by excitation in the E population.
As shown above, in the single inhibitory population case with threshold-linear transfer functions the transition to chaos occurs at J_0 = √2. Fig 15 examines how the synaptic time constants shape the input fluctuations in the EI network. Increasing τ_IE ten-fold (blue) slows down the fluctuations of all the inputs when J^0_II is small, but when J^0_II is large this happens only for h_IE. By contrast, dividing τ_II by 10 (purple) has very little effect when J^0_II is small, but the fluctuations of all the inputs are substantially faster when J^0_II is large. Fig 15 also demonstrates the effect of changing τ_αβ on the PAC of the net inputs to the E neurons, h_i^E(t) = I_E + h_i^EE(t) − h_i^EI(t) (corresponding results for the I population are shown in S8 Text). The PAC in the reference case is plotted in gray. For large J^0_II, a ten-fold increase in τ_II causes the PAC width to become ten times larger while the PAC amplitude increases (Fig 15A, blue; see also inset). For a ten-fold decrease in τ_II (purple) compared to reference, the width of the PAC is smaller, but by a smaller factor, whereas its amplitude is greatly reduced. By contrast, a ten-fold increase in τ_IE has no noticeable effect, either on the width or on the amplitude of the PAC (black). Fig 15B plots the PAC of the total input to the E population for small J^0_II. Here, decreasing τ_II by a factor of 10 (purple line) only weakly affects the width as well as the amplitude of the PAC. In contrast, a ten-fold increase of τ_IE (black) widens the PAC by a comparable factor (see also inset). A similar widening occurs if τ_EI is increased ten-fold (see S8 Text).
This phenomenology can be understood as follows. In the large-J^0_II regime, the II interactions play the key role in the generation of chaos. Therefore, the time scale of the fluctuations in the activity of the I neurons is essentially determined by τ_II. Thus, if the latter is 10 times larger than the reference value, the I inputs to the E neurons are slowed down by the same factor. At the same time, the filtering effect of the EI synapses becomes weaker, and thus the amplitude of the PAC of the net input to the E neurons increases. The effect of decreasing τ_II stems from the filtering effect of the EI synapses, which is now stronger than in the reference case. Finally, changing τ_IE has no noticeable effect, since the fluctuations are generated by the II interactions. By contrast, when J^0_II is small, II interactions are not sufficient to generate chaotic fluctuations. In this regime, the EIE loop drives these fluctuations provided J^0_IE is sufficiently large. That is why the time scale of the activity fluctuations depends primarily on τ_IE and to a much smaller extent on τ_II.
These results point to the existence of two mechanisms for the emergence of chaos in two-population networks; they differ in the type of dominant interactions (EIE or II) and therefore in the synaptic time constants that set the time scale of the activity fluctuations. Another difference is that in the EIE mechanism the E population is always significantly less active than the I population; this is not the case in the II mechanism.
Two-population spiking LIF network. We performed a similar analysis for LIF networks. Fig 16A and 16C plot the PACs of h_i^E(t) for the LIF spiking and rate models (PACs of h_i^I(t) are shown in S9 Text). In all panels, J^0_EE = 0, J^0_IE = 3, J^0_EI = 0.8 and τ_EI = 3 ms. For J^0_II = 4 (Fig 16A), increasing τ_II slows down the fluctuations. By contrast, changing τ_IE has only a very mild effect (S10 Text). This is because the fluctuations are essentially driven by the II interactions. For τ_II ≳ 15 ms, the fluctuation statistics are quantitatively similar in the spiking and the rate models: in both, the decorrelation time τ_dec ≈ 2 τ_II (Fig 16A, inset). Moreover, simulations indicate that the dynamics of the rate model are chaotic (Λ ≈ 1.7/τ_II). The trace in Fig 16B shows that with large τ_II (= 100 ms) the spiking pattern is bursty. The membrane potentials between bursts exhibit slow fluctuations because these are generated by the slow II connections. Fig 16C plots the PACs of h_i^E(t) for J^0_II = 1. Here also, the LIF rate model operates in a chaotic regime (Λ ≈ 120 s^{−1}). In the spiking model the PACs exhibit a slow time scale but also a fast one (the sharp peak around τ = 0). These correspond to the slow and fast fluctuations observable in the voltage traces in Fig 16D. Increasing τ_IE while keeping τ_EI = τ_II = 3 ms has a substantial effect on the slow component but hardly affects the fast component. When plotted vs. τ/τ_IE, the slow components of the PACs all collapse onto the same curve (Fig 16C, inset). This indicates that the EIE loop is essential in generating the slow, but not the fast, fluctuations. Fitting this slow component with the function A·[cosh(τ/τ_dec)]^{−1} yields τ_dec ≈ 2.4 τ_IE. Furthermore, increasing τ_II suppresses the fast fluctuations and amplifies the slow ones. These two effects saturate simultaneously when τ_II ≈ 10 ms (S11 Text). It can thus be inferred that the fast fluctuations are mostly generated by the II interactions. Their amplitude is suppressed as τ_II is increased because they become more strongly filtered. Concomitantly, the slow fluctuations become amplified. This is because the fast fluctuations smooth the effective transfer function of the E neurons in the low firing rate regime, so their suppression increases the gain of this transfer function. This explains the quantitative differences between the PACs in the spiking and the rate LIF networks when the II synapses are fast, and why these differences lessen as τ_II increases (S11 Text).
In the simulations reported in Fig 16 there is no recurrent excitation in the E population (J^0_EE = 0). Moreover, all the excitatory synapses onto the I population are slow. Both assumptions were made to reduce the number of parameters and simplify the analysis. In cortical networks, however, fast (AMPA) and slow (NMDA) excitation generally coexist (in fact, AMPA synapses are required to open the NMDA receptors). Moreover, recurrent excitation is thought to be substantial in general (see however [47]). Results presented in S12 Text show that the EIE loop can induce slow rate fluctuations in our network when it combines slow and fast excitatory synapses and when substantial recurrent excitation is present in the E population.
Discussion
Networks of neurons operating in the so-called balanced regime exhibit spiking activity with strong temporal variability and spatial heterogeneity. Previous theoretical studies have investigated this regime assuming that excitatory and inhibitory synapses are sufficiently fast compared to the neuronal dynamics. The nature of the balanced state is now fairly well understood in this case. By contrast, here we focused on networks in which some of the synapses are slow. To study the dynamics in these networks, we reduced them to a rate dynamics that we investigated by combining Dynamical Mean-Field Theory and simulations. Our key result is that when synaptic interactions are sufficiently strong and slow, chaotic fluctuations on the time scales of the synaptic dynamics emerge naturally from the network collective behavior. Moreover, the nature of the transition to chaos and the behavior in the chaotic regime are determined only by the neuronal f − I curve and not by the details of the spike-generation mechanism.
We identified two mechanisms for the emergence of asynchronous chaos in EI neuronal networks. One mechanism relies on II interactions whereas in the other the EIE feedback loop plays the key role. These mechanisms hold in rate models (Eq (3)) as well as in LIF spiking networks. By computing the maximum Lyapunov exponent, we provided direct evidence that in rate models these states are indeed chaotic. For LIF spiking networks, we argued that when the synapses are sufficiently slow, the observed activity fluctuations are chaotic since their statistics are quantitatively similar to those observed in the corresponding rate model. This similarity persists for synaptic time constants as small as the membrane time constant. This is in agreement with [33][34][35] which relied on numerical integration of the LIF model to compute the Lyapunov spectra of networks of various sizes and increasing synaptic time constants. They found that the LIF dynamics are chaotic only if the synapses are sufficiently slow.
In both mechanisms, the dynamics of the synaptic currents play the key role, whereas the intrinsic properties of the neurons enter only via their nonlinear instantaneous input-output transfer function. Since the synaptic currents are filtered versions of the neuronal spike trains, and the temporal fluctuations of the activity occur on the time scales of the synaptic currents, it is natural to qualify this dynamical regime as rate chaos. Although the features of the bifurcation to chaos may depend on the shape of the transfer function, as we have shown, the qualitative features of the chaotic state are very general, provided that the synaptic currents are sufficiently slow. Rate chaos is therefore a generic property of networks of spiking neurons operating in the balanced regime. We show in S3 Text that rate chaos also occurs in networks of non-leaky integrate-and-fire spiking neurons; in that case, the statistics of the fluctuations are similar to those of the model in Eq (3) with a threshold-linear transfer function. We also found rate chaos in biophysically more realistic network models in which the dynamics of the neurons and of the synapses are conductance-based (results not shown). In these cases, the dynamics of the synaptic conductances give rise to the chaotic fluctuations.
Quantitative mappings from spiking to rate models have been derived for networks in stationary asynchronous non-chaotic states [38] or responding to external fluctuating inputs [48]. Spiking dynamics also share qualitative similarities with rate models for networks operating in synchronous states [9-11, 38, 43]. To our knowledge, the current study is the first to report a quantitative correspondence between spiking and rate models operating in chaotic states.
The SCS model [19] has been widely used to explore the physiological [22,49] and computational significance of chaos in neuronal networks. Recent works have shown that because of the richness of its chaotic dynamics, the SCS model has remarkable learning capabilities [15][16][17][18]. Our work paves the way for an extension of these results to networks of spiking neurons with a connectivity satisfying Dale's law, which are biologically more realistic than the SCS model.
Another interesting implication of our work is in the field of random matrices. Given a dense N×N random matrix, A, with i.i.d. elements of zero mean and finite standard deviation (SD), in the large-N limit the eigenvalue of A/√N with the largest real part is real and equal to the SD [50,51] (more generally, the eigenvalues of A/√N are uniformly distributed within a disk of radius SD centered at the origin [50,51]). Several results regarding the spectra (bulk and outliers) of dense random matrices with structures reflecting Dale's law have been derived recently [52][53][54]. Less is known when the matrices are sparse. A byproduct of our approach is a pair of conjectures for the maximal eigenvalue of such sparse random matrices, namely Eqs (7) and (62), which we verified numerically.
Neuronal spiking statistics (e.g., firing rates, spike counts, inter-spike intervals) exhibit a very broad range of time scales during spontaneous or sensory-evoked activity in vivo (see e.g. [55,56]). Fluctuations on time scales longer than several hundreds of milliseconds can be accounted for by neuromodulation, which changes the global excitability of the cortical network, or by changes in behavioral state. Very fast fluctuations are naturally explained in the framework of the standard model of balanced excitation and inhibition [28][29][30]. By contrast, it is unclear how to explain modulations in the intermediate temporal range of a few tens to several hundreds of milliseconds. In fact, the standard framework of balanced networks predicts that fluctuations on this time scale are actively suppressed because the network state is very stable. Our work extends this framework and identifies two mechanisms by which modulations in this range can occur. In the II mechanism, inhibitory synapses must be strong and slower than 10-20 ms. GABA_A inhibition may be too fast for this [57] (see however [58]), but GABA_B synapses [59] are sufficiently slow. In contrast, the EIE mechanism operates when inhibition is fast. It requires slow recurrent excitation onto inhibitory neurons, with a time constant of a few to several tens of ms, as is typically the case for NMDA receptors (see e.g. [60][61][62]). Hence, the combination of GABA_A and NMDA synapses can generate chaotic dynamics in the cortex, with activity fluctuations on a time scale of several tens to a few hundreds of ms.
Note added in production: Following a request from the editors after formal acceptance of our article, we note that a recent paper [63] claims that spiking networks with instantaneous delayed synapses exhibit an asynchronous state similar to the chaotic state of the SCS model. However, this claim is incorrect and has been shown to rely on flawed analysis [64].
Models
Two-population leaky integrate-and-fire spiking network. The two-population network of leaky integrate-and-fire (LIF) neurons considered in this work consists of N_E excitatory (E) and N_I inhibitory (I) neurons. The subthreshold dynamics of the membrane potential, V_i^α, of neuron i in population α (i = 1, ..., N_α; α, β ∈ {E, I}) obey:

τ_m dV_i^α(t)/dt = −V_i^α(t) + Σ_β J_αβ Σ_j C_ij^αβ S_j^αβ(t) + I_α   (11)

where τ_m is the membrane time constant (we take τ_m = 10 ms for both populations), C_ij^αβ and J_αβ are respectively the connectivity matrix and the strength of the connections between the (presynaptic) population β and the (postsynaptic) population α, and I_α is the external feedforward input to population α. For simplicity we take N_E = N_I = N; however, all the results described in the paper also hold when the numbers of neurons differ between the populations (provided both are large). The variables S_j^αβ, which describe the synapses connecting neuron j in population β to population α, follow the dynamics:

τ_αβ dS_j^αβ(t)/dt = −S_j^αβ(t) + Σ_b δ(t − t_j^b)   (12)

where τ_αβ is the synaptic time constant and the sum is over all the spikes emitted at times t_j^b < t. Eqs (11)-(12) are supplemented by a reset condition: if at time t_sp, V_i^α(t_sp) = 1, the neuron emits a spike and V_i^α(t_sp^+) = 0. For simplicity we do not include a neuronal refractory period. We assume that the connectivity is random, with all the C_ij^αβ uncorrelated and such that C_ij^αβ = 1 with probability K/N and 0 otherwise. Hence each neuron is connected, on average, to K neurons from its own population as well as to K neurons from the other population. When varying the connectivity K we scale the interaction strengths and the feedforward inputs according to J_αβ = J^0_αβ/√K and I_α = √K I^0_α.
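A compact sketch of these spiking dynamics is given below. It uses plain Euler stepping with threshold 1 and reset 0; the paper's simulations instead used a second-order Runge-Kutta scheme with spike-time interpolation (see Numerical simulations), and the network sizes and synaptic parameters here are illustrative assumptions.

```python
# Sketch: Euler simulation of the two-population LIF network, Eqs (11)-(12).
import numpy as np

rng = np.random.default_rng(3)
N, K, tau_m, dt = 400, 100, 10.0, 0.1            # times in ms
J0 = {"EE": 0.0, "EI": 0.8, "IE": 3.0, "II": 4.0}
tau_s = {"EE": 3.0, "EI": 3.0, "IE": 3.0, "II": 100.0}
I0 = {"E": 0.3, "I": 0.3}
sgn = {"E": +1.0, "I": -1.0}

C = {ab: (rng.random((N, N)) < K / N).astype(float) for ab in J0}
V = {a: rng.random(N) for a in "EI"}             # membrane potentials in [0, 1)
S = {ab: np.zeros(N) for ab in J0}               # filtered presynaptic spike trains

for _ in range(5000):
    spikes = {b: V[b] >= 1.0 for b in "EI"}      # threshold crossing
    for b in "EI":
        V[b][spikes[b]] = 0.0                    # reset
    for ab in S:                                 # tau_ab dS/dt = -S + spike train
        S[ab] += (dt * (-S[ab]) + spikes[ab[1]]) / tau_s[ab]
    for a in "EI":
        rec = sum(sgn[ab[1]] * (J0[ab] / np.sqrt(K)) * (C[ab] @ S[ab])
                  for ab in S if ab[0] == a)
        V[a] += (dt / tau_m) * (-V[a] + np.sqrt(K) * I0[a] + rec)
```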
Network of inhibitory leaky integrate-and-fire neurons
The dynamics of the one-population network of spiking LIF neurons considered in the first part of the paper are:

τ_m dV_i(t)/dt = −V_i(t) + J Σ_j C_ij S_j(t) + I   (13)

supplemented with the reset condition at threshold. The elements of the connectivity matrix, C_ij, are uncorrelated and such that C_ij = 1 with probability K/N and 0 otherwise. All neurons are inhibitory, thus J < 0. The synaptic dynamics are:

τ_syn dS_j(t)/dt = −S_j(t) + Σ_b δ(t − t_j^b)   (14)

where τ_syn is the synaptic time constant of the inhibition and the sum is over all the spikes emitted at times t_j^b < t. The interaction strength and the feedforward input scale with K as J = −J_0/√K and I = √K I_0.
Network of non-leaky integrate-and-fire neurons
We consider this model briefly in S3 Text. The network architecture and the synaptic dynamics are as above. The single-neuron dynamics of the non-leaky integrate-and-fire (NLIF) neurons are similar to those of the LIF neurons, except that the leak terms (the first terms on the right-hand side of Eqs (11) and (13)) are omitted.
Rate dynamics for spiking networks with slow synapses
If the synapses are much slower than the membrane time constant, the full dynamics of a spiking network can be approximated by the dynamics of the synapses driven by the instantaneous firing rates of the neurons, namely:

τ_αβ dS_j^αβ(t)/dt = −S_j^αβ(t) + g(h_j^β(t))   (15)

where g(x) is the transfer function of the neuron (the f − I curve) [20]. In particular, for the LIF networks,

g(x) = −[τ_m ln(1 − 1/x)]^{−1} H(x − 1)   (16)

with H(x) = 1 for x > 0 and H(x) = 0 otherwise. For the NLIF networks, the transfer function is threshold-linear: g(x) = x H(x).
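A sketch of this LIF transfer function follows. The closed form used here, r = 1/(τ_m ln(x/(x−1))) for x > 1 and 0 otherwise, is inferred from the stated threshold (1) and reset (0) of the model rather than copied verbatim from the paper's Eq (16).

```python
# Sketch: the LIF f-I curve with threshold 1 and reset 0 (inferred form).
import numpy as np

def g_lif(x, tau_m=0.010):                   # tau_m = 10 ms, in seconds
    x = np.atleast_1d(np.asarray(x, dtype=float))
    r = np.zeros_like(x)
    supra = x > 1.0                          # only suprathreshold inputs fire
    r[supra] = 1.0 / (tau_m * np.log(x[supra] / (x[supra] - 1.0)))
    return r

print(g_lif([0.5, 1.1, 2.0, 5.0]))           # rates in Hz
```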
Defining h_i^αβ ≜ J_αβ Σ_j C_ij^αβ S_j^αβ, the dynamics of the h_i^αβ are given by

τ_αβ dh_i^αβ(t)/dt = −h_i^αβ(t) + J_αβ Σ_j C_ij^αβ g(h_j^β(t))   (17)

We denote by h_i^β the total input into neuron i in population β, h_i^β(t) = I_β + Σ_α h_i^βα(t). For networks comprising only one population of inhibitory spiking neurons we drop the superscript β = I and denote this input by h_i. The dynamics then yield:

τ_syn dh_i(t)/dt = −h_i(t) + I + J Σ_j C_ij g(h_j(t))   (18)

where τ_syn is the inhibitory synaptic time constant.
Dynamical Mean-Field Theory of the Single Inhibitory Population
A Dynamical Mean-Field Theory (DMFT) can be developed to investigate the rate model, Eq (17), for a general transfer function under the assumption 1 ≪ K ≪ N. Here we provide a full analysis of a one-population network of inhibitory neurons whose dynamics are given by Eq (18). We take I = √K I_0 as the external input and J = −J_0/√K as the coupling strength. In this case, a functional-integral derivation shows that these dynamics can be written as:

τ_syn dh_i(t)/dt = −h_i(t) + η_i(t)   (19)

where η_i(t) is a Gaussian noise:

η_i(t) = μ + J_0 [√q z_i + ξ_i(t)]   (20)

with z_i i.i.d. quenched Gaussian variables with zero mean and unit standard deviation (SD), and ξ_i(t) Gaussian noises with ⟨ξ_i(t)⟩_t = 0 and ⟨ξ_i(t)ξ_j(t+τ)⟩_t = C_ξ(τ)δ_ij, where ⟨·⟩_t stands for averaging over time. Therefore, in general, the inputs to the neurons display temporal as well as quenched fluctuations.
The self-consistency conditions determine the mean, the temporal correlations and the quenched fluctuations of the noise in terms of the statistics of the network activity, where ⟨·⟩ and [·] stand for averaging over noise and quenched disorder, respectively. In particular, the quantities q and μ obey (Eqs (24)-(25)):

μ = √K (I_0 − J_0 [⟨g(h)⟩]),   q = [⟨g(h)⟩²]

Here σ(τ) = [⟨h(t)h(t+τ)⟩] − μ² denotes the population-averaged autocovariance (PAC) of the input to the neurons, and we define σ_0 = σ(0) and Dx = e^{−x²/2} dx/√(2π). In the limit K → ∞, μ must remain finite. This implies that the population-averaged firing rate, [⟨g(h)⟩] = I_0/J_0, does not depend on the specifics of the transfer function of the neurons and varies linearly with I_0. This is a key outcome of the balance between the feedforward excitatory and the recurrent inhibitory inputs to the neurons.
To express C_ξ(τ) in terms of σ, we note that the vector (h(t), h(t+τ))^T is a bivariate Gaussian, so we need to calculate E[g(μ+x)g(μ+y)], where (x, y)^T has zero mean and a covariance matrix with diagonal elements σ_0 and off-diagonal elements σ(τ). This average can be written in closed form with the help of G(x) = ∫ g(x) dx. For positive σ, the resulting equation for σ(τ) has the form of the motion of a particle in a potential V(σ; σ_0); therefore the quantity (1/2)(dσ/dτ)² + V(σ; σ_0) is conserved under the dynamics, Eq (29). To simplify notation we drop the parameter σ_0 and denote the potential by V(σ). The first, second and third order derivatives of the potential with respect to σ are denoted V′(σ), V″(σ) and V‴(σ).
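The bivariate Gaussian average above can also be estimated directly. The following sketch uses a plain Monte Carlo estimate (a Gauss-Hermite version would be analogous) and assumes 0 ≤ σ ≤ σ_0, so that the correlated pair can be built from a shared Gaussian component.

```python
# Sketch: estimate E[g(mu+x) g(mu+y)] with cov(x,y) = sigma and
# var(x) = var(y) = sigma0, via a shared Gaussian component z.
import numpy as np

def gauss_pair_mean(g, mu, sigma0, sigma, n=200000, seed=0):
    rng = np.random.default_rng(seed)
    z, u, v = rng.normal(0.0, 1.0, (3, n))
    x = np.sqrt(sigma0 - sigma) * u + np.sqrt(sigma) * z
    y = np.sqrt(sigma0 - sigma) * v + np.sqrt(sigma) * z
    return np.mean(g(mu + x) * g(mu + y))
```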
A bifurcation between these behaviors occurs at some critical value, J_c, such that for J_0 < J_c the self-consistent solutions of Eq (29) are either oscillatory or constant in τ, whereas for J_0 > J_c they are either oscillatory or decay monotonically. A stability analysis of these different solutions is beyond the scope of this paper; instead, we rely on numerical simulations of the full dynamics. They indicate that the network dynamics always reach a fixed point for sufficiently small J_0. For sufficiently large J_0 the fixed point is unstable and the network settles in a state in which σ(τ) decays monotonically with τ. Simulations also show that the maximum Lyapunov exponent in these cases is positive (see below); i.e., the network is in a chaotic state. For values of J_0 in between these two regimes, the network displays oscillatory patterns of activity. However, for increasing network size N, the range of J_0 in which oscillations are observed shrinks to zero (not shown). Therefore, for large N the bifurcation between fixed point and chaos occurs abruptly at some critical value J_c. A similar phenomenology occurs for other non-linear, positive, monotonically increasing transfer functions.
In summary, for a fixed feedforward input, I_0, there are two regimes in the large-N limit:

1. For J_0 < J_c the stable state is a fixed point. The distribution of the inputs to the neurons is Gaussian, with mean μ and variance determined by the self-consistent mean-field equations (Eqs (24)-(25)). For a transfer function, g(x), which is zero when x is smaller than some threshold T (functions without threshold correspond to T = −∞), the distribution of the neuronal firing rates, r_i, in this state is given by Eq (36).

2. For J_0 > J_c the stable state is chaotic. The distribution of the time-averaged inputs is Gaussian with mean μ and variance σ_∞ = J_0² q, and the autocovariance of the inputs is determined by Eq (29), which depends on σ_0. The quantities μ, σ_0 and σ_∞ are determined by the self-consistent equations (Eqs (37)-(38)) together with Eq (25).
For the two-population network the derivation proceeds analogously in Fourier space, where each synaptic filter contributes a factor 1/(1 + iτ_αβ ω). Transforming back to the time domain, and using Δ_αβ = σ_αβ + (μ_αβ)², one obtains a set of self-consistent equations for the four PACs σ_αβ. The relevant solutions have to satisfy four boundary conditions. In general, these dynamical equations cannot be written like those of a particle in some potential, which makes the study of their solutions substantially more difficult than in the one-population case.
Separation of time scales
A potential function can be written for the DMFT if the time scale of one type of synapse is substantially larger than the others, which makes it possible to treat the latter as instantaneous. We carry out this analysis below assuming τ_IE ≫ τ_EI, τ_EE, τ_II.
Setting all the synapses except those from E neurons to I neurons to be instantaneous implies that, except for σ_IE, all the PACs obey algebraic relations involving C̃_β, which is defined in Eq (44). Since τ_IE is now the only time scale we can take τ_IE = 1. Also, σ_EE, σ_EI, σ_II and the potential V are now functions of the single variable σ_IE. Therefore, the differential equation for σ_IE can be written as the motion of a particle in the potential V(σ_IE). The instability of the fixed point occurs when V′(σ_IE) and V″(σ_IE), the first and the second derivatives of V with respect to σ_IE, vanish. Using Eq (49) and σ_α = σ_αE + σ_αI, this condition can be expressed in terms of the C̃_αβ (α, β ∈ {E, I}), the N×N sparse matrices built from the connectivity matrices C^αβ between populations β (presynaptic) and α (postsynaptic). We are interested in instability onsets at which a real eigenvalue crosses zero. Using Eq (56), it is straightforward to show that such an instability happens if the synaptic strengths satisfy Eq (58). If J^0_EE = 0, one can rewrite Eq (58) in the form of Eq (59), with a matrix M defined accordingly (Eq (60)). Let us assume that J^0_II is fixed and such that for small enough J^0_IE J^0_EI the fixed point is stable. Upon increasing J^0_IE J^0_EI, the fixed point loses stability at the smallest value of J^0_IE J^0_EI for which Eq (59) is satisfied, that is, the value at which the largest real eigenvalue, λ_max, of the matrix M crosses 1. If this instability also corresponds to chaos onset, Eq (54), this would imply that the condition λ_max = 1 is equivalent to Eq (61). Interestingly, this condition means that the variance of the elements of the matrix √N M is equal to one, leading us to conjecture that, more generally, the eigenvalue of the latter with the largest real part is given by Eq (62).
Numerical simulations
Integration of network dynamics and mean-field equation solutions
The integration of the differential equations, Eq (15) and Eq (18) (Eq (3) in the main text), was performed with a C code using the Euler method with fixed Δt = τ_syn/20 (the validity of the results was verified using smaller values of Δt).
Simulations of the LIF spiking networks were done using a second-order Runge-Kutta integration scheme supplemented by interpolation of spike times as detailed in [65]. In all the spiking network simulations the time step was Δt = 0.1 ms.
Self-consistent mean-field equations were solved with the MATLAB function fsolve, which implements a trust-region-dogleg algorithm, or with the Levenberg-Marquardt algorithm for non-square systems. Numerical calculation of integrals was done with the MATLAB function trapz.
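For readers without MATLAB, the same kind of self-consistent system can be solved with SciPy. The sketch below solves for (μ, q) at the fixed point; the two conditions used, the balance constraint [⟨g⟩] = I_0/J_0 and q = [⟨g⟩²] with h = μ + J_0√q z, are an inferred reading of Eqs (24)-(25), not a verbatim copy, and the logistic sigmoid is again a stand-in for ϕ.

```python
# Sketch: solve inferred fixed-point conditions with scipy.optimize.fsolve.
import numpy as np
from scipy.optimize import fsolve

z, w = np.polynomial.hermite_e.hermegauss(201)
Dz = w / np.sqrt(2.0 * np.pi)                    # Gaussian measure weights
sig = lambda h: 1.0 / (1.0 + np.exp(-h))

def residuals(p, J0=6.0, I0=1.0):
    mu, q = p
    h = mu + J0 * np.sqrt(max(q, 0.0)) * z
    return [np.sum(Dz * sig(h)) - I0 / J0,       # balance: [<g>] = I0/J0
            np.sum(Dz * sig(h) ** 2) - q]        # quenched variance condition

mu, q = fsolve(residuals, x0=[-2.0, 0.05])
print(mu, q)
```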
Population-averaged autocovariance
The population-averaged autocovariance (PAC) functions of neuronal quantities f_i(t) (i = 1...N) were computed as

σ(τ) = (1/N) Σ_i (1/N_t) Σ_t f_i(t) f_i(t+τ) − μ²,   μ = (1/(N N_t)) Σ_i Σ_t f_i(t)

where N_t is the number of time samples used for the calculation of the PAC. In all figures f_i(t) = h_i(t), except in Fig 16 where f_i(t) = I_α + h_i^αE(t) − h_i^αI(t). All PACs of spiking networks were calculated over 163.84 s and averaged over 10 realizations of the connectivity. For the models of Eq (15) and Eq (18), PACs were computed over 2048 τ_syn after discarding 200 τ_syn of transient dynamics, and averaged over 8 realizations.
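A sketch of this estimator, computed with FFTs for efficiency; f is assumed to be an (N_t, N) array of time samples, and the normalization follows the definition σ(τ) = [⟨f(t)f(t+τ)⟩] − μ² used above.

```python
# Sketch: population-averaged autocovariance of f_i(t) via zero-padded FFTs.
import numpy as np

def pac(f):
    nt = f.shape[0]
    mu = f.mean()                                    # population-and-time mean
    F = np.fft.rfft(f, n=2 * nt, axis=0)             # zero-pad: no wrap-around
    ac = np.fft.irfft(F * np.conj(F), axis=0)[:nt]
    ac /= (nt - np.arange(nt))[:, None]              # per-lag normalization
    return ac.mean(axis=1) - mu ** 2                 # average over neurons
```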
Largest Lyapunov exponents
To calculate the maximal Lyapunov exponent, Λ, of the inhibitory network, Eq (3), we first simulated the system for a sufficiently long duration (200 τ_syn) so that it settled on the attractor of the dynamics. Denoting by h̃* the network state at that time, we then ran two copies of the dynamics, one with initial condition h̃_1(t = 0) = h̃* and the other with slightly perturbed initial condition h̃_2(t = 0) = h̃* + ε/√N (so that ||h̃_1(0) − h̃_2(0)|| = ε, where ||·|| is the l_2 norm).
Monitoring the difference d̃(t) = h̃_1(t) − h̃_2(t), we computed T^(1)_reset = min(arg(||d̃(t)|| = D_max), T_max) and D^(1)_reset = ||d̃(T^(1)_reset)||. We then reinitialized the dynamics of the second network copy to h̃_1(T^(1)_reset) + d̃(T^(1)_reset) ε/D^(1)_reset, so that the distance between the two copies was reset to ε while preserving the direction of the perturbation, and iterated this procedure; Λ was estimated from the average of the logarithmic expansion factors log(D^(k)_reset/ε) over the corresponding durations.

Fraction of networks with a stable fixed point in rate dynamics

Fig 10D in the main text plots the lines in the J_0 − I_0 phase diagram of the threshold power-law rate model for which 5%, 50% and 95% of randomly chosen networks have dynamics which converge to a fixed point. To compute these lines we simulated, for each value of γ and J_0, 100 realizations of the network. For each realization, we computed the population average of the temporal variance of the synaptic inputs,

ρ = (1/N) Σ_i [ ⟨h_i(t)²⟩_t − ⟨h_i(t)⟩_t² ]

where the time averages run over the N_tot time steps of the simulation remaining after discarding a transient with a duration of 256 τ_syn. The fixed point was considered unstable if ρ > 10^{−9}. The fraction of unstable networks, F_u, was fitted with a logistic function of J_0.

S9 Text. The two mechanisms underlying asynchronous chaos in two-population LIF networks: results for inhibitory neurons. (PDF)
S10 Text. Two-population LIF rate and spiking models: in the II mechanism the PAC depends very mildly on τ_IE. (PDF)
S11 Text. Two-population LIF rate and spiking models: in the EIE mechanism the slow component of the PAC depends very mildly on τ_II. (PDF)
S12 Text. Two-population integrate-and-fire network with recurrent EE excitation, AMPA and NMDA synapses and fast inhibition. (PDF)
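A sketch of the two-copy procedure described under "Largest Lyapunov exponents" above, read as a standard Benettin scheme: the separation is renormalized to ε whenever it reaches D_max or after T_max, and Λ is the time-averaged logarithmic expansion rate. The function step, which advances the rate dynamics by dt, is assumed to be supplied by the user.

```python
# Sketch: largest Lyapunov exponent by repeated renormalization of the
# separation between two copies of the dynamics (Benettin-style).
import numpy as np

def lyapunov(step, h0, eps=1e-8, D_max=1e-3, T_max=50.0, dt=0.05, T_total=5000.0):
    rng = np.random.default_rng(0)
    h1 = h0.copy()
    h2 = h0 + eps * rng.normal(0.0, 1.0, h0.size) / np.sqrt(h0.size)
    log_sum, t, t_seg = 0.0, 0.0, 0.0
    while t < T_total:
        h1, h2 = step(h1, dt), step(h2, dt)
        t, t_seg = t + dt, t_seg + dt
        d = np.linalg.norm(h2 - h1)
        if d >= D_max or t_seg >= T_max:
            log_sum += np.log(d / eps)
            h2 = h1 + (eps / d) * (h2 - h1)      # rescale separation back to eps
            t_seg = 0.0
    return log_sum / t                            # average expansion rate
```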
"Computer Science",
"Physics"
] |
The Vocabulary-Comprehension Relationship across the Disciplines: Implications for Instruction
The main purpose of vocabulary instruction is to enhance and support reading comprehension. This goal spans across the grade levels and different disciplines and is supported by a plethora of research. In recent years, a great deal of needed attention has been finally given to academic vocabulary and disciplinary literacy. To contribute to this body of knowledge, we believe it is critical to examine how the complex relationship between vocabulary and comprehension may be addressed in secondary content area classrooms, given the unique nature of the academic vocabulary students encounter daily in school. This conceptual paper contains the following: (1) definition of academic vocabulary; (2) description of what is known about the vocabulary–comprehension relationship; (3) conceptualization of the intersection of academic vocabulary and the vocabulary–comprehension relationship; and (4) instructional implications emerging from this intersection. Perhaps this conceptualization may provide disciplinary practitioners more insight to help them make decisions regarding vocabulary instruction.
In recent years, a great deal of needed attention has been given to the literacy demands of different content areas (i.e., science, history, mathematics, literature) and how these demands are unique to each subject area. Moving away from a focus on generic strategies promoted under the umbrella of content area literacy, we are now drawing attention to disciplinary literacy, with an emphasis on discipline-specific practices employed by the experts in each field of study [1][2][3]. Disciplinary literacy, in contrast to content area literacy, looks closely at what Fang and Coatoam call "disciplinary habits of mind" [4] (p. 628), that is, the unique ways in which experts in different subject areas communicate through reading, writing, viewing, visually representing, speaking, and reasoning. As a result, there is increasing evidence that infusing literacy practices tailored to particular disciplines can enhance students' academic achievement [5].
While both content area literacy and disciplinary literacy practices are useful for promoting learning in various subject-matter areas [6], one important component of each view is vocabulary. We know that the main purpose of vocabulary instruction is to enhance and support reading comprehension. This goal spans the grade levels and the different disciplines and has been supported by a plethora of research across the decades. With regard to the different disciplines, there are two major categories of vocabulary: the specific academic vocabularies associated with particular disciplines, and the general vocabularies shared by the disciplines. To contribute to the body of knowledge about vocabulary and comprehension, we believe it is critical to examine how the complex relationship between vocabulary and comprehension may be addressed in secondary content area classrooms, given the unique nature of the academic vocabulary students encounter daily in school. An examination of the intersection of academic vocabulary in the disciplines and the vocabulary-comprehension relationship may provide disciplinary practitioners more insight to inform decision-making regarding vocabulary instruction. We begin by providing an overview of the meaning of academic vocabulary and an explanation of the relationship between vocabulary and comprehension. We then examine the intersection of academic vocabulary and the vocabulary-comprehension relationship and the instructional implications for enhancing learning.
Academic Vocabulary
Vocabulary growth occurs in both oral and written contexts. Oral contexts appear to support vocabulary more easily, given the natural opportunity for multiple uses and repetition of words as well as the presence of concrete referents [7]. In the case of written contexts, vocabulary acquisition involves engagement with more sophisticated language, especially as students move into the upper grades. In middle schools and high schools, the focus is primarily on reading texts across different subject-matter areas, each of which has its own distinctive language patterns. Nagy and Townsend define this academic language as the "specialized language, both oral and written, of academic settings that facilitate communication and thinking about disciplinary content" [8] (p. 92). In the words of Zwiers, the academic language of each discipline is "the set of words, grammar, and organizational strategies used to describe complex ideas, higher-order thinking processes, and abstract concepts" [9] (p. 20). The complex and multi-dimensional nature of each discipline requires that students learn the language of each area in order to gain conceptual knowledge in these fields.
Academic vocabulary is a critical component embedded in the language of all disciplines, and it places challenging demands on all learners. This vocabulary is critically important for disciplinary learning and thinking. Descriptions of academic vocabulary focus on two distinctive categories of words: technical, content-specific terms and general academic vocabulary. As noted by Baumann and Graves [10], scholars label these content-specific terms in different ways, including technical vocabulary and domain-specific words. Regardless of the label, these are terms representing concepts in particular disciplines, such as fossil fuels, greenhouse effect, and atmosphere when learning about global warming.
General academic vocabulary, on the other hand, consists of generalized terms that appear across different content areas, with their meanings sometimes changing depending upon the context. Examples of general academic vocabulary include analyze, market, legend, and grade. McKeown and her colleagues aptly note the importance of general academic vocabulary, especially for second-language learners, by stating that "[high frequency general words] provide the foundation upon which the knowledge of rarer words must build. Developing knowledge of academic vocabulary-mid-frequency, high dispersion words preferentially appearing in academic written texts-is particularly important for K-12 vocabulary development and for advanced language learners" [11] (pp. 55-56).
Vocabulary-Comprehension Relationship
For many decades, we have accumulated well-documented evidence that vocabulary size is a strong predictor of a student's ability to comprehend text [12,13]. This strong correlation between vocabulary and comprehension leads to an obvious conclusion: if teachers teach word meanings, students will comprehend better. However, the relationship between vocabulary and comprehension is not that straightforward; it is highly complex and involves a host of other variables. To explain this relationship, Anderson and Freebody [12] considered three standpoints that they labeled the instrumental hypothesis, the knowledge hypothesis, and the aptitude hypothesis. Since then, researchers in the field have augmented these explanations with others, such as the access hypothesis [14], the metalinguistic hypothesis [15], and the reciprocal hypothesis [16]. These explanations do not contradict each other, but rather illustrate the interplay of multiple variables involved in the vocabulary-comprehension relationship [17]. We provide an explanation of each standpoint or hypothesis below.
Instrumental hypothesis. This common-sense explanation suggests that learning word meanings influences comprehension, thus leaning more toward a causal connection between vocabulary and comprehension [12]. Stahl [18] points out two implications embedded in this standpoint. First, if knowledge of words can directly enhance comprehension, texts with more challenging words will be more difficult to understand. Another implication is that directly teaching word meanings will improve comprehension. However, Stahl [18] cautions that not all vocabulary instructional methods will influence comprehension. While knowing word meanings is necessary for comprehension to occur, it is not a sufficient explanation by itself to explain the relationship between vocabulary and comprehension [15].
Knowledge hypothesis. Moving away from the direct, causal relationship implied by the instrumental hypothesis, the knowledge hypothesis takes into account the influence of a mediating variable: background knowledge. This explanation of the relationship between vocabulary and comprehension illustrates that word meanings are not learned in isolation; rather, they are developed while learning about a new topic. Such contexts allow students to see how words are semantically related to other words [19]. Furthermore, as Nagy states, " . . . it is not [the idea of] knowing the meaning of words that causes readers to understand what they read; rather, knowing the meanings of words is an indication of the readers' knowledge of a topic or concept. It is this knowledge that helps readers comprehend" [15] (p. 31). Thus, word knowledge is one aspect of topic knowledge needed for comprehension to occur, and learning about a topic provides the opportunity to increase word knowledge [15,18].
Aptitude hypothesis. Similar to the knowledge hypothesis, the aptitude hypothesis also considers a third variable to explain the relationship between vocabulary and comprehension. In this case, verbal ability plays an important role, since students with a high verbal aptitude tend to know more words, are better able to learn new words, and comprehend what they read [12,17]. Both Sternberg and Powell [20] and Stahl and Nagy [17] expand the aptitude hypothesis in different ways. Sternberg and Powell [20] interpret this hypothesis to include a reader's ability to make inferences. In this way, inferential ability, as a subset of the aptitude hypothesis, has a critical impact on comprehension, especially when readers must infer the meanings of unfamiliar words encountered in texts. Using a different perspective on the aptitude hypothesis, Stahl and Nagy [17] consider metalinguistic aspects of word learning that can impact comprehension. They argue that readers use their knowledge of language as they construct an understanding of what is being read; that is, they use what they know about morphology (e.g., affixes, roots), syntax, figurative language, polysemy (i.e., multiple meanings of some words), and other language cueing systems, all of which relate to the aptitude hypothesis.
Reciprocal hypothesis. Beck, McKeown, and Omanson question, " . . . are people good comprehenders because they know a lot of words, or do people know a lot of words because they are good comprehenders and in the course of comprehending text, learn a lot of words, or is there some combination of directionality?" [21] (pp. 147-148). To address this question, Stanovich [16] suggests that there is a reciprocal relationship between vocabulary and comprehension, where vocabulary is increased through reading and comprehending, and comprehension is enhanced by knowledge of more words. In the words of Stahl and Nagy, "having a bigger vocabulary makes you a better reader, being a better reader makes it possible for you to read more, and reading more gives you a bigger vocabulary" [17] (p. 13).
Access hypothesis. To explain the complex relationship between vocabulary and reading comprehension, Mezynski [14] suggests another dimension called the access hypothesis. Based upon the theoretical work of LaBerge and Samuels [22] concerning automaticity in reading, the access hypothesis indicates that the quick and easy retrieval of word meanings is necessary for comprehension to occur. Thus, readers must know the various aspects of word meanings (e.g., correct nuances of meanings) well enough for easy access and use while reading.
Each of these explanations helps to illustrate the complexity of the relationship between vocabulary and comprehension, and the explanations do not contradict each other. Furthermore, they offer a way to examine the relationship between academic vocabulary and reading comprehension in different disciplines.
Intersection of Academic Vocabulary and the Vocabulary-Comprehension Relationship
Teacher beliefs about teaching in general, and specifically about vocabulary learning and instruction, are important across all grade levels and disciplines [23]. One study, by Konopak and Williams [19], examines teacher beliefs about vocabulary teaching and learning from the perspective of the relationship between vocabulary and comprehension. Informed by this work, we attempt to conceptualize the intersection between academic vocabulary across subject-matter areas specifically and the vocabulary-comprehension relationship as represented by the aforementioned hypotheses. This perspective may provide insights into which vocabulary practices in content area classrooms may enhance reading comprehension. We address each hypothesis or standpoint in light of both technical, content-specific vocabulary and general academic vocabulary.
Instrumental hypothesis. This hypothesis supports the direct teaching of both technical vocabulary and general academic vocabulary. Explicit instruction that provides definitions of words, both technical and general, is an initiating, introductory event that is necessary but not totally sufficient to ensure that students internalize word meanings. Variability in the kinds of content-specific words is one factor that must be considered. Some words lend themselves well to explicit instruction with definitions. For example, a simple definition for words such as triangle and perimeter in mathematics may suffice. Direct instruction for some general academic terms may also work for such words and phrases as find the least amount and record your answer. However, other content-specific words need more detailed and integrated instruction for students to internalize these ideas. For example, in keeping with mathematics examples, the more complex concepts of both slope and functions require more detailed and extensive instruction than the use of simple explanations using definitions. Furthermore, particular general academic words also require more in-depth instruction, with words and phrases such as analyze the data, demonstrate your understanding, and relationship to.
Knowledge hypothesis. Conceptual knowledge speaks explicitly to the teaching and learning of disciplinary vocabulary. In the case of content-specific academic vocabulary, building word knowledge is closely related to building conceptual knowledge. As Vacca and his colleagues point out, "Words are labels for concepts. A single concept, however, represents much more than the meaning of a single word. It may take thousands of words to explain a concept" [24] (p. 243). Moving away from the definitional level, a focus on conceptual knowledge acquisition requires that students engage in meaningful, purposeful experiences, both firsthand and vicarious, in order to learn [24].
Conceptual knowledge, in this case subject-matter knowledge, is part of a reader's prior knowledge. Two types of prior knowledge are important for literacy learning in the disciplines: topic knowledge and domain knowledge, both of which involve academic vocabulary [25][26][27]. Topic knowledge focuses on the depth of knowledge a student may have about a topic, that is, what background knowledge and experiences the student has acquired about a topic or a concept. If a student has a strong knowledge base about the topic of diabetes, for example, this student will also have depth of knowledge about the words and terms that are used in talking and writing about this topic (how well the words are known), words and ideas such as glucose level, insulin, and proper diet. Domain knowledge, on the other hand, represents the breadth of knowledge that readers have about a particular discipline, including not only the breadth of knowledge about vocabulary (the size of word knowledge), but also the language and thinking associated with a given field of study. In other words, domain knowledge can be considered the general knowledge experts have in a particular field of study [25]. In the diabetes example, domain knowledge would include knowledge about metabolism, how glucose is used as the main source of energy for the body, health problems caused by excess glucose in the blood (e.g., neuropathy, hypertension, ketones), and the use of hemoglobin testing for measuring control of diabetes. Specifically in regard to vocabulary knowledge, both topic and domain knowledge illustrate Schmitt's statement that "all aspects of vocabulary knowledge are interrelated" [28] (p. 942).
In addition to the role of vocabulary in conceptual knowledge, it is important to consider academic vocabulary in light of both linguistic and lexical knowledge. From a disciplinary perspective, linguistic knowledge is knowledge of the language used in particular subject-matter areas and knowledge of language differences across disciplines. Linguistic knowledge is the basis from which students understand how authors frame the ways in which concepts are explained and described in disciplinary texts. This includes such features as syntactic knowledge, nominalization, multiple meanings of similar words, collocational knowledge, and morphological structures [11]. Table 1 provides explanations and examples for some of these particular features in regard to content-specific vocabulary and general academic vocabulary.

Aptitude hypothesis. The relationship between vocabulary and comprehension in different disciplines is also influenced by verbal ability. As students read the informational texts that are unique to each content area, they need to be skillful in inferring particular meanings of technical vocabulary as well as the different uses of general academic vocabulary in each discipline of study. For both sets of words, metalinguistic knowledge enables readers to reflect upon and manipulate the special vocabulary and language found in each discipline [15]. Given that all the disciplines have unique and significantly different language patterns and terminology [4], it is even more imperative that students acquire the metalinguistic knowledge needed for each field.
Reciprocal hypothesis. The reciprocal relationship between vocabulary and comprehension holds true for reading and understanding in the content areas. When students read more in science, history, and the other disciplines, they are exposed to more of the technical terms as well as the general academic terms used in conveying meaning. The more they read, the more both vocabularies grow and become internalized. The richer their vocabularies become, the better they are able to comprehend different texts and more challenging texts.
Access hypothesis. As in any field of study, the need to internalize word meanings to the point of automatic retrieval is important for comprehension to occur. Again, the richness or depth of word knowledge, as well as the breadth or size of word knowledge about a given topic, aids in the accessibility of word meanings. This accessibility is needed for both content-specific words and general academic vocabulary. Students need multiple opportunities to engage in meaningful, topic-centered contexts and situations, in which both types of disciplinary vocabulary are internalized. Based upon the International Literacy Association's use of the term "literacy" [29], these encounters can include opportunities for students to read, write, speak, listen, view, and visually represent their understandings about specific topics across all disciplines, opportunities that cannot occur without vocabulary and language.
It is important to keep in mind that vocabulary learning is complex and multifaceted. Hence, all of these hypotheses have value and play different roles in helping students attain the necessary vocabulary to be successful in understanding academic texts.
Instructional Implications
The vocabulary-comprehension hypotheses previously described form the basis of effective vocabulary instructional practices across different disciplines. Structured lesson frameworks typically include teacher preparation, initial explanations, applications, and reinforcement [30]. The support for both content-specific terms and general academic vocabulary can be integrated simultaneously in these instructional formats. Furthermore, each aspect of the lesson draws from what we know about the vocabulary-comprehension connection in disciplinary learning. However, it is important to be mindful that the hypotheses used to explain the vocabulary-comprehension connection can be recursive and overlapping in each segment of a structured lesson.
Teacher preparation. A critical part of any vocabulary lesson is the careful selection of words and phrases that need close attention to support reading. To support comprehension, these words and phrases need to be those that are important for conceptual learning (i.e., technical words) and those that build upon the reader's linguistic knowledge (i.e., general academic vocabulary). Ultimately, the assessment of a student's background knowledge about the terms and phrases is needed for making informed decisions about word selection. The use of the Knowledge Rating Scale [31] for such purposes should include not only the content-specific words but also general academic words and phrases to ensure that all aspects of vocabulary and language are taken into consideration. The knowledge hypothesis supports these efforts to attend to the background knowledge of students.
Initial explanation. The work of Beck and her associates [32] provides important guidelines for introducing new terminology to students. They advocate for the introduction of new words and phrases to include both a context and a "student-friendly" definition that students can understand. This initial step embraces the instrumental hypothesis, with a focus on establishing the basic meanings of the terms within a context and not in isolation. The context would especially provide students with a sense of how general academic terms and phrases are used within the language of the text. Again, the variability of both content-specific vocabulary and general academic vocabulary as previously described (i.e., the degree of abstractness of concepts) must be considered in order to provide an explanation that students will understand. From an instrumental standpoint, these initial explanations can be multimodal to include verbal, visual, and virtual tools. The non-linguistic approach of using pictures, drawings, charts, graphs and other visual representations has much support in the professional literature for assisting students of all ability levels, including English language learners, with newly introduced vocabulary [33,34]. In addition, digital tools are readily available to help reinforce the student-friendly explanations, such as visuwords.com, which shows a graph of related terms, and www.visualthesaurus.com, which provides an interactive map complete with definitions and pronunciations.
These introductory tasks also reflect the knowledge hypothesis, especially in light of the specific contexts used to build conceptual knowledge. Students must have a working knowledge of the particular contexts used to explain the targeted words and phrases in order to make understandable connections to increase word learning.
Application. Once students have a general understanding of the word meanings, the next instructional step is to have students engage in multiple, meaningful encounters with the words through readings, class discussions, and related activities, in which the conceptually loaded terms and the general academic vocabulary are used. These tasks enable students to connect the word usage to a variety of different contexts and situations. However, to be able to connect to the situations, students must have appropriate background knowledge and experiences. Again, the knowledge hypothesis becomes the basis for these learning opportunities. These experiences are even more apparent and necessary across different disciplines. Furthermore, the fact that students need multiple encounters with these varied contexts is supported by the access hypothesis, to ensure that the word meanings are internalized deeply, so that students are able to transfer newly acquired learning to other disciplinary-related encounters with the words and phrases. One means of ensuring significant and multiple encounters with words is to pre-teach the key academic vocabulary before reading, ask students to focus on the key words during reading (making notes, drawings, charts), and then revisit the words after the reading [35]. Additional follow-up can take the form of small-group writing activities using the words in the context of the content studied [36].
Reinforcement. As previously mentioned, multiple encounters with newly acquired word meanings not only help students develop breadth and depth of word meanings (knowledge hypothesis), but also reinforce the integration of this new knowledge. Continual revisiting of previously learned word meanings, both content-specific words and general academic vocabulary, is a critical component of the instructional framework. Students need a great deal of exposure to both vocabularies to enable them to use this dynamic knowledge base to learn more unfamiliar words and phrases. The purpose of reinforcement is based not only upon the instrumental, knowledge, and access hypotheses, but also on the aptitude and reciprocal hypotheses. In other words, the more students use newly acquired word meanings in their disciplinary readings, the more they will be exposed to and learn new word meanings, thus building an even greater knowledge base from which learning can be continued. This reciprocal nature of word learning occurs through opportunities for students to engage in explanations, explorations, elaborations, and evaluations of various topics and concepts across the disciplines.
Conclusions
The hypotheses presented in this article suggest important implications for vocabulary instruction [17]. With these theoretical positions as a basis, the potential exists to change the face of vocabulary instruction from merely mentioning, telling, and assigning rote memorization and "look it up in the dictionary" tasks to more meaningful, contextual approaches. To this end, we propose that the single term "vocabulary", which suggests teaching words in isolation, be replaced with the broader, multi-dimensional term "vocabulary literacy," which involves making the vocabulary-comprehension connection using all aspects of literacy: reading, writing, listening, speaking, viewing, and visually representing [30]. With this perspective undergirding our teaching of both content-specific and general academic vocabulary terms, students process word meanings more deeply as they actively engage in multi-modal activities designed to promote strategic learning and long-term conceptual understanding.
Table 1. Features of linguistic knowledge in academic language.
"Education",
"Linguistics"
] |
A computational reproducibility study of PLOS ONE articles featuring longitudinal data analyses
Computational reproducibility is a cornerstone of sound and credible research. Especially in complex statistical analyses, such as the analysis of longitudinal data, reproducing results is far from simple, particularly if no source code is available. In this work we aimed to reproduce analyses of longitudinal data of 11 articles published in PLOS ONE. Inclusion criteria were the availability of data and author consent. We investigated the types of methods and software used and whether we were able to reproduce the data analysis using open source software. Most articles provided overview tables and simple visualisations. Generalised Estimating Equations (GEEs) were the most popular statistical models among the selected articles. Only one article used open source software and only one published part of the analysis code. Replication was difficult in most cases and required reverse engineering of results or contacting the authors. For three articles we were not able to reproduce the results, for another two only parts of them. For all but two articles we had to contact the authors to be able to reproduce the results. Our main learning is that reproducing papers is difficult if no code is supplied and that this leads to a high burden for those conducting the reproductions. Open data policies in journals are good, but to truly boost reproducibility we suggest adding open code policies.
Introduction
Reproducibility is, or should be, an integral part of science. While computational reproducibility is only one part of the story, it is an important one. Studies on computational reproducibility (e.g. [1][2][3][4][5][6]) have found that reproducing findings in papers is far from simple. Obstacles include a lack of methods descriptions and no availability of source code or even data. Researchers can choose from a multitude of analysis strategies, and if these are not sufficiently described, the likelihood of being able to reproduce the results is low [7,8]. Even in cases where results can be reproduced, it is often tedious and time-consuming to do so [6]. We conducted a reproducibility study based on articles published in the journal PLOS ONE to learn about reporting practices in longitudinal data analyses. All PLOS ONE papers which fulfilled our selection criteria (see Fig 1) in April 2019 were chosen ([9][10][11][12][13][14][15][16][17][18][19]).
Longitudinal data is data containing repeated observations or measurements of the objects of study over time. For example, consider a study investigating the effect of college students' alcohol and marijuana use on their academic performance [10]. Students complete a monthly survey on their alcohol and marijuana use and consent to the collection of their grade point averages (GPAs) each semester during the study period. In this study not only the outcome of interest (GPAs over several semesters) is longitudinal, but the covariates (alcohol and marijuana use) also change over time. This does not always have to be the case in longitudinal data analysis. Covariates may also be constant over time (e.g. sex) or baseline values (e.g. alcohol consumption during the month before enrollment).
Due to the clustered nature of longitudinal data, with several observations per subject, special statistical methods are required. Common statistical models for longitudinal data are mixed effect models and generalized estimating equations. These models can have complex structures, and rigorous reporting is required for reproducing model outputs. A study on the reporting of generalized linear mixed effect models (GLMMs) in papers from 2000 to 2012 found that there is room for improvement in the reporting of these models [20]. Alongside the models, visualization of the data often plays an important role in analyzing longitudinal data. An example is the spaghetti plot, a line graph with the outcome on the y-axis and time on the x-axis. Research on computational reproducibility when methods are complex, such as in this case, is still in its infancy. With this study we aim to add to this field and to provide some insights on the challenges of reproducibility in the 11 papers investigated. Furthermore, we would like to note that each reproduced paper is another paper that we can put more trust in. As such, reproducing a single paper is already a relevant addition to science.
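To make this concrete, the following minimal R sketch simulates a small longitudinal data set, draws a spaghetti plot, and fits a random-intercept mixed model of the kind discussed above; all data and variable names (id, time, y) are invented for illustration and are not taken from any of the reproduced papers.

# A minimal sketch with simulated data (all names illustrative): a spaghetti
# plot and a random-intercept mixed model for a longitudinal outcome.
library(ggplot2)
library(lme4)

set.seed(1)
d <- data.frame(
  id   = factor(rep(1:20, each = 5)),  # 20 subjects
  time = rep(0:4, times = 20)          # 5 repeated measurements each
)
d$y <- 2 + 0.5 * d$time +              # common time trend
  rep(rnorm(20), each = 5) +           # subject-specific level
  rnorm(nrow(d), sd = 0.5)             # measurement noise

# Spaghetti plot: one line per subject, outcome over time
ggplot(d, aes(x = time, y = y, group = id)) +
  geom_line(alpha = 0.5) +
  labs(x = "Time", y = "Outcome")

# Random-intercept linear mixed model
summary(lmer(y ~ time + (1 | id), data = d))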
Computational (or analytic [21]) reproducibility studies, as we define them for this work, take existing papers and corresponding data sets and aim to obtain the same results from the statistical analyses. One prerequisite for such a study is access to the data set which was used for the original analyses. Also, a clear description of the methods used is essential. An easily reproducible paper provides openly licensed data alongside openly licensed source code in a programming language commonly used for statistical analyses and itself available under a free open source software license (e.g. R [22] or Python [23]). If the source code is accompanied by a detailed description of the computing environment (e.g. operating system and versions of R packages) or the computing environment itself (e.g. a Docker container [24]), we believe the chances of obtaining the exact same results to be highest. It is difficult to determine whether a scientific project is reproducible: Is it possible to obtain exactly the same values? Is the (relative) deviation lower than a certain value? Is the difference in p-values lower than a certain value? These and more are questions that can be asked, and if answered "yes" the results can be marked as reproducible. Yet all of these criteria come with downsides, including being too strict, incomparable, uncomputable, or downright not interesting. Here, we define reproducibility as leading to the same interpretation, without a rigorous formal definition. The reason is that the papers analysed here use very different models, so it is hard to compare them on a single scale (such as absolute relative deviation, see e.g. [6]). We argue that, in combination with a qualitative description of the challenges and difficulties that we faced in each reproduction process, this definition fits our small-scale, heterogeneous setting better.
In this work we investigated longitudinal data analyses published in PLOS ONE. The multidisciplinarity of PLOS ONE is a benefit for our study, as longitudinal data play a role in various fields. Additionally, the requirement for a data availability statement in PLOS ONE (see https://journals.plos.org/plosone/s/data-availability) facilitates the endeavour of a reproducibility study. Note that we only selected papers which provided data openly online and where authors agreed with being included in this study. We assume that this leads to a positive bias in the sense that other papers would be more difficult to reproduce.
In the following we discuss the questions we asked in this reproducibility study, the setup of the study within the context of a university course, the procedure of paper selection, and the process of reproducing the results.
Study questions
The aim of this study is to investigate reproducibility in a sample of 11 PLOS ONE papers dealing with longitudinal data. We also collect information on which methods are used, how they are made available, and which computing environments are used. We expect that this study will help future authors in making their work reproducible, even in complex settings such as when working with longitudinal data. Note that based on the selection of 11 papers we cannot make inferences about papers in general or about the journal. We can, however, learn from the obstacles we encountered in the given papers. Also, even reproducing a single paper creates scientific value: it provides a scientific check of the work and increases (or, in case of failure, decreases) trust in the results.
With the reproducibility study we want to answer the following questions: Which methods are used? Which software is used? Are we able to reproduce the data analysis? What are characteristics of papers which make reproducibility easy or difficult? What are learnings from this study?

The study was conducted within a university ECTS (European Credit Transfer and Accumulation System) course aimed at statistics master students (compulsory in the biostatistics master, elective in other statistics masters) with 4 hours of class each week: 3 hours with a professor (Heidi Seibold), 1 with a teaching assistant (Malte Nalenz). The course teaches how to work with longitudinal data and discusses appropriate models, such as mixed effect models and generalized estimating equations, and how to apply them in different scenarios. As part of this course, student groups (2-3 students) were assigned a paper for which they aimed to reproduce the analysis of longitudinal data. In practical sessions the students received help with programming-related problems and with understanding the general theory of longitudinal data analysis. To limit the likelihood of bias due to differing skills of students, all groups received support from the teachers. Students were advised to contact the authors directly in case of unclear specifications of methods. Internal peer reviews, where one group of students checked the setup of all other groups, ensured that all groups had the same solid technical and organizational setup. Finally, all projects were carefully evaluated by the teachers and updated in case of problems. Replications and a student paper were the output of the course for each student group and were handed in in August 2019. We believe that the setup of this reproducibility study benefits from the large time commitment the students put into reproducing the papers. Also, having several students and two researchers work on each paper ensures a high quality of the study. This project involved secondary analyses of existing data sets. We had not worked with the data sets in question before.
Selection of papers
For a paper to be eligible for the reproducibility study it has to fulfill the following requirements:
R.1 The paper deals with longitudinal data and uses mixed effect models or generalized estimating equations for analysis.

R.2 The paper is accompanied by data. This data is freely available online without registration.

R.3 At least one author is responsive to e-mails.
Requirement R.1 allows us to select only papers relevant to the topic of this project. Requirement R.2 is necessary to allow for reproducing results without burdens (e.g. application for data access). Although PLOS ONE does have an open data policy (https://journals.plos.org/plosone/s/data-availability), we found many articles which had statements such as "Data cannot be made publicly available due to ethical and legal restrictions". Issues with data policies in journals have been studied in [25]. Requirement R.3 is important to be able to contact the authors later on in case of questions. Fig 1 shows the selection procedure. All papers which did not fulfill the criteria were excluded. The PLOS website search function was utilized to scan through PLOS ONE published works. Key words used were "mixed model", "generalized estimating equations", "longitudinal study" and "cohort study". This key word search, performed for us by a contact at PLOS ONE, resulted in 57 papers. Of these, 14 papers fulfilled all criteria and were selected. Two authors prohibited the use of their work within our study. We note that authors do not have the right to prohibit the reuse of their work, as all papers are published under a CC-BY license. However, the negative responses led us to drop these papers, as we expected to need to contact the authors with questions. For one paper we did not receive any response. Discussions on the selection criteria of all proposed papers are documented in https://osf.io/dx5mn/?branch=public. Table 1 shows a summary of all papers selected so far.
Replication
In the reproducibility study we adhered to open science best practices. (1) We contacted all corresponding authors of papers we aimed to reproduce via e-mail; (2) all of our source code and the data used are available; (3) any potential errors in the original publications were reported immediately to the corresponding author. In our study we conducted all analyses as close to the original analyses as possible. If many analyses were performed in the original paper, we focused on the analyses of longitudinal data. We conducted all analyses using R [22], regardless of the software used in the original paper, to mimic a situation where no access to licensed software is available (R was the only open source software used in the 11 papers).
Each analysis consisted of the following steps:

1. Read the data into R.
2. Prepare the data for analysis.
3. Run the analyses described in the paper and compare the results.

The description of all these steps was generally vague (see the classification of reported results in [6]), meaning that there were multiple ways of preparing or analysing the data that were in line with the descriptions in the original paper. This study thus exposed a large amount of "researcher degrees of freedom" [26], coupled with a lack of transparency in the original studies. We aimed to take steps that align as closely as possible with the original paper and the results therein. That means: if the methods description in the paper or supplementary material was clear, we used it; if not, we tried different possible strategies that we assumed could be correct; if this was not possible or did not lead to the expected results, we contacted the authors to ask for help. All code used by us is publicly available, including software versions, and in a format easily readable by humans (literate programming; for further information see the section on technical details).
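As an illustration, a schematic R version of these steps might look as follows; the file name, variable names, and model here are hypothetical and only stand in for the paper-specific choices we had to reconstruct.

# Hypothetical skeleton of the per-paper reproduction workflow.
library(geepack)  # geeglm() for generalized estimating equations

# 1. Read the data into R (file name is illustrative)
raw <- read.csv("paper_data.csv")

# 2. Prepare the data for analysis, e.g. drop incomplete rows and recode
clean <- subset(raw, !is.na(outcome))
clean$group <- factor(clean$group)

# 3. Re-run the reported analysis, e.g. a GEE as described in the paper
fit <- geeglm(outcome ~ time * group, id = id, data = clean,
              corstr = "exchangeable")
summary(fit)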
Results
The results of our study are summarized in Tables 2-4. As each paper has its own story and reasons why it was or wasn't reproducible and what the barriers were, we provide a short description of each individual paper reproduction.
Which methods are used? For an overview of the following questions we refer to Table 2.
What types of tables are shown? Most of the papers show tables on characteristics of the observation units at baseline or other summary tables (similar to the so-called "Table 1" commonly used in biomedical research) which give a good overview of the data.
What types of figures are shown? Few papers include classical visualizations taught in courses on longitudinal data, such as spaghetti plots. They mostly present other visualizations (for details, see Table 2).
What types of statistical models are used? Although in most cases (G)LMMs are superior to GEEs (see [27] for an in-depth discussion and further references), 7 out of the 11 papers used GEEs for their analyses [11,12,[15][16][17][18][19]. There is, in fact, only one complex mixed model among the methods used (Beta Binomial Mixed Model, [9]). The other articles [10,13,14] use LMMs, which are equivalent to GEEs for normally distributed response variables. It should be noted that the selection of papers may not be representative of the general use of GEEs and (G)LMMs. Nevertheless, it seems that the statistics community's reluctance to use GEEs has not spilled over to some other fields; we speculate that this has historical reasons, as GLMMs used to be difficult to compute.
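The practical difference between the two model families can be seen in a few lines of R on simulated data (all names illustrative): a GEE estimates population-average effects under a working correlation structure, while an LMM adds subject-specific random effects.

# Sketch: the same simulated longitudinal data analysed with a GEE and an LMM.
library(geepack)  # geeglm() for GEEs
library(lme4)     # lmer() for mixed models

set.seed(2)
d <- data.frame(id = rep(1:30, each = 4), time = rep(0:3, times = 30))
d$y <- 1 + 0.3 * d$time + rep(rnorm(30), each = 4) + rnorm(120, sd = 0.4)

# Population-average model with an exchangeable working correlation
gee_fit <- geeglm(y ~ time, id = id, data = d, corstr = "exchangeable")

# Subject-specific model with a random intercept per subject
lmm_fit <- lmer(y ~ time + (1 | id), data = d)

coef(summary(gee_fit))  # for Gaussian outcomes the fixed effects agree closely
fixef(lmm_fit)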
Which software is used? The results of this section are summarized in Table 3. Is the software free and open source? All except one paper (paper [16]) used closed-source software. As our goal was to evaluate how hard reproducing results is when licenses for software products are not available, we worked with the open source software R. Implementations of complex methods such as GEEs and (G)LMMs may show slightly different results in different software products even when given the same inputs, so we expected difficulties in reproducing exactly the same numbers for all papers using software other than R.
Is the source code available? Only one paper (paper [9]) provided source code. The source code provided was only a small part of the entire code needed to reproduce the results. Nevertheless it was a major help in obtaining the specifications of the models. For one paper we received the code through our email conversations [16]. For all other papers we had to rely on the methods and results sections of the papers. Often we resorted to reverse engineering the results as the methods sections were not sufficiently detailed.
Is the computing environment described (or delivered)? In most cases the authors provided information on the software used and the software version (9 out of 11). None of the papers described the operating system or provided a computing environment (e.g. Docker container).
Are we able to reproduce the data analysis? The results of this section are summarized in Table 4.
Are the methods used clearly documented in paper or supplementary material (e.g. analysis code)? Although all papers in question had methods sections, for most papers we were not able to extract all needed information to reproduce the results by ourselves. The most common issue was that papers did not provide enough detail about the methods used (e.g. model type was mentioned but no detailed model specifications, for details see Table 4). Since, in addition, no source code was provided (except for paper [9]), reproducing results was generally only possible by reverse engineering and/or contacting the authors. As most authors used licensed software which was not available to us, we could not determine if we would have reached the same results using default settings in the respective software. A clear documentation therefore requires enough detail to explicitly specify all necessary parameters for the model, even when using a different software.
Do we have to contact the authors in order to reproduce the analysis? How many emails are needed to reproduce the results? In all but two cases (papers [10,19]) we contacted the authors to ask questions on how the results were generated (for four of them several emails were exchanged). All but one of the authors responded, which was to be expected, as we had previously contacted them to ask whether they would agree with us doing this project, and only papers where authors responded positively were chosen. In most cases responses by authors were helpful.
Do we receive the same (or very similar) numbers in tables, figures and models? As the articles use different models and present their main results in terms of different statistics (model coefficients, F-statistics, correlations), the purely numerical deviation between our results and the original results is not informative in isolation. Also, as we used different software implementations, some deviation was to be expected. Therefore, we define similar results as having the same implied interpretations regarding sign and magnitude of effects. If the signs of the coefficients were the same and the ordering and magnitude of the coefficients roughly the same, we regarded the results as successfully reproduced. We were able to fully reproduce 6 out of 11 articles (see also Table 4). Here differences were marginal and did not lead to a change of interpretations. An example (original and reproduced coefficients of article [15]) can be seen in Fig 2. For another two articles at least parts of the analysis could be reproduced (e.g. one out of two models used by the authors). For the 8 articles that we found to be fully or partly reproducible, we were able to follow the data preprocessing and identify the most likely model specifications. Only three out of the 11 papers could not be reproduced at all: one because of implementation differences [13] and one due to problems preparing the data set used by the authors [18]. In [14] it was unclear how the data was originally analysed, and without responses from the authors to our contact attempts via email we were not able to determine whether the different conclusions reached by our analysis are due to incorrect analysis on the side of the authors or to missing information.
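This qualitative rule can be made concrete with a toy example in R (all coefficient values invented for illustration):

# Toy illustration of the comparison rule (numbers invented): results count
# as reproduced if all coefficients have the same sign and roughly the same
# magnitude as in the original paper.
orig  <- c(0.52, -1.10, 0.08)   # coefficients reported in a paper
repro <- c(0.49, -1.03, 0.10)   # coefficients from our reproduction

all(sign(orig) == sign(repro))  # same direction of all effects?
abs(repro - orig) / abs(orig)   # relative deviation, per coefficient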
Note that for some of the results, a considerable amount of time and effort needed to be invested to reverse engineer model settings. In the following we summarize the reproduction process for each paper individually, in order to give more insights about the specific problems and challenges that we encountered. (see also Table 4).
In [9] problems arose with the provided data set. The data description was found to be insufficient. Variable names in the data set differed from the ones in the code provided by the authors. We were able to resolve this problem based on feedback from the authors. When running the analysis using R and the R package PROreg [28], results differed from the original results due to details in the implementation and a different optimization procedure. The reproduced coefficients had the same sign as in the original study. However, differences in magnitude were large for some of the coefficients, likely due to differences in the optimization procedure. Given our definition, we were unable to reproduce the results. A second model fitted by the authors was not reproduced, due to convergence problems (model could not be fitted at all).
We were able to reproduce the results in [10] without contacting the authors. Some difficulty arose from the very sparse model description in the publication, such as which variables were included as fixed or random effects. Also, no source code was available. However, within a reasonable number of trials of different model specifications, we obtained results very similar to those in the original publication.
In [11] the number of observations differed between the publication and the provided data set. Upon request one of the authors provided a data set that was almost identical to the one used in the study. The descriptive analysis and correlation analysis we performed yielded the same results. A second difficulty arose as the authors did not specify the correlation structure used in their model, but instead relied on the Stata routine to determine the best fitting correlation structure using the quasi-likelihood information criterion (QIC). If the correlation structure yielding the coefficients closest to the ones in [11] is used, the coefficients are almost identical. However, we also performed the aforementioned model search procedure in R but ended up with a different correlation structure as the best fitting one. Using the correlation structure found to fit best by our R implementation would lead to a change in interpretation of the coefficients.
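A sketch of such a model search re-expressed in R with geepack's QIC function is shown below; the original search was run in Stata, and the data and variable names here are purely illustrative.

# Re-expressing the correlation-structure search in R (illustrative data).
library(geepack)

set.seed(3)
d <- data.frame(id = rep(1:25, each = 4), time = rep(0:3, times = 25))
d$y <- 0.2 * d$time + rep(rnorm(25), each = 4) + rnorm(100, sd = 0.5)

structures <- c("independence", "exchangeable", "ar1", "unstructured")
fits <- lapply(structures, function(cs)
  geeglm(y ~ time, id = id, data = d, corstr = cs))

# Lower QIC suggests a better-fitting working correlation structure
data.frame(corstr = structures,
           QIC    = sapply(fits, function(f) QIC(f)[["QIC"]]))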
In [12] difficulties arose from different implementations in the software used. Also, the model description was incomplete, which required us to try all possible combinations of variables to include. However, the correlation structure was well described, and with feedback from the authors we were able to obtain the same results, deviating only in the third decimal place.
[13] used a cross-classified LMM via the SAS procedure "PROC MIXED". Reproduction in R was difficult, as no R package offered the exact same functionality. After trying several R implementations, we settled on the nlme R package [29]. The random effects were not specified in the publication, and no SAS code was available to shed light on this question. Other questions regarding preprocessing and model specifications could be resolved through the feedback of the authors, but we did not receive the needed information on the random effects. As such we could not reproduce the results.
In [14] the data set used for modeling was not given as a file. Instead the authors provided links to the websites from which the data had initially been obtained. We were not able to obtain the same data set given the sources and the description; this might be due to changes in the online sources. Still, differences in summary statistics were not substantial. We were unable to reproduce the same model due to unclear model specification, and our attempts led to some vastly different estimates. Possible reasons for failure are an insufficient model description or even incorrect analysis.
We were able to reproduce the results in [15] with only minor differences in the estimated coefficients. Feedback from the authors was required to find the correct correlation structure used in their GEE model, which was not explicitly stated in the paper.
The results in [16] were computationally reproducible. Despite minor differences in the coefficients we arrived at the same interpretations; the differences were most likely due to different optimization procedures in the software packages used. The correlation structure was not stated in the article, but we were able to find the correct one using reverse engineering (grid search).
For the reproduction of [17] we had problems with data preprocessing. This was partly due to the unclear handling of missing values and due to details of the dimensionality reduction procedure used in preprocessing. The authors provided the final data set when we contacted them. The model specifications of the GEE used by the authors were not stated, but we were able to reproduce the exact same results as the authors by reverse engineering the correlation structure and link function. During this we found that using different model specifications or slightly different versions of the data set leads to substantially different results. Given the above definition this article was reproducible.
The results in [18] could not be reproduced. The (DNA) data was given in raw format as a collection of hundreds of individual files, without any provided code or step-by-step guide for preprocessing, making reproduction of the data set used in the statistical analysis impossible for us. Figures and tables of the clinical data were reproducible.

The results in [19] were reproducible. All necessary model specifications for their GEE model and the reasoning behind them were explicitly stated in the paper. The original analysis was carried out in M-plus, but reproduction in R gave almost identical results.
What are characteristics of papers which make reproducibility easy/possible or difficult/impossible? Based on the discussion of the individual papers we identified determinants of successes and failures. We found that the simpler the methods used in a paper, the easier it was to reproduce it. Papers dealing with classical LMMs (papers [10,14]) were reasonably easy to reproduce.
The data provided by the authors played a major role as well. If the clean data set was provided, reproducing was much easier than for papers providing raw data (papers [14,17,18]), where preprocessing was still necessary. For one paper [18], getting and preparing the data was so complex that we gave up. Even after the authors provided us with an online tutorial on working with this type of data, we were far from understanding what needed to be done. If specialists in working with this type of data (e.g. bioinformaticians) had been involved, we might have had better chances.
We believe that computational reproducibility is easier to obtain when code is provided, even if it is written for software we do not have access to. It is hard to make this conclusion based on the 11 papers we worked with, because only one provided partial code and one provided code on request, but these cases did not contradict our prior beliefs.
What are learnings from this study? What recommendations can we give future authors for describing their methods and reporting their results? Trying to reproduce 11 papers gave us a glimpse of how hard computational reproducibility is. We used papers published in an open access journal which provided data, and the authors were supportive of the project. We think it is fair to assume that these papers are among the most open projects available in the academic literature at the moment. Nevertheless, we were only able to reproduce the results without contacting the authors for two papers.
We not only recommend that authors provide data and code with their paper, but also suggest that journals make this a requirement.
Further points
One paper published the raw names of study participants, which we saw as unnecessary information and with that as an unreasonable breach of the participants' privacy. We informed the authors, who updated the data on the journal website.
Discussion
In this study we aimed at reproducing the results from 11 PLOS ONE papers dealing with statistical methods for longitudinal data. We found that most authors use tables and figures as tools for presenting research results. Although all papers in question had data available for download, only one paper came with accompanying source code. From our point of view the lack of source code is the main barrier to reproducing the results of the papers. For some papers we were still able to reproduce results by using a strategy of reverse engineering the results and by asking the authors. In an ideal situation, however, the information needed should not be hidden within the computers and minds of the original authors, but should be shared as part of the article (optimally in the form of a research compendium with paper, data, code, and metadata).
One of the authors initially contacted asked us to refrain from reproducing their paper on the grounds that students would not have the capabilities to do such complex analyses. We did not include the article in our study, but strongly disagree with this statement, especially since the students in question all have a strong statistics background and benefited from the guidance of researchers. Furthermore, the students checked each other's work in an internal peer review. We would even go so far as to claim that a lot of other statistical work is less understood by the researcher and less thoroughly checked by peers before it is combined into a publication. Working as a big team gave us the option to conduct time-intensive reverse-engineering attempts on results, which small research teams or single researchers would potentially not have had.
We did not choose the papers randomly, but started from the set of potential papers given to us by PLOS ONE and then selected all papers meeting our criteria (see Fig 1). We therefore cannot and should not draw conclusions from our findings on the 11 selected papers about the broader scientific landscape. Our work does, however, give us some insights into what researchers, reviewers, editors and publishers could focus on improving in the future: publish code next to the data. To PLOS ONE we propose to include code in their open data policy.
Reproducing a scientific article is an important contribution to science and knowledge discovery. It increases trust in the research which is computationally reproducible and raises doubt in the research which is not.
Technical details
All results, including detailed reports and code for each of the 11 papers, are available in the GitLab repository https://gitlab.com/HeidiSeibold/reproducibility-study-plos-one. All files can also be accessed through the Open Science Framework (https://osf.io/xqknz). For all computations, the relevant computational information (R and package versions, operating system) is given below the respective computations. The relevant information for this article itself is shown below.
"Computer Science"
] |
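In R, this computational information can be recorded with a single call at the end of each report; a minimal sketch is shown below (the renv line is one optional way of pinning package versions, included here as an illustration rather than as what was necessarily used).

# Record R version, operating system, and loaded package versions at the
# end of an analysis report:
sessionInfo()

# Optionally, pin exact package versions in a lockfile, e.g. with renv:
# renv::snapshot()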
Cost-effectiveness analysis of cognitive behaviour therapy for treatment of minor or mild-major depression in elderly patients with type 2 diabetes: study protocol for the economic evaluation alongside the MIND-DIA randomized controlled trial (MIND-DIA CEA)
Background Depression and elevated depression symptoms are more prevalent in patients with type 2 diabetes than in those without diabetes and are associated with adverse health outcomes and increased total healthcare utilization. This suggests that more effective depression treatment might not only improve health outcomes, but also reduce costs. However, there is a lack of evidence on the (cost-)effectiveness of treatment options for minor and mild-major depression in patients with type 2 diabetes. In this paper we describe the design and methods of the economic evaluation, which will be conducted alongside the MIND-DIA trial (Cognitive behaviour therapy in elderly type 2 diabetes patients with minor or mild-major depression). The objective of the economic evaluation (MIND-DIA CEA) is to examine the incremental cost-effectiveness of a diabetes-specific cognitive behaviour group therapy (CBT) as compared to intensified treatment as usual (TAU) and to a guided self-help group intervention (SH). Methods/Design Patients will be followed for 15 months. During this period, data on health sector costs, patient costs and societal productivity/time costs will be collected in addition to clinical data. Person-years free of moderate/severe major depression, quality-adjusted life years (QALYs), and cumulative costs will be estimated for each arm of the trial (CBT, TAU and SH). To determine the cost-effectiveness of the CBT, differences in costs and effects between the CBT group and the TAU/SH groups will be calculated. Discussion CBT is a potentially effective treatment option to improve quality of life and to avoid the onset of a moderate/severe major depression in elderly patients with type 2 diabetes and minor or mild-major depression. This hypothesis will be evaluated in the MIND-DIA trial. Based on these results, the associated economic evaluation will provide additional evidence on the cost-effectiveness of CBT in this target population. Methodological strengths and weaknesses of the planned economic evaluation are discussed. Trial registration The MIND-DIA study has been registered at the Current Controlled Trials Register (ISRCTN58007098).
Background
Depression is a highly prevalent disorder with a substantial impact on quality of life and societal cost [1,2]. This applies in particular to patients with diabetes, since depression has been shown to be more prevalent among these patients as compared to those without diabetes [3][4][5]. Major depression and elevated depression symptoms were present in 11% and 31% of individuals with diabetes, respectively, according to a meta-analysis [4]. Previous research demonstrates that comorbid depression in patients with diabetes is associated with poor self-care, i.e. adherence to medication, diet, exercise and smoking cessation [6][7][8], additive functional impairment and work disability [9], poorer glycaemic control [10], higher risk of microvascular and macrovascular complications [11,12], decreased quality of life [13], higher mortality [14,15], and increased healthcare utilization and higher costs for other health conditions [7,[16][17][18][19], as compared to patients with diabetes only.
Hence, more effective depression treatment might not only improve health outcomes, but also reduce total health service utilization and therefore costs. Put differently, additional costs for improved depression treatment could be offset by reductions in other healthcare costs. For example, among older adults with diabetes in the IMPACT trial [20], systematic depression treatment had significant clinical benefit with no increase in overall healthcare costs.
The cost-effectiveness of antidepressive therapies has mainly been evaluated for major depression co-occurring with diabetes. However, since both major depression and depressive symptoms are associated with adverse outcomes in diabetes, there is a need to examine the cost-effectiveness of treatment options for minor and mild-major depression as well. This need is underscored by the high prevalence of elevated depression symptoms in patients with diabetes and the growing prevalence of diabetes due to demographic change.
In what follows, we outline the economic evaluation of a cognitive behaviour therapy for treatment of minor or mild-major depression in elderly patients with type 2 diabetes that will be conducted alongside the MIND-DIA trial. Full details on the design and methods of the MIND-DIA trial will be provided elsewhere (Petrak et al., in preparation). Here, after giving a brief overview of the MIND-DIA trial, we focus on describing the design and methods of the associated economic evaluation (MIND-DIA CEA).
Methods/design
Overview of the MIND-DIA trial

The MIND-DIA trial is a multicentre, open, observer-blinded, randomized controlled trial which will evaluate the effectiveness of a diabetes-specific cognitive behaviour group therapy (CBT) for treatment of minor or mild-major depression in subjects 65-85 years of age with type 2 diabetes, comparing it to intensified treatment as usual (TAU) or a guided self-help intervention "Successful aging with Diabetes" (SH). Approval for conducting this study was granted by the local Medical Ethics Committee (Ethikkommission bei der Landesärztekammer Hessen).
Study sample
A total number of 315 subjects will be included in the study. Patients will be recruited in approximately 20 centres specialising in diabetes treatment in Germany. In the participating centres, all 65- to 85-year-old patients with type 2 diabetes who give informed consent will be screened in a two-stage procedure (Patient Health Questionnaire (PHQ-9) [21] followed by the Structured Clinical Interview for the DSM-IV (SCID) [22]). All patients with minor depression (adapted from the DSM-IV-TR research criteria: 3-4 symptoms rather than 2-4 symptoms are required, and a past history of major depression is not an exclusion criterion) or mild-major depression (5 to 6 depressive symptoms according to DSM-IV-TR criteria) will be included in the trial, provided that they meet the other inclusion and exclusion criteria (inclusion: patients with diabetes mellitus type 2 diagnosed at least 6 months before entering the trial, residence near the institution where intervention and control treatments will take place (< 1 hour access); exclusion: e.g. history of schizophrenia, psychotic symptoms, or dementia, current antidepressant or relevant psychoactive medication).
Interventions
Patients included in the MIND-DIA trial will be randomized to one of the following interventions: CBT, SH or TAU. CBT is a manual-based diabetes-specific cognitive behaviour therapy, delivered by trained psychologists in small groups in an outpatient setting. The guided self-help group intervention 'Successful aging with Diabetes' (SH) with a focus on living and ageing with diabetes will be delivered by trained moderators (elderly care nurses, nurses or others). SH will include information regarding diabetes and ageing shared by members of the group. This intervention was conceptualized as a control condition for unspecific group effects (e.g. cohesion). Hence, no formal therapeutic aspects will be involved. In patients assigned to intensified usual care (TAU), treating physicians will be notified about the patient's minor or mild depression symptoms and cognitive function. Further, information on available therapeutic options will be provided to physicians. However, any treatment option may be chosen (antidepressant medication, psychotherapeutic interventions or no specific intervention), since care as usual for a minor depression is currently not formalized.
For patients of the CBT and SH groups the trial will comprise two phases: (1) 12 weeks of open-label therapy (weekly sessions of two hours each) and (2) one year maintenance phase, during which both group interventions (CBT and SH) will be reduced to one session per month. Usual diabetes therapy is not a part of the protocol and will be delivered by the treating physicians 'as usual'. For the first funding phase, the duration of the MIND-DIA trial is expected to be approximately 36 months. Recruitment of patients started in May 2009.
Clinical outcomes
The primary clinical endpoint will be the improvement in health-related quality of life (HRQoL) at one-year follow-up after the 12 weeks of therapy, as measured by the Mental Component Summary score of the SF-36. Secondary clinical endpoints are the Physical Component Summary score of the SF-36; improvement in HRQoL measured by the EQ-5D; reduction of minor or mild-major depression symptoms (QIDS-C-16); prevention of moderate/severe major depression (Depression module, Structured Clinical Interview for DSM-IV, SCID); improvement of glycaemic control (centrally measured HbA1c); and mortality.
Sample size calculation
The power calculation was based on expected differences in SF-36 z-scores. Differences of δ = 0.6 between CBT and TAU and of δ = 0.4 between CBT and SH were assumed. For the latter comparison 132 patients per intervention group were needed to detect a significant difference with a power of 90% (2-sided t-test, α = 0.05). Given 132 patients in the CBT group it is sufficient to enroll 51 patients in the TAU group to achieve a power of 95% for the comparison CBT vs. TAU. It is therefore planned to recruit a total number of 315 patients (132 in CBT, 132 in SH and 51 in TAU).
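As a hedged cross-check, the planned group sizes can be reproduced with standard power routines; the statsmodels package below is our choice, not part of the protocol.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# CBT vs. SH: group size for 90% power at delta = 0.4 (two-sided, alpha = 0.05).
n_per_group = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.90,
                                   ratio=1.0, alternative='two-sided')
print(round(n_per_group))  # ~132 patients per group

# CBT vs. TAU: with 132 patients fixed in the CBT arm, solve for the allocation
# ratio that gives 95% power at delta = 0.6, then convert it to a group size.
ratio = analysis.solve_power(effect_size=0.6, alpha=0.05, power=0.95,
                             nobs1=132, ratio=None, alternative='two-sided')
print(round(132 * ratio))  # ~51 patients in the TAU arm
```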
Economic evaluation alongside the MIND-DIA trial (MIND-DIA CEA)
Outline of the economic analysis

The economic evaluation will be performed alongside the MIND-DIA clinical trial over the complete trial period. Costs and effects will be discounted at the rate recommended by the German guidance for economic evaluation issued by the Institute for Quality and Efficiency in the Health Care Sector (IQWiG), which is currently 3%. The economic evaluation will be undertaken from the perspective of the German statutory health insurance and from the societal perspective on costs. Accordingly, health sector costs, patient costs and societal productivity/time costs will be included in the analysis (Table 1).
The three outcomes estimated for the economic analysis will be person-years free of moderate/severe major depression, quality-adjusted life years (QALYs), and the cumulative costs accrued in each arm of the trial (CBT, TAU and SH). To estimate the cost-effectiveness of CBT, an incremental cost-effectiveness ratio (ICER) will be calculated, i.e. the ratio of the difference in costs between CBT and TAU/SH divided by the difference in effects. Under the statutory health insurance perspective on costs, the ICER will be calculated using health sector costs only. Under the societal perspective, patient costs and societal productivity/time costs will also be included in the calculation of the ICER.
Estimating effects

Calculation of depression-free years
Person-years free of moderate/severe major depression will be calculated based on incidence of moderate or severe major depression in different treatment groups.
Translation of HRQoL measures into QALYs
For the purposes of the economic analysis, measures comparable across disease areas are preferred. The most popular outcome measure for this purpose is the quality-adjusted life-year (QALY), which explicitly combines length and quality of life in a single measure, weighting survival (a set of health states) by utility scores. Utility weights reflect preferences for a particular health state and are measured on a scale from 0 to 1, where 0 and 1 represent death and full health, respectively [23,24]. Although the MIND-DIA study does not include a utility measurement as part of its protocol, it does include the SF-36 and EQ-5D questionnaires measuring health-related quality of life. Standardized algorithms exist to translate EQ-5D and SF-36 scores into utility weights suitable for the calculation of QALYs [25,26].
The descriptive system of the EQ-5D allows for 243 unique health states. A preference-based scoring function can be used to convert the descriptive information to a summary index score (utility weight). More than 15 value sets are available for scoring the EQ-5D, based on rating scale and time trade-off (TTO) valuation derived from general population surveys in various countries (including the United Kingdom, Germany, and the United States) [27]. In this study the scoring function derived from a survey of the general population in Germany will be used to calculate utility weights from EQ-5D responses [28].
Brazier and colleagues reported work on deriving a reduced health status index from the SF-36 that they termed the SF-6D [29] and more recently, they have published an algorithm that allows the estimation of utility weights for all states of the SF-6D index [30]. Following this published algorithm, SF-36 scores observed in the trial will be converted to utility weights. Since the values underlying this algorithm were obtained in the United Kingdom, utilities derived from SF-36 scores will be only used to perform a sensitivity analysis.
In the MIND-DIA study, the SF-36 and EQ-5D instruments will be administered at baseline, after the intervention (at 3 months), and then quarterly during a 1 year follow-up (see Table 1), generating a maximum of six possible observations for each patient enrolled in the trial. QALYs will be calculated assuming linear interpolation between measurement points and calculating the area under the curve to give a QALY score per patient over the trial period [24].
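As an illustration of this area-under-the-curve step, the sketch below computes QALYs for a single patient by trapezoidal integration; only the measurement schedule (baseline, 3, 6, 9, 12 and 15 months) is taken from the protocol, while the utility values are made up.

```python
import numpy as np

t_years = np.array([0, 3, 6, 9, 12, 15]) / 12.0           # measurement times
utility = np.array([0.62, 0.70, 0.71, 0.69, 0.73, 0.74])  # illustrative weights

# Linear interpolation between measurement points = trapezoidal rule.
qalys = np.trapz(utility, t_years)
print(f"QALYs over the 15-month trial period: {qalys:.3f}")
```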
Estimating costs: measurement and valuation of resource use

Measurement of resource use
Resource use directly associated with CBT and SH (e.g. patients' and staff time due to screening and treatment) will be derived from the therapy protocols. Information on other healthcare consumption will be obtained from trial participants by means of a cost questionnaire, which was developed for this purpose and incorporated into the case report files of the MIND-DIA trial. The questionnaire will be administered before the intervention (baseline) and at 6, 12, and 15 months of the trial, and refers to the previous 6 months (or 3 months for the last assessment) (see Table 1). The cost form includes structured yes/no questions on the utilization of different medical services under the following categories: primary care visits, visits to emergency departments, visits to specialists, hospital stays, medication, and other therapies/paramedical care.
If patients indicate that they received specific medical care over the past 6 or 3 months, they are asked to specify the volume, e.g. the number of contacts with healthcare providers, the number and length of hospitalizations, and the types and dosage of medications obtained. In the cost questionnaire, patients are also asked to indicate whether the health care services they obtained were paid by the health insurance or self-paid, which makes an assessment of out-of-pocket expenses possible.
Furthermore, (leisure) time losses will be measured by (i) registering the number of 'disability' days (days of reduced activity at home) with the cost questionnaire and (ii) estimating average times of receiving medical care from data on health services utilization in the trial groups. Note that the majority of patients enrolled into the trial are expected to be retirees. Hence, only (leisure) time losses and not productivity losses (i.e. time missed from work) are captured in the analysis.
Health sector costs
To estimate costs from the statutory health insurance point of view, direct healthcare resource use of the interventions and other reported healthcare utilization (consultations, hospital days, etc.) will be multiplied by unit costs/prices. Currently, there are no German guidelines for costing in economic evaluations containing standard unit costs. Hence, healthcare resource use will be valued by unit costs/prices obtained from published sources and official statistics for Germany (e.g. charges and rates from administrative databases, pharmacy retail prices).
Patient costs
To estimate patient costs, reported consumption of healthcare services paid out of pocket will be multiplied by unit costs/prices available from official statistics and from providers. Patient travel costs will be calculated based on the amount of health care utilization and on average distances to health care providers.
Societal productivity/time costs
Days of reduced activity at home and time of receiving medical care will be valued using average net wage rates, which represent opportunity costs of lost unpaid work and leisure according to the human capital approach.
Statistical analysis of costs and effects
Mean total costs, health sector costs, patient costs and productivity/time costs of the interventions, as well as the corresponding cost differences between the CBT group and the TAU/SH groups, will be calculated. Sampling uncertainty will be estimated using a bootstrap procedure, because cost data are non-normally distributed.
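A minimal sketch of such a bootstrap, assuming the per-patient cost totals of two arms are held in NumPy arrays (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_cost_difference(costs_cbt, costs_ctrl, n_boot=5000):
    """Percentile bootstrap CI for the difference in mean costs between arms;
    makes no normality assumption, which suits skewed cost data."""
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        diffs[b] = (rng.choice(costs_cbt, size=len(costs_cbt)).mean()
                    - rng.choice(costs_ctrl, size=len(costs_ctrl)).mean())
    return diffs.mean(), np.percentile(diffs, [2.5, 97.5])
```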
Incidence rates of moderate/severe major depression in all groups will be estimated, along with 95% confidence intervals. The incidence rate ratio (the incidence rate of moderate or severe major depression in the CBT group over the incidence rate in the TAU/SH group) will be analysed by regressing depression status at follow-up on the type of intervention in a Poisson model.
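The Poisson model could be fitted as in the following sketch; the data frame is synthetic and only illustrates the model call, with TAU as the reference arm.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    'group': rng.choice(['CBT', 'SH', 'TAU'], size=315),  # treatment arm
    'depressed': rng.integers(0, 2, size=315),            # status at follow-up
})

fit = smf.glm('depressed ~ C(group, Treatment(reference="TAU"))',
              data=df, family=sm.families.Poisson()).fit()
print(np.exp(fit.params))  # exponentiated coefficients = incidence rate ratios
```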
The effect in terms of QALYs will be analysed using linear regression on the type of intervention and, if necessary, on the baseline utility score, which has been shown to be important for the unbiased assessment of mean QALY differences between treatment groups [31].
Imputation of missing information on costs and effects
Data will be analysed according to the intention-to-treat principle. A multiple imputation approach based on propensity scoring will be used to account for missing information with regard to effects and costs. Baseline variables (e.g. age, gender, cost at baseline) will be entered into a logistic regression to predict the probability of a missing value [32,33]. Available data will be arranged into quintiles based on this predicted probability (propensity score), and a replacement value for missing data will be selected at random from the available data points within the same quintile. By choosing a value at random within the same quintile, the principle of multiple imputation can be employed, whereby each missing value is replaced by m > 1 simulated values [34-36]. Each of the m resulting data sets will be analysed as described above, and the results will be combined to produce a single estimate that takes uncertainty in the imputation process into account.
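A minimal sketch of this propensity-score imputation under simplifying assumptions (a plain logistic missingness model, numeric baseline covariates without missing values); all function and variable names are our own.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def impute_by_propensity(df, target, predictors, m=5, seed=0):
    """Return m data columns in which missing values of `target` are replaced
    by random draws from observed values in the same propensity quintile."""
    rng = np.random.default_rng(seed)
    missing = df[target].isna()
    # Model the probability of a missing value from baseline covariates.
    lr = LogisticRegression().fit(df[predictors], missing)
    propensity = lr.predict_proba(df[predictors])[:, 1]
    quintile = pd.Series(pd.qcut(propensity, 5, labels=False, duplicates='drop'),
                         index=df.index)
    imputed = []
    for _ in range(m):
        filled = df[target].copy()
        for q in quintile.unique():
            in_q = quintile == q
            donors = df.loc[in_q & ~missing, target].to_numpy()
            n_miss = int((in_q & missing).sum())
            if n_miss and len(donors):
                filled.loc[in_q & missing] = rng.choice(donors, size=n_miss)
        imputed.append(filled)
    return imputed  # analyse each set, then pool the m results
```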
Determining cost-effectiveness
If a significant impact of CBT on both effects and costs is demonstrated, ICER will be estimated in terms of costs per year free of moderate/severe major depression and per QALY gained. ICER will be estimated for the total cost (health sector costs plus patient costs plus societal productivity costs) and for the health sector costs (statutory health insurance perspective) separately. The non-parametric bootstrap method will be employed to generate confidence intervals around the ICER estimates derived from the study sample [37,38]. Uncertainty surrounding the ICER will also be presented on the cost-effectiveness plane [39,40] and as the cost-effectiveness acceptability curve [41,42].
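The following sketch shows how bootstrap replicates of incremental costs and effects could feed the ICER confidence interval, the cost-effectiveness plane and the acceptability curve; resampling keeps each patient's costs and QALYs paired, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_increments(costs_a, qalys_a, costs_b, qalys_b, n_boot=5000):
    """Resample patients within each arm; return incremental cost/QALY pairs."""
    n_a, n_b = len(costs_a), len(costs_b)
    d_cost, d_qaly = np.empty(n_boot), np.empty(n_boot)
    for k in range(n_boot):
        i = rng.integers(0, n_a, n_a)   # same index keeps costs/QALYs paired
        j = rng.integers(0, n_b, n_b)
        d_cost[k] = costs_a[i].mean() - costs_b[j].mean()
        d_qaly[k] = qalys_a[i].mean() - qalys_b[j].mean()
    return d_cost, d_qaly

def ceac(d_cost, d_qaly, thresholds):
    """Acceptability curve: P(net benefit > 0) per willingness-to-pay value."""
    return [float((wtp * d_qaly - d_cost > 0).mean()) for wtp in thresholds]
```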
Sensitivity analyses
Besides statistical uncertainty (sampling variation) with regard to costs and effects, every economic evaluation contains some degree of data imprecision (e.g. resource costs/prices) and methodological controversy (e.g. derivation of utility weights, discount rate), which should be accounted for. To handle this type of uncertainty, sensitivity analysis is usually employed [23,24,43]: (uncertain) parameters of the base-case analysis are varied to determine whether changes in these parameters influence the results. Univariate sensitivity analyses will be performed by varying health service unit costs and utility weights (see Table 2). Further, in order to assess how a simultaneous change of several variables affects the cost-effectiveness ratio, a multivariate sensitivity analysis will be performed. To fully appreciate the potential influence of missing responses and of the imputation method chosen, sensitivity analyses examining the effect of alternative imputation methods (linear extrapolation and complete case analysis) will be conducted. When conducting the sensitivity analyses, we will report both the revised results and the base-case results.
Methodological considerations

Identification of resource use/attribution
Research has shown a consistent association between depression symptoms and increased total healthcare utilization and costs in patients with diabetes, even after controlling for co-morbidities. Consequently, it is difficult to identify (and to measure) resource use related to depression. In the context of the trial, it might be argued that, if true randomisation is achieved, any differences in cost between treatment arms can be attributed to the study intervention [44]. Hence, data on the utilization of a broad range of health services will be collected in the MIND-DIA trial. On the one hand, this approach avoids neglecting any unexpected changes in resource use related to the interventions being compared. On the other hand, it may complicate the detection of statistically significant differences in health service costs, since these have been shown to be highly variable and therefore to require larger overall sample sizes [44,45].
Measurement of resource use
Information on healthcare utilization other than the CBT and SH sessions will be collected by self-report by means of the cost questionnaire. To our knowledge, no standard validated instruments for collecting resource use data in clinical trials are available in Germany. Hence, we developed a data collection instrument specifically for the MIND-DIA trial. The questionnaire was pilot tested, but has not yet been validated against other data sources. Recall bias may occur, since resource use will be measured over the previous 6 or 3 months. However, there is no conclusive evidence on whether a prospective (cost diary) or a retrospective (questionnaire) instrument is preferable, nor on the appropriate recall interval [44]. Van den Brink et al. found that, for the assessment of healthcare utilization in economic evaluations alongside clinical trials, a cost questionnaire may replace a cost diary for recall periods of up to 6 months [46], and that such patient self-reports are a valid source of data on days of hospitalization and outpatient visits, whereas costs of medication may be underestimated [47].
Time costs
In this study, days of reduced activity at home and time spent receiving medical care will be valued in monetary terms using average net wage rates, representing the opportunity cost of unpaid work and leisure according to the human capital approach. It is contentious, however, whether lost (leisure) time should be measured in terms of costs or effects [23,24,48,49]. In particular, lost (ability to enjoy) leisure time might also be reflected in QALYs, since health-related quality of life instruments, e.g. the EQ-5D and the SF-36, explicitly ask about problems in performing leisure and social activities [48]. Thus, double counting may occur. We nevertheless decided to capture time costs in the numerator of the cost-effectiveness ratio when adopting the societal perspective, because it is unclear to what extent changes in (leisure) time are captured by QALYs, and the true societal loss might otherwise be underestimated. Furthermore, this approach helps to avoid some unpalatable equity implications, since time missed from work due to treatment or illness is most often captured in monetary terms.
Conclusion
Depression and depression symptoms co-occurring with type 2 diabetes are highly prevalent and associated with a wide range of adverse outcomes, including less effective self-care, more severe physical symptoms, greater functional impairment and disability as well as increased healthcare utilization and expenditure. However, there is a lack of evidence on effectiveness of treatment options for minor or mild-major depression co-occurring with diabetes. Therefore, the MIND-DIA trial will evaluate the effectiveness of cognitive behaviour therapy for treatment of sub-threshold depression in elderly patients with type 2 diabetes, comparing it to a self-help group intervention and to usual care.
The negative impact of co-morbid depression on both health effects and total costs suggests that more effective treatment of depression might not only improve health outcomes in patients with diabetes, but also change the pattern of health service use and therefore total healthcare costs. Costs for depression treatment might be balanced by reductions in other healthcare utilization or even lead to savings in total costs. Thus, besides testing the effectiveness of CBT as a treatment option to improve quality of life and to avoid the onset of moderate or severe major depression in elderly patients with type 2 diabetes and minor or mild-major depression, the cost-effectiveness of the CBT should be examined as well. The economic evaluation conducted alongside the MIND-DIA trial will provide additional evidence on whether CBT is a cost-effective strategy in this target group. Importantly, since patients are followed for 15 months, longer-term incremental costs and effects of the CBT will be captured.
the manuscript. GG gave support relating to the statistical analysis. FP is the coordinator and principal investigator of the clinical trial. KP, MH, and MM are the coordinating investigators of the clinical trial. All co-authors read, edited, and approved the final manuscript. All authors participated in the work sufficiently to take public responsibility for respective parts of the paper.
"Medicine",
"Economics"
] |
Moderna's Vaccine Using the K-Nearest Neighbor (KNN) Method: An Analysis of Community Sentiment on Twitter
COVID-19 is still present in Indonesia. Among the government's efforts to stop the virus is a vaccination program; several types of vaccine are in use, one of which is the Moderna vaccine (mRNA-1273), administered intramuscularly. Vaccination with the Moderna vaccine has generated differing opinions among the public, especially among Twitter users, and these uploaded opinions form the data for this research on public sentiment analysis on Twitter about the Moderna vaccine using the K-Nearest Neighbor method. In this study, the TF-IDF method is used to weight words and KNN to classify sentiment into two groups, positive and negative. The tools used are RapidMiner, to collect tweet data, and Python, for sentiment classification and evaluation. Based on 50 training data items and k = 3, the tests yield an accuracy of 80%, a precision of 80%, a recall of 100% and an F-measure of 89%.
Introduction
Currently the world is being hit by the COVID-19 outbreak, which severely disturbs the activities of people around the world, and it cannot yet be predicted when it will end (Baj et al., 2022). The virus can spread very quickly through direct contact or through the air. The government has made various efforts to stop its spread; one of them is a vaccination program using the Moderna vaccine. The responses of the Indonesian people to the Moderna vaccine have varied (Harapan et al., 2020): some welcome the vaccination program and some oppose it. In the current era of digitalization, people increasingly express themselves and their views through social media, one of which is Twitter (Ramadhani & Wahyudin, 2022). Public opinion on Twitter forms the data for this sentiment analysis research on the Moderna vaccine.
Sentiment analysis is a stage of text analytics that gathers data from internet sources and social media platforms in order to obtain the opinions of the users on those platforms (Alshuwaier et al., 2022).
Sentiment analysis is the process of understanding and classifying emotions (positive or negative) contained in writing using text analysis techniques (Veritawati et al., 2015).
Several related studies are used as references in this research. The study "Analysis of Sentiments to Astra Zeneca Vaccination on Twitter Using the Naïve Bayes and KNN Methods" discusses public opinion on Twitter about the AstraZeneca vaccine, grouped into three sentiment classes: positive, neutral and negative (Ramadhani & Wahyudin, 2022); it reports an accuracy of 88.56% +/- 4.71% (micro average: 88.62%) for the Naïve Bayes method and 74.78% +/- 3.74% (micro average: 74.77%) for the KNN method. Another study, "Classification of Tweets on Twitter using the K-Nearest Neighbor (KNN) Method with TF-IDF weighting", analysed articles from the Kompas and Detik news media and classified them into technology, health, economics, sports and automotive groups (Satrio & Fauzi, 2019); its results show that the smaller the value of k used, the more accurate the KNN method. A third study, "Application of Sentiment Analysis on Twitter Users Using the K-Nearest Neighbor Method", analysed sentiment about the 2017 DKI Pilkada, classified into positive and negative classes, and obtained an accuracy of 67.2% for k = 5.
Unlike the research mentioned above, this study uses the Term Frequency-Inverse Document Frequency (TF-IDF) method for word weighting and the K-Nearest Neighbor (KNN) method for classifying sentiment into two classes, positive and negative, using tools such as RapidMiner and Python programming (Khalid et al., 2020).
In this study, the data to be analysed were taken from Twitter between May 1 and May 16, 2022. The obtained results are then checked in an evaluation measure stage to verify that the KNN method can be used effectively for analysing public sentiment on Twitter regarding the Moderna vaccine.
Method

Data Mining
Data mining is a field that combines statistical techniques, mathematics, artificial intelligence and machine learning to extract and identify information from complex databases (Aher & Lobo, 2012). The purpose of data mining is to uncover the characteristics of the observed data or object; it can also serve as a basis for decision-making or for predicting future conditions from the data being analysed (Silalahi & Simanullang, 2022). The data mining workflow is illustrated in Figure 1.
Text Mining
Text mining is a technique for discovering previously unknown information, or rediscovering information, from automatically extracted text. Its purpose is to obtain useful information from a collection of documents (Lestari & Saepudin, 2021). Many methods can be used to extract text data, but the first step is always data preprocessing.
K-Nearest Neighbor (KNN)
KNN is a simple method for classifying data based on the training samples at the shortest distance (Akbar & Kusumodestoni, 2020; Na'iema et al., 2022). When used to classify text, it performs best if each word in a document is first weighted using Term Frequency-Inverse Document Frequency (TF-IDF); the distance between documents is then calculated with the Euclidean distance.
Research Stages
In this study, the object under study is the public opinion of Twitter users regarding the Moderna vaccine. The data used are tweets in Indonesian.
The steps taken in this research are shown in Figure 2.
Data Collection
Data collection in this study is a data mining stage. Data mining is the process of extracting patterns from data, producing output in the form of important information; the goal is to understand the behavior of the observed data (description) and to estimate future conditions (prediction) (Nikmatun & Waspada, 2019). The data collection process in this study used the RapidMiner tools. The data collected are tweets about the Moderna vaccine from May 2022, saved in a CSV file.
Preprocessing Text
Preprocessing is the stage of preparing raw data before other processes are carried out (Naresh & Kiran, 2019). In general, data preprocessing is performed to eliminate inappropriate data or to transform the data into a form that is easier for the system to process. The following steps are carried out at this stage (a minimal code sketch follows the list):
- Cleansing: removing attributes that have no effect, such as symbols, numbers and links;
- Case folding: converting all letters in the tweet document to lowercase; only the letters a to z are processed, and other characters are left as they are;
- Tokenizing: cutting sentences into separate words at the specified spaces;
- Filtering: removing unnecessary words so that the calculation focuses on the words that are far more important;
- Stemming: finding the base words by removing affixes from the existing words.
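A minimal code sketch of these five steps is given below; the paper performs them in RapidMiner, so the regular expression, the illustrative stop-word subset and the Sastrawi stemmer for Indonesian are our assumptions.

```python
import re
from Sastrawi.Stemmer.StemmerFactory import StemmerFactory  # Indonesian stemmer

stemmer = StemmerFactory().create_stemmer()
stopwords = {'yang', 'dan', 'di', 'ke', 'ini', 'itu'}  # illustrative subset

def preprocess(tweet):
    tweet = re.sub(r'http\S+|[^a-zA-Z\s]', ' ', tweet)   # cleansing
    tweet = tweet.lower()                                # case folding
    tokens = tweet.split()                               # tokenizing
    tokens = [t for t in tokens if t not in stopwords]   # filtering
    return [stemmer.stem(t) for t in tokens]             # stemming
```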
Word Weighting
After the data have been cleaned, a weight is given to each word per document using the Term Frequency-Inverse Document Frequency (TF-IDF) method, with the standard formula

$W_{d,t} = tf_{d,t} \times IDF_t, \qquad IDF_t = \log\left(\frac{N}{df_t}\right)$

where $d$ is the $d$-th document, $t$ is the $t$-th keyword term, $tf_{d,t}$ is the frequency of term $t$ in document $d$, $W_{d,t}$ is the weight of term $t$ in document $d$, $N$ is the number of documents, and $IDF_t$ is the inverse document frequency, i.e. the inverse of the fraction of documents in which the term appears.
Classification
Classification is a method for assigning an object to a particular group or class (Rizki & others, 2019). The classification in this study uses the KNN method, which assigns an object to a class based on the training data closest to it; new data are grouped according to their k nearest neighbours. Distances are calculated with the Euclidean distance

$d(A, B) = \sqrt{\sum_{i=1}^{t} (A_i - B_i)^2}$

where A is a testing document, B is a training document and t is the number of terms (words).
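Put together, the classification rule amounts to the small sketch below, where the rows of the training matrix are assumed to be dense TF-IDF vectors and the function name is ours.

```python
import numpy as np
from collections import Counter

def knn_predict(train_vecs, train_labels, test_vec, k=3):
    """Majority vote among the k training documents nearest in Euclidean distance."""
    dists = np.sqrt(((train_vecs - test_vec) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```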
Evaluation Measure
Evaluation measurement aims to measure the performance that can be achieved by the system (Akhtar et al., 2021). The evaluation in this study is used to determine whether the system is optimal in detecting items indicated as semantically similar to others. The evaluation measures used are precision, recall, accuracy and F-measure (F1-score), computed from the confusion matrix shown in Table 1.
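For reference, these four measures follow the standard confusion-matrix definitions:

$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}, \qquad Precision = \frac{TP}{TP + FP},$

$Recall = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}.$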
Results and Discussion
Data collection (crawling)

The research data were collected using RapidMiner and stored in a CSV file (Hofmann & Klinkenberg, 2016; Kunnakorntammanop et al., 2019) before the preprocessing stage was carried out. Fifty of the collected tweets are used as training data. The stages of crawling data with RapidMiner can be seen in Figure 2.
Preprocessing Results
Data preprocessing is a cleaning stage performed before the data are processed further. The stages of data preprocessing in this study were carried out using the RapidMiner application (Sudarsono et al., 2021). The previously collected data were passed through a series of stages: cleansing, case folding, tokenizing, filtering, and stemming. The cleaning steps for one example tweet are shown in Table 2; in this stage, the dot symbol is removed. The case folding stage changes all letters to lowercase; an example tweet passing through this stage can be seen in Table 3. The tokenizing stage, which separates sentences into single words, can be seen in Table 4, and the filtering stage, which removes words with no effect, in Table 5.
Results of Data Weighting
Data weighting is the stage of assigning a value or weight to the data: each word in the cleaned data is given a weight. In this study, the weighting was performed with the TF-IDF method using the RapidMiner application. The following shows the result of weighting the data.
Test Results with Test Data
This stage is carried out using the Python programming language, run through Google Colab (Perez & Granger, 2015; Zuraimi & Zaman, 2021). The program classifies sentiment with the KNN method, utilizing the sklearn library available in Python, according to the value of k that is input into the program (Nguyen et al., 2018). The output displayed is a sentiment, either positive or negative. A total of 50 sentiment-labeled tweets are used as training data, consisting of 35 positive and 15 negative items, as shown in the accompanying figure. The data to be tested are first weighted using the TF-IDF method and then input into the program; the test data entered into the program code are shown below.
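A hedged reconstruction of this setup with scikit-learn is sketched below; the 80/20 split and k = 3 follow the paper, while the tweet texts and labels are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

# Placeholder corpus standing in for the 50 labeled, preprocessed tweets.
texts = ['vaksin moderna aman', 'efek samping vaksin parah'] * 25
labels = ['positive', 'negative'] * 25

X = TfidfVectorizer().fit_transform(texts)                 # TF-IDF weighting
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                          random_state=0)  # 80/20 split

knn = KNeighborsClassifier(n_neighbors=3, metric='euclidean').fit(X_tr, y_tr)
print(classification_report(y_te, knn.predict(X_te)))      # precision/recall/F1
```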
Evaluation Measure
Evaluation measurement is a step taken to test the classification results of the system by checking their truth value (Arslan & Arslan, 2021; Iwendi et al., 2021). The data set is divided into two class groups, positive and negative (Isnain et al., 2021; Shah et al., 2020). Fifty data items were used for the evaluation measurements, split into 80% training data and 20% testing data. The system evaluation is measured using a confusion matrix, whose rows and columns correspond to true and false positives and true and false negatives. The evaluation measure calculations were performed in Python; the results can be seen in Figure 7.
Figure 8 is a bar chart of the evaluation measures computed for the system.
Based on the tests carried out using 50 data items, of which 40 were used as training data and 10 as testing data, the results obtained are an accuracy of 80%, a precision of 80%, a recall of 100% and an F1-score of 89%.
Figure 2. Schema of the research.
Figure 5. Label and training data.
Figure 6. Test data input. Based on the test data above, the sentiment obtained from the weighted tweets belongs to the positive sentiment class.
Figure 8. Bar chart of the evaluation measures.
Table 2. Tweets with the cleansing process.
Table 3. Tweets with the case folding process.
Table 5. Tweets with the filtering process.
"Computer Science"
] |
Weakly Supervised Video Anomaly Detection Based on 3D Convolution and LSTM
Weakly supervised video anomaly detection is a recent focus of computer vision research thanks to the availability of large-scale weakly supervised video datasets. However, most existing research works are limited to the frame-level classification with emphasis on finding the presence of specific objects or activities. In this article, a new neural network architecture is proposed to efficiently extract the prominent features for detecting whether a video contains anomalies. A video is treated as an integral input and the detection follows the procedure of video-label assignment. The extraction of spatial and temporal features is carried out by three-dimensional convolutions, and then their relationship is further modeled using an LSTM network. The concise structure of the proposed method enables high computational efficiency, and extensive experiments demonstrate its effectiveness.
Introduction
In the past few decades, video anomaly detection has increasingly become a research focus because of its wide application, such as in public safety and online video censorship. Along with the popularity of camera hardware, the number of videos acquired by smartphones and surveillance cameras has increased so drastically that manual processing of these videos becomes unfeasible in many scenarios due to its low efficiency.
Anomaly detection refers to the problem of finding irregular patterns that do not conform to expectation [1]. Anomalies in a video include not only common irregularities, like vandalism, assault, and traffic accidents, but also some events under certain contexts, such as a car entering a pedestrian-only zone. Though it seems that the identification of an abnormal object or event is the unique critical factor to consider, the context of a video is of equal importance for detection. Accordingly, video anomaly detection is different from human action recognition and event recognition, because it is more complicated to define a video anomaly than to merely detect an event or action; it involves a much wider range of activities and can have a large inter-class variance as a result of complex contexts. In addition, given that video anomaly detection focuses on whether a video contains anomalies, it likewise differs from video anomaly localization, which aims to identify all the abnormal frames in a video. The recent research trend suggests that detection and localization can be combined into a single end-to-end pipeline; however, the performance of such a combination remains to be explored, since using a video anomaly detector to find the location of abnormal frames may not be as accurate as expected due to its nonlinear characteristics.
In practice, an anomaly usually happens in a short time slot, while the principal part of the video can still be considered normal. A natural way to find out whether a video contains anomalies is therefore to check whether any part of it does, which conforms to the idea of multiple-instance learning (MIL).

Though a deep learning method requires no manual feature engineering, defining an effective architecture and choosing a suitable framework are critical for its performance. The focus of video anomaly detection is to identify events, activities, and contexts presented in temporal-sequential frames; therefore, dynamic spatial-temporal features are required. In this article, a new framework is proposed to efficiently detect anomalies using weakly supervised video datasets. Three-dimensional (3D) convolutions with max-pooling are adopted to extract the prominent spatial-temporal information, and then the long short-term memory (LSTM) network [8] is used to further model the relationship between these features for classification.
Figure 1. An example of a weakly supervised anomaly video in the UCF-Crime dataset [5] under the subcategory of abuse (Abuse028_x264.mp4). (The video contains 1412 frames, of which the abuse scene lasts around 56 frames, corresponding to approximately 4% of the total.)

Related Work
Many deep learning methods have been proposed for video anomaly detection. The main difference between these existing methods is the way of how to discern the anomalies from the normal scenarios. A video or a video frame is commonly handled as an outlier when an object or event presented in it is significantly different from the ones learned from the training set. Autoencoder is a commonly adopted technique for these methods; with sufficient samples, the trained autoencoder can generate small reconstruction errors for normal videos and large errors for abnormal ones. In [9], a fully convolutional network (FCN) was proposed to learn both motion features and regular patterns; the regularity score of a video is computed based on the reconstruction error of the autoencoder. The LSTM architecture was later used in [10] to model the temporal relationship between video frames; the combination of FCN and LSTM achieved better performances. Other efficient networks, like recurrent neural network (RNN) and inception models, were also integrated into the autoencoder methods in [11,12], which further improved the performance of detection.
Some of the methods used the transfer learning technique and combined the extracted features with other classification methods to carry out the detection [7,13-16]; for example, in [14,15], with the features extracted from the pre-trained VGG model [17], unmasking processes were carried out to assign anomaly scores to video frames. Anomaly detection was treated as multi-class classification in [16], and then methods such as k-means clustering and support vector machines were used for classification. Recently, generative adversarial networks (GANs) were also proposed for anomaly detection in [18,19]; a properly trained generator can produce highly realistic fake frames that are indistinguishable for the discriminator, and a high anomaly score will then be assigned to a video when a sequential video frame is significantly different from the predicted one.
Along with the availability of large-scale weakly classified datasets, weakly supervised methods have become a recent focus in video anomaly detection. For example, the methods proposed in [5,20-22] adopted ranking frameworks for the detection; in order to capture the anomalies, each video in the training set was divided into 32 segments that were fed separately into the network for training; the outputs of these segments, corresponding to their scores, were then ranked, and the highest score was chosen to indicate whether the input video contains anomalies. These weakly supervised methods achieved impressive performance. Nevertheless, even with small segments of a video, a large score can be assigned to a normal scene and a low score to an abnormal one; therefore, training with video segments can still bias the discernment of anomalies from normal scenes. A graph convolutional network (GCN) was proposed to solve this problem in [23]: a wrongly selected normal segment in an anomaly video was treated as label noise, and an iterative optimization process was used to eliminate it. The proposed GCN achieved better results; however, the training was computationally expensive and can lead to unstable performance due to the unconstrained latent space.
Proposed Method
Most of the current weakly supervised models divide a video into a predefined number of segments to identify the segments that contain anomalies, in the same way as for a strongly supervised dataset. However, such a division is inaccurate in most cases because, without knowing the exact spatial and temporal location of the anomalies, the integrity of an event can be broken or inter-related events can be separated, which will confuse the classifier; meanwhile, handling multiple segments is time-consuming, and class imbalance may become a problem during training, given that anomalies happen only in a small time slot.
To solve the aforementioned problems, a more natural way was adopted to decide whether a video contains anomalies, conforming to the idea of MIL: a video is treated as an integral input and is considered normal only when it contains no anomaly. Max-pooling operations are used to replace the division of a video and to capture the most prominent spatial-temporal features corresponding to possible anomalies. In this way, a video is classified based on the unique score generated by the framework. Hence, detection of anomalies in a video becomes a binary classification with the final output score in the range [0, 1], where 0 (zero) means no anomaly detected, and 1 (one) means an anomaly is present. The architecture of the proposed framework is illustrated in Figure 2. It contains three principal parts: the first part is composed of three blocks, each including a 3D convolutional layer followed by a max-pooling layer, the stacking of convolutional layers aiming to capture both the temporal and the spatial features of the input video; the second part is an LSTM architecture followed by a global max-pooling layer, used to further model the inter-related features from the first step and extract the most important ones for classification; and the third part contains two dense layers to generate the final score of the video.

Figure 2. The new framework consists of three parts, each one delineated by a red rectangle: the first one includes three blocks composed of one convolutional layer followed by a pooling layer; the second one is composed of an LSTM network and a global pooling layer; and the third one is a combination of two dense layers to generate the final score. The changes of input and output shapes are illustrated by a video assumed to have 1000 frames of 3 color channels with height and width of 120 and 160 pixels, respectively. The framework has no restrictions on image size or the number of frames; it takes a video as an integral input and outputs a unique score to indicate whether it contains anomalies.
Convolutional Layers
As discussed in Section 2, the detection of anomalies in a video relies on the correct extraction of spatial and temporal features of the video. 3D convolutions have been proved to be effective in doing this task; thus, three convolution blocks are used. In the first layer of the first block, 4 filters are used, with a spatial kernel of 3 × 3 size, to find the spatial relationships on each video frame, and a temporal step of 2 to focus on the changes of objects and backgrounds between neighboring frames; the output after the convolution is then put into a max-pooling layer, but only spatial pooling is used to keep more information of sequential temporal features.
The extracted features are used as the input for the second block where they are convoluted by 8 filters of 4 × 3 × 3 size. The temporal receptive field is increased to 4 so that the layer can capture features presented in a longer temporal duration. Meanwhile, the temporal pooling size in the following pooling layer is set as 2 to extract the more prominent features along with the time change. The increased temporal pooling is compensated by the increased number of filters in this block.
In the last block, the number of filters is doubled again to 16, with the size of 8 × 3 × 3. The doubled temporal step aims for a further combination of information along with the temporal change, and more filters are used to compensate for the increased temporal pooling size (doubled to 4) to obtain the most prominent features.
The output after these three blocks is a combination of the abstract spatial and temporal features ready to be used for further processing. Only three blocks are used here because 3D convolution is computationally expensive, especially when the spatial or temporal receptive field is large; also, too many max-pooling layers may suppress the contextual information too much, leading to the loss of the important temporal relationship between video frames. Consequently, features obtained in this step can still be called "local" features, and their long-temporal changes and relationship need to be further modeled. Fortunately, the convolution operations keep the temporal sequences of the extracted features, so these discrete spatial-temporal features can be processed as a time sequence series. The LSTM network is an efficient architecture to suit this requirement.
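A sketch of these three blocks in Keras is shown below; the kernel depths (2/4/8), filter counts (4/8/16) and pooling sizes follow the text, while padding, strides and activations are not specified in the paper and are assumptions here.

```python
from tensorflow.keras import layers

# Input: a whole video of arbitrary length, frames of 120 x 160 RGB pixels.
inputs = layers.Input(shape=(None, 120, 160, 3))

x = layers.Conv3D(4, (2, 3, 3), padding='same', activation='relu')(inputs)
x = layers.MaxPooling3D(pool_size=(1, 2, 2))(x)   # block 1: spatial pooling only

x = layers.Conv3D(8, (4, 3, 3), padding='same', activation='relu')(x)
x = layers.MaxPooling3D(pool_size=(2, 2, 2))(x)   # block 2: temporal pooling 2

x = layers.Conv3D(16, (8, 3, 3), padding='same', activation='relu')(x)
x = layers.MaxPooling3D(pool_size=(4, 2, 2))(x)   # block 3: temporal pooling 4
```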
LSTM Architecture
While the LSTM was initially developed to solve the vanishing gradient problem in training traditional RNNs, its insensitivity to gap length forms a big advantage over other sequence learning methods in many applications. A single memory unit in the LSTM architecture consists of a cell state and its three gates: an input gate, an output gate and a forget gate. Figure 3 illustrates the structure of a memory unit with the operations defined as follows:

$f_t = \sigma(w_f \cdot [h_{t-1}, x_t] + b_f)$

$i_t = \sigma(w_i \cdot [h_{t-1}, x_t] + b_i)$

$o_t = \sigma(w_o \cdot [h_{t-1}, x_t] + b_o)$

$c_t = f_t * c_{t-1} + i_t * \tanh(w_c \cdot [h_{t-1}, x_t] + b_c)$

$h_t = o_t * \tanh(c_t)$

where $i_t$ is the input gate, $o_t$ is the output gate, $f_t$ is the forget gate, $\sigma$ is the sigmoid function, $\tanh$ is the hyperbolic tangent function, $c_t$ and $h_t$ are the cell state and the hidden state of the time step $t$, $x_t$ is the input at time step $t$, $w$ and $b$ are the weights, and the operator $*$ stands for the element-wise product. Intuitively, the input information is selectively chosen, discarding the useless part, and then combined with the prior information from the precedent unit through a series of nonlinear functions; the prior information is stored in the hidden state extracted from the previous inputs. Along with the training of the LSTM, the hidden states store all the useful information from the previous sequences and can be seen as a substantial summary of those video frames, which is essential for the detection of anomalies.

Therefore, in the second part of the proposed framework, the features extracted in the first part are flattened according to their temporal sequence, and the resulting series is fed into the LSTM network for training. A total of 1024 units are used in the network to capture the memory features of each time step along the timeline; therefore, along with the training of the LSTM network, the important spatial-temporal features for each time step are stored in the hidden state as a 1024-dimension vector. Given that videos are weakly classified, the state of the last cell of the LSTM network, which corresponds to the reminiscent information at the end of a video, may not contain any useful information for detecting anomalies that happened in the video (a case in point can be seen in Figure 1). Hence, instead of using only the states of the last cell of the LSTM network, all the hidden states are taken into account, since they contain all the necessary information for detection.
The hidden states are then put into a global max-pooling layer to extract the most prominent spatial-temporal features among different time phases. The output of the second part is a 1024-dimensional vector that represents the highly condensed contents of the video covering the abstract spatial-temporal relationships of the video frames and the most prominent features ready to be used for classification. Such a pooling operation in the hidden states aims to identify the anomalies even if they happen in a short time slot of a video; it does not require the video to be divided into a pre-defined number of segments and, therefore, is more flexible in finding anomalies when they are presented along with many normal scenes or events in a weakly supervised video.
Dense Layers
The third part of the framework is composed of two dense layers. The first layer contains 128 units and uses the rectified linear unit (ReLU) as the activation function,

$ReLU(x) = \max(0, x)$

and the second layer, which is also the last layer of the framework, uses the sigmoid function, $\sigma(x) = 1/(1 + e^{-x})$, to generate the final score.
The final output of the proposed framework is a value ranging from 0 (zero) to 1 (one) to indicate whether the video contains anomalies; in the ideal case of binary classification, a score of 1 (one) means that a video contains anomalies, and a score of 0 (zero) means no anomaly in the video. Therefore, the following binary cross-entropy loss is used as the loss function:

$L = -[t \log(p) + (1 - t) \log(1 - p)]$

where t stands for the ground-truth score of the training sample and p stands for the output score of the framework. With the three-part architecture and the loss function defined above, the proposed framework can be trained on proper datasets and used for detection on new videos. The small kernel sizes of the architecture facilitate computation and implementation, enabling high efficiency with potential use for both offline and online detection.
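Continuing the convolutional sketch above, the second and third parts and the training configuration described under Data Training might be assembled as follows; everything beyond the stated layer sizes, loss and SGD learning rate is an assumption.

```python
from tensorflow.keras import layers, models, optimizers

x = layers.TimeDistributed(layers.Flatten())(x)     # one feature vector per step
x = layers.LSTM(1024, return_sequences=True)(x)     # keep all hidden states
x = layers.GlobalMaxPooling1D()(x)                  # most prominent features
x = layers.Dense(128, activation='relu')(x)
outputs = layers.Dense(1, activation='sigmoid')(x)  # anomaly score in [0, 1]

model = models.Model(inputs, outputs)
model.compile(optimizer=optimizers.SGD(learning_rate=2e-4),
              loss='binary_crossentropy')
# Training uses one whole video per step, i.e. batch size 1.
```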
Results
The proposed framework followed a sound procedure to extract the critical features for anomaly detection. To demonstrate its effectiveness and objectively compare with other state-of-the-art methods, experiments were carried out on two currently available weakly supervised datasets: UCF-Crime and XD-Violence datasets.
The UCF-Crime dataset is a large-scale video dataset introduced in [5]. It consists of 1900 long and untrimmed real-world surveillance videos with a total duration of 128 h. The dataset has been further divided into 13 types of anomalies including abuse, arson, robbery, and road accident, and can be used for activity recognition. However, in our experiments, we discarded these further classifications and only considered a video as either normal or abnormal. As an objective comparison to the performance of other methods, the division of training and testing set follows the way defined in [5]: the training set contains 810 abnormal videos and 800 normal videos, while the testing set contains 140 abnormal videos and 150 normal videos.
XD-Violence is another large-scale video dataset proposed for violence detection [7]. The dataset contains 4754 videos with a total duration of 217 h; besides the videos, audio signals were also provided so that multi-model fusion can be used to improve the detection accuracy. The videos were acquired from multi scenarios, including clips from games, movies, and YouTube. The training set is composed of 3953 videos; among them, 2048 videos contain no anomalies while the remaining 1905 ones contain different levels of violence. The dataset also provides sub-classifications according to the violence type; but, like in the UCF-Crime dataset, such information was discarded in the experiment. The testing set contains 300 normal videos and 500 abnormal videos.
The two aforementioned datasets were selected because both are large-scale and contain a considerable number of training samples, so the framework is less likely to have problems of undertrained or overfitting.
Data Pre-Processing and Augmentation
The training of the proposed framework was straightforward and end-to-end. A video was treated as an integral input formed by sequential video frames and fed to the network. The aspect ratio of a video was assumed to be the traditional 4:3; so, for each video frame, the width and the height were rescaled to 160 and 120 pixels, respectively. All the color channels of the video frame were divided by 255 to normalize their values into the range [0, 1].
Given that the training videos were weakly labeled, the exact spatial and temporal locations of the activities or objects that determine the video attribute are unknown. Consequently, some data augmentation techniques, like cropping and translation, are not applicable because they may modify sensitive information of an event or object. Nevertheless, a safe way to extend the training set was adopted: a new training sample is generated by horizontally flipping all the video frames of a video. Such a mirrored change loses no critical information for detection, nor does it alter any critical features of activities, since the background and context undergo the same change. This augmentation benefits the training because it emphasizes the detection of events and activities themselves, without adding any artificial interpolation information to the process.
Data Training
Following the procedure of LSTM, each video was fed into the framework as one integral input, so the batch number was set to 1 (one). The stochastic gradient descent (SGD) algorithm was used as the optimizer for both datasets with a learning rate of 0.0002. Hyper-parameters for each layer were fixed as introduced in Section 2 for both datasets.
The proposed framework places no restriction on the number of frames contained in the input video; the input videos can therefore have any duration and number of frames. However, if a video is too long, the combined sequential frames become a burden to CPU/GPU memory and could cause an overflow, either during training when updating the weights or during testing when calculating the final score. To handle this problem, the maximum number of frames in an input video was set to 4000; if a video contains more than 4000 frames, it is split into clips of 4000 frames each (the last and the second-to-last clip may overlap to satisfy this requirement). In each training epoch, one clip of an oversized video was randomly selected and fed into the network. For testing, each split clip was processed separately, and the final score of the video was defined as the maximum of these clip scores, S = max_i S_i, where S_i is the score of the i-th split video clip. This strategy was adopted solely because of the computational limits of the hardware.
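A sketch of this splitting-and-aggregation strategy, with hypothetical helper names and the 4000-frame limit described above:

```python
def split_into_clips(num_frames, max_len=4000):
    """Return (start, end) indices of clips containing exactly max_len frames.

    The final clip is shifted backwards so it still holds max_len frames,
    which lets it overlap with the previous one, as described above.
    """
    if num_frames <= max_len:
        return [(0, num_frames)]
    starts = list(range(0, num_frames - max_len + 1, max_len))
    if starts[-1] + max_len < num_frames:
        starts.append(num_frames - max_len)  # overlapping final clip
    return [(s, s + max_len) for s in starts]

def video_score(clip_scores):
    """Final score S = max_i S_i over the split clips."""
    return max(clip_scores)
```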
Quantitative Analysis
The proposed framework outputs a score ranging from 0 (zero) to 1 (one) that can be interpreted as the probability of a video containing an anomaly. For such a binary classification, the area under the ROC curve (AUC) is a conventional index of classifier accuracy.
The receiver operating characteristic (ROC) curve is a graph showing the performance of a classifier at different thresholds; the vertical axis is the true positive rate (TPR), also called sensitivity, and the horizontal axis is the false positive rate (FPR). TPR and FPR are defined as TPR = TP/(TP + FN) and FPR = FP/(FP + TN), where TP, FP, TN and FN stand for the numbers of true positive, false positive, true negative, and false negative video samples in the testing set, respectively. The AUC provides an aggregate index of performance across all possible classification thresholds by measuring the area underneath the ROC curve from (0,0) to (1,1). The AUC shows how well the classifier performs without focusing on a specific threshold, indicating, therefore, the overall performance of a classifier. Given that the AUC is a commonly adopted index for the UCF-Crime dataset, Figure 4a shows the evolution of the ROC curves over the 50 training epochs; the highest AUC value is 0.8523. Table 1 lists the experimental results of the proposed framework along with the results of other state-of-the-art methods reported in the literature. To the best of our knowledge, it is currently among the best results on this dataset.
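Under the standard definitions above, these quantities can be computed with scikit-learn; the label and score arrays below are purely illustrative:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# y_true: 0/1 video labels; y_score: predicted anomaly scores (illustrative).
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # TPR = TP/(TP+FN), FPR = FP/(FP+TN)
auc = roc_auc_score(y_true, y_score)               # area under the ROC curve
print(f"AUC = {auc:.4f}")
```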
Figure 4. Performance on the two large-scale datasets: (a) the evolution of the ROC curves during training on the UCF-Crime dataset (the curve color transits from green, early epochs, to red, late epochs); (b) the evolution of the PRC curves during training on the XD-Violence dataset (same color scheme).

Table 1. A comparison between the performance of the proposed framework and other state-of-the-art methods on the UCF-Crime dataset (best value in bold).
| Method | Main Features | AUC (%) |
| --- | --- | --- |
| [5] | C3D [24] | 75.41 |
| [20] | C3D, TCN | 78.66 |
| [23] | TSN | 82.12 |
| [25] | C3D | 83.03 |
| [26] | I3D | 82.30 |
| [27] | C3D/I3D, RTFM | 84.03 |
| Our method | 3D Convolution, LSTM | 85.23 |

For the XD-Violence dataset, the average precision (AP) of the precision-recall curve (PRC) is more frequently used to show the performance of a classifier. In a PRC, the TPR (recall) becomes the horizontal axis, and the vertical axis is the precision, calculated as Precision = TP/(TP + FP), where TP and FP follow the definitions in Equations (10) and (11). The main difference between a ROC curve and a PRC is that the number of true negative samples is not used in the PRC, so the PRC focuses more on the positive cases and is preferred when there is class imbalance. Figure 4b illustrates the evolution of the PRC curves during training, and Table 2 presents the performance of the proposed framework along with the results reported by other existing methods. One can see that the proposed framework also achieved the highest AP score, an impressive improvement to 0.9517 from the baseline of 0.73 on this dataset. The results on both datasets demonstrate the effectiveness of the proposed framework.
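The AP can be computed analogously with scikit-learn, reusing the illustrative arrays from the ROC example above:

```python
from sklearn.metrics import precision_recall_curve, average_precision_score

# Same illustrative y_true / y_score arrays as in the ROC example.
precision, recall, _ = precision_recall_curve(y_true, y_score)  # precision = TP/(TP+FP)
ap = average_precision_score(y_true, y_score)                   # area under the PRC
print(f"AP = {ap:.4f}")
```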
Further Discussions
As can be seen from the data in Tables 1 and 2, the proposed framework outperformed other state-of-the-art methods. It achieved a higher accuracy on the XD-Violence dataset than on the UCF-Crime dataset. One possible reason is that the videos of the UCF-Crime dataset have lower resolutions, as they were mainly acquired by surveillance cameras, while most of the videos in XD-Violence were acquired by high-definition cameras and therefore represent the ongoing objects and events more clearly. Also, the anomalies in the UCF-Crime dataset were captured from different view angles, sometimes in far-field view, which adds difficulty to detection.
Meanwhile, the XD-Violence dataset focuses on anomalies involving violent activities, while the UCF-Crime dataset covers many more types of anomalies, so the large inter-class variance among these types can decrease detection performance. Also, although the UCF-Crime dataset contains a decent total number of anomalous videos, the number of videos in each sub-category of anomaly remains small. Hence, the trained framework can fail to detect certain types of anomalies, and the intra-class variance within the same type of anomaly may further decrease its performance. In summary, larger datasets that cover more samples and more types of anomalies are still needed to further improve detection accuracy.
Given that the training of the new framework shared the same configuration for the two datasets, it is interesting to ask whether detection performance can be further improved if the two training datasets are merged. The results of combined training are illustrated in Figure 5. With more available samples and the diverse backgrounds of the two datasets, the training took longer to reach stable performance. One can see from the evolution of the curves that the combined training performed qualitatively better on the XD-Violence testing data (blue over red) but worse on the UCF-Crime testing data (blue under red). Quantitative analysis showed that the best AUC on the UCF-Crime dataset dropped to 0.8236, while the best AP on the XD-Violence dataset increased to 0.9528, slightly higher than the result obtained before. The two datasets thus show opposite tendencies of improvement. The combined training increased the true positive rate on the XD-Violence dataset without increasing the false positive rate too much, which leads to a higher AP score. Nevertheless, since UCF-Crime covers more types of anomalies, the false positive rate was higher than with the previous training, meaning that certain anomalies could not be detected without misclassifying normal videos, so the overall performance decreased. A possible reason for this phenomenon is that the normal training videos came more from the XD-Violence dataset than from the UCF-Crime dataset (2048:800); this imbalance may induce different criteria for deciding whether a video is normal in the UCF-Crime testing data. Regardless, even with the lowered AUC score, the performance of the newly proposed framework remains state-of-the-art.

Figure 5. Results of combined training (curves are presented in the same way as in Figure 4 to show the differences; the curve color transits from dark, early epochs, to blue, late epochs).
Conclusions and Future Work
An efficient new framework was proposed to perform video anomaly detection. 3D convolutions and the LSTM architecture were used to extract the spatial and temporal features of videos for detection. The proposed framework follows the natural way of detecting anomalies in a video and imposes no restrictions or special pre-processing requirements on the input video, including its temporal duration. It has a concise structure and is easy to implement; this efficient structure enables both offline and online detection. Experiments on two large-scale weakly supervised datasets were carried out, and the results demonstrated its effectiveness over other state-of-the-art methods. An exhaustive search of the model's hyperparameter space may further improve the performance of the proposed method. Our future work will explore this aspect and will be devoted to the fusion of multi-channel information for detection, for example, by combining the videos with the sounds from built-in microphones, as in the work carried out in [7] with the XD-Violence dataset.
Video anomaly detection has been a research focus due to large demands from different applications. The current framework achieved state-of-the-art performance with a straightforward and effective architecture, but its performance in detecting certain types of anomalies is still unsatisfactory. Further improvements and advances in video anomaly detection methods rely on the availability of more large-scale video datasets that include sufficient training samples and cover various types of anomalies. To summarize, the main contributions of this study are:
• A new framework that provides an effective way of detecting anomalies by combining three-dimensional convolutions and the LSTM network.
• A structure with high computational efficiency, which enables its application to videos with different resolutions and for different tasks.

Funding: This article is a result of the project Safe Cities-"Inovação para Construir Cidades Seguras", with reference POCI-01-0247-FEDER-041435, co-funded by the European Regional Development Fund (ERDF), through the Operational Programme for Competitiveness and Internationalization (COMPETE 2020), under the PORTUGAL 2020 Partnership Agreement.
Institutional Review Board Statement: Not applicable.
| 7,981.4 | 2021-11-01T00:00:00.000 | [ "Computer Science" ] |
Trigger factor assisted soluble expression of recombinant spike protein of porcine epidemic diarrhea virus in Escherichia coli
Background Porcine epidemic diarrhea virus (PEDV) is a highly contagious enteric pathogen of swine. The spike glycoprotein (S) of PEDV is the major immunogenic determinant that plays a pivotal role in the induction of neutralizing antibodies against PEDV and is therefore an ideal target for the development of a subunit vaccine. In an attempt to develop a subunit vaccine for PEDV, we cloned two different fragments of the S protein and expressed them as glutathione S-transferase (GST)-tagged fusion proteins, namely rGST-COE and rGST-S1D, in E.coli. However, the expression of these recombinant protein antigens using a variety of expression vectors, strains, and induction conditions invariably resulted in inclusion bodies. To achieve soluble expression of the recombinant proteins, several chaperone co-expression systems were tested in this study. Results We first tested various chaperone co-expression systems and found that co-expression of trigger factor (TF) with the recombinant proteins at 15 °C was the most effective for soluble production of rGST-COE and rGST-S1D, compared to the GroEL-ES and DnaK-DnaJ-GrpE/GroEL-ES systems. The soluble rGST-COE and rGST-S1D were purified using glutathione Sepharose 4B with yields of 7.5 mg/l and 5 mg/l, respectively. The purified proteins were detected by western blot using a mouse anti-GST mAb and pig anti-PEDV immune sera. In an indirect ELISA, the purified proteins showed immune reactivity with pig anti-PEDV immune sera. Finally, immunization of mice with 10 μg of purified proteins elicited highly potent serum IgG and serum neutralizing antibody titers. Conclusions In this study, soluble production of the recombinant spike proteins of PEDV, rGST-COE and rGST-S1D, was achieved by using a TF chaperone co-expression system. Our results suggest that soluble rGST-COE and rGST-S1D produced by co-expressing chaperones may have the potential to be used as subunit vaccine antigens. Electronic supplementary material The online version of this article (doi:10.1186/s12896-016-0268-7) contains supplementary material, which is available to authorized users.
Background
Porcine epidemic diarrhea (PED) is a highly infectious and contagious enteric disease of swine [1]. The disease is characterized by severe diarrhea, vomiting, dehydration and death, with a high mortality rate of more than 90 % in suckling piglets [2]. The first case of PED was reported in England in the early 1970s [3]. Since then, the disease has spread to Europe and most of the Asian swine-raising countries and has severely affected the swine industry in Asia [4,5]. In 2013, PED suddenly occurred in the United States and rapidly spread across the country [6] and to neighboring countries [7,8], and subsequently to Asian countries including South Korea [9], Japan and Taiwan [10], leading to significant economic losses in the global swine industry [11]. Thus, it is important to develop an effective vaccine for the prevention of PED.
A coronavirus named porcine epidemic diarrhea virus (PEDV) was identified as the causative agent of PED in the late 1970s [2]. The virus possesses four structural proteins: the 150-220 kDa spike (S) glycoprotein, the 7 kDa envelope (E) protein, the 20-30 kDa membrane (M) protein and the 58 kDa nucleocapsid (N) protein [12]. The S protein is a transmembrane glycoprotein localized on the virion surface, which can be divided into the S1 (aa 1-789) domain and the S2 (aa 790-1383) domain [13,14]. The S protein plays a pivotal role in cellular receptor binding, viral fusion and, most importantly, the induction of neutralizing antibodies [15,16]. A neutralizing epitope region, COE (aa 499-638), corresponding to the neutralizing epitope of transmissible gastroenteritis virus (TGEV), was identified based on nucleotide sequence homology analysis [17]. Subsequently, a novel neutralizing epitope region, S1D (aa 636-789), on the S1 domain was reported to have the capacity to induce neutralizing antibodies against PEDV [13]. In addition, a B-cell epitope, 2C10 (aa 1,368-1,374), on the S2 domain was also reported to induce neutralizing antibodies [18]. Therefore, the spike protein is considered a primary target for the development of a subunit vaccine against PEDV.
Up to the present, a number of expression systems have been used to produce the S protein of PEDV, such as mammalian cells [16], transgenic plants [19], yeast [20] and E.coli [13,17]. Despite the absence of post-translational modifications (such as phosphorylation, acetylation, and glycosylation), expression of recombinant proteins in E.coli has many significant benefits over other expression systems in terms of cost, ease of use, and scale [21]. However, recombinant protein overexpression in E.coli often leads to misfolding of the protein of interest into biologically inactive aggregates known as inclusion bodies (IBs) [22]. Various expression approaches have been suggested to prevent IB formation, including the use of solubility-enhancing tags, lowering the induction temperature, modulating the inducer concentration and changing to specialized expression hosts [23]. Moreover, many refolding methods have been explored for the recovery of soluble proteins from purified IBs [24,25]. However, these approaches are not always effective, since even the most robust protocols refold only a small fraction of the input protein, and it is difficult to purify the refolded fraction [26]. As an alternative strategy, chaperone co-expression has been proposed to facilitate soluble expression of recombinant proteins, with success for many different types of proteins [27-29]. Chaperones are complex molecular machines that assist the folding of newly synthesized proteins to the native state and provide a quality control system that refolds misfolded and aggregated proteins [30,31]. In the E.coli cytosol, a ribosome-associated folding catalyst known as trigger factor and molecular chaperone teams such as DnaK-DnaJ-GrpE and GroEL-GroES are profoundly involved in the folding and refolding process [32-35].
In the present study, we report the results of the chaperone-assisted production of soluble rGST-COE and rGST-S1D in E.coli as well as the evaluation of immunogenicity and efficacy of purified recombinant proteins as a subunit vaccine antigen.
Results and discussion
Chaperone assisted expression of soluble rGST-COE and rGST-S1D
Based on the sequence information of the spike protein of the PEDV CV777 strain, we constructed the pG-COE and pG-S1D expression plasmids, producing rGST-COE (containing 138 amino acids spanning residues 499-636) [17] and rGST-S1D (containing 153 amino acids spanning residues 637-789) [13], respectively. However, unlike the previous reports [13,17], our initial attempts to produce rGST-COE and rGST-S1D using a pGEX 6p-1 vector system under various induction conditions resulted in IBs (data not shown). Since both the COE and S1D regions contain 4 cysteine residues in the protein sequence, we also expressed these clones in the E.coli Rosetta-gami™2 (DE3) (Novagen, Darmstadt, Germany) strain, which is known to enhance disulfide bond formation in the cytoplasm, although the yield of soluble proteins was negligible (data not shown).
Previously, chaperone co-expression systems have been reported to improve the solubility of various aggregation-prone recombinant proteins, such as human interferon-gamma, mouse endostatin, human ORP150 and human lysozyme [27,36]. Therefore, we decided to test the effect of chaperone systems, namely trigger factor (TF), GroEL-ES, DnaKJE/GroEL-ES and GroEL-ES/TF, on the production of soluble rGST-COE and rGST-S1D. The pG-COE and pG-S1D expression plasmids were transformed into E.coli BL21, BL21/pTF16, BL21/pGro7, BL21/pG-KJE8 and BL21/pG-Tf2 chaperone competent cells. As shown in Fig. 1, we found that co-expression of TF with the recombinant proteins at 15 °C was the most effective for soluble production of rGST-COE and rGST-S1D compared to the other chaperone combinations (Fig. 1). The majority of the recombinant protein was present in the insoluble fraction when expressed alone, while co-expression with TF resulted in up to 84 % soluble fraction for rGST-COE and 41 % for rGST-S1D, compared to 19-30 % for the other chaperone systems. The total amount of recombinant protein expressed decreased to 40-50 % of the value obtained without chaperone induction. Interestingly, the TF system is a cold-shock chaperone, while the GroEL-ES and DnaK-DnaJ-GrpE systems belong to the heat-shock chaperones, and the ratios of chaperone combinations are critical in the folding or refolding process [30,37]. In this study, as we used a commercial system, the ratios of chaperones and other conditions were not optimized for PEDV antigen expression. Nonetheless, the TF system is simple, was the most effective among the tested conditions for soluble expression of the PEDV antigens, and does not require co-chaperones or ATP, in contrast to the GroEL-ES and DnaK-DnaJ-GrpE systems.
Furthermore, TF, a ribosome-associated chaperone, is the first chaperone that interacts with nascent polypeptide chains and assists co-translational protein folding [38]. Thus, TF may be essential for the correct folding of rGST-COE and rGST-S1D during the initial de novo folding steps [27,39]. It has been reported that formation of the GroEL/TF complex can further enhance the binding affinity of GroEL to unfolded proteins and facilitate protein folding or denaturation. However, co-expression of the GroES-GroEL/TF system (E.coli BL21/pG-Tf2) was not successful in our case due to the growth inhibition observed upon chaperone induction (data not shown).
As shown in Fig. 2, SDS-PAGE revealed the expression of TF (56 kDa) induced by L-arabinose and the expression of rGST-COE (Fig. 2a) and rGST-S1D (Fig. 2b) induced by IPTG. Neither the recombinant proteins nor the chaperone TF were produced without IPTG or L-arabinose induction, respectively. Insoluble recombinant proteins were produced with IPTG in the absence of L-arabinose, whereas soluble recombinant proteins were produced only in the presence of TF. These results confirmed that co-expression of TF greatly improved the solubility of rGST-COE and rGST-S1D.
Optimization of expression conditions for soluble rGST-COE and rGST-S1D production
To maximize the soluble expression of rGST-COE and rGST-S1D, the expression conditions for E.coli BL21/pTf16/pG-COE and E.coli BL21/pTf16/pG-S1D were optimized by regulating the induction temperature, IPTG concentration, OD at induction and harvest time.

Fig. 1 Chaperone-assisted expression of soluble rGST-COE (a) and rGST-S1D (b). Chaperone proteins were induced at 37 °C by adding L-arabinose (0.5 mg/ml) or tetracycline (10 ng/ml) prior to the expression of recombinant proteins. Target proteins were induced by adding 0.1 mM IPTG at 15 °C for 24 h. Soluble (S) and insoluble (I) fractions were analyzed by SDS-PAGE. The protein bands for rGST-COE, rGST-S1D and chaperone proteins are indicated on the right side of the gel. Lanes: control, no chaperone; +TF, co-expression with trigger factor; +Gro, co-expression with GroEL-GroES complex; +G-KJE, co-expression with DnaKJE/GroEL-GroES complex.

The rate of expression and the culture temperature can affect the proper folding and IB formation of most recombinant proteins [40-42]. To determine the optimal induction temperature, rGST-COE was induced at different temperatures (15, 21, 28 and 37 °C), while TF was induced with L-arabinose from the start of the culture. SDS-PAGE showed that the optimal temperature for soluble rGST-COE expression was 15 °C, as the solubility of rGST-COE increased progressively as the temperature decreased from 37 °C to 15 °C (Additional file 1: Figure S1). Interestingly, the expression level of TF also increased as the temperature decreased. Indeed, it has been reported that TF, unlike other chaperones in E.coli, is a cold-shock chaperone induced at low temperature [37]. Our data suggest that the improved solubility of rGST-COE at low temperature may be attributed to the elevated expression level of TF, and not to the reduced expression rate at low temperature.
After determining the optimal expression temperature, the effects of IPTG concentration (0.1, 0.4, 0.7 and 1.0 mM), OD at induction (OD600 of 0.6, 0.9, 1.2 and 1.5) and harvest time (12, 24, 36 and 48 h after induction) on the efficiency of soluble expression were examined. The optimal conditions for soluble expression were an IPTG concentration of 0.1 mM, induction starting at OD600 0.6, and an induction time of 24 h for rGST-COE or 12 h for rGST-S1D at 15 °C (Fig. 3).
Purification and western blot confirmation of rGST-COE and rGST-S1D
The soluble rGST-COE and rGST-S1D fractions were purified by affinity column chromatography using glutathione Sepharose 4B. The soluble fraction obtained after cell lysis and centrifugation was loaded on the column pre-equilibrated with buffer A (140 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4 and 1.8 mM KH2PO4 at pH 7.3). The column was washed successively with buffer A. The bound proteins were eluted using buffer B (50 mM Tris-HCl and 10 mM glutathione at pH 8.0), fractions containing purified proteins were collected, and the purity of the protein was analyzed by SDS-PAGE (Additional file 1: Figure S2). As determined by Bradford assay, the yields of purified soluble rGST-COE and rGST-S1D in flask culture were 7.5 mg/l and 5 mg/l, respectively. The purity of the pooled fractions was >90 %, and they were used without further purification. One microgram of each purified protein was separated by SDS-PAGE (Additional file 1: Figure S3), and the target proteins were detected using a mouse anti-GST monoclonal antibody and pig anti-sera against a commercial PEDV live attenuated vaccine in a western blot analysis (Fig. 4). The rGST protein was included as a control for the western blot analysis. No clear bands were detected with the pre-immune sera used as a control in the western blot (data not shown). The pig anti-PEDV immune sera showed a weak signal, high background and some non-specific reactivity in the western blot (Fig. 4b) compared to the ELISA result (Fig. 6), so the two results do not fully match. Such an outcome is nevertheless plausible, because the pig anti-PEDV sera were raised against a live attenuated virus and therefore contained polyclonal antibodies recognizing both conformational and linear epitopes on the native PEDV spike protein. Thus, the pig anti-PEDV sera could detect both conformational and linear epitopes on rGST-COE and rGST-S1D in the ELISA, whereas they could react only with linear epitopes on the denatured rGST-COE and rGST-S1D in the western blot. In this context, it is possible that the reaction in the ELISA was much stronger than that in the western blot.
Neutralizing antibody response to rGST-COE and rGST-S1D
To determine the immunogenicity of rGST-COE and rGST-S1D, mice were immunized subcutaneously with 10 μg of purified protein three times at 2-week intervals. Mouse anti-sera were collected before immunization and at 14, 28, and 42 days after the first immunization. The 96-well plates were coated with purified rGST-COE or rGST-S1D and reacted with the corresponding mouse anti-sera diluted 1:200 to 1:25,600 for the detection of antigen-specific serum IgG levels. Both rGST-COE and rGST-S1D strongly elicited antigen-specific serum IgG production after the third injection of antigen (Fig. 5). In the indirect ELISA, rGST-COE and rGST-S1D also showed significant reactions with pig anti-PEDV immune sera (Fig. 6).
To test whether immunization of mice with rGST-COE and rGST-S1D could elicit specific neutralizing antibodies against PEDV, terminal serum samples collected from the immunized mice were subjected to the serum neutralization assay. As shown in Fig. 7, mice immunized with rGST-COE and rGST-S1D showed significantly higher SN titers than control mice that received PBS.
Even though Sun et al. [13] argued that only S1D is effective for the induction of neutralizing antibodies, our study confirms the previous report showing that both the COE and S1D fragments of the PEDV spike protein act as neutralizing epitopes [17]. However, unlike the reports of soluble expression of these antigens [13,17], expression of these recombinant antigens with a variety of expression vectors, strains, and induction conditions invariably resulted in inclusion bodies without co-expression of a proper chaperone such as TF. Even though we exactly matched the published conditions in terms of vector, amino acid sequence including the GST fusion, E.coli strain, and optimized induction conditions, we could not reproduce soluble production without chaperone support.
Conclusions
In this study, two recombinant fragments of the PEDV spike protein, COE and S1D, were produced in soluble form using a TF chaperone co-expression system in E.coli. Without TF, these antigens invariably formed inclusion bodies in all expression vectors, host strains, and induction conditions tested. Co-expression of TF with the recombinant proteins at 15 °C was the most effective for soluble production of rGST-COE and rGST-S1D compared to the GroEL-ES and DnaK-DnaJ-GrpE/GroEL-ES systems. The soluble rGST-COE and rGST-S1D were purified and detected by western blot using a mouse anti-GST mAb and pig anti-PEDV immune sera, and showed strong immune reactivity with pig anti-PEDV immune sera in an indirect ELISA. Finally, immunization of mice with 10 μg of the purified proteins elicited highly potent serum IgG and serum neutralizing antibody titers. Our results suggest that soluble rGST-COE and rGST-S1D have the potential to be used as efficient subunit vaccine antigens for PEDV prevention. Further studies will evaluate the efficacy of these recombinant antigens in target animals for subunit vaccine development.

Fig. 4 Detection of purified rGST-COE and rGST-S1D proteins by western blot analysis using mouse anti-GST monoclonal antibody (a) and pig anti-PEDV immune sera (b). Lanes: 1, rGST-COE; 2, rGST-S1D; 3, rGST (control).
Strains
The expression plasmids pG-COE and pG-S1D were constructed by ligating pGEX-6p-1 (GE Healthcare, Uppsala, Sweden) with the COE (aa 499-636) and S1D (aa 637-789) regions, respectively, at the BamHI and XhoI sites. The COE and S1D fragments, based on the nucleotide sequence of the spike protein of the PEDV CV777 strain (GenBank Acc. No. AF353511), were synthesized after codon optimization by Genscript (Piscataway, NJ, USA). These expression plasmids produced the recombinant spike proteins rGST-COE and rGST-S1D as glutathione S-transferase (GST)-tagged fusion proteins under the tac promoter (Ptac). Each plasmid was transformed into E.coli BL21, BL21/pTf16, BL21/pGro7, BL21/pG-KJE8 and BL21/pG-Tf2 competent cells (TaKaRa, Japan). The E.coli BL21/pTf16 strain produced trigger factor (TF) under an L-arabinose-inducible promoter, whereas the E.coli BL21/pGro7 and BL21/pG-KJE8 strains produced the GroEL-GroES and DnaK-DnaJ-GrpE/GroEL-GroES chaperone complexes, respectively. The E.coli BL21/pG-Tf2 strain produced the GroEL-GroES/TF complex.

Fig. 5 Detection of antigen-specific serum IgG levels from mice immunized with rGST-COE (a) and rGST-S1D (b) by ELISA. Mice (n = 5) were immunized subcutaneously with 10 μg of purified proteins three times at 2-week intervals. Pre-immune sera and sera collected two weeks after each injection were serially diluted and assayed by ELISA. Error bars indicate the standard deviations of the means.

Fig. 6 Reactivity of purified rGST-COE and rGST-S1D with pig anti-PEDV immune sera in an indirect ELISA. The rGST was included as a negative control. Error bars indicate the standard deviations of the means. The asterisk indicates p values less than 0.05.

Fig. 7 Serum neutralizing (SN) antibody titers from mice immunized with PBS, rGST-COE and rGST-S1D. Terminal serum samples were serially diluted and subjected to the serum neutralization assay. Error bars indicate the standard deviations of the means. The asterisk indicates p values less than 0.05.
Recombinant protein expression
A seed culture was prepared by inoculating a single colony of the recombinant E.coli strain into 5 ml of LB broth containing 100 μg/ml ampicillin in a 50 ml sterile plastic tube and growing it overnight. For the expression of recombinant proteins, a 1 % seed culture was routinely inoculated into 25 ml of LB broth containing 0.5 mg/ml L-arabinose, 100 μg/ml ampicillin and 20 μg/ml chloramphenicol in 100 ml baffled flasks; L-arabinose (0.5 mg/ml) and/or tetracycline (10 ng/ml) were added for the induction of chaperone proteins. Cells were cultured with shaking at 230 rpm and 37 °C. Growth was monitored by measuring the optical density (OD) at 600 nm with a spectrophotometer. When the OD600 of the culture reached the set point, the culture was cooled to 4 °C for 30 min and induced with 0.1 mM IPTG at 15 °C for 24 h.
To check the chaperone effect of trigger factor, E.coli BL21/pTf16/pG-COE and E.coli BL21/pTf16/pG-S1D were shake-cultured in LB broth at 37°C until OD600 reached 0.6, cooled to 4°C for 30 min and induced at 15°C for 24 h with or without IPTG (0.1 mM) and with or without L-arabinose (0.5 mg/ml) to detect the protein expression.
Cells were harvested and disrupted on ice by sonication (VCX 750, SONICS, Newtown, CT, USA) using a program of 45 cycles of 2 s on/5 s off at 20 % amplitude, and centrifuged at 20,000 × g for 15 min at 4 °C. The pellets and supernatants were stored separately at −20 °C until use. Protein concentration was measured by Bradford assay (BioRad, CA, USA) using bovine serum albumin (BSA) as a standard. Protein expression was evaluated using the band intensities of the recombinant protein on sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) from independent and/or parallel induction samples.
Optimization of soluble recombinant protein expression
To find the optimal conditions for the soluble expression of recombinant proteins, various induction conditions for the E.coli BL21/pTf16/pG-COE and E.coli BL21/pTf16/pG-S1D strains were tested by regulating the induction temperature (15 °C for 24 h, 21 °C for 16 h, 28 °C for 8 h and 37 °C for 4 h with 0.1 mM IPTG when the OD600 reached about 0.6), IPTG concentration (0.1, 0.4, 0.7 and 1.0 mM at 15 °C for 24 h of shake culture starting at an OD600 of 0.6), OD at induction (OD600 = 0.6, 0.9, 1.2 and 1.5 at 15 °C for 24 h of shake culture with 0.1 mM IPTG) and harvest time (12, 24, 36 and 48 h after induction at 15 °C with 0.1 mM IPTG).
Protein purification and analysis
Soluble recombinant proteins were purified from the crude cell extracts by glutathione-affinity chromatography using glutathione Sepharose 4B (GE Healthcare, Sweden). The column was equilibrated with 5 column volumes of binding buffer (140 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4 and 1.8 mM KH2PO4, pH 7.3), and the crude soluble proteins were applied. The column was washed with 5 column volumes of binding buffer, and the bound proteins were eluted with elution buffer (50 mM Tris-HCl and 10 mM glutathione, pH 8.0).
Proteins were analyzed by SDS-PAGE on pre-made 12 % polyacrylamide mini-gels (KomaBiotech, Korea) run in a Mini-PROTEAN electrophoresis system (Bio-Rad, CA, USA). Gels were stained with Coomassie Blue R250, and images were analyzed with the image analysis software of a ChemiDoc™ MP System (BioRad, CA, USA). For the western blot assay, proteins were transferred after the SDS-PAGE run to a nitrocellulose membrane (Whatman, USA) in a Mini Trans-Blot system (BioRad, USA). The membrane was then blocked with Tris-buffered saline/0.1 % Tween 20 (TBST) containing 5 % skim milk (BD/Difco, NJ, USA) for 1 h and probed with mouse anti-GST monoclonal antibody (Genscript, USA) and pig anti-sera against PEDV in TBST at 1:1000 and 1:500 dilutions, respectively, for 1 h each at room temperature. HRP-conjugated goat anti-mouse IgG (MerckMillipore, Germany) and goat anti-porcine IgG (KPL, USA) diluted 1:5000 were used as secondary antibodies. Detection was carried out using an ECL detection kit (GE Healthcare, Sweden).
Indirect enzyme-linked immunosorbent assay (ELISA)
The reactivity of the purified proteins with pig anti-sera against PEDV was tested by an indirect ELISA. The 96-well Immuno-plates (SPL, South Korea) were coated with 5 μg/ml of purified proteins in carbonate buffer (pH 9.6) at 4 °C overnight. Plates were blocked with 1 % bovine serum albumin (BSA) in PBS for 2 h at room temperature. Pre-immune and immune sera were obtained from pigs before and after immunization with a commercial PEDV vaccine (Green Cross, South Korea). Sera diluted 1:1000 were added to the corresponding wells and incubated for 1 h at room temperature. HRP-conjugated goat anti-porcine IgG (KPL, USA) at a 1:2500 dilution was added and incubated for 1 h at room temperature. Plates were incubated with the TMB substrate solution (Sigma, USA) for 15 min in the dark, and the reaction was stopped by adding sulfuric acid. Absorbance at 450 nm was measured in an Infinite 200 PRO microplate reader (Tecan, Switzerland).
Mouse immunization and detection of antibody production
Six-week-old female BALB/c mice were purchased from Samtako (Osan, Korea). The experiments were performed in accordance with the guidelines for the care and use of laboratory animals under the approval of animal ethics committee at Seoul National University (SNU-130415-1). The mice, maintained under standard pathogen-free conditions, were provided with free access to food and water during the experiments. Mice (n = 5) were immunized subcutaneously with 10 μg of purified proteins three times at 2-week intervals. Complete Freund's adjuvant (CFA) and incomplete Freund's adjuvant (IFA) were mixed with antigens for the priming and boosting of animals. Sera were collected from the mice at 0, 14, 28 and 42 days after first immunization for the antibody detection and serum neutralization assay.
The induction of antigen-specific serum IgG was measured by ELISA. Plates were coated and blocked using the same method described in the indirect ELISA section. After blocking, 2-fold serially diluted mouse sera were added to the wells and incubated for 1 h at room temperature. HRP-conjugated goat anti-mouse IgG secondary antibody diluted 1:5000 was added and incubated for 1 h at room temperature. The other assay procedures were the same as described in the indirect ELISA section.
Serum neutralization (SN) assay
The presence of PEDV-specific neutralizing antibodies in the sera collected from immunized mice was determined by a serum neutralization assay. Terminal serum samples were inactivated at 56 °C for 30 min, followed by 2-fold serial dilution in serum-free α-MEM containing 1 % antibiotic-antimycotic solution (Invitrogen, USA). PEDV SM98 isolate at 200 TCID50/0.1 ml was mixed with an equal volume of diluted serum. After incubation for 1 h at 37 °C, 0.1 ml of each virus-serum mixture was inoculated onto Vero cell monolayers in 96-well tissue culture plates. After adsorption for 1 h at 37 °C, the virus-serum mixture was removed and the plates were washed 3 times with PBS. Serum-free α-MEM medium containing trypsin (2.5 μg/ml) was then added to each well and incubated for 3-6 days at 37 °C. The SN titers were expressed as the highest serum dilution resulting in the inhibition of the cytopathic effect.
Statistical analysis
The Student's t test was used for all statistical analyses, and p values of less than 0.05 were considered statistically significant.
| 5,874.4 | 2016-05-04T00:00:00.000 | [ "Biology" ] |
On recurrence and transience of multivariate near-critical stochastic processes
We obtain complementary recurrence and transience criteria for processes $X=(X_n)_{n \ge 0}$ with values in $\mathbb R^d_+$ fulfilling a non-linear equation $X_{n+1}=MX_n+g(X_n)+ \xi_{n+1}$. Here $M$ denotes a primitive matrix having Perron-Frobenius eigenvalue 1, and $g$ denotes some function. The conditional expectation and variance of the noise $(\xi_{n+1})_{n \ge 0}$ are such that $X$ obeys a weak form of the Markov property. The results generalize criteria for the 1-dimensional case in [5].
Introduction and main results
For Markov chains with a higher-dimensional state space it is in general difficult to obtain criteria for recurrence or transience which cover a broader class of models. Typically this requires some specific assumptions on the type of model. In this paper we consider discrete time stochastic processes $X = (X_n)_{n \ge 0}$ taking values in the positive orthant $\mathbb{R}^d_+$ (consisting of column vectors) with $d \ge 1$, which obey non-linear equations of the form
$$X_{n+1} = MX_n + g(X_n) + \xi_{n+1}, \quad n \in \mathbb{N}_0. \tag{1.1}$$
Here $M$ denotes a $d \times d$ matrix with non-negative entries and $g : \mathbb{R}^d_+ \to \mathbb{R}^d_+$ a measurable function. Let us successively discuss our assumptions on $M$, $g$ and the random fluctuations $(\xi_{n+1})_{n \ge 0}$. We require that $M$ is a primitive matrix, meaning that for a certain power of $M$ all entries are (strictly) positive. Then it is known from Perron-Frobenius theory that $M$ has left and right eigenvectors $\ell = (\ell_1, \ldots, \ell_d)$ and $r = (r_1, \ldots, r_d)^T$ belonging to some positive eigenvalue and possessing only positive entries. We assume that this eigenvalue is 1: $\ell M = \ell$, $Mr = r$.
Further, $\ell$ and $r$ are unique up to scaling factors. As is customary we choose them such that
$$\ell r = 1. \tag{1.2}$$
For the function $g$ we assume that
$$\|g(x)\| = o(\|x\|) \quad \text{as } \|x\| \to \infty \tag{1.3}$$
with some norm $\|\cdot\|$ on the Euclidean space $\mathbb{R}^d$. As to the random fluctuations we demand that $X$ is adapted to a filtration $\mathbb{F} = (\mathcal{F}_n)_{n \ge 0}$ such that
$$E[\xi_{n+1} \mid \mathcal{F}_n] = 0, \quad E[(\ell\xi_{n+1})^2 \mid \mathcal{F}_n] = \sigma^2(X_n) \quad \text{a.s.} \tag{1.4}$$
for some measurable function $\sigma : \mathbb{R}^d_+ \to \mathbb{R}_+$ fulfilling
$$\sigma(x) = o(\|x\|) \quad \text{for } \|x\| \to \infty. \tag{1.5}$$
In view of applications such as branching processes we might summarize these requirements on the whole as the assumption of near-criticality. Quite a few models fit into this framework. Here we do not dwell on them but refer to the paper [6] and to the literature cited therein. The assumption (1.4) establishes a weak form of the Markov property. We do not assume that $X$ is a Markov chain but just formulate those assumptions which are required for the martingale considerations in our proofs. Certainly applications of our results will typically concern Markov chains.
The aim of this paper is to establish criteria which allow one to decide whether $\ell X_n \to \infty$ is an event of zero probability or not. Loosely speaking these are criteria for recurrence or transience of our models. In the univariate case $d = 1$ this question has been discussed in [5]. Ignoring some side conditions the result there was as follows: If $2xg(x) \le (1-\varepsilon)\,\sigma^2(x)$ for some $\varepsilon > 0$ and for $x$ sufficiently large, then we have recurrence. If on the other hand $2xg(x) \ge (1+\varepsilon)\,\sigma^2(x)$ for some $\varepsilon > 0$ and for $x$ sufficiently large, then there is transience. Heuristically this can be understood as follows: In the first regime it is the noise $\xi_{n+1}$ which dominates the drift $g(X_n)$, while in the second regime it is the other way round. We would like to generalize this dichotomy to the multivariate setting. A possible way of generalization is to suitably convert each of the two conditions to all $x \in \mathbb{R}^d_+$ with sufficiently large norm $\|x\|$, see Klebaner [7] and González et al. [3]. A relaxation of this approach for special choices of $g$ and $\sigma^2$, covering new examples, has been obtained by Adam [1]. Yet one can do with weaker assumptions. The intuition behind this assertion is that our processes behave in a sense one-dimensionally. More precisely, if the event $\ell X_n \to \infty$ occurs, then in view of (1.3) and (1.5) it is the term $MX_n$ which dominates on the right-hand side of (1.1). Thus one would expect that $X_n$ will escape to $\infty$ approximately along the ray $\mathsf{r} = \{\nu r : \nu \ge 0\}$ spanned by the eigenvector $r$ of $M$. This suggests that the two conditions above are required only in certain vicinities of this ray. (The last assertion of Theorem 2 below confirms this heuristic.) To formalize these considerations let us introduce some notation. For any $x \in \mathbb{R}^d$ let
$$\hat{x} := r\ell x, \qquad \check{x} := (I - r\ell)x, \qquad \text{thus } x = \hat{x} + \check{x},$$
with the identity matrix $I$. Note that $\hat{x}$ is the multiple $(\ell x)r$ of the vector $r$ and thus belongs to the ray $\mathsf{r}$. From (1.2), $r\ell\, r\ell = r\ell$, respectively $\hat{\hat{x}} = \hat{x}$, meaning that $r\ell$ is a projection matrix. Moreover $\ell\hat{x} = \ell x$ and $\ell\check{x} = 0$. The two conditions $\hat{x} \in \mathsf{r}$ and $\ell\hat{x} = \ell x$ determine $\hat{x} \in \mathbb{R}^d$ uniquely.
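For completeness, the projection property claimed above follows in one line from (1.2):
$$(r\ell)(r\ell) = r(\ell r)\ell = r\ell, \qquad \text{so } \hat{\hat{x}} = (r\ell)\hat{x} = \hat{x} \quad \text{and} \quad \ell\check{x} = \ell x - (\ell r)\,\ell x = 0.$$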
For convenience we require the additional moment condition (which could be relaxed):
(A1) For some $\delta > 0$ and $c < \infty$,
$$E[\|\xi_{n+1}\|^p \mid \mathcal{F}_n] \le c\,\sigma^p(X_n) \quad \text{a.s., with } p = 2 + \delta.$$
In the case $d = 1$ we have $\check{x} = 0$ and $\ell x \cdot \ell g(x) = xg(x)$, such that we are back at the result from [5]. Note that due to (1.3) the above condition $\|\check{x}\|^2 \le b\,\ell x \cdot \ell g(x)$ applies only to vectors $x \in \mathbb{R}^d_+$ with $\|\check{x}\| = o(\|x\|)$ for $\|x\| \to \infty$. Since also $\check{x} = 0$ for $x \in \mathsf{r}$, the condition defines a certain vicinity of the ray $\mathsf{r}$ (depending on $g$). Outside this region the relation between $g$ and $\sigma^2$ stays arbitrary.
For our second result, on divergence of $(X_n)_{n \ge 0}$, we first rule out an evident degenerate case by means of an assumption (A2). Moreover we strengthen (1.5) to an assumption (A3), where $\delta$ is as in assumption (A1).
Theorem 2.
Let (A1) to (A3) be fulfilled and let $\varepsilon > 0$. Assume that for every $b > 0$ there exists some $a > 0$ such that condition (1.7) holds for all $x \in \mathbb{R}^d_+$ with $\|\check{x}\| \le b\,\sigma(x)$ and $\ell x \ge a$. Then there is a real number $v \ge 0$ such that
$$P\big(\limsup_n \ell X_n \le v \ \text{ or } \ \ell X_n \to \infty\big) = 1.$$
If also $P(\sup_{n \ge 0} \ell X_n > c) > 0$ for every $c > 0$, then $P(\ell X_n \to \infty) > 0$ and
$$P\Big(\frac{X_n}{\|X_n\|} \to \frac{r}{\|r\|} \;\Big|\; \ell X_n \to \infty\Big) = 1.$$
Again we recover for $d = 1$ the corresponding result from [5]. Due to (A3) it is now the condition $\|\check{x}\| \le b\,\sigma(x)$ that defines the vicinity of the ray $\mathsf{r}$ in which $g(x)$ and $\sigma^2(x)$ are interrelated.
Remark. Let us comment on the assumptions of Theorem 2.
1. Obviously (A2) is also a necessary requirement in Theorem 2. Typically it is easily checked in concrete examples. For Markov chains with a countable discrete state space $S \subset \mathbb{R}^d_+$ it says that away from zero there are no absorbing states. In the general case there is the following criterion: (A2) holds if $g(x)$ is uniformly bounded away from zero on sets of the form $\{x \in \mathbb{R}^d_+ : u \le \ell x \le u + 1\}$ with $u > 0$ sufficiently large. For the proof of this claim, adapt the arguments at the end of Section 2 in [5] to the process $(\ell X_n)_{n \ge 0}$.
2. Assumption (A3) cannot be weakened substantially in our general context. This follows from Example C, Section 3 in [5]. We note that (A3) is weaker than the corresponding assumption in [5] for the 1-dimensional case.
3. Remarkably, condition (1.7) cannot be relaxed in our general context. It is not enough to require (1.7) just for some b > 0 as we shall see at the end of this paper by means of a counterexample. It is tempting to conjecture that condition (1.6) cannot be weakened, too.
So far we have not specified any choice of the norm on $\mathbb{R}^d$. This was not necessary, since, as is well known, all norms on a finite-dimensional Euclidean space are equivalent, and one easily convinces oneself that all our conditions and statements involving norms are preserved if one passes to an equivalent norm. Thus, in examples one may work with the most convenient one, e.g. the $l_1$- or $l_2$-norm. For our proofs these norms are not appropriate; we shall utilize a norm specifically suited to our purposes. This norm is introduced in Section 2. The proofs of the theorems are then presented in Sections 3 and 4. They use ideas from [5] and [8] and are based on the construction of suitable Lyapunov functions. Section 5 contains the counterexample.
For notational convenience we use the symbol c for a positive constant which may change its value from line to line.
A useful norm
Let us briefly put together the facts on matrices which we are going to use. Recall that $M$ is a primitive matrix with Perron-Frobenius eigenvalue 1 and corresponding left and right eigenvectors $\ell$ and $r$. Then, as is well known from Perron-Frobenius theory, all other eigenvalues of $M$ have modulus strictly less than 1; the maximum of these moduli is called the spectral radius of the matrix $M - r\ell$. It follows from matrix theory (see [4], Lemma 5.6.10) that one can construct a matrix norm $|||\cdot|||$ on the space of all $d \times d$ matrices such that
$$\rho := |||M - r\ell||| < 1.$$
From this matrix norm we obtain (see [4], Theorem 5.7.13) a functional $\|\cdot\|$ on $\mathbb{R}^d$ via
$$\|x\| := |||C_x|||,$$
where $C_x$ denotes the $d \times d$ matrix having all columns equal to $x$.
This functional is a norm, since the properties of norms transfer from $|||\cdot|||$ directly to $\|\cdot\|$. This is the norm we are going to work with in the sequel. It has the property
$$\|Ax\| \le |||A||| \cdot \|x\| \tag{2.1}$$
for $x \in \mathbb{R}^d$ and any $d \times d$ matrix $A$. Indeed $C_{Ax} = AC_x$, and the property $|||AC_x||| \le |||A||| \cdot |||C_x|||$ of matrix norms gives the claim. In particular
$$\|(M - r\ell)x\| \le \rho\,\|x\|. \tag{2.2}$$
By equivalence of norms we may change from $\|\cdot\|$ to any other norm. In particular there is a constant $\lambda < \infty$ such that
$$\|x\| \le \lambda\,\ell x \quad \text{for all } x \in \mathbb{R}^d_+. \tag{2.3}$$
To see this observe that from the inequality (2.1) we have $\|\check{x}\| \le \gamma\|x\|$ with $\gamma = |||I - r\ell|||$. Also $\|x\|_\ell := \ell_1|x_1| + \cdots + \ell_d|x_d|$ defines a norm on $\mathbb{R}^d$, since $\ell_i > 0$ for all $i = 1, \ldots, d$.
Thus by equivalence of norms we arrive at (2.3).
In order to apply these results to our process $(X_n)_{n \ge 0}$, note that we have $(I - r\ell)M = M - r\ell = M(I - r\ell)$ and $\ell\check{X}_n = 0$, thus, by (2.2) and (2.3),
$$\|\check{X}_{n+1}\| \le \rho\,\|\check{X}_n\| + c\,\ell g(X_n) + \|\check{\xi}_{n+1}\| \tag{2.4}$$
for some $c < \infty$. (Here we need that $g(x)$ has only non-negative components.) Further observe that for any $\mu > 0$ and $a, b \ge 0$ we have
$$(a+b)^2 \le (1+\mu)\,a^2 + (1+\mu^{-1})\,b^2.$$
Applying this estimate twice to the right-hand side of (2.4) we obtain for any $\mu > 0$
$$\|\check{X}_{n+1}\|^2 \le (1+\mu)\rho^2\|\check{X}_n\|^2 + c\,(\ell g(X_n))^2 + c\,\|\check{\xi}_{n+1}\|^2$$
with a suitable $c < \infty$.
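The squaring estimate just used is the standard consequence of the inequality $2ab \le \mu a^2 + \mu^{-1}b^2$ (valid since $(\sqrt{\mu}\,a - b/\sqrt{\mu})^2 \ge 0$); indeed,
$$(a+b)^2 = a^2 + 2ab + b^2 \le a^2 + \mu a^2 + \mu^{-1}b^2 + b^2 = (1+\mu)\,a^2 + (1+\mu^{-1})\,b^2.$$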
Proof of Theorem 1
First observe that if we replace $X_n$ by $X_n' := X_n + r$ for all $n \ge 0$, then equations (1.1) and (1.4) as well as assumption (A1) still hold if $g(x)$ and $\sigma^2(x)$ are replaced by $g'(x) := g(x - r)$ and $\sigma'^2(x) := \sigma^2(x - r)$. Note that the assumptions (1.3) and (1.5) are not affected if $g$ and $\sigma^2$ are substituted by $g'$ and $\sigma'^2$, and the same holds true for the conditions formulated in Theorem 1 if one replaces $\varepsilon$ by $\varepsilon/2$. Thus without loss of generality we may assume $\ell X_n \ge 1$ for all $n \ge 0$ throughout the proof. Then for any $\alpha > 0$
$$L_n := \frac{\|\check{X}_n\|^2}{(\ell X_n)^2} + \alpha \log \ell X_n, \quad n \in \mathbb{N}_0,$$
is a sequence of non-negative random variables. We show that for large $\alpha$ it possesses a supermartingale property. The proof uses the following estimate, where $I(A)$ denotes the indicator variable of an event $A$.
Lemma 2.
If $\alpha$ is chosen large enough, then there is a number $s > 0$ such that $E[L_{n+1} \mid \mathcal{F}_n] \le L_n$ a.s. on the event $\{L_n \ge s\}$.

Proof. Since $\ell M = \ell$ we have the equation
$$\ell X_{n+1} = \ell X_n + \ell g(X_n) + \ell\xi_{n+1}. \tag{3.1}$$
From Section 2,
$$\|\check{X}_{n+1}\|^2 \le (1+\mu)\rho^2\|\check{X}_n\|^2 + c\,(\ell g(X_n))^2 + c\,\|\check{\xi}_{n+1}\|^2 \tag{3.2}$$
for some sufficiently large $c < \infty$. Now $\rho < 1$; thus, if $\mu$ is sufficiently close to 0, … In view of (A1), if we further enlarge $c$, … Next, from (3.1) and Lemma 1 (with $t = \ell X_n + \ell g(X_n)$ and $h = \ell\xi_{n+1}$), for $\eta > 0$,
$$\log \ell X_{n+1} \le \log(\ell X_n + \ell g(X_n)) + \ldots$$
By means of (1.3), (1.4), (A1) and the Markov inequality, and choosing $\eta$ sufficiently small, it follows for $\ell X_n$ sufficiently large … with some $c < \infty$. Because of (1.5) there is a number $s > 0$ such that … for $\ell X_n \ge s$ a.s.
for $\ell X_n \ge s$ and $s$ sufficiently large. If we let $\alpha \ge 6c/\varepsilon - c$ we arrive at … for $\ell X_n \ge s$. We are now ready for the conclusion: If $(\alpha + c)\,\ell g(X_n) \cdot \ell X_n \le \mu\|\check{X}_n\|^2$, then obviously $E[L_{n+1} \mid \mathcal{F}_n] \le L_n$ a.s. for $\ell X_n \ge s$. If on the other hand $\mu\|\check{X}_n\|^2 \le (\alpha + c)\,\ell g(X_n) \cdot \ell X_n$, then by equivalence of norms there is a $b < \infty$ such that $\|\check{X}_n\|^2 \le b\,\ell g(X_n) \cdot \ell X_n$. Now the assumption of Theorem 1 comes into play, and again $E[L_{n+1} \mid \mathcal{F}_n] \le L_n$ a.s., if only $\ell X_n$ is large enough. Thus the claim of the lemma follows.
We complete the proof of Theorem 1 now as in [5]. Suppose that the event $\ell X_n \to \infty$ has positive probability. Then the same holds for the event $L_n \to \infty$, and there is a natural number $N$ such that $P(E) > 0$ for the event
$$E := \{L_n \ge s \text{ for all } n \ge N\} \cap \{L_n \to \infty\}.$$
Define the stopping time
$$T_N := \min\{n \ge N : L_n < s\}.$$
In view of Lemma 2, the process $(L_{n \wedge T_N})_{n \ge N}$ is a supermartingale. It is non-negative and thus a.s. convergent. However, on the event $E$ we have $T_N = \infty$ and $L_n \to \infty$, and consequently $L_{n \wedge T_N} \to \infty$. This contradicts the assumption $P(E) > 0$, and the proof is finished.
Proof of Theorem 2
Here we may replace $X_n$ by $X_n + 3r$. Therefore without loss of generality we assume $\ell X_n \ge 3$ for all $n \in \mathbb{N}_0$. Now we consider the processes $L = L^{\alpha,\beta,\gamma,j}$ given by
$$L_n := \ldots$$
with the $j$th component $X_{n,j}$ of $X_n$, $1 \le j \le d$, and with $\alpha, \beta > 0$ and $\gamma \ge 0$. In view of the Jensen inequality we may without loss of generality restrict ourselves to the case $2 < p \le 3$, in which the following estimate is valid.
Lemma 3. There is a constant $c < \infty$ such that for all $t \ge 3$ and $h > 3 - t$ …

Proof. See formula (6) in [5].

Lemma 4. Let $0 < \beta < \kappa\delta - 1$ and $\gamma \ge 0$ be such that $(1 + \gamma/\ell_j)\rho^2 < 1$. Then, if $\alpha$ is sufficiently large, there is a real number $s > 0$ such that …

Proof. We proceed similarly as in the proof of Lemma 2. Here, instead of (3.2), we have the estimate … By the assumption on $\gamma$ and for $\mu > 0$ sufficiently small this implies … with some $c < \infty$. Next, from Lemma 3 with $t = \ell X_n$ and $h = \ell g(X_n) + \ell\xi_{n+1}$, from (2.5) and (3.1), and from $\ell g(X_n) \ge 0$, … for $\ell X_n$ sufficiently large. Combining this estimate with (4.1) and rearranging terms, the claim follows for $L = L^{\alpha,\beta,\gamma,j}$.

Observe that for some $s > 0$ and for $m, m' > 0$ and $t > s$ fulfilling … If we choose $\alpha, \beta, \gamma$ and $s$ as demanded in Lemma 4, then $(m \wedge L_n)_{n \ge 0}$ becomes a non-negative supermartingale, which thus is a.s. convergent. Then, up to a null event, three possibilities arise. Either $L_n \to 0$, then $\ell X_n \to \infty$. Or $\liminf_n L_n \ge m$, then $\limsup_n \ell X_n \le t$. Or else $L_n$ has a limit $0 < L_\infty < m$, then $s \le \liminf_n \ell X_n < \infty$. In order to transfer these alternatives to the process $(X_n)_{n \ge 0}$ we choose different $\beta_1, \beta_2 > 0$ and a $\gamma > 0$ fitting the assumptions of Lemma 4. We consider the processes $L^0 := L^{\alpha,\beta_1,0,1}$, $L^1 := L^{\alpha,\beta_1,\gamma,1}$, \ldots, $L^d := L^{\alpha,\beta_1,\gamma,d}$, $L^{d+1} := L^{\alpha,\beta_2,0,1}$ and, for some $s, t, m > 0$, the events $E$, $E'$, $E''$ …
For the second assertion we switch back to the supermartingale m ∧ L with γ = 0. Let c > t be such that If now P(E ) = 1, then lim n m ∧ L n ≥ α(log t) −β a.s. which contradicts the last inequality. Therefore it follows P(E) > 0. This gives the second assertion.
For the last assertion we first show that ξ n+1 = o( X n ) a.s. on the event X n → ∞ . If again α, β, γ and s are chosen in accordance with Lemma 4 then (L n∧T N ) n≥0 is a non-negative supermartingal and thus a.s. convergent. It follows ∞ k=0 σ p (X k ) ( X k ) p < ∞ a.s. on the event T N = ∞ . Now in view of the first assertion of this theorem {T N = ∞} ↑ { X n → ∞} for N → ∞, if only s is sufficiently large. Therefore ∞ k=0 σ p (X k ) ( X k ) p < ∞ a.s. on the event X n → ∞ .
Because of (A1) and the Markov inequality this entails for every η > 0 ∞ k=0 P( ξ k+1 > η X k | F k ) < ∞ a.s. on the event X n → ∞ , and the martingale version of the Borel-Cantelli Lemma (see [2] By induction On the other handX n / X n = r/ r . This yields the last claim of Theorem 2.
However, due to the definition of $g(x)$, the condition (1.7) will never be satisfied for $b > 1$, no matter how $g$ and $\sigma$ are chosen. We shall see that indeed the conclusion of Theorem 2 fails, even though (1.7) can be achieved for $b \le 1$ (but not for all $b$). The reason is that the process $X$ again and again leaves the region defined by the inequality $\|\check{x}\| \le \sigma(x)$.
| 5,062.4 | 2016-05-13T00:00:00.000 | [ "Mathematics" ] |
Automated machine learning for the identification of asymptomatic COVID-19 carriers based on chest CT images
Background Asymptomatic COVID-19 carriers with normal chest computed tomography (CT) scans have perpetuated the ongoing pandemic of this disease. This retrospective study aimed to use automated machine learning (AutoML) to develop a prediction model based on CT characteristics for the identification of asymptomatic carriers. Methods Asymptomatic carriers were from Yangzhou Third People’s Hospital from August 1st, 2020, to March 31st, 2021, and the control group included a healthy population from a nonepizootic area with two negative RT‒PCR results within 48 h. All CT images were preprocessed using MATLAB. Model development and validation were conducted in R with the H2O package. The models were built based on six algorithms, e.g., random forest and deep neural network (DNN), and a training set (n = 691). The models were improved by automatically adjusting hyperparameters for an internal validation set (n = 306). The performance of the obtained models was evaluated based on a dataset from Suzhou (n = 178) using the area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and F1 score. Results A total of 1,175 images were preprocessed with high stability. Six models were developed, and the performance of the DNN model ranked first, with an AUC value of 0.898 for the test set. The sensitivity, specificity, PPV, NPV, F1 score and accuracy of the DNN model were 0.820, 0.854, 0.849, 0.826, 0.834 and 0.837, respectively. A plot of a local interpretable model-agnostic explanation demonstrated how different variables worked in identifying asymptomatic carriers. Conclusions Our study demonstrates that AutoML models based on CT images can be used to identify asymptomatic carriers. The most promising model for clinical implementation is the DNN-algorithm-based model.
Introduction
Coronaviruses are widely distributed pathogens in humans and other animals and can cause enteric, neurologic, and respiratory illnesses ranging from the common cold to fatal infections [1]. Timely and accurate diagnosis of COVID-19 is of utmost importance for the prompt treatment and isolation of patients. The diagnosis is confirmed by reverse-transcription polymerase chain reaction (RT-PCR). Typical manifestations of COVID-19 pneumonia are para-pleural ground-glass opacity (GGO), interlobular septal thickening, central consolidation of the focus and banded atelectasis [1,2]. The National Health Commission of the People's Republic of China initially proposed screening based only on clinical and chest computed tomography (CT) findings. Recently, however, asymptomatic carriers have perpetuated the ongoing pandemic of this viral disease [3-5]. Throat swab samples cannot promptly and accurately reflect the internal viral load, and negative RT-PCR results for throat swab samples are not a gold standard for exclusion. Transmission of COVID-19 from an asymptomatic carrier with normal CT findings has been reported. The CT images of asymptomatic patients are initially judged as normal by radiologists; however, some asymptomatic infections develop into pneumonia in later weeks [6]. The rapid person-to-person transmission among asymptomatic carriers is difficult to detect in the clinic. With the full liberalization of COVID-19 control policies, early recognition of COVID-19 pneumonia would help determine the degree of the disease and promote early treatment, thereby preventing viral pneumonia. Thus, it is not enough for clinicians alone to assess the CT characteristics of asymptomatic patients. Applications of artificial intelligence (AI) can help identify CT characteristics specific to asymptomatic patients.
AI is rapidly entering the medical domain and is being used for a wide range of health care and research purposes, including disease detection [7], empirical therapy selection [8], and drug discovery [9]. The complexity and growing volume of health care data indicate that AI techniques will increasingly be applied in almost every medical field in the upcoming years. Recent studies have demonstrated that AI may prove extremely helpful in the medical imaging domain due to its high capability for identifying specific disease patterns. Studies have proposed several machine learning models that can accurately predict COVID-19 disease severity [10-12]. A comprehensive bibliometric analysis was performed to summarize all accessible techniques for detecting, classifying, monitoring and locating COVID-19 patients, including AI, big data and smart applications [13]. The authors concluded that AI-assisted CT was better at diagnosing COVID-19 pneumonia due to its high precision and low false-negative rates. However, models have rarely been built to separate asymptomatic from healthy individuals. This study was designed to (1) develop predictive models by using automated machine learning (AutoML), characterized by automated hyperparameter adjustment, and (2) choose the best-performing machine learning model based on CT radiomic features for the identification of asymptomatic COVID-19 patients.
Machine learning models have often been criticized for being black-box models. We attempted to look inside this so-called "black box" to identify the variables that drive model performance and to understand the extent of these variables' effects on it. In this study, we aimed to generate multiple machine learning models, assess their performance, and select the highest-performing model for clinical practice.
Patient cohorts
This retrospective case-control study was approved by the institutional review board of the First Affiliated Hospital of Soochow University (Suzhou). Individuals enrolled in our study were treated at Yangzhou Third People's Hospital (Yangzhou) from August 1st, 2020, to March 31st, 2021. Patients (n = 119) confirmed to have COVID-19 by RT-PCR were included in the case group, presenting with no typical symptoms and no obvious abnormalities in CT images. All positive COVID-19 patients underwent a chest CT exam within 48 h after the RT-PCR test, and the identified CT scans were reviewed by two experienced radiologists who reached a consensus on the results. Participants in the control group (n = 75) were from the health examination population of a hospital from a nonepizootic area; these subjects had two negative RT-PCR results for COVID-19 within 48 h. Each throat swab was collected at least 24 h apart. Chest CT exams were diagnosed as normal in the control group by two experienced radiologists who reached consensus on the results. The exclusion criteria of the control group included (1) various types of pneumonia (e.g., viral, bacterial and mycoplasma pneumonia), (2) pulmonary tumours, (3) pulmonary emphysema or pneumatocele, (4) tuberculosis, and (5) bronchiectasis.
We randomly split the CT images (n = 997) of the aforementioned individuals (n = 194) into training (n = 691) and internal validation (n = 306) datasets to develop the models. Furthermore, these models were tested on CT images (n = 178) of individuals enrolled based on the aforementioned inclusion and exclusion criteria from Suzhou from 1st January 2021 to 31st January 2021. The flowchart of our study is shown in Fig. 1.
Chest CT exams
The identified CT images were directly searched and downloaded from a medical image cloud platform (www.ftimage.cn). The lung window was applied to generate 5–8 images per individual, each an axial slice of a CT scan with 5 mm thickness, a 1500 ± 100 Hounsfield unit (HU) window width and a −600 ± 50 HU window level. The images were saved in PNG format.
Image preprocessing
All CT images were preprocessed, and the lung lobes were masked as the region of interest (ROI) using the image processing toolbox in MATLAB (version: R2021b; Natick, MA). We extracted 32 features from each ROI using 5 feature extraction algorithms: texture features based on a grey histogram (GH) (n = 6), texture features based on a grey-level co-occurrence matrix (GLCM) (n = 6), Gabor filter features (GB) (n = 3), Gauss-Markov random field features (GMRF) (n = 12) and Tamura features (T) (n = 5). Three authors worked together to perform all image segmentations. Three authors independently extracted features from the same set of randomly selected images. To test for differences in image preprocessing between these authors, the Kruskal-Wallis H test with Dunn post hoc test was used. Furthermore, intraclass correlation coefficient (ICC) analysis was used to assess the stability between the three authors. Subsequent analysis was continued only when there were no statistically significant differences (P > 0.05) in the Kruskal-Wallis H test and the features had excellent stability (ICC > 0.75).
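As an illustration of the stability checks described above, the following Python sketch computes the Kruskal-Wallis H test across three readers and a two-way random-effects ICC(2,1) for one feature. The array shapes, seed, and values are hypothetical and are not the study's data; the original analysis was performed in MATLAB and R.

```python
import numpy as np
from scipy.stats import kruskal

# Hypothetical ratings of one texture feature: 50 images x 3 readers
rng = np.random.default_rng(42)
truth = rng.normal(0.5, 0.1, size=50)
ratings = np.column_stack([truth + rng.normal(0, 0.01, 50) for _ in range(3)])

# Kruskal-Wallis H test: do the three readers' distributions differ?
h_stat, p_value = kruskal(ratings[:, 0], ratings[:, 1], ratings[:, 2])
print(f"Kruskal-Wallis H = {h_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 -> stable

def icc2_1(Y):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1)."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-image means
    col_means = Y.mean(axis=0)   # per-reader means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between images
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between readers
    resid = Y - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(f"ICC(2,1) = {icc2_1(ratings):.3f}")  # > 0.75 indicates excellent stability
```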
Model development and validation based on AutoML
Model development and validation were conducted in R software (version: 4.1.0, The R Foundation) with the H2O package installed from the H2O.ai (cluster version: 3.36.0.2) platform (www.h2o.ai). AutoML is a function in H2O that automatically builds a series of machine learning models. First, the dataset from Yangzhou was randomly split into a 'training' (70%) set and a 'validation' (30%) set. Second, the training set was used to develop models to predict the probability of COVID-19 infection based on six algorithms, namely, the distributed random forest (RF), random grid of gradient boosting machine (GBM), random grid of deep neural network (DNN), fixed grid of generalized linear model (GLM), random grid of eXtreme gradient boosting (XGBoost) and stacked ensemble (SE) algorithms. Notably, DNN is defined as a multilayer perceptron, a multilayer feedforward artificial neural network containing numerous hidden layers and hyperparameters, that works well on tabular data according to the H2O official documentation. The models were then ranked according to their performance on the training set by the AutoML leaderboard. Furthermore, fivefold cross-validation was used to validate these models, and fine-tuned hyperparameters were applied to elevate the performance of the models. The models were developed from the training set based on different algorithms, and the performance of the models was improved by automatically adjusting the hyperparameters and calculating the mean square error (MSE) in the internal validation set. The above process was repeated five times, and the models with the minimum MSE were obtained. Finally, the performance of the obtained models was verified in a dataset from Suzhou (n = 21).
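The study ran AutoML through the R interface; the sketch below shows the equivalent workflow with H2O's Python API, since the AutoML engine is the same. The file names and label column are illustrative assumptions, and the 30-second runtime budget is taken from the discussion later in the paper.

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Hypothetical CSVs: 32 texture-feature columns plus a binary "label" column
train = h2o.import_file("yangzhou_train.csv")   # n = 691 images
valid = h2o.import_file("yangzhou_valid.csv")   # n = 306 images
test = h2o.import_file("suzhou_test.csv")       # external test set

y = "label"
x = [c for c in train.columns if c != y]
for frame in (train, valid, test):
    frame[y] = frame[y].asfactor()              # classification, not regression

# AutoML trains RF, GBM, DNN, GLM, XGBoost and stacked ensembles automatically,
# ranking them by cross-validated AUC within a fixed runtime budget
aml = H2OAutoML(max_runtime_secs=30, nfolds=5, seed=1, sort_metric="AUC")
aml.train(x=x, y=y, training_frame=train, validation_frame=valid)

print(aml.leaderboard.head())                   # models ranked by AUC
perf = aml.leader.model_performance(test_data=test)
print(perf.auc())                               # external-validation AUC
```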
Statistical analysis
Continuous variables were described as the mean ± standard deviation (SD) if normally distributed or as the median and interquartile range (IQR) if not. The differences in feature extraction among the three authors were compared using the Kruskal-Wallis H test with Dunn post hoc test; P > 0.05 indicated no statistically significant difference and hence feature stability. Image preprocessing and feature extraction were conducted in MATLAB (version: R2021b; Natick, MA), and statistical analysis was performed with R software (version: 4.1.0, The R Foundation) connected with the H2O.ai platform. Data visualization involved a receiver operating characteristic (ROC) curve with an area under the curve (AUC) for model discrimination. Model performance was evaluated based on the AUC, accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and F1 score. The F1 score is the harmonic mean of precision and recall. The actual classifications and predictive probabilities were listed as a confusion matrix consisting of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). The standard formulas are: accuracy = (TP + TN)/(TP + TN + FP + FN); sensitivity = TP/(TP + FN); specificity = TN/(TN + FP); PPV = TP/(TP + FP); NPV = TN/(TN + FN); and F1 = 2 × PPV × sensitivity/(PPV + sensitivity).
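For concreteness, a minimal Python helper implementing these confusion-matrix formulas follows. The TP and FP counts are taken from the Results (73 true positives, 13 false positives for the DNN model on the test set); TN = 76 and FN = 16 are inferred from the reported specificity and sensitivity, so treat them as reconstructed rather than quoted values.

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics as defined in the text."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # recall / true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    ppv = tp / (tp + fp)                  # precision
    npv = tn / (tn + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)  # harmonic mean
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, ppv=ppv, npv=npv, f1=f1)

# Reproduces the DNN test-set figures: 0.837, 0.820, 0.854, 0.849, 0.826, 0.834
print(classification_metrics(tp=73, tn=76, fp=13, fn=16))
```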
Feature selection and model optimization using AutoML
Heatmaps of variable importance demonstrated the different weights of the 32 texture features for the different models based on the training set (Fig. 3a). Many models determined that the Tamura roughness was an important variable for predicting the outcome. The models we proposed were highly correlated (Fig. 3b).
Performance of the best model
As shown in Table 2, the DNN model showed the best ability to distinguish asymptomatic COVID-19 patients from normal controls.The sensitivity, specificity, PPV, NPV, F1 score and accuracy of the DNN model were 0.820, 0.854, 0.849, 0.826, 0.834 and 0.837, respectively (Table 2).To interpret the DNN model, we enumerated several important variables in sequence in Table 3.The texture mean based on GB ranked first, with a relative importance value of 1.000, followed by R based on GB (value = 0.935).Four parameters based on GMRF had values of 0.922, 0.894, 0.833 and 0.818.Correlation based on GLCM and line-likeness based on T ranked fourth and fifth, with values of 0.901 and 0.897, respectively.
A plot of local interpretable model-agnostic explanation (LIME) demonstrated how different variables work in separating the asymptomatic from the normal: red bars contradicted the prediction, while blue bars supported it. As shown in Fig. 4a, positive case 1 was predicted to be asymptomatic, with a probability of 1.00. The texture mean based on GB contributed the most to the prediction, followed by R based on GH. Other cases were explained and are shown in Fig. 4. Additionally, negative case 1 in Fig. 4b was judged as normal by the DNN model, with a probability of 0.79. The texture mean based on GB also had the highest weight in the DNN model.
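A LIME explanation like the one in Fig. 4 can be produced with the lime package; the sketch below uses synthetic data and a scikit-learn random forest as a stand-in for the study's DNN, so everything except the LIME calls themselves is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for the 32 texture features extracted in preprocessing
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 32))
y = (X[:, 0] + 0.5 * X[:, 5] + rng.normal(0, 0.5, 400) > 0).astype(int)
feature_names = [f"feature_{i}" for i in range(32)]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=feature_names,
    class_names=["normal", "asymptomatic"],
    mode="classification",
)

# Explain one case with its eight most influential features, as in Fig. 4
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=8)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")  # sign shows support vs. contradiction
```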
Principal findings
We used AutoML to successfully generate multiple machine learning models, assess their performance, and select the highest-performing models for predicting asymptomatic COVID-19 infection. Our study demonstrates that machine learning models that use CT image characteristics can identify asymptomatic patients. Clinicians can simply enter the radiomic features and obtain a prediction probability. A positive nucleic acid result is hard to obtain from only one or two throat swab samples. If a patient is suspected to be a COVID-19 carrier but has no typical symptoms and no typical CT findings of viral pneumonia, the AI model we built could help identify such asymptomatic patients or provide evidence for clinicians to obtain deeper airway samples, such as by tracheoscopic perfusion. AI, including machine learning and deep learning, has been widely used in medical fields such as disease diagnosis [14], lesion detection [15], and prognostic analysis [16]. Previous studies revealed the potential of AI in medical imaging [17,18]. A systematic review summarized a total of 48 studies on AI methods applied to COVID-19 diagnosis, biomarker discovery, therapeutic evaluation and survival analysis from January 2020 to June 2022 [19]. This review provided evidence of the potential of AI in analysing complex gene information for COVID-19 modeling on multiple aspects, including diagnosis; such gene information is highly significant. Baktash et al. trained an ensemble bagged tree model using clinical parameters, but not CT scans, to detect atypical COVID-19 presentations, with an accuracy of 81.79%, sensitivity of 85.85% and specificity of 76.65% [20]. These studies showed that AI has the potential to diagnose COVID-19. Yan et al. retrospectively collected 206 patients with positive RT-PCR for COVID-19 and their chest CT scans with abnormal findings, and the results showed that a CNN model was able to differentiate COVID-19 from other common pneumonias at the CT scan level [21]. That study suggested that machine learning using CT scans alone might identify COVID-19. Previously, we developed a series of deep learning models to identify asymptomatic COVID-19 patients based on CT images, which achieved good performance, with accuracy values ranging from 0.933 to 0.980 in the test set [22]. That published study by our team showed that machine learning can achieve high accuracy in diagnosing asymptomatic COVID-19 patients. However, the previous deep learning model was a black-box model, and it was unclear how the deep learning framework distinguished the two kinds of CT images. In this study, we used radiomic code to extract interpretable features, such as shape features, first-order statistical features and grey-level co-occurrence matrix features, and built models based on these features. Our study used machine learning algorithms to differentiate asymptomatic patients from normal subjects based on CT images and achieved high accuracy, indicating that AI is an efficient and informative tool for medical systems and promotes better decision-making.
The advantage of AutoML is that it is not limited to handling large volumes of medical data through powerful computational capability; it can also reduce time and labour costs. Uthman et al. developed five AI classifiers to predict whether a study was eligible for their systematic analysis of complex interventions using AutoML, indicating that the best classifier yielded a workload saving of 92% [23]. Zhang et al. compared four AutoML frameworks, AutoGluon, TPOT, H2O and AutoKeras, which performed better than traditional machine learning algorithms, such as support vector machines [24]. The authors indicated that AutoML could reduce the time and effort expended by researchers due to its automatic model optimization. In our study, AutoML code was introduced from the open-access H2O.ai platform. The process of parameter tuning and optimal algorithm selection was automatic, and we set the running time of AutoML to 30 s. The promising results demonstrated that AutoML is time-efficient and labour-saving, with comparable predictive performance.
Radiomic medicine can extract a large amount of texture feature information from images to reflect the heterogeneity of damage. For example, GH is a first-order statistical feature that depicts the distribution of grey-level intensities [25]. The GLCM mainly reflects the characteristics of the internal structure of the image through changes in density [26-28]. Filters can display the spatial heterogeneity of tumours using wavelet transformation [16]. GMRF is used to remove inconsistency at the pixel level of slide images [29]. Therefore, even if no lesions are found on the CT images, we can analyse the different types of extracted texture features to determine whether the lung tissue is damaged. Our results showed that the best model was the DNN model. The XGBoost model, SE model, RF model, GLM, and GBM model performed slightly worse than the DNN model. We used the AUC as our metric of model utility because it accounts for model sensitivity and specificity. According to the DNN model, the texture mean based on GB ranks first in importance. R based on the Gabor filter, the 6th parameter of GMRF, correlation based on GLCM, line-likeness based on the Tamura algorithm, the 11th parameter of GMRF, the 12th parameter of GMRF and the 7th parameter of GMRF ranked in sequence among CT characteristics using the DNN model. Our results showed that these CT characteristics occupy a decisive position in distinguishing asymptomatic carriers.
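To make these feature families concrete, the following Python sketch computes grey-histogram statistics, a GLCM correlation, and a Gabor response with scikit-image on a synthetic image; it mirrors, but does not reproduce, the study's MATLAB pipeline, and all parameter choices are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import gabor

# Synthetic 8-bit "ROI" standing in for a masked lung-lobe image
rng = np.random.default_rng(0)
img8 = (rng.random((128, 128)) * 255).astype(np.uint8)

# GH: first-order statistics of the grey-level histogram
gh_mean, gh_std = img8.mean(), img8.std()

# GLCM: second-order texture from grey-level co-occurrence
glcm = graycomatrix(img8, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
glcm_correlation = graycoprops(glcm, "correlation").mean()
glcm_contrast = graycoprops(glcm, "contrast").mean()

# GB: Gabor filter response (oriented band-pass texture)
real, imag = gabor(img8 / 255.0, frequency=0.2)
gb_texture_mean = np.hypot(real, imag).mean()

print(gh_mean, gh_std, glcm_correlation, glcm_contrast, gb_texture_mean)
```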
The diagnosis of asymptomatic COVID-19 carriers is difficult due to the absence of abnormal pathological changes in the lung on radiological images and of apparent symptoms, such as fever, cough and expectoration [2,30]. A comprehensive review summarized currently available AI devices for the early monitoring and detection of asymptomatic COVID-19 carriers using vital data [31]. Ozturk et al. differentiated normal from COVID-19-infected subjects using deep learning (DL) models, achieving an average accuracy of 98.08% based on X-rays [15]. Yasar et al. [32] developed machine learning (ML)-based and DL-based classifiers to distinguish between COVID-19 and non-COVID-19 on CT images, with AUC values over 0.9197 under 2- and 10-fold cross-validation. Our study used the AutoML method based on CT radiomic features to study asymptomatic COVID-19 patients and to find changes in non-lesion areas that humans cannot detect. These models also had high sensitivity values, specificity values, and NPVs.
Clinical insights into the Black Box
The trade-off between predictive power and interpretability is a common issue when working with black-box models, especially in medical environments where results have to be explained to medical providers and patients.Interpretability is crucial for questioning, understanding, and trusting AI and machine learning systems.
According to our variable importance heatmap, many models determined that the Tamura roughness carried substantial weight for predicting the outcome. The Gabor filter texture mean was also an influential variable. The confusion matrix of the six models for the three datasets provided insight into the black box: the GBM model presented the highest specificity, and the DNN model presented the highest sensitivity. The LIME plot of the DNN model allowed us to determine the importance of variables and provided information on how the variables influenced the models' predictions, giving numerical information on the variables' effects. For example, the LIME showed that the GB-texture mean was associated with an increased probability of negative results and a decreased probability of positive results. The large weight of the GB-texture mean in predicting the result supports the idea that a CT scan with a low GB-texture mean indicates an increased risk of infection. Further exploration is needed to confirm these clinical findings and to establish clinical thresholds.
Limitations
Firstly, a total of 1,175 images from 173 cases were included in our study; thus, the sample size was relatively small, and further exploration in more cities is needed. Secondly, no complete biological explanation of the radiomic features was provided in this study, and further exploration is needed in the future. Thirdly, the best DNN model achieved the highest AUC and F1 score, but its specificity and PPV were lower than those of the GLM and SE models. This result indicates that misdiagnosis might occur if the DNN model is used in clinical practice. Fourthly, the demographics of the participants were not analysed in this study, so whether differences existed among the participants is uncertain; this limits broader application. Lastly, manual image preprocessing was conducted before the AutoML analysis, which was time- and labour-consuming. Despite the high consistency of image preprocessing, heterogeneity among devices from different institutions is still inevitable.
Conclusion
In conclusion, we believe that AutoML models based on radiomic features of chest CT images can effectively classify asymptomatic COVID-19 carriers. In the future, we plan to continue research in three areas. First, deep radiomics, which can automatically segment the lung lobes and extract radiomic features using novel technologies, e.g., transfer learning. Second, augmenting the dataset with samples from multiple centres will help to further ensure model generalization and robustness, and prospective experiments also need to be considered to evaluate model reliability in clinical decisions. Third, we should investigate the association between radiomic features and their biological significance to explore new mechanisms to improve our model.
Fig. 2
Fig. 2 Confusion matrix of the six models for the three datasets
Fig. 3
Fig. 3 Heatmaps of variable importance (a) and model correlation (b) based on AutoML in the training set
Fig. 4
Fig. 4 Local interpretable model-agnostic explanation (LIME) of the deep learning model in the test set. (a) shows how eight key features contributed to predicting positivity for the eight COVID-19 cases. (b) shows how eight key features contributed to predicting negative results for the eight normal cases
Table 1
Six models based on six algorithms were developed, and the performance of the DNN model ranked first among all models, with an AUC value of 0.898 in the test set. As shown in Table 2, all models achieved excellent performance in the training set, with accuracy, sensitivity, specificity, PPV, NPV, F1 score and AUC values beyond
Table 1
Differences in image preprocessing among the three authors using the Kruskal-Wallis H test and ICC analysis. The confusion matrix of the six models in the three datasets is depicted in Fig. 2. False-positive findings in the test set varied by model, with 17.98% (16/89) for the XGBoost model, 8.99% (8/89) for the SE model, 46.07% (41/89) for the RF model, 2.25% (2/89) for the GLM, 0 (0/89) for the GBM model and 14.61% (13/89) for the DNN model. With regard to true-positive findings, the DNN model detected 73 COVID-19 images among 89 positive images, with the highest sensitivity value of 0.820 for the test set. Other models showed comparable but inferior sensitivity: 0.809 for the RF model, 0.787 for the SE model, 0.742 for XGBoost and 0.719 for the GLM. The GBM model misclassified 58 among 89 positive images, with the lowest sensitivity of 0.348.
Table 2
Performance of the six models for the three datasets. DNN, deep neural network; GBM, gradient boosting machine; GLM, generalized linear model; RF, random forest; SE, stacked ensemble; XGBoost, eXtreme gradient boosting; AUC, area under the receiver operating characteristic curve; PPV, positive predictive value; NPV, negative predictive value; *, the highest AUC value in the test set.
Table 3
Variable importance rankings for the best AutoML model algorithm (DNN) | 5,234.6 | 2024-02-27T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Implementation and Optimization of Image Processing on the Map of SABRE i.MX_6
Vision enables humans to perceive environmental changes over time; these changes are observed by capturing images. The digital image plays a dynamic role in everyday life. Image denoising, the process of optimizing the details of an image whilst removing random noise, is a well-explored research topic in the field of image processing. In the past, progress in image denoising has come from improved modeling of digital images. Hence, the major challenge of an image denoising algorithm is to improve the visual appearance whilst preserving the other details of the real image. Significant research today focuses on wavelet-based denoising methods. This research paper presents a new approach to implementing the Sobel image processing algorithm on the Linux platform and develops an effective algorithm by using different optimization techniques on the SABRE i.MX_6. Our work concentrates on optimization of the image processing algorithm. Using the OpenCV environment, this paper simulates a salt-and-pepper noise phenomenon, removes the noisy pixels by using the median filter algorithm, and assesses the resulting improvement in the visual quality of the images together with the algorithmic optimization.
I. INTRODUCTION
Study Background
Denoising plays an increasingly important role in modern image processing and analysis. Image denoising approaches aim to preserve the details of an image while removing random noise to the degree possible. It is one of the most used concepts in image-processing applications. A digital image is subject to a variety of noise that affects its quality. One such noise is salt-and-pepper noise, generated by image sensor defects. Salt-and-pepper noise is mainly caused by defective pixels in camera sensors and is frequently found in digital transmission. Once an image is corrupted by salt-and-pepper noise, the pixel values may take any random value between the minimum and maximum values of the dynamic range [1]. In signal processing, it is often desirable to perform some noise reduction on an image or signal. The median filter is a nonlinear digital filtering technique that is often used to remove noise, and the removal of salt-and-pepper noise is normally achieved by using median-type filters [2].
The Sobel operator, also sometimes referred to as the Sobel filter, is used in image processing and computer vision, particularly in edge detection algorithms; it creates an image that emphasizes edges and transitions. To speed up such filtering, this work applies loop unrolling (also known as loop unwinding), a loop transformation technique that attempts to optimize a program's execution speed at the expense of its binary size (a space-time trade-off).
This transformation can be undertaken manually by the programmer or by an optimizing compiler. On a single processor, multithreading is generally implemented by time-division multiplexing (as in multitasking), in which the processor (CPU) alternates between different software threads.
Study Objective
The objective of this research is to implement and optimize an image processing chain on the SABRE i.MX_6 board (ARM Cortex-A9).
Specific Objectives
To design algorithmic optimizations for image processing (algorithms of lower complexity in the mathematical sense, appropriate data structures).
Optimization for efficient use of hardware resources (data access, cache management, multithreading).
Expected Outputs
The main outputs of this research will be:
• Mathematical modeling and realization
• Optimization
• Analysis of the board and statistics
II. RELATED WORKS
The most important issue in image processing is to remove noise from images while maintaining their details and features, such as texture, edges and colours [3-9]. Image denoising also affects the accuracy of segmentation, classification and similar tasks. After images are captured, some interference occurs in the pixels during the digitalization process; moreover, vibrations affect the sensors during the imaging process [4-8, 11, 12].
This deterioration is categorized as salt and pepper noise (SPN) [13,14]. SPN generally reduces the image quality [15]. Consequently, many linear/nonlinear filters have been developed to sort out this problem.
SPN is easily removed with numerous filters; however, some filters are practically applicable only to a few noisy pixels [16,17], whereas others work on all noisy pixels [18]. To fix the new value of a pixel, these filters use a window consisting of the neighbouring pixels of the noisy pixel, recognized as the center pixel. The most common filter is the median filter (MF) [19,20]. MF operates on all pixels; applying the filter in this manner, nonetheless, blurs the image and distorts the original pixel values. Standard median filtering (SMF) works well at low noise intensity when a small window size is applied [21,22]. The scheme in [23] removes SPN at noise levels up to 90% by using an adaptive median filter. The study in [24] evaluates image quality for different noise densities in the range of 10%-90% with other nonlinear filters. Each method has its advantages and disadvantages.
In this paper, algorithmic optimization introduces advanced steps together with system architecture modelling. The proposed model manages to remove noise. This research adds value to former models by introducing algorithmic optimization.
The Outline of the System
The diagram below describes the flow of the system's data (1).
After that, the frames are processed as grey images and entered into the doping system. In the doping system, random white and black pixels are added to the frames; these points constitute 'salt-and-pepper' noise, seeded for example with srand((unsigned)time(NULL)). Applying a median filter removes these noisy points, and the image also becomes smoother in visual terms. The median filter is an important step before the Sobel edge detection process; otherwise, the noisy points would be enlarged by the edge detection algorithm. The median filter is non-linear.
This means that for two images A(x) and B(x), median(A(x) + B(x)) ≠ median(A(x)) + median(B(x)). The Sobel operator performs a 2-D spatial gradient measurement on an image. This emphasizes regions of high spatial frequency that correspond to edges.
Typically, it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image. In theory, at least, the operator consists of a pair of 3×3 convolution kernels as shown in Figure 2. One kernel is simply the other rotated by 90°. This is quite similar to the Roberts Cross operator.
The two kernels yield gradient components Gx and Gy, whose magnitude is |G| = sqrt(Gx² + Gy²). Typically, an approximate magnitude is computed using |G| ≈ |Gx| + |Gy|, which is much faster to compute. The angle of orientation of the edge (relative to the pixel grid) giving rise to the spatial gradient is given by θ = arctan(Gy/Gx).

3.2 Salt-Pepper Noise Simulation
Salt-Pepper Noise
SPN is a form of noise sometimes seen in images. It presents itself as sparsely occurring white and black pixels. Fat-tail distributed, or "impulsive" noise is sometimes called SPN or spike noise [27]. An image containing salt-and-pepper noise will have dark pixels in bright regions and bright pixels in dark regions [28].
This type of noise can be caused by various means, including analogue-to-digital converter errors and bit errors in transmission [29]. It can be mostly eliminated by using dark-frame subtraction and by interpolating around dark/bright pixels. Dead pixels in LCD monitors produce a similar, but non-random, display. A larger filter window will produce more severe smoothing [31]. Figure 8 indicates the effect of the median filter in clearing noisy points.
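A minimal Python/OpenCV sketch of the doping and denoising stages described above follows, adding salt-and-pepper noise to a synthetic grey frame and removing it with a median filter. The noise probability and window size are illustrative choices; the board implementation in this work targets the C/C++ OpenCV API.

```python
import cv2
import numpy as np

# Synthetic grey frame standing in for one video frame from the pipeline
frame = np.tile(np.linspace(0, 255, 256, dtype=np.uint8), (256, 1))

def add_salt_pepper(img, prob=0.05, seed=0):
    """Dope an image with salt (white) and pepper (black) pixels."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    r = rng.random(img.shape)
    noisy[r < prob / 2] = 0          # pepper: dark pixels in bright regions
    noisy[r > 1 - prob / 2] = 255    # salt: bright pixels in dark regions
    return noisy

noisy = add_salt_pepper(frame)
denoised = cv2.medianBlur(noisy, 3)  # 3x3 window; larger windows smooth more

cv2.imwrite("noisy.png", noisy)
cv2.imwrite("denoised.png", denoised)
```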
Sobel Filter
The Sobel operator is slower to compute than the Roberts Cross operator because its larger convolution kernel smooths the input image to a greater extent.
This smoothing also makes the operator less sensitive to noise. The operator also generally produces considerably higher output values for similar edges compared with the Roberts Cross.
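The Sobel stage can then be sketched as follows, continuing from the denoised image of the previous sketch; cv2.Sobel computes the two gradient components, and both the exact and the faster approximate magnitude from the formulas above are shown (parameter values are illustrative).

```python
import cv2
import numpy as np

img = cv2.imread("denoised.png", cv2.IMREAD_GRAYSCALE)

gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient kernel
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)   # vertical kernel (90-degree rotation)

magnitude = np.hypot(gx, gy)                     # |G| = sqrt(Gx^2 + Gy^2)
approx = np.abs(gx) + np.abs(gy)                 # faster approximation |Gx| + |Gy|
theta = np.arctan2(gy, gx)                       # edge orientation angle

edges = cv2.convertScaleAbs(magnitude)           # back to 8-bit for display/saving
cv2.imwrite("edges.png", edges)
```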
Statistical Data of the Original Code
Using the 'gprof' tool of the Linux system, we analysed our code and acquired the baseline measurements.
The program parameters were set as below, and the test result is shown in Figure 12. Loop unrolling rewrites the 3×3 filtering loop as explicit assignment operations. In theory, after this process, the "height * width * 9" loop iterations are reduced to "height * width" iterations, meaning that "height * width * 8" iterations are removed in the median filter module for every frame.
Statistics of the Loop Unrolling Optimization
This paper tested the time cost of the program processed by loop unrolling and obtained the statistical results shown in Figure 13. | 1,939 | 2021-12-15T00:00:00.000 | [
"Computer Science"
] |
Autophagic flux determines cell death and survival in response to Apo2L/TRAIL (dulanermin)
Background Macroautophagy is a catabolic process that can mediate cell death or survival. Apo2 ligand (Apo2L)/tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) treatment (TR) is known to induce autophagy. Here we investigated whether SQSTM1/p62 (p62) overexpression, as a marker of autophagic flux, was related to aggressiveness of human prostate cancer (PCa) and whether autophagy regulated the treatment response in sensitive but not resistant PCa cell lines. Methods Immunostaining and immunoblotting analyses of the autophagic markers p62 [in PCa tissue microarrays (TMAs) and PCa cell lines] and LC3 (in PCa cell lines), transmission electron microscopy, and GFP-mCherry-LC3 were used to study autophagy induction and flux. The effect of autophagy inhibition using pharmacologic (3-methyladenine and chloroquine) and genetic [short hairpin (sh)RNA-mediated knockdown of ATG7 and LAMP2 and small interfering (si)RNA-mediated knockdown of BECN1] approaches on TR-induced cell death was assessed by clonogenic survival, sub-G1 DNA content, and annexinV/PI staining by flow cytometry. Caspase-8 activation was determined by immunoblotting. Results We found that increased cytoplasmic expression of p62 was associated with high-grade PCa, indicating that autophagy signaling might be important for survival in high-grade tumors. TR-resistant cells exhibited high autophagic flux, with more efficient clearance of p62-aggregates in four TR-resistant PCa cell lines: C4-2, LNCaP, DU145, and CWRv22.1. In contrast, autophagic flux was low in TR-sensitive PC3 cells, leading to accumulation of p62-aggregates. Pharmacologic (chloroquine or 3-methyladenine) and genetic (shATG7 or shLAMP2) inhibition of autophagy led to cell death in TR-resistant C4-2 cells. shATG7-expressing PC3 cells were less sensitive to TR-induced cell death, whereas shLAMP2-expressing cells were as sensitive as shControl-expressing PC3 cells. Inhibition of autophagic flux using chloroquine prevented clearance of p62 aggregates, leading to caspase-8 activation and cell death in C4-2 cells. In PC3 cells, inhibition of autophagy induction prevented p62 accumulation and hence caspase-8 activation. Conclusions We show that p62 overexpression correlates with advanced-stage human PCa. Pharmacologic and genetic inhibition of autophagy in PCa cell lines indicates that autophagic flux can determine the cellular response to TR by regulating caspase-8 activation. Thus, combining various autophagic inhibitors may have a differential impact on TR-induced cell death.
Background
Autophagy is a self-degradation process that can mediate cell death as well as survival [1]. Autophagy induced during starvation, growth factor deprivation, hypoxia, endoplasmic reticulum (ER) stress, and microbial infection can prevent cell death [2]. However, it can also be associated with cell death due to excessive mitophagy, leading to loss of mitochondrial membrane potential (Δψm), caspase activation, and lysosomal membrane permeabilization [3]. Microtubule-associated protein 1 light chain 3 beta (LC3B; also called ATG8) is used as a marker of autophagy; it is lipidated upon autophagy induction and is required for autophagosome formation [4,5]. p62/SQSTM1 (p62) facilitates the degradation of polyubiquitinated substrates by autophagy, causing its own degradation; thus, it is used as an indicator of autophagic degradation [6].
Apo2L/TRAIL (TNFSF10; clinical name, dulanermin) belongs to a small subset of pro-apoptotic protein ligands in the tumor necrosis factor (TNF) superfamily. As a soluble, zinc-coordinated homotrimeric protein, it has emerged as a promising candidate for cancer therapy through its capacity to trigger apoptosis in many types of cancers without causing significant toxicity to normal cells [7,8]. Apo2L/TRAIL has also been shown to induce autophagy-mediated cell death in glioma cells [9,10]. Induction of autophagy in PCa cells has been shown to mediate cell survival, and therefore, inhibiting autophagy enhances drug-induced cell death [11,12].
Recently, p62 has been implicated in activation of caspase-8 through Apo2L/TRAIL-mediated polyubiquitination of caspase-8 and its association with the death-inducing signaling complex [13-15]. Consistent with this notion, a study examining apoptosis in individual cells has demonstrated that differences in the extent of caspase-8 activation may be responsible for the cell-to-cell variation in Apo2L/TRAIL responsiveness [16].
In this study, we investigated the association between autophagic flux and PCa aggressiveness by immunohistochemical staining for the autophagic marker, p62, in human prostate tissue microarrays (TMA) consisting of low to high Gleason scores. In addition, we examined the effect of autophagy on cell death following Apo2L/TRAIL treatment (TR) in PCa cell lines. Our data suggest differential expression of p62 as disease progresses and that autophagic flux can determine the cellular response to TR by regulating caspase-8 activation.
Results
p62 is over-expressed in the cytoplasm of advanced human prostate cancer cells

To understand the role of autophagy in human PCa progression, we examined the expression of SQSTM1/p62 (p62), a marker of autophagic flux, in a TMA of human prostate tissue. The TMA consisted of 51 PCa cases with Gleason Scores (GS) ranging from 6 (3 + 3) to 8 (4 + 4). Benign prostate tissue adjacent to tumor tissue was present in TMA cores from 40 of the 51 PCa cases. Using immunohistochemical staining, we determined the presence or absence of autophagy signaling by observing p62 protein localization, which, when involved in autophagic degradation, resides in the cytoplasm. In contrast, nuclear p62 participates in directing nuclear polyubiquitinated proteins to promyelocytic leukemia bodies [17], with no defined role in autophagy.
Next, p62 mRNA expression was examined in two independent PCa microarray data sets available in the Oncomine database; p62 mRNA levels were increased in PCa with high GS compared to adjacent normal prostate tissue (Figure 1D). As p62 was shown to be involved in the degradation of proteins and cellular organelles by autophagy in the cytosol [18], the increase in mRNA as well as cytoplasmic localization suggested that autophagy might be a relevant process in high-grade PCa. Therefore, we further extended this study to determine how autophagy signaling in PCa affects the response to chemotherapy by investigating the effect of autophagy signaling in our TR-sensitive versus TR-resistant PCa cell lines [19].
Apo2L/TRAIL treatment (TR) induces autophagy
LNCaP-derived C4-2 human PCa cells are resistant to TR [19,20]. Following TR, clonogenic survival declined by up to 10-fold in PC3, but not in C4-2 cells (Figure 2A). To study the role of autophagy in regulating cell survival, we examined the induction of autophagy in these cell lines.
Detecting the presence of autophagic vesicles by using transmission electron microscopy (TEM) is one of the most widely used and sensitive techniques to monitor autophagy [21]. Autophagosomes are typically identified as vacuolar structures bordered by a thin membrane, sequestering cellular contents. The immature vesicles have an electron density lower than or equivalent to the cytoplasm whereas the mature or late-stage vesicles are characterized by higher density. We show such representative structures in Figure 2B. For quantification in Figure 2C, the area covered by these structures was measured and normalized to the total cytoplasmic area analyzed. TEM data suggested that more autophagosomes were present constitutively in PC3 compared to C4-2 cells (Figure 2B and C). In C4-2 cells there was an early but transient increase in the number of autophagosomes at 6 h following TR, which declined by 24 h (Figure 2B, bottom and C, right). In PC3, however, autophagosome formation significantly increased only after 24 h TR (Figure 2B, bottom and C, left). Also, TR plus chloroquine (CQ), which inhibits autophagic degradation, compared to CQ alone, resulted in a greater accumulation of autophagosomes in C4-2 compared to PC3 cells, suggesting that autophagic flux was higher in C4-2 cells (Figure 2B and C). These results demonstrate that C4-2, but not PC3 cells, exhibit a rapid induction and effective completion of autophagy in response to TR.
Next, we confirmed autophagic flux by determining LC3B localization using GFP-mCherry-LC3, which allows distinction between autophagosomal and autolysosomal LC3B as yellow (indicating co-localization of GFP and mCherry) and red signals, respectively. By time-lapse confocal imaging, PC3 cells showed no change in green and red fluorescence for LC3B until 42 min, suggesting that autophagosomes remained intact following TR (Figure 3A and B; Additional file 1: Movie S1 and Additional file 2: Movie S2). In contrast, C4-2 cells showed decreased green fluorescence intensity for LC3B at 21 min, with a small decrease, if any, in red fluorescence, indicating that autophagosomes were converting into autolysosomes (Figure 3A and C).
Thus, the high autophagic flux in C4-2 cells was associated with cell survival, whereas the low flux in PC3 cells was associated with TR-induced cell death.
Immunofluorescence staining for endogenous LC3B showed an increase in the number of LC3B puncta in C4-2 but not PC3 cells following 8 h TR (Additional file 3: Figure S1A). PC3 cells had, however, more LC3B puncta constitutively compared to C4-2 cells, indicating a greater autophagosomal content at baseline, but with no further induction of autophagy following TR. Addition of CQ led to accumulation of LC3B puncta in both PC3 and C4-2 (Additional file 3: Figure S1A). In PC3 cells the number of LC3 puncta did not significantly increase following TR plus CQ compared to CQ alone. However, in C4-2 cells the number significantly increased following TR and was even higher upon addition of CQ (Additional file 3: Figure S1A).
Similar results were obtained with analysis of p62 levels. An autophagy-specific substrate that acts as a scaffold protein and forms protein aggregates, p62 is directed with its targets to autophagic degradation [22,23].
As shown by confocal immunostaining (Figure 3G) and western blotting (Additional file 3: Figure S1B), after TR, the expression of cytoplasmic p62 aggregates increased in PC3 cells, whereas p62 expression decreased in C4-2 cells. Pretreatment with CQ, however, prevented TR-induced degradation of p62 in C4-2 cells (Additional file 3: Figure S1B). These findings were extended to all additional PCa cell lines examined. Similar to C4-2, LNCaP, DU145, and CWRv22.1 cells exhibited minimal cell death (< 10%) as opposed to ~50% cell death in PC3 cells following TR (Additional file 3: Figure S1C). Similar to C4-2 cells, p62 levels, as analyzed by western blotting and immunostaining, were decreased in the other TR-resistant cell lines following TR treatment (Additional file 3: Figure S1D and S1E). Nevertheless, CQ treatment inhibited TR-induced degradation of p62 (Additional file 3: Figure S1D and S1E). These results suggest that in response to TR, autophagy is induced, and high autophagic flux is associated with cell survival whereas low flux is associated with cell death.

Inhibition of autophagy enhances TR-induced cell death in C4-2, but not PC3 cells

Next, pharmacological and genetic approaches were used to study the effect of autophagy induction and completion on TR-induced cell death in PCa cell lines. Pharmacological inhibition of autophagy by 3-MA or CQ sensitized C4-2 cells to TR-induced cell death, with no significant effect on PC3 cells (Figure 4A and B). For genetic inhibition of autophagy, lentiviral-mediated shRNA-expressing stable clones of ATG7 and LAMP2 were generated in PC3 and C4-2 cells. LC3B-II levels were reduced in shATG7-expressing cells, while they increased in shLAMP2-expressing cells (Additional file 4: Figure S2A and S2B). p62 levels were increased in both shATG7- and shLAMP2-expressing C4-2 and PC3 cells, suggesting that autophagy inhibition resulted in accumulation of p62 (Additional file 4: Figure S2A and S2B). The extent of knockdown was > 50% for both ATG7 and LAMP2 in PC3 and C4-2 cells (Additional file 4: Figure S2A and S2B). As with 3-MA and CQ, ATG7- and LAMP2-depleted C4-2 cells formed fewer colonies (Figure 4C) and underwent cell death (Additional file 5: Figure S3A) in response to TR, indicating that they were more sensitive to TR when autophagy was inhibited. Interestingly, ATG7 depletion in PC3 cells inhibited TR-induced cell death significantly (Figure 4D and Additional file 5: Figure S3A). Thus, in TR-sensitive cells, with an intrinsic defect in TR-induced autophagic flux, inhibition of autophagosome formation at an early step was protective.
Overall, our data suggest that autophagic clearance of toxic cellular components is essential for PCa cells to survive the TR-induced cell death that is associated with autophagy induction. In TR-sensitive cells, TR induces autophagosome formation; however, due to impaired autophagic flux, autophagosome-associated toxic cellular aggregates form, and this results in cell death. Therefore, inhibiting autophagy induction antagonizes this death. In TR-resistant cells that are proficient in autophagic flux, TR-induced accumulation of cellular aggregates is prevented and the cells survive; inhibition of the autophagic pathway in these cells leads to accumulation of protein aggregates and sensitizes them to TR. TR-induced autophagy therefore causes cell death in TR-sensitive cells, whereas it has a pro-survival role in TR-resistant cells, owing to the differential autophagic flux.
Caspase-8 can be proteolytically cleaved to a p18-kD fragment through its association with p62 aggregates, leading to its complete activation and ensuing apoptosis [13]. Since differential autophagic flux in PCa cells determined cell death in response to TR, we investigated whether the impaired or inhibited autophagic flux led to cell death in response to TR by accumulation of p62 and subsequent activation of caspase-8. Our data suggest that, indeed, PC3 cells with impaired flux showed the pro- and cleaved (p43/p41) forms of caspase-8 and its fully activated p18-kD form following TR (Figure 5A). In contrast, C4-2 cells showed only the p43/p41 forms of caspase-8, indicating that the full activation of caspase-8 necessary for apoptosis was absent (Figure 5A). TR-induced cell death was significantly impaired in PC3, with minimal effect on C4-2 cells, following inhibition of caspase activation by the pan-caspase inhibitor z-VAD-fmk or the caspase-8-specific inhibitor z-IETD-fmk, as determined by annexinV/PI staining (Additional file 5: Figure S3B). z-IETD-fmk inhibition of caspase-8 also prevented cell death in PC3 cells expressing shATG7 and shLAMP2 (Figure 5B). Consistently, in C4-2 cells, inhibition of autophagic flux using CQ pretreatment, as measured by inhibition of p62 degradation following TR treatment (Figure 5C), led to TR-induced accumulation of the fully activated p18-kD form of caspase-8 (Figure 5C). Similarly, in PC3 cells, both 3-MA pretreatment and siBECN1 expression led to a decrease in TR-induced cleaved caspase-8 levels (Figure 5D and E, respectively). These results confirmed that autophagy induction was required for TR-induced apoptosis in PC3 cells, which depended on caspase-8 activation.
Thus, a constitutive defect in autophagic flux in response to TR causes inhibition of autophagic clearance of p62 aggregates that, in turn, results in caspase-8 activation, leading to cell death in PC3 cells. However, in TR-resistant C4-2 cells, complete autophagy signaling leads to clearance of p62 aggregates, and hence activation of caspase-8 is prevented, thereby facilitating cell survival.
Discussion
In this study we show that autophagy is critical for PCa pathogenesis, as p62 is overexpressed in the cytoplasm of high grade PCa. In contrast, in benign tissue it is only expressed in the cell nuclei, suggesting that p62 has a more basic function apart from autophagy [17]. Interestingly, cytoplasmic p62 expression is positively associated with the aggressiveness of the disease. These findings suggest that p62 could be a potential molecular biomarker for PCa progression and that elevated autophagy might be an important factor for disease progression, maintenance of tumor homeostasis in higher grade PCa, or both. In addition to its principal role of maintaining cellular homeostasis in health and disease, during chemotherapy, autophagy counterbalances the cellular stress generated by chemotherapeutic agents as well as provides energy to maintain cellular homeostasis [24,25]. Therefore, autophagy inhibition has recently emerged as a potential therapeutic approach to induce cell death in cancer cells. The dependence of PCa on this pathway is, therefore, exploitable for therapeutic benefit.
We have previously shown that the combination of CPT-11 with TR increases apoptosis in C4-2 PCa cells, which are otherwise resistant to TR [19,20]. Autophagy mediates cell survival in tumor cells and serves as a mechanism of resistance against many chemotherapeutics, including TR [26]. Here, we have identified autophagy as a mediator of cell survival in four TR-resistant PCa cell lines. These TR-resistant PCa cells exhibited high autophagic flux, in contrast to TR-sensitive PC3 cells, in which autophagic flux was low, preventing completion of autophagy and leading to autophagosome accumulation.
TR led to degradation of p62 and prevented caspase-8 activation in TR-resistant cells. However, in TR-sensitive cells, accumulation of autophagosomes and p62 protein aggregates, which were associated with impaired autophagic degradation, led to caspase-8 activation and apoptosis. Consistently, inhibition of autophagy pharmacologically or genetically by shRNA-mediated knockdown of ATG7 and LAMP2 sensitized C4-2 cells to TR. Importantly, inhibition of autophagy at the different steps in the autophagy pathway led to different outcomes for TR-induced cell death in PC3 cells. Thus, when autophagy was inhibited by 3-MA, siBECN1, or shATG7 before the association of LC3 with the autophagosomal membranes, p62 aggregate formation and subsequent caspase-8 activation were prevented, and cell death was inhibited. In contrast, inhibiting autophagic degradation using CQ or shLAMP2 had no effect on TR-induced cell death in PC3 cells. These findings suggest that accumulation of autophagosomes and p62 protein aggregates in the absence of autophagic degradation is sufficient for TR-induced cell death. Levels of LAMP2 were decreased in TR-sensitive compared to TR-resistant cells (data not shown), supporting the notion that autophagic degradation was inhibited in TR-sensitive cells. This was also true in a small-cell lung carcinoma model, where LAMP2 was down-regulated in TR-sensitive as compared to TR-resistant groups [14]. These findings further support our conclusion that autophagic degradation is impaired in TR-sensitive tumor cell lines.
Conclusions
In summary, we define how the extent and nature of autophagic signaling can determine the response to TR that may be exploited for further clinical development of dulanermin or agonistic antibodies against the Apo2L/ TRAIL receptors. Establishing the extent of autophagy in different grades of PCa will be useful for designing better therapeutic modalities by combining autophagy inhibitors currently in clinical trials [27]. It has been recently suggested that of the NIH-funded PCa clinical trials currently recruiting, 60% are testing interventions known to exert at least a moderate effect on autophagy [28]. These include autophagy inhibitors (e.g. hydroxychloroquine) and therapeutics that impact autophagy, either directly, such as mTOR inhibitors (e.g. everolimus), or indirectly, such as inhibitors of PI3K and AKT. Moreover, in patients with caloric or metabolic deregulation (such as those who are obese) autophagy may have a greater impact, as might agents that modulate it and, therefore, based on our findings, such patients could be stratified for individualized treatments.
Immunohistochemical staining for p62 in human prostate cancer samples Immunohistochemistry with a monoclonal p62 antibody was performed on formalin-fixed paraffin-embedded TMA sections mounted on poly L-lysine-coated slides. The sections were deparaffinized in xylene and rehydrated through graded alcohols into distilled water. Antigen retrieval was in 0.1 M citrate buffer using a pressure cooker at 95°C for 15 min. Immunostaining was performed by using an immunohistochemistry kit according to the manufacturer's instructions (Invitrogen) after incubation with primary antibody to p62 (Santa Cruz Biotechnologies) for 1 h. Non-immune pooled mouse immunoglobulin was used as a negative control. Sections were incubated in secondary antibody for 1 h followed by colorimetric detection using chromogen in accordance with the manufacturers' protocols (Dako). Slides were then counterstained with haematoxylin, rinsed, and dehydrated through graded alcohols into non-aqueous solution and cover-slipped with mounting media.
Analysis of GFP-mCherry-LC3 puncta
During autophagy, LC3B-II is recruited to the autophagosomal membranes and continues to be present on the membranes of completed autophagosomes, which can be visualized as a yellow signal because GFP and mCherry co-localize. In autolysosomes, because of the acidic pH, the GFP fluorescence is diminished while mCherry still remains stable. Thus, the conversion of yellow LC3B-II puncta to red LC3-II puncta provides a readout for autophagic flux. Cells with stable expression of GFP-mCherry-LC3 were grown overnight before treatment and fixed with 2% paraformaldehyde, washed several times with PBS, mounted using Vectashield, and analyzed using an HCX Plan Apo 63×/1.4N.A. oil immersion objective lens on a Leica TCS-SP2 confocal microscope (Leica Microsystems AG). LC3B puncta were quantified using the Red and Green Puncta Co-localization Macro with the Image J program, as described [29,30].
Confocal immunostaining
Cells were plated at 2 × 10⁵ cells/cm² on 22 × 22 mm coverslips in 35-mm culture dishes. Immunostaining was performed as previously described [30]. Briefly, following the respective treatments, cells were fixed with 2.0% paraformaldehyde/PBS for 15 min, washed 3× for 10 min each, permeabilized with 0.1% Triton X-100 in PBS for 5 min and blocked in 10% FBS in PBS for 1 h. The coverslips were then immunostained using the antibodies diluted in blocking buffer, followed by fluorescently conjugated secondary antibody. DAPI was added to stain nuclei before the penultimate washing. They were then mounted in Vectashield (Vector Laboratories). Images were collected using an HCX Plan Apo 63×/1.4 N.A. oil immersion objective lens on a Leica TCS-SP2 confocal microscope (Leica Microsystems AG).
Electron microscopic analyses
Cells were immediately fixed in 2.5% glutaraldehyde/4% formaldehyde in 0.1 M cacodylate buffer, pH 7.3, for 24 h, followed by post-fixation with 1% osmium tetroxide for 1 h. After en bloc staining and dehydration with ethanol, the samples were embedded in Eponate 12 medium (Ted Pella Inc, Redding, CA). Thin sections (85 nm) were cut with a diamond knife, double-stained with uranyl acetate and lead citrate, and analyzed using a Philips CM12 electron microscope (FEI Company) operated at 60 kV. Cells with more than 10 vacuoles were scored as autophagy positive. The autophagic area was quantified as described previously [30]. At least 10 cells per sample were used for quantitation. The size of autophagic structures was represented as relative area values calculated by selecting specific areas using Image J. The autophagic area was calculated as the percentage fraction normalized to the total cytoplasmic area.
Cell death and survival analyses
Cell viability was determined by staining with annexinV-FITC and propidium iodide (PI) followed by flow cytometric analysis on a FACScan with a 488 nm argon laser (BD Biosciences). Analyses were performed with the Cell Quest program. For clonogenic survival, cells were plated in 6-well plates in triplicate. Following drug treatments, cells were allowed to grow for 14 days, fixed, and stained in methanol:acetic acid (75:25, v/v) containing 0.5% crystal violet (w/v) to visualize colonies of at least 50 cells. The absolute number of colonies was plotted.
Confocal time-lapse imaging
Cells were grown in 35-mm glass bottom dishes (MatTek) and imaged using an UltraVIEW VoX spinning disc confocal microscope (Perkin Elmer) equipped with a high-sensitivity cooled 14-bit EMCCD C9100-13 camera (Hamamatsu). During imaging, cells were kept in a heated incubation chamber at 37 °C with CO₂ (LiveCell, Pathology Devices, Inc.). Volocity image acquisition software was used to capture the images (Perkin Elmer). Track analysis and intensity measurements were done with Image Pro 7.0 (Media Cybernetics).
Statistical analyses
We analyzed the correlation between cytoplasmic expression of p62 and Gleason score by Pearson's correlation analysis (Prism v.5, GraphPad Software, Inc.). All the remaining data were obtained from at least three independent experiments carried out in triplicate, with error bars denoting SEM. P values were determined by Student's t-test (Microsoft Excel).
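As a sketch of these two statistical steps, the snippet below runs them on hypothetical data; the actual analyses used Prism and Excel, so this Python version is only an illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical data: p62 staining intensity vs Gleason score per tumor
p62 = np.array([1.2, 2.8, 2.1, 3.9, 3.0, 4.8])
gleason = np.array([6, 7, 7, 8, 9, 9])
r, p = stats.pearsonr(p62, gleason)       # correlation, as in Prism
print(f"Pearson r = {r:.2f}, P = {p:.3f}")

# Hypothetical triplicate viability (%), treated vs control
control = [92.0, 95.1, 93.4]
treated = [61.2, 58.7, 64.0]
t, p = stats.ttest_ind(control, treated)  # Student's t-test, as in Excel
print(f"t = {t:.2f}, P = {p:.4f}")
```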
"Biology",
"Medicine",
"Chemistry"
] |
Application of Statistical Mixture Models for Ternary Polymer Blends
This work shows how statistical models can help in obtaining complex polymer mixtures. Ternary polymer blends were studied, whose energy and tensile strength at break were measured for thirteen samples with different component proportions. The responses were treated statistically using special and full cubic models. The behavior of the response values over the whole composition range of the PS/PMMA/PVDF and PS/PBMA/PVDF blends was adequately described by the full cubic model. Ternary diagrams marked with isoresponse contour curves are useful for analyzing how these mechanical properties change as the component proportions are varied. Correct statistical modeling over the entire range of component proportions for the PS/PEMA/PVDF blend requires more sophisticated mixture models.
Introduction
Polymeric materials used in all kinds of technological applications are almost always complex mixtures of polymeric components and several additives. Polymer blending has become established as a standard way of achieving desired properties [1]. However, many aspects must be considered to predict the final properties of a wide range of possible formulations, even if only a few components are involved. Suitable mechanical properties are usually one of the main goals of mixing polymers. Immiscible polymer blends are frequently prepared (or developed) to supply a more attractive commercial product in terms of price or processability, but to the detriment of their mechanical properties, which sometimes have to be recovered by adding a third component, a compatibilizer. Block copolymers are normally chosen for this purpose, but homopolymers can also be useful for compatibilization [2]. If the degree of miscibility between the homopolymer compatibilizing agent C and each of the blend polymer components (A and B) is low but still higher than that between A and B, C may be located at the A/B interface. When the degree of miscibility between C and at least one of the polymer components is considerably high, it dissolves into the A- and B-rich phases. In either case the compatibilizer acts by decreasing the interfacial tension, thereby improving dispersion and interfacial adhesion among the polymer blend components.
Here, statistical modeling of a limited number of experiments is shown to be helpful in describing the properties of polymer blends over wide composition ranges. The mechanical properties of ternary polymer blends of poly(vinylidene fluoride) (PVDF), polystyrene (PS) and poly(methacrylates) were investigated and discussed using different models. PVDF and PS are immiscible polymers. The introduction of polymethacrylates (PXMA; X = methyl, ethyl or butyl) into PVDF/PS has a compatibilizing function. PVDF/PMMA and PVDF/PEMA are known miscible pairs. PVDF and PMMA have a lower critical solution temperature (LCST) at 330 °C and an upper critical solution temperature (UCST) at 140 °C; between these temperatures the blend is miscible in the melt state over the entire range of compositions. Miscibility is probably due to H-bond formation between the carbonyl groups of PMMA and the acidic hydrogens of PVDF, with an enthalpy of mixing of −1.9 kJ/mol for blends containing 50% of each polymer and with the Flory-Huggins parameter, χ, varying from −0.7 to −0.1 [3-7]. PVDF and atactic PEMA are also miscible, with an LCST between 220 and 250 °C and χ = −0.34 [8,9]. PS and PMMA are not miscible, but the χ value is low, χ = 0.01 [2,10].
Statistical Mixture Modeling
Experimental mixture designs and models permit the determination of optimum ingredient proportions with a minimum number of experiments [11,12]. Mixture models are derived from the general polynomial equation used in response surface analysis, which expresses how a predicted response value, $\hat{y}$, changes with varying values of the q experimental factors being investigated. For mixtures, the q factor values, or ingredient proportions, $x_i$, are related by
$$\sum_{i=1}^{q} x_i = 1,$$
since the proportions of the ingredients in a mixture always sum to 1 (or 100%). Substitution of this constraint into Eq. 1 results in the mixture model for three ingredients (q = 3), with redefined coefficients indicated by asterisks.
The first three terms form the linear mixture model. Its $b_i^*$ coefficients (i = 1, 2, 3) can be determined simply by performing response measurements on the pure components of the mixture being investigated. These pure components are represented by points at the vertices of the mixture concentration triangle in Fig. 1. The quadratic model includes the next three terms, whose $b_{ij}^*$ coefficients indicate synergic or antagonistic interaction effects on the response values between two of the mixture ingredients. To determine these effects, experiments on binary mixtures are necessary. By a multivariate statistical criterion, the binary 50/50 mixtures shown in Figure 1 are the most appropriate ones to investigate for precise model determination. If the $b_{123}^* x_1 x_2 x_3$ term is added to the quadratic model, the result is the special cubic model already used by the authors to model the mechanical resistance of a PS/PMMA/PVDF ternary blend [13]. A response measurement on at least one ternary mixture is necessary to evaluate $b_{123}^*$, the best choice being the (33/33/33) mixture indicated at the center of the concentration triangle in Fig. 1. This design results in the smallest statistical uncertainties in the mixture model parameter values.
If the response dependence on the ingredient proportions is too complex to be described by the above models, the full cubic model containing all the terms in Eq. 3 can be used. Besides the ternary mixture, binary 33/66 mixtures, also shown in Fig. 1, rather than the 50/50 mixtures, are recommended for model determination. Since it is not possible to know a priori which model will best represent the experimental data, experiments in our investigation were performed for all the mixtures indicated in Fig. 1. The models tested in this work contain a maximum of ten parameters (the full cubic model for three ingredients), and the thirteen distinct mixtures used permit an analysis of variance (ANOVA) of the residuals, which provides statistical F indices of regression significance and lack of fit for all contemplated models. Replicate determinations for each mixture were performed so that the experimental error could also be estimated.
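As an illustration of how such models are fitted, the sketch below performs a least-squares fit of the ten-parameter full cubic Scheffé model to a thirteen-point ternary design; the design proportions follow Fig. 1, but the response values are hypothetical placeholders, not the measured data of Tables 1-3.

```python
import numpy as np

def full_cubic_design_matrix(x1, x2, x3):
    """Ten-term Scheffe full cubic model for a three-component mixture:
    b1, b2, b3, b12, b13, b23, d12, d13, d23, b123."""
    return np.column_stack([
        x1, x2, x3,
        x1 * x2, x1 * x3, x2 * x3,
        x1 * x2 * (x1 - x2), x1 * x3 * (x1 - x3), x2 * x3 * (x2 - x3),
        x1 * x2 * x3,
    ])

# Thirteen-point design of Fig. 1: pure components, 50/50 and 33/66
# binaries, and the centroid (proportions always sum to 1).
design = np.array(
    [(1, 0, 0), (0, 1, 0), (0, 0, 1),
     (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5),
     (2/3, 1/3, 0), (1/3, 2/3, 0), (2/3, 0, 1/3),
     (1/3, 0, 2/3), (0, 2/3, 1/3), (0, 1/3, 2/3),
     (1/3, 1/3, 1/3)])
y = np.array([0.5, 0.6, 3.9, 0.1, 1.0, 2.5, 0.2, 0.3,
              2.0, 4.5, 1.8, 3.2, 0.9])   # hypothetical energies (J)

X = full_cubic_design_matrix(*design.T)
coef, res, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b1", "b2", "b3", "b12", "b13", "b23",
                "d12", "d13", "d23", "b123"], coef.round(2))))
```

With thirteen distinct mixtures and ten parameters, three degrees of freedom remain for the lack-of-fit test, and the replicate measurements supply the pure-error estimate used in the ANOVA.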
Mechanical tests
Polymer mixtures were extruded in a Custom Scientific Instruments CS 194 mixing extruder, with the rotor temperature at 150 °C and a rotor rate of 220 rpm. The extruded rods were then cut into pellets and re-extruded to ensure mixing efficiency. Sheets of 8 × 0.4 mm were obtained, and sheets 60 mm long were used for the mechanical tests in an EMIC tensile machine at a rate of 5 mm/min. The tensile strength at break and the energy absorbed by the sample before breaking (the integral of the stress-strain curve) were measured.
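The energy at break is the area under the stress-strain curve; the snippet below shows the numerical integration on synthetic data (the trapezoidal rule is one standard choice; the instrument software actually used may differ).

```python
import numpy as np

# Synthetic stress-strain record: strain (dimensionless), stress (MPa)
strain = np.linspace(0.0, 0.08, 200)
stress = 800 * strain * np.exp(-strain / 0.05)   # toy elastoplastic curve

toughness = np.trapz(stress, strain)  # energy density, MPa = MJ/m^3
# Multiply by the specimen gauge volume to get absorbed energy in joules
volume_m3 = 60e-3 * 8e-3 * 0.4e-3     # 60 mm x 8 mm x 0.4 mm sheet
print(f"energy at break ~ {toughness * 1e6 * volume_m3:.3f} J")
```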
X-Ray diffraction
The sample crystallinity was evaluated using a Shimadzu X-Ray diffractometer with Cu Kα radiation.
Results and Discussion
Values of tensile strength at break are presented in Tables 1-3. For the statistical models, a minimum number of formulations was chosen following the experimental design of Fig. 1. The area below the stress-strain curve gives the energy that the sample is able to absorb before breaking and is related to the impact resistance. Quite low values were measured for binary PS/PVDF blends (0.007-0.08 J). Addition of polymethacrylates increased the energy values considerably, to 0.3-1.3 J for the 33/33/33 blends, showing their efficiency in improving phase adhesion and dispersion in the multicomponent system. The effect was much more evident in the case of blends with PEMA. The interaction parameters for ternary blends are usually given in terms of B, where $B_{ij} = RT\chi_{ij}/V$ and $\Delta B = B_{21} + B_{13} - B_{32}$, with $\chi_{ij}$ the Flory-Huggins interaction parameter between components i and j, $\varphi_i$ the volume fraction of component i, and V the base molar volume. Although B is strictly valid only for nonpolar systems in equilibrium, the concept has been widely used in the field of polymer blends [14]. For PVDF/PS, B = 18 × 10⁶ J/m³; for PVDF/PEMA, B = −12.8 × 10⁶ J/m³; and for PS/PEMA, B = 5.0 × 10⁶ J/m³. Using Eq. 4, the B₁₂₃ values can be estimated for PS/PEMA/PVDF. The B values tend to decrease (miscibility increases) as the PEMA content increases, as shown in Table 4. Blends with lower B values are also able to absorb more energy before breaking, as shown in Table 4. For PVDF/PMMA, B = −17.3 × 10⁶ J/m³, and for PMMA/PS, B = 0.2 × 10⁶ J/m³.
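Eq. 4 is not reproduced in this excerpt; a common form for the net interaction parameter of a ternary blend is the pairwise volume-fraction-weighted sum, which the sketch below assumes. Treat it as an illustration of the trend in Table 4, not as the paper's exact formula.

```python
# Pairwise interaction parameters (units of 10^6 J/m^3) quoted in the text
B = {("PVDF", "PS"): 18.0, ("PVDF", "PEMA"): -12.8, ("PS", "PEMA"): 5.0}

def pair(i, j):
    return B.get((i, j), B.get((j, i)))

def B_ternary(phi):
    """ASSUMED form B_123 = sum_{i<j} B_ij * phi_i * phi_j; Eq. 4 of the
    paper may normalize differently, so this is only illustrative."""
    names = list(phi)
    return sum(pair(a, b) * phi[a] * phi[b]
               for k, a in enumerate(names) for b in names[k + 1:])

# B decreases (miscibility improves) as the PEMA fraction grows
for f in (0.0, 0.2, 0.4):
    phi = {"PVDF": (1 - f) / 2, "PS": (1 - f) / 2, "PEMA": f}
    print(f"phi_PEMA={f:.1f}  B={B_ternary(phi):+.2f} x 10^6 J/m^3")
```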
Considering only the binary PVDF/PXMA mixtures, a strong synergic effect was observed for X = methyl and ethyl.Both high strength and energy at break values were observed with 66 wt % PVDF.
The values of the energy and strength at break responses obtained for the experimental design in Figure 1 are presented in Tables 1, 2 and 3 for the PS/PMMA/PVDF, PS/PEMA/PVDF and PS/PBMA/PVDF systems. The response values are averages of duplicate or higher replicate determinations, and standard deviations for all the response averages are also included in these tables. Full cubic models obtained by multiple linear regression using Eq. 3 proved to be the best at fitting the properties of the PS/PMMA/PVDF and PS/PBMA/PVDF mixtures (see Table 5). They were statistically significant well above the 95% confidence level for both the energy and strength at break responses. The PMMA models present marginally significant lack-of-fit values at the 95% confidence level; however, almost all of the explainable variance has been described by the model. The lack of fit for the PBMA models is higher than for the PMMA ones. The high percentage of variance (96%) reproduced for the energy at break response indicates that the energy at break values predicted by the full cubic model are in good agreement with the experimental results. The percentage of variance of the experimental strength at break values reproduced by the full cubic model is much lower, 83%. Even though this regression model is highly significant, its predicted strength at break values are less accurate than those for the PS/PMMA/PVDF system. The model parameters for the energy and strength at break responses of the PS/PMMA/PVDF and PS/PBMA/PVDF mixtures are presented in Tables 6 and 7.
The best results for the energy and strength at break data of the PS/PEMA/PVDF system were obtained with special cubic models for both responses; however, for this system the regressions are not statistically significant and present extremely high lack of fit. Evidently all the models are too simple to reproduce the complex behavior of the responses for this system. The reason for this lack of fit is that the models, even the full cubic one, are not capable of reproducing the low PEMA/PVDF 50/50 experimental energy at break value, since it is situated between two high values for the PEMA/PVDF 66/33 and 33/66 binary mixtures. This was confirmed by removing the PEMA/PVDF 50/50 mixture values from the data set and repeating the regression analysis: the model for this reduced data set was highly significant and had no detectable lack of fit. Similar behavior is found for the strength at break values, for which the 50/50 PEMA/PVDF binary mixture also has a much lower value than the 66/33 and 33/66 mixtures. The mixture model parameter values and their standard errors for the energy and strength at break responses of the PS/PEMA/PVDF system for the reduced data set are included in Tables 6 and 7, and the corresponding analysis of variance values are given in Table 5. The anomalous behavior of the 50/50 PEMA/PVDF energy and strength at break values may result from a higher crystallinity. Crystallinity was evaluated by X-ray diffraction for PEMA/PVDF binary blends obtained by extrusion under the same conditions; values are shown in Table 8. Crystallinity decreases from 73% for pure PVDF to 40% when 33% PEMA is added, but increases again to 50% when the PEMA content in the binary blend is 50%. The models should better describe totally amorphous systems, or systems for which crystallinity linearly decreases with addition of the amorphous component. The crystallinity may decrease the strength at break to unexpected values. The bPVDF, bPS, bPVDF,PS and dPVDF,PS parameters depend only on the PVDF and PS components. For this reason their values are expected to remain constant as the third component changes among PMMA, PBMA and PEMA. This can be confirmed by examining the values of these parameters in Tables 6 and 7. The bPVDF energy at break values are all statistically significant, as can be seen in Table 6, with values varying from 3.87 to 4.06 J for the three ternary systems. At the 95% confidence level this parameter is equivalent to the 3.90 J energy at break value measured for pure PVDF and given in Tables 1-3. In a similar way, the measured strength at break value of 64.4 MPa in these tables is equivalent to all the bPVDF parameter values in Table 7. Also, the bPS energy at break values in Table 6 are equivalent at the 95% confidence interval to the measured average energy at break value of 0.64 J, except for the 0.67 J value of the PS/PBMA/PVDF system. For strength at break, bPS is significant at the 95% confidence level for the PBMA and PEMA ternary systems.
The bPVDF,PS energy at break parameters are all statistically significant and vary from −9.71 J for the PEMA system to −10.10 J for PMMA. These results clearly indicate the existence of an antagonistic interaction between PVDF and PS, independent of whether the third component is PMMA, PBMA or PEMA, and are consistent with the fact that PVDF and PS are immiscible. An analogous observation can be made for the bPVDF,PS parameter for strength at break. The intensity of the antagonistic interaction between PVDF and PS does not depend on the third component present, since the bPVDF,PS parameter values are almost the same. The remaining model parameter characterized only by PVDF and PS, dPVDF,PS, is not evaluated to be an important model parameter and is statistically insignificant for all the strength at break and energy values. Large synergic effects between PVDF and both PMMA and PEMA are indicated by the positive bX,PVDF (X = PMMA, PEMA) energy and strength at break values in Tables 6 and 7. All the 66/33 and 33/66 PMMA:PVDF and PEMA:PVDF mixtures have energy and strength at break values that are much larger than those found for the three pure PMMA, PEMA and PVDF components. Even the 50/50 PEMA:PVDF strength at break value (105 MPa) is much larger than the values of this response for the pure PEMA and PVDF components, 47 and 64 MPa. This stems from the high PEMA/PVDF and PMMA/PVDF miscibilities (low Bij values).
The evidence for strong synergic interactions involving PMMA-PVDF and PEMA-PVDF contrasts with that for PBMA-PVDF. The bPBMA,PVDF values are significant and negative, −7.61 J and −124.79 MPa. The 66/33, 50/50 and 33/66 binary PBMA-PVDF mixtures all have energy and strength at break values well below the corresponding pure PVDF values, 3.90 J and 64.4 MPa, and below or only slightly higher than the response values for pure PBMA, 0.51 J and 6.52 MPa.
The only significant binary interaction involving the PXMA components and the PS polymer occurs for the PBMA/PS interaction in the energy at break response. The bPBMA,PS value of 4.57 ± 0.55 J indicates a small but highly significant synergic effect between these polymeric components. However, this binary effect is not significant for the PBMA/PS interaction in strength at break. Antagonistic three-component interactions in the energy and strength at break values for the PS/PMMA/PVDF and PS/PEMA/PVDF systems are indicated by the negative bPMMA,PVDF,PS and bPEMA,PVDF,PS parameter values in Tables 6 and 7. These results indicate interactions affecting both response values only when all three ingredients of each system are present simultaneously in the mixture. However, the effect of these ternary antagonistic interactions on the energy and strength at break values is more than compensated by contributions from the bX,PVDF terms. On the other hand, the bPBMA,PVDF,PS values of 6.21 ± 3.87 J and 286.58 ± 229.50 MPa indicate synergic effects involving all three mixture ingredients that cannot be explained by possible binary interactions.
The dij cubic parameters are more difficult to interpret. Negative dX,PVDF coefficients are found for the PS/PMMA/PVDF and PS/PEMA/PVDF ternary systems. In contrast, the dX,PVDF values for the PS/PBMA/PVDF system are positive and statistically significant, although their absolute magnitudes are smaller than those for the other ternary systems. The dX,PS parameters are not statistically significant, except for the PS/PBMA/PVDF energy and strength at break results. The energy at break dPBMA,PS parameter is negative whereas the strength at break parameter is positive. This contrasts with all the other statistically significant model parameters, for which identical signs are observed for both the energy and strength at break responses.
Isoresponse contour curves for the energy and strength at break properties as a function of the PS/PMMA/PVDF and PS/PBMA/PVDF component proportions are presented in Fig. 2. The response surface contour curves for the energy and strength at break responses of the PS/PMMA/PVDF system have essentially the same forms, as can be seen in Figs. 2a and 2b, indicating that these properties are highly correlated for this ternary system. The energies and strengths at break are predicted to be maximum for binary mixtures of about 30% PMMA and 70% PVDF. The left-hand sides of both the energy and strength at break concentration triangles, representing mixtures rich in PS and/or PVDF, have response values close to zero. Table 9 shows values of tensile strength at break for PS/PMMA/PVDF that were not used to construct the models and the isoresponse contour curves; they can therefore be used to confirm the model efficiency. As can be seen, all the values are in agreement with the values predicted in Fig. 2.
The energy at break values of the PS/PBMA/PVDF binary and ternary mixtures are all smaller than the energy at break of pure PVDF. All the measured PS/PBMA/PVDF energy at break values are predicted by the mixture model to be less than 4 J, much less than the energy at break values observed for the PMMA/PVDF 50/50 and 33/66 binary mixtures. The same ordering is also observed for the strength at break values of the PS/PBMA/PVDF and PS/PMMA/PVDF systems. This indicates that PMMA is a better compatibilizer than PBMA for the PVDF/PS system. As shown by the bPXMA,PVDF values, a synergic interaction was observed between PMMA and PVDF but not between PBMA and PVDF.
Conclusions
Full cubic statistical models describe with high significance the mechanical properties of the PS/PMMA/PVDF and PS/PBMA/PVDF ternary blends. For ternary blends with PEMA, in contrast to those with PMMA or PBMA, the lack of fit was high due to the exceptionally low energy and strength at break values measured for the 50/50 PEMA/PVDF mixture. At least for ternary blends containing PMMA or PBMA, isoresponse contour curves could be used to predict the mechanical properties over the whole range of mixture compositions from just a limited number of experimental points. As expected, the statistical models better describe totally amorphous systems or systems for which crystallinity decreases linearly with addition of the amorphous component.
The statistical bij parameters determined for these models reflect the miscibility of each polymer component pair. The results also show that PEMA and PMMA perform better than PBMA as compatibilizers for the PVDF/PS blend.
Figure 1 .
Figure 1. Component proportions of PXMA (X = M, B, E), PVDF and PS used in the thirteen mixtures investigated in this work.
Figure 2 .
Figure 2. Response surfaces for (a) energy at break and (b) strength at break data of the PS/PMMA/PVDF mixtures, and for (c) energy at break and (d) strength at break data of the PS/PBMA/PVDF mixtures.
Table 1 .
Average energy and strength at break values and their standard deviations for the mixture design of the PS/PMMA/PVDF blends a .
a) Standard deviations were calculated from replicate measurements for each of the above mixtures. The energy and strength at break values given are averages of these determinations.
Table 2 .
Average energy and strength at break values and their standard deviations for the mixture design of the PS/PBMA/PVDF blends a .
Table 3 .
Average energy and strength at break values and their standard deviations for the mixture design of the PS/PEMA/PVDF blends a .
Table 4 .
Interaction parameters, B, calculated with Eq. 4 and measured energy at break, E, for PS/PEMA/PVDF blends.
Table 5 .
Analysis of the variance of the energy and strength at break regression results for the three ternary systems. a) MSreg, MSr, MSlf and MSpe are the mean squares (sum of squares divided by the number of degrees of freedom) of the regression, residuals, lack of fit and pure error, respectively. The Fν1,ν2 are tabulated 95% confidence values for the F distribution with ν1 and ν2 degrees of freedom. When the calculated MSlf/MSpe ratio is smaller than Fν1,ν2, the regression does not have significant lack of fit; in these cases, the fact that the MSreg/MSr ratios are larger than their corresponding Fν1,ν2 values indicates a significant regression equation. The % variance results refer to the percentage of the experimental variance explained by the regression, followed by the percentage of explainable variance in parentheses. b) Regression results for the PS/PEMA/PVDF data set with the response for the 50/50 PEMA/PVDF mixture removed.
Table 6 .
Complete cubic mixture model parameters and their 95% confidence interval values for the energy at break responses of the three ternary systems. a) 95% confidence intervals calculated using the values of Tables 2 and 3 and the appropriate t-distribution parameters. b) Units of joules, J. X = PMMA, PBMA and PEMA.
Table 7 .
Complete cubic mixture model parameters and their 95% confidence interval values for the strength at break responses of the three ternary systems. a) See Table 6. Units of MPa. b) See Table 6.
Table 8 .
Percentage of crystallinity of PEMA/PVDF binary blends.
Table 9 .
Strength at break for the PS/PMMA/PVDF system.
"Materials Science"
] |
Whole genome analysis of CRISPR Cas9 sgRNA off-target homologies via an efficient computational algorithm
Background: The beauty and power of the genome-editing mechanism of the CRISPR Cas9 endonuclease system lies in the fact that it is RNA-programmable: Cas9 can be guided to any genomic locus complementary to a 20-nt single guide RNA (sgRNA) to cleave double-stranded DNA, allowing the introduction of desired mutations. Unfortunately, it has been reported repeatedly that the sgRNA can also guide Cas9 to off-target sites where the DNA sequence is homologous to the sgRNA. Results: Using the human genome and Streptococcus pyogenes Cas9 (SpCas9) as an example, this article mathematically analyzes the probabilities of off-target homologies of sgRNAs and finds that for a genome as large as the human genome, potential off-target homologies are inevitable in sgRNA selection. A highly efficient computational algorithm was developed for whole-genome sgRNA design and off-target homology searches. By means of a dynamically constructed sequence-indexed database and a simplified sequence alignment method, this algorithm achieves very high efficiency while guaranteeing the identification of all existing potential off-target homologies. Via this algorithm, 1,876,775 sgRNAs were designed for the 19,153 human mRNA genes and only two sgRNAs were found to be free of off-target homology. Conclusions: By means of the novel and efficient sgRNA homology search algorithm introduced in this article, genome-wide sgRNA design and off-target analysis were conducted, and the results confirmed the mathematical analysis that it is almost impossible for a sgRNA sequence to escape potential off-target homologies. Future innovations in the CRISPR Cas9 gene-editing technology need to focus on how to eliminate off-target Cas9 activity.
Background
Derived from the microbial clustered, regularly interspaced, short palindromic repeats (CRISPR) system, the Cas9 endonuclease has become an effective and reliable tool for genome editing in eukaryotes [1-6]. The magnificence of the working mechanism of Cas9 is that it can be guided to almost any genomic locus by a 20-base sgRNA matching the DNA sequence immediately upstream of a short motif required by Cas9, the so-called protospacer adjacent motif (PAM) [1-4]. The PAM sequence is absolutely required for Cas9 to function and depends on the species of Cas9. For SpCas9, the most widely used Cas9 species, the PAM sequence is NGG, where N can be A, C, G or T. The very first step in making use of the sgRNA-Cas9 system for genome editing is to locate a primary PAM within the target region; the 20 bases of DNA sequence immediately upstream of the PAM constitute the guide RNA sequence. Though they can be on either the sense or the antisense strand, the PAM and sgRNA sequences must be on the same DNA strand.
Certain rules for the design of active sgRNAs have been proposed [6,7]. As the gene-editing mechanism of sgRNA-Cas9 is to generate indels via DNA repair mechanisms, it is not difficult to understand that for mRNA genes the target site is best placed inside the gene coding sequence and near the start codon. Another design rule concerns the GC content: higher sgRNA GC content was found to result in higher Cas9 activity [8]. In addition, the design of sgRNAs should avoid certain sequences, for example polyT [7].
One of the most important design rules is to avoid potential Cas9 off-target activity. Unfortunately, a significant number of experiments have discovered undesired off-target cleavages by Cas9 at genome sites where the DNA sequences are homologous to the 20-base sgRNA, though with one or more mismatches [7-16]. Considering the large size of some genomes, for example the human, mouse and rat genomes, avoiding off-target Cas9 activity immediately becomes the most critical challenge in the application of the sgRNA-Cas9 technology. Systematic research has revealed sequence features governing sgRNA off-target interaction. However, possible off-target Cas9 cleavages remain a defect and a challenge in sgRNA-Cas9 applications.
The large number of off-target studies of the sgRNA-Cas9 system has led to significant discoveries. Jinek et al. were the first to identify a seed sequence that is less tolerant of mismatches for sgRNA-Cas9 activity [1]. The seed sequence is generally defined as the 12 bases at the 3′ end of the sgRNA sequence, immediately upstream of the PAM [1,10-12]. Mali et al. found that the sgRNA-Cas9 system can tolerate one to three target mismatches, and that two mismatches inside the seed sequence can eliminate off-target activity [11]. Based on their data, Fu et al. concluded that off-target activity can be observed with up to five mismatches when the concentrations of both sgRNA and Cas9 are relatively high [9]. Hsu et al. discovered that off-target activity depends on the number and positions of the mismatches between the sgRNA and the target DNA sequence [10]. Lin et al. systematically studied sgRNA-Cas9 off-target activity when there are indels between the target DNA and sgRNA sequences [13]. Their results showed that sgRNAs with low GC content have less tolerance for mismatches. They also found that a bulge in the sgRNA or the DNA preserves less Cas9 activity, a result later confirmed by Doench et al. [7].
Making the off-target activity of the sgRNA-Cas9 system even more complicated, it has been observed that secondary PAM sequences, in addition to the NGG motifs, can support Cas9 activity [3,7,17,18]. Though these secondary PAMs are far less effective than the NGG PAMs, they must be taken into consideration in off-target searches [3,7]. For SpCas9, the secondary PAMs include NAG, NCG and NGA [3,7].
The complexity of the Cas9-sgRNA off-target interaction and the large size of the human genome led us to wonder about the probability that a given sgRNA sequence has at least one off-target homology. Theoretically, is it possible to apply the Cas9-sgRNA system without any potential off-target homologies that may introduce unwanted genome editing? In this article, we analyze this question from a mathematical perspective and then present a very efficient algorithm for sgRNA off-target homology search. This algorithm can complete a whole-genome sgRNA design and off-target search in about 40 h under the default settings, an efficiency that cannot be achieved by other available sgRNA software. Via this algorithm, we searched for the off-target homologies of all sgRNAs designed for all human mRNA genes. The computational results confirmed our mathematical analysis.
Methods
The human genome was the sequence source used in this study. As SpCas9 is the most widely used CRISPR-Cas9 system, this study focuses on the mathematical and computational analysis of the sgRNA-SpCas9 system. Human mRNA RefSeq sequences were downloaded from NCBI as the source for sgRNA sequence design. The off-target site searches for the designed sgRNA sequences were conducted on the human chromosome sequences hs_ref_GRCh38.p2, also downloaded from NCBI. Computational programs were implemented in Java and executed on a 2016 Dell Precision 7510 laptop computer with an Intel(R) Core(TM) i7-6820HQ CPU @ 2.7 GHz and 64.00 GB RAM.
Mathematical analysis
One crucial assumption made in this mathematical analysis is that the nucleotides A, C, G, T appear randomly at any single location. As there are repeated sequences in the human genome, treating the human genome as a purely random combination of A, C, G, T must be regarded as a simplifying assumption. Furthermore, we also assume that the human genome has exactly three billion 23-base regions for sgRNA off-target search on one DNA strand. Since sgRNAs can be designed on both the sense and antisense strands, off-target homologies must be searched on both DNA strands; the human genome therefore contains six billion 23-base regions in total. For the off-target homology search, we then make the following assumptions: 1. All off-target homologies must have a primary NGG PAM or a secondary PAM immediately downstream of the sgRNA binding location.
2. All off-target homologies can have up to four base mismatches within a given sgRNA sequence. If there are at least five base mismatches, the DNA sequence in study is not considered an off-target homology.
The reason for defining four rather than five base mismatches as the cut-off is that only one active off-target homology with five base mismatches has been reported in the literature, and the off-target activity in that case could be eliminated by lowering both the Cas9 and sgRNA concentrations [9]. 3. All off-target homologies can have at most one bulge plus one base mismatch [8,13]; this implies that a bulge penalty equals three base mismatches. 4. All off-target homologies can have up to two base mismatches or one indel within the seed sequence of the sgRNA.
5. No off-target homology can have a two-base DNA bulge, though an off-target homology can have a two-base RNA bulge provided there is no base mismatch at the same time. No off-target homology can have a two-base bulge inside the seed sequence.
Based on the above five assumptions, we computed the possible combinations of homologies given a sgRNA sequence. The results are summarized in Table 1. The following explains how the data in Table 1 were obtained.
The number of combinations of DNA sequences with different numbers of mismatches is given by $\binom{m}{n} \cdot 3^{n}$, where m is the length of the DNA sequence under consideration and n the number of mismatches. Thus, for the seed sequence of 12 bases, there are 1, 36 and 594 combinations for zero, one and two base mismatches, respectively.
As the total number of base mismatches cannot exceed four, the number of base mismatches available for the remaining non-seed region ranges from zero to four, and is capped at three or two if the seed sequence already has one or two base mismatches. The total number of combinations of homologies with up to four base mismatches is therefore computed as
$$\sum_{s=0}^{2}\binom{12}{s}3^{s}\sum_{k=0}^{4-s}\binom{8}{k}3^{k}.$$
The computation of the number of combinations of indels deserves a detailed explanation. There are two cases: a DNA bulge, i.e. an additional base in the DNA sequence, and an RNA bulge, i.e. one or two fewer bases in the DNA sequence. For both DNA and RNA bulges there are two subcases, a bulge with zero or with one base mismatch. However, for an RNA bulge of two bases (two bases missing from the aligned DNA sequence), the number of base mismatches must be zero. In addition, if the bulge is inside the seed sequence, then no base mismatch is allowed inside the seed sequence.
We start with the DNA bulge with zero mismatches, which means that the 20-base RNA sequence is in fact aligned with a 21-base DNA sequence and all 20 bases of the sgRNA must exactly match a base in the DNA sequence. In a 20 vs 20 exact alignment, there are a maximum of 20 positions in the DNA sequence at which to insert one additional base, and this additional base can be any one of A, C, G, T. There are two additional restrictions when considering a DNA bulge: a DNA bulge is considered only when there are at least five base mismatches between the sgRNA and DNA sequences (20 bases vs 20 bases), and the introduction of the bulge must trade off more base mismatches than the bulge penalty is worth. Thus, when introducing a bulge inside the DNA sequence, the DNA fragment to the left of the bulge must be at least four bases long so that there are enough base mismatches to be traded off by the bulge. Therefore, there are 16 × 4 = 64 combinations. When there is an indel and a mismatch, the computation becomes a bit more complicated. For a DNA bulge, the bulge can be anywhere, but the mismatch can only be inside the non-seed region if the bulge is already inside the seed sequence; the maximum number of combinations of an indel plus a base mismatch follows from the same position counting. For the RNA bulge case, the expression is analogous. The last condition to consider is the RNA bulge of two bases. Since a two-base RNA bulge can only be inside the non-seed region, and its introduction must trade off at least five base mismatches, there are only two different ways to form such a bulge, giving 2 × 4 × 4 = 32 combinations. Based on the data in Table 1, the probability for a single 23-base DNA region to be an off-target homology of a given sgRNA sequence is 5.471 × 10⁻⁸. Considering that there are six billion 23-base DNA regions, the probability for a sgRNA to have no potential off-target homology is 2.67 × 10⁻¹⁴³, and the expected number of off-target sites is 328.
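The headline numbers can be checked directly. The short script below reproduces the seed-mismatch counts and converts the per-site probability into the expected off-target count and the probability of having none (the per-site probability 5.471 × 10⁻⁸ is taken from Table 1 rather than recomputed, since it also includes the indel cases).

```python
import math

# Seed sequence: 12 bases, up to two mismatches (Table 1 values)
for n in range(3):
    print(f"{n} mismatches in seed: {math.comb(12, n) * 3**n} combinations")
# -> 1, 36, 594

p_site = 5.471e-8          # probability one 23-base site is a homology
n_sites = 6_000_000_000    # both strands of a 3-Gb genome

expected = p_site * n_sites
p_clean = math.exp(n_sites * math.log1p(-p_site))  # (1-p)^N without underflow
print(f"expected off-target sites per sgRNA: {expected:.0f}")   # ~328
print(f"P(no off-target homology): {p_clean:.2e}")              # ~2.7e-143
```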
Based on the above mathematical analysis, it seems that for a given SpCas9 sgRNA sequence, potential off-target homologies in the human genome are unavoidable.
Computational algorithm
We implemented a sgRNA design and off-target search algorithm in Java. The sgRNA design is based on the rules outlined in [6,7], with the following exceptions: 1) sgRNAs are designed only inside the first half of the CDS; 2) no sgRNA contains a run of four T's or four A's.
As the off-target search must be conducted across all the human chromosome sequences, it can be very expensive in computing time. The high efficiency of our off-target search comes from two critical algorithmic innovations, which are explained below in detail.
The first innovation is that an indexed database based on the seed sequence variations is dynamically constructed before any homology search starts. By assumption 4, for a DNA region to be an off-target homology of a given sgRNA, it must align well with the sgRNA seed sequence, with at most two mismatches or one indel. Hence, the off-target homology search starts by finding the DNA sequences that are variations of the sgRNA seed sequence. The seed sequence consists of 12 bases, so there are 4¹² different 12-base variations in total. If we assign 0, 1, 2, 3 to A, C, G, T respectively and convert the DNA sequence to a base-4 number system, then each 12-base variation can be represented as a unique integer by the expression $\sum_{i=0}^{11} N_i \cdot 4^{i}$, where $N_i = 0, 1, 2, 3$ represents A, C, G, T respectively.
Since the package was implemented in Java, whose int data type can only hold integers ranging from $-2^{31}$ to $2^{31}-1$, and the human genome has about three billion base pairs, i.e. six billion bases, we decided to divide the 24 chromosomes into two groups with roughly equal numbers of nucleotides. For each group, a two-dimensional array G[i][j] is constructed as follows: i is the integer value of a 12-base sequence, and the row G[i] stores all the positions of that 12-base sequence in the group of chromosomes. A positive G[i][j] indicates that the position is on the sense strand, while a negative G[i][j] means that the 12-base sequence is found on the antisense strand. Given the integer G[i][j], a conversion system matches it to a specific chromosome, a specific NT record, and a specific position inside the NT sequence. An important point in constructing the array is that G[i][j] only stores the location information of those 12-base sequences followed by a primary or secondary PAM.
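A minimal sketch of this indexing scheme is shown below (in Python rather than the authors' Java, and with helper names of our own): every 12-base seed followed by a valid PAM is hashed to a base-4 integer key, and the genomic positions are grouped under that key.

```python
BASE = {"A": 0, "C": 1, "G": 2, "T": 3}
PAMS = ("AG", "CG", "GA", "GG")  # NGG primary plus NAG/NCG/NGA secondary

def seed_to_int(seed12: str) -> int:
    """Base-4 encoding: sum of N_i * 4^i over the 12 seed bases."""
    return sum(BASE[b] * 4**i for i, b in enumerate(seed12))

def build_seed_index(chromosome: str) -> dict[int, list[int]]:
    """Map each PAM-adjacent 12-base seed (as an integer) to its positions.
    Only the sense strand is shown; the real index also scans the
    antisense strand and stores those positions as negative values."""
    index: dict[int, list[int]] = {}
    for pos in range(len(chromosome) - 14):
        seed = chromosome[pos:pos + 12]
        pam = chromosome[pos + 13:pos + 15]  # skip the 'N' of the PAM
        if pam in PAMS and "N" not in seed:
            index.setdefault(seed_to_int(seed), []).append(pos)
    return index

idx = build_seed_index("ACGTACGTACGTTGGACGT")
print(idx)  # seed ACGTACGTACGT followed by PAM TGG at position 0
```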
Given a 20-base sgRNA sequence, all variations of its 12-base seed sequence are generated according to assumption 4, interpreted as: 1) a variation can have at most two mismatches with the seed sequence; 2) a variation can have at most one indel when aligned against the seed sequence. The homology search algorithm then finds all the exact positions inside each NT record for all the different variations very quickly, and then uses a dynamic programming algorithm to determine whether there is an off-target homology at each position.
The second innovation is the efficient dynamic programming algorithm for homology determination. The dynamic programming algorithm is illustrated in Table 2.
The construction of Table 2 is explained as follows. Given a DNA sequence marked as d and a sgRNA sequence marked as r, for d to be an off-target homology of r, it must have a PAM (either primary PAM or secondary PAM) that aligns with the PAM of r.
For DNA bulges of one or two bases, marked L1 and L2 respectively in Table 2, and for RNA bulges of one or two bases, marked R1 and R2, the values are computed from the corresponding shifted alignments. The algorithm computes the number of base mismatches only, and these are the values shown in Table 2. For L1, L2, R1 and R2, since each case involves a specific bulge, the total number of mismatches must include the corresponding bulge penalty. In our default setting, a bulge penalty equals three base mismatches (counted as two if inside the seed sequence), an RNA bulge extension penalty equals one base mismatch, and a DNA bulge extension penalty equals two base mismatches. Thus in Table 2, when L1 is computed, although it is shown that L[13] = L[14] = L[15] = L[16] = 1, the totals are in fact 1 + DNA bulge penalty = 4. The result shows that by shifting the 5′ fragment (up to the 13th, 14th, 15th or 16th base) one base to the left, we can achieve an alignment with only one base mismatch and one DNA bulge.
The above algorithm illustrates the general condition. There are some special cases that the implementation must also consider. Since the seed sequence has more stringent requirements on the number of mismatches, the numbers of base mismatches and indels within the seed sequence should be counted and stored to determine whether a specific alignment should be considered an off-target homology. In the example shown in Table 2, although case L1 achieves a good alignment with only one base mismatch and one DNA bulge, d is ultimately not considered a homology of r because both the DNA bulge and the base mismatch are inside the seed sequence. A total of five cases are computed in this algorithm: H, L1, R1, L2, R2. If in one case d is found to be a homology of r, there is no need to go on to the next case. For the cases L1, R1, L2 and R2, a shortcut can be applied: if (m + bulge penalty) becomes larger than the number of base mismatches allowed, there is no need to continue computing that case, because the alignment it represents is guaranteed not to be a homology.
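The sketch below illustrates the flavor of this shortcut-based homology test in Python (the published implementation is in Java; the function names and simplified scoring here are ours). It handles the no-bulge case H and the one-base DNA bulge case L1; the seed-specific limits and the RNA bulge cases are omitted for brevity.

```python
def mismatches(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def is_homology(dna21: str, guide: str, max_mm: int = 4,
                dna_bulge_penalty: int = 3) -> bool:
    """Simplified test: case H compares the guide to the first 20 DNA
    bases; case L1 deletes one DNA base (a DNA bulge) and retries."""
    # Case H: plain alignment, no bulge
    if mismatches(dna21[:20], guide) <= max_mm:
        return True
    # Case L1 shortcut: skip it if the bulge penalty alone already
    # exceeds the mismatch budget
    if dna_bulge_penalty > max_mm:
        return False
    for i in range(5, 21):  # leaves at least 4 DNA bases left of the bulge
        aligned = dna21[:i - 1] + dna21[i:]   # drop the bulged DNA base
        if mismatches(aligned, guide) + dna_bulge_penalty <= max_mm:
            return True
    return False

guide = "GAGTCCGAGCAGAAGAAGAA"
print(is_homology("GAGTCCGAGTCAGAAGAAGAA", guide))  # extra T -> DNA bulge
```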
Results and discussion
We first simulated a human genome of three billion base pairs in which A, C, G, T are randomly distributed. With this simulated genome, we examined the off-target homologies of 1,000,000 sgRNAs randomly designed from the simulated genome and of the 1,876,775 sgRNAs designed for the 19,153 human mRNA genes according to the design rules above. The off-target homology search identified on average 326 homologies per sgRNA for the group of 1,000,000 sgRNAs and 325 homologies per sgRNA for the group of 1,876,775 sgRNAs. Both results are fairly close to the mathematically expected 328 homologies. In fact, the mathematically expected values should be slightly larger than the computational experimental values, for two reasons. The first is that the mathematically calculated number of combinations for the case with one indel plus one base mismatch is the maximum possible number; the real number should be slightly smaller. The second can be explained using the sequence alignment (DNA) ACCCCT/acccct (RNA) as an example: removing any C generates the same RNA bulge ACCCT/acccct, i.e. the computational experiment detects one RNA bulge while the mathematical model counts it four times. Overall, in agreement with our mathematical model, no sgRNA was found to be free of homologies with the simulated genome.
The computational experiment with the human genome identified only two of the 1,876,775 sgRNAs as free of off-target homology. This confirms our mathematical analysis that, theoretically, it is almost impossible for a sgRNA to have no potential off-target homologies. A total of 1,415,606,013 off-target homologies were found, i.e. 754 off-target homologies per sgRNA. This number is significantly larger than the mathematically expected value. We believe that the large discrepancy resulted from the fact that the human DNA sequence is not a random composition of A, C, G, T: there are a large number of repeated sequences in the human genome [19]. As we once pointed out [20], some sgRNAs with repeated sequences have an unusually large number of off-target homologies, which contributes to the large discrepancy.
It is worth pointing out that of the 1,415,606,013 homologies, about 2.70% involve indels. Thus, even though most off-target homologies involve base mismatches only, indels constitute a significant portion of off-target homologies and should be considered. Some sgRNA off-target search algorithms, for example CasFinder and CRISPOR, do not detect indels and thus miss a significant number of off-target homologies [21,22].
The time cost of completing a whole-genome sgRNA design and off-target homology examination is dominated by the homology examination. The time cost is a linear function of the number of sgRNAs. Furthermore, given our homology examination algorithm, it is easy to see that the time cost is also a function of the off-target homology definition. Under our default homology examination settings, the time cost to complete the whole-genome design and off-target examination of the 1,876,775 sgRNAs is about 40 h, roughly 77 s per 1000 sgRNAs.
Compared with CasFinder, which is built on Bowtie, our package is much more efficient. Under a similar homology examination setting (the seed sequence allows a maximum of two mismatches, the 20-base sequence allows up to four base mismatches in total but no bulge, and the only secondary PAM is NAG), CasFinder took 624 h to complete the design and off-target examination of its 927,104 sgRNAs, while our algorithm took about 22 h to examine 1,876,775 sgRNAs [21]. Roughly speaking, our algorithm is about 57 times faster than CasFinder.
Cas-OFFinder employs a strategy similar to ours, except that it first computes the variations of the 20-base guide sequence with up to a certain number of mismatches [23] and then searches for an exact match of each varied sequence in the genome. We also compared our algorithm's efficiency with Cas-OFFinder's under the same conditions: up to five base mismatches, no indels, and only the NGG and NAG PAMs. Cas-OFFinder's maximum speed via GPU is about 3.01 s per sgRNA sequence; comparing CPU efficiency, however, Cas-OFFinder's maximum speed is about 60.03 s per sgRNA sequence, while ours is about 3.15 s per sgRNA sequence. Because each sgRNA has a very high probability of having off-target homologies that can result in off-target Cas9 activity, avoiding potential off-target activity is in fact the most challenging and critical factor in designing sgRNAs. In addition to its efficiency, another advantage of our algorithm is that it is guaranteed to find all the potential off-target homologies satisfying the off-target homology setting. It has been reported that a few tools are likely to miss a significant number of potential homologies [22,24]. Thus, we compared our algorithm with CRISPOR (http://crispor.tefor.net/) and Cas-OFFinder (http://www.rgenome.net/cas-offinder), which were considered superior in locating off-target homologies [22]. Using the EMX1 guide sequence (GAGTCCGAGCAGAAGAAGAA) as an example, Table 3 shows that our algorithm performs as well as both Cas-OFFinder and CRISPOR.
Under exactly the same conditions, our algorithm found exactly the same off-target homologies as Cas-OFFinder and CRISPOR did. The only difference is that, by default, our algorithm searched for off-target homologies anchored with all the secondary PAMs including NAG, NCG and NGA. The web-tool of Cas-OFFinder did not search for any secondary PAM, while CRISPOR considered only a few PAMs (NAG, AGA, GGA, TGA).
The large expected number of homologies for each sgRNA has been motivating scientists to search for different solutions. A double nicking approach was introduced to enhance genome-editing specificity [11,25]. The double nicking method is based on a Cas9 nickase mutant that can only break a single strand of DNA; to obtain a double-stranded cleavage, simultaneous nicking via two individual sgRNAs, each targeting a different strand, is necessary [25]. The offset, i.e. the distance between the 5′ ends of the two sgRNA sequences (the sgRNA pair), must be between −4 and 20 for the paired nicking to work well, and if the offset of the paired sgRNAs is less than −34 or larger than 110 bases, the paired-sgRNA-Cas9 system completely loses its efficacy [25]. Thus, a potential off-target homology for paired sgRNA nicking must consist of two single off-target homologies positioned such that their offset is between −34 and 110 bases inclusive. After 387,679 sgRNA pairs were designed for the 19,153 mRNA genes, 175,712 sgRNA pairs were found to be free of off-target homologies, covering 14,665 mRNA genes. This confirms that the double nicking method is much more reliable than the original SpCas9-sgRNA system in avoiding off-target homologies, a finding reported before [16,25].
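The paired-nicking off-target criterion reduces to an interval test on offsets. The snippet below sketches it (helper names are ours): given the single-site homologies found for each guide of a pair, the pair has a potential off-target only if some combination of sites lands with an offset in [−34, 110].

```python
def paired_offtarget_exists(sites_a, sites_b, lo=-34, hi=110):
    """sites_a/sites_b: (position, strand) of single-guide homologies,
    with position the genomic 5'-end coordinate. A paired off-target
    needs one site from each guide, on opposite strands, with a
    5'-end offset within [lo, hi]."""
    for pa, strand_a in sites_a:
        for pb, strand_b in sites_b:
            if strand_a == strand_b:
                continue               # nicks must hit opposite strands
            if lo <= pb - pa <= hi:
                return True
    return False

a = [(1000, "+"), (52000, "+")]
b = [(1070, "-"), (90000, "-")]
print(paired_offtarget_exists(a, b))  # True: offset 70 within [-34, 110]
```

Because most single-guide homologies are scattered across the genome, the chance that two of them fall within such a narrow window is small, which is why so many pairs end up free of paired off-targets.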
Conclusions
A novel and efficient sgRNA homology search algorithm was introduced in this article. Via this algorithm, genome wide sgRNA design and off-target analysis were conducted and the results confirmed the mathematical analysis that for a sgRNA sequence, it is almost impossible to escape potential off-target homologies. Future innovations on the CRISPR Cas9 gene editing technology need to focus on how to eliminate the Cas9 off-target activity.
"Biology"
] |
BCFT and OSFT moduli: an exact perturbative comparison
Starting from the pseudo-$\mathcal{B}_0$ gauge solution for marginal deformations in OSFT, we analytically compute the relation between the perturbative deformation parameter $\tilde{\lambda}$ in the solution and the BCFT marginal parameter $\lambda$, up to fifth order, by evaluating the Ellwood invariants. We observe that the microscopic reason why $\tilde{\lambda}$ and $\lambda$ are different is that the OSFT propagator renormalizes contact-term divergences differently from the contour deformation used in BCFT.
Introduction and conclusion
In recent years overwhelming evidence has been published that the various consistent open string backgrounds (i.e. D-branes) can be described analytically as solitons of open string field theory (OSFT) [1-11].¹ A classical [15] yet not fully understood problem in this correspondence is how the D-brane moduli space is described in OSFT. Given an exactly marginal boundary field $j$, there is a corresponding family of OSFT solutions, which can generically be found in powers of a deformation parameter $\tilde{\lambda}$. Physically we expect that the deformation parameter $\tilde{\lambda}$, which we used to construct the solution, should be related to the natural parameter $\lambda$ in boundary conformal field theory (BCFT), given by the coefficient in front of the boundary interaction which deforms the original world-sheet action. On general grounds $\tilde{\lambda}$ does not have a gauge-invariant meaning, but it is nonetheless useful to understand how $\lambda$ and $\tilde{\lambda}$ are related for a given solution, because this can shed light on the different mechanisms by which a classical solution changes the world-sheet boundary conditions. Analytic solutions for marginal deformations with nonsingular OPE ($jj \sim$ reg) have been computed to all orders in [2,3]. A different perturbative analytic solution for marginal currents with singular OPE has been constructed in [4] and generalized in [5].² An analytic solution for any self-local (hence exact [18]) marginal deformation has been constructed nonperturbatively in [9]. Conveniently, this solution is directly expressed in terms of the deformation parameter of the underlying BCFT, $\lambda$. In [19] this has been used to explicitly find the relation between the BCFT modulus $\lambda$ and the coefficient of the marginal field $c\partial c\, j$ in the solution. It has been observed that this function of $\lambda$ starts linearly, then has a local maximum, and finally approaches zero for large values of $\lambda$. Nontrivial evidence that this behavior may also be present in Siegel gauge³ has been given in [20] in level truncation, but it has not been possible there to establish the validity of the full equations of motion for large BCFT moduli.

¹ See [12-14] for reviews.
In this note we would like to study this problem in another analytic wedge-based example, which is quite close to Siegel gauge. We will analyze the observables of the solution proposed by Schnabl in [2], in the so-called pseudo-$\mathcal{B}_0$ gauge, obtaining the relation $\tilde{\lambda} = \tilde{\lambda}(\lambda)$ quoted in (1.7). Let us comment on the relation found. Perhaps the most interesting fact about (1.7) is the very origin of the coefficients of $\lambda^{2n+1}$ that were found. These coefficients are obtained by comparing the Ellwood invariants computed from the solution in powers of $\tilde{\lambda}$ with the coefficients of the Ishibashi states obtained from the marginally deformed boundary state expressed in powers of $\lambda$; see Eqs. (4.11)-(4.16). Naively these two quantities reduce to the same world-sheet calculation, and therefore one would expect to find a perfect match between $\lambda$ and $\tilde{\lambda}$, which is evidently not true. This is explained as follows. At order $\tilde{\lambda}^k$, the encountered Ellwood invariants have the structure of OSFT tree-level amplitudes between an on-shell closed string and $k$ on-shell open strings given by the marginal field $cj$, with $\tilde{\lambda}$ playing the role of the open string coupling constant. These amplitudes are naively affected by infrared divergences due to collisions of the marginal fields at zero momentum, which correspond to the propagation of the zero-momentum tachyon. The propagator $\frac{\mathcal{B}_0}{\mathcal{L}_0}$ gives a uniquely defined prescription to renormalize these singularities; see Sect. 3. On the other hand, in BCFT, the same contact-term divergences are renormalized by contour deformation [18], so that the renormalized boundary interaction $e^{-\lambda \oint ds\, j(s)}$ acquires a topological nature. This difference in the renormalization procedure of contact-term divergences is the ultimate reason why $\lambda$ and $\tilde{\lambda}$ are different. Had the self-OPE of the current been regular, we would have found no difference between the two quantities.
We also observe that the growth of the coefficients in (1.7) is in agreement with the findings from other nonperturbative approaches (although in different gauges) such as [19,20], and it suggests that the power series in $\tilde{\lambda}$ may have a finite radius of convergence. It would be desirable to improve our calculation so as to estimate the growth of the higher-order coefficients and the nature of the singularity in the complex-$\tilde{\lambda}$ plane. This would be a complementary (perturbative) way of understanding why (in Siegel gauge) the marginal solution breaks down at a critical value of $\tilde{\lambda}$. Indeed, it turns out that our computations in pseudo-$\mathcal{B}_0$ gauge can be related to the analogous computations in Siegel gauge, whose direct evaluation is notoriously complicated. Work in this direction is in progress [21].

³ (continued) ... generically true for other perturbative solutions; see for example [5,17]. This is also not true for the solution [2] analyzed in this paper, and the relation can be computed, if needed, by the same methods of Sect. 4.
The paper is organized as follows. In Sect. 2 we review the material needed to construct the boundary state in BCFT [18] and in OSFT [23]. We then review the construction of the marginal solution in the pseudo-$\mathcal{B}_0$ gauge [2] and explicitly write it down up to fifth order. Section 3 describes the regularization procedure implemented by the propagator $\frac{\mathcal{B}_0}{\mathcal{L}_0}$. In Sect. 4 we write down the coefficients of the Ishibashi states in the boundary state in terms of the deformation parameter $\lambda$, using the standard BCFT prescription of Recknagel and Schomerus [18]. We then compute the same quantities for the OSFT solution in the pseudo-$\mathcal{B}_0$ gauge. Finally, we compare the coefficients of the Ishibashi states in OSFT and BCFT and obtain the function $\tilde{\lambda} = \tilde{\lambda}(\lambda)$ up to fifth order. An appendix contains useful formulas for the correlators encountered.
The boundary state and the marginal solution
Let us consider a deformation of a BCFT by a boundary primary operator $j(x)$ of conformal weight one. From the OSFT point of view the new theory can be described by a classical solution, a state in the original BCFT,
$$\Psi_{\tilde\lambda} = \sum_{n=1}^{\infty} \tilde\lambda^n\, \Psi_n.$$
The leading term in $\tilde\lambda$ satisfies the linearized equation of motion
$$Q_B \Psi_1 = 0, \qquad \Psi_1 = cj(0)\,|0\rangle. \qquad (2.3)$$
If $j$ is exactly marginal, higher orders in $\tilde\lambda$ should exist, and they can be found by solving the recursive equations of motion
$$Q_B \Psi_n = -\sum_{k=1}^{n-1} \Psi_k * \Psi_{n-k} \qquad (2.6)$$
with the initial condition (2.3). Notice that while in BCFT the perturbation is unique, the OSFT solution is not unique because it can be changed by gauge transformations. We can get rid of this gauge redundancy by computing observables. In particular, the information on the marginal deformation can be effectively cast in the boundary state.
Boundary states in bosonic string theory can be written as a superposition of Ishibashi states $|V_m\rangle\rangle$ [24],
$$|B\rangle = \sum_m n_m\, |V_m\rangle\rangle \otimes |B\rangle_{gh},$$
where $|B\rangle_{gh}$ is the universal ghost part. When we deform a given world-sheet theory with an exactly marginal boundary deformation, the boundary state is deformed to
$$|B(\lambda)\rangle = \left[ e^{-\lambda \oint ds\, j(s)} \right]_R |B_0\rangle, \qquad (2.7)$$
where $[\ldots]_R$ means that a regularization is needed (it will be reviewed later on), and $|B_0\rangle$ is the boundary state of the starting BCFT.
On the other hand, given an OSFT solution $\Psi_{\tilde\lambda}$, the boundary state will depend on $\tilde\lambda$,
$$\Psi_{\tilde\lambda} \longrightarrow |B(\tilde\lambda)\rangle. \qquad (2.10)$$
The two boundary states should be the same by the Ellwood conjecture [25], and this induces a functional relation
$$\tilde\lambda = \tilde\lambda(\lambda). \qquad (2.11)$$
To obtain this relation we can compare the coefficients of the Ishibashi states. From (2.7) it follows that
$$n_m(\lambda) = \langle V_m |\left[ e^{-\lambda \oint ds\, j(s)} \right]_R |B_0\rangle, \qquad (2.12)$$
where $\langle V_m|$ is the BPZ conjugate of the Virasoro primary $|V_m\rangle$ of the matter sector, so that $\langle V_m | V_n\rangle\rangle = \langle V_m | V_n\rangle = \delta^n_m$, where we used the fact that Ishibashi states have the generic form
$$|V_n\rangle\rangle = |V_n\rangle + \text{Virasoro descendants}. \qquad (2.13)$$
The series expansion of the exponential in (2.12) gives rise to contact divergences and one needs to renormalize them [Footnote 4: From now on we will only consider the matter part of boundary states.]
properly. In the next section we will review the standard procedure of [18]. The way to compute the $n_m$ from OSFT was given in [23] by appropriately generalizing the Ellwood invariant to $\langle E[V_m]\,|\,\Psi_{\tilde\lambda} - \Psi_{TV}\rangle$ (2.14), where $\Psi_{TV}$ is a tachyon vacuum solution and $V^{(0,0)}$ is the closed-string field $V_m$ dressed to total weight $(0,0)$. As explained in detail in [23], the auxiliary bulk field lives in an auxiliary BCFT$_{aux}$ of $c = 0$ and has unit one-point function on the disk (2.15). In a similar way, the open string fields entering in (2.14) are lifted to the extended BCFT (2.16). For the solution we will be dealing with, this lifting procedure is trivial and amounts to the substitution $L_n \to L_n + L^{(aux)}_n$ in the equations that follow. For this reason we will not distinguish between normal and lifted string fields in the sequel.
As far as the solution itself is concerned, we search for it in the convenient pseudo-$B_0$ gauge [2], making the ansatz $\Psi_{\tilde\lambda} = \sum_n \tilde\lambda^n \Psi_n$ with the gauge condition
$$B_0\, \Psi_n = 0, \qquad n \geq 2,$$
where $B_0$ is the zero mode of the $b$ ghost in the sliver frame, obtained from the UHP by the conformal transformation
$$z = \frac{2}{\pi} \arctan w, \qquad (2.20)$$
and the operators $U_r$ are the usual exponentials of total Virasoro operators creating the wedge states [27] in the well-known way. Solving (2.6) order by order in $\tilde\lambda$, the solution is
$$\Psi_n = -\frac{B_0}{L_0} \sum_{k=1}^{n-1} \Psi_k * \Psi_{n-k},$$
where the r.h.s. is explicitly given in terms of wedge states with $cj$ insertions written in the sliver frame, and $L_0$ is the zero mode of the energy-momentum tensor in the sliver frame. Note that inverting $Q_B$ using $B_0/L_0$ is only meaningful if the OPE of $cj$ with itself does not produce weight-zero terms; otherwise we would find a vanishing eigenvalue of $L_0$. As is well known, this is the first nontrivial condition for $j$ to generate an exactly marginal deformation.
At the third order the solution $\Psi_3$ is written in terms of $\Psi_2$. We write the state $[cj, \Psi_2]$ as a combination of wedge states with insertions (2.25), where in the second step we explicitly write the width of the wedge states using the $U_r$ operators. The insertions inside the wedges have to be placed according to (A.2); to lighten the notation a bit we have defined the graded commutator $[\cdot\,,\cdot]$. Finally we can write the solution at the third order, $\Psi_3$, as in (2.29), and the fifth order, $\Psi_5$, as in (2.30).
This procedure can be continued to higher order. 5 Although higher orders can easily be written down, their Ellwood invariants become more and more complicated because they involve a large number of multiple integrals which by themselves need to be properly renormalized, as we will see in the next section.
Contact-term divergences and the propagator
The computation of the Ellwood invariants for the solution we have just presented involves in general contact divergences due to the definition of the propagator $B_0/L_0$. As usual, we start by defining the inverse of $L_0$ via the Schwinger representation
$$\frac{1}{L_0} = \int_0^1 ds\, s^{L_0 - 1}, \qquad (3.1)$$
which is well defined for eigenvalues of $L_0$ with a strictly positive real part. The operator $s^{L_0}$ is the generator of dilatations $z \to sz$ in the sliver frame; its action on a primary field of conformal weight $h$ in the sliver frame rescales it by a factor $s^h$ (3.2). The integral representation (3.1) is only valid for fields with a positive scaling dimension $h > 0$; if we apply it to a state $|\varphi_{-|h|}\rangle$ with negative weight $-|h|$, we find a divergence as $s$ approaches zero. But this just reflects that the integral representation (3.1) has been used outside its domain of validity. This can easily be remedied in the following way:
$$\frac{1}{L_0}\, |\varphi_{-|h|}\rangle \equiv \lim_{\epsilon \to 0} \int_0^1 ds\, s^{-|h| - 1 + \epsilon}\, |\varphi_{-|h|}\rangle = -\frac{1}{|h|}\, |\varphi_{-|h|}\rangle. \qquad (3.3)$$
This prescription amounts to computing the Schwinger integral in its region of convergence, by assuming $\mathrm{Re}(\epsilon) > |h|$, and then analytically continuing to $\epsilon = 0$.(6) This analytic continuation allows one to define $L_0^{-1}$ on every state we encounter during our computations, except on weight-zero states, which remain as an obstruction, as it should be.(7) Pragmatically, this procedure is equivalent to adding and removing the tachyon contribution from the OPE (3.6), and to defining $1/L_0$ on the weight $-1$ tachyon as multiplication by $-1$, as follows from (3.3). [Footnote 6: An equivalent prescription is the Hadamard regularization; we thank M. Frau for discussions of this issue. See also [2,26].] [Footnote 7: Notice that one could in principle define $L_0^{-1}$ on negative-weight states as $-\int_1^\infty ds\, s^{L_0 - 1}$ (3.5); however, this integral representation does not work for positive-weight states. Since the star product generates both positive- and negative-weight states at the same time, we need a representation of $L_0^{-1}$ that works on the whole set of fields (except, of course, the weight-zero fields).]
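The analytic continuation (3.3) can be made concrete with a short symbolic computation. The sketch below is our own minimal illustration (not from the paper), in Python/SymPy: it evaluates the Schwinger integral for a state of sliver-frame weight $-1/2$ in the convergence region $\mathrm{Re}(\epsilon) > 1/2$ and then continues $\epsilon \to 0$, reproducing the formal eigenvalue $1/(-|h|)$.

```python
import sympy as sp

s, eps = sp.symbols("s epsilon", positive=True)
h = sp.Rational(1, 2)   # acting on a state of sliver-frame weight -h = -1/2

# Schwinger representation of 1/L0 on an eigenstate of weight -h:
# the integrand is s**(-h - 1 + eps); it converges only for Re(eps) > h.
I = sp.integrate(s**(-h - 1 + eps), (s, 0, 1), conds="none")
print(I)                    # 1/(epsilon - 1/2)

# Analytic continuation eps -> 0 gives the formal eigenvalue 1/(-h) = -2:
print(sp.limit(I, eps, 0))  # -2
```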
Comparing $\lambda$ and $\tilde\lambda$
In this section we perturbatively compute the coefficients $b_k$ of the series expansion $\tilde\lambda(\lambda) = \sum_k b_k \lambda^k$ (4.1) up to fifth order. On general grounds we expect that $b_0 = 0$ and $b_1 = 1$, and this will be verified in the next subsections.
The $b_k$'s are computed by equating the coefficients of the Ishibashi states in the boundary state in BCFT and OSFT [22,23],
$$B^{BCFT}_m(\lambda) = B^{SFT}_m(\tilde\lambda). \qquad (4.2)$$
In both cases one can expand the above coefficients in a power series of the corresponding deformation parameter,
$$B^{BCFT}_m(\lambda) = \sum_k B^{BCFT}_{k,m}\, \lambda^k, \qquad B^{SFT}_m(\tilde\lambda) = \sum_k B^{SFT}_{k,m}\, \tilde\lambda^k. \qquad (4.3)$$
The $B^{BCFT}_{k,m}$ coefficients can be found by expanding the exponential in (2.12), where a conformal factor $2^{2h_m}$ comes from the transformation of $V_m$ under the map from the disk to the UHP. These integrals need a renormalization, discussed by Recknagel and Schomerus [18]: thanks to the self-locality property of the current $j$, one can modify the path of each integral to be parallel to the real axis but with a positive imaginary part $i\epsilon$, with $0 < \epsilon \ll 1$ (4.4). In such a way all the contact divergences between the currents are avoided, and only the contraction of the currents with the closed string gives a contribution. Thanks to this renormalization the loop operator $\left[e^{-\lambda \oint j(s)\, ds}\right]_R$ becomes a topological defect.
For the sake of simplicity we consider an exactly marginal deformation produced by the current $j$ of (4.6) on an initial Neumann boundary condition of a free boson compactified at the self-dual radius $R = 1$ ($\alpha' = 1$).(8) This deformation switches on a Wilson line in the compactified direction, which can be detected by a closed string vertex operator carrying winding charge, where $m$ is the winding number (which specifies the closed string state).(9) Performing the renormalized integral (4.4), one obtains, with this choice of the current and closed string state (see Appendix 1 for conventions and basic correlators), coefficients proportional to $m^k$ (4.7), which can easily be resummed (4.8). In the OSFT framework, the analytic computation of the coefficients of the Ishibashi states involves the Ellwood invariants, and we compute them order by order in $\tilde\lambda$, starting from (2.14). The relation between $\lambda$ and $\tilde\lambda$ must be universal, in the sense that it cannot depend on the particular choice of the closed string. In our specific computation we will see that this is the case by verifying that the relation is independent of the winding charge $m$. Rewriting (4.2) using (4.3) and (4.1) then gives explicit relations for the $b_k$, see (4.11)-(4.16). [Footnote 8: Since we are considering a compactified theory at the self-dual radius, there are other marginal operators in the enlarged $SU(2)$ chiral algebra. Our choice (4.6) is equivalent to the chiral marginal operators $i\sqrt{2}\sin(2X(z))$ and $i\sqrt{2}\cos(2X(z))$, which have been studied in a similar context in [20,22].] [Footnote 9: This closed string state has conformal weight $(\frac{m^2}{4}, \frac{m^2}{4})$. If $R$ is not self-dual, our computation goes through unaffected by replacing the self-dual winding mode $m$ with the winding mode at generic radius, $m R$.]
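The order-by-order matching of (4.2)-(4.3) is mechanical and can be automated. The following sketch is illustrative only: the placeholder series below are not the paper's coefficients (which appear in Eqs. (4.11)-(4.16)); it simply shows how the $b_k$ of $\tilde\lambda(\lambda)$ are solved for by equating two truncated power series with SymPy.

```python
import sympy as sp

lam = sp.symbols("lam")
b3, b5 = sp.symbols("b3 b5")                 # unknowns in lam~ = lam + b3*lam**3 + b5*lam**5
lamt = lam + b3 * lam**3 + b5 * lam**5

# Placeholder boundary-state coefficients for one Ishibashi state (toy values only):
B_bcft = lam - lam**3 / 6 + lam**5 / 120                              # BCFT side, powers of lam
B_sft = lamt - lamt**3 / 6 + lamt**5 / 120 + sp.log(2) / 2 * lamt**3  # OSFT side + a log(2) term

diff = sp.expand(B_sft - B_bcft)
sol = {}
for order, unknown in [(3, b3), (5, b5)]:
    eq = diff.coeff(lam, order).subs(sol)    # coefficient of lam**order must vanish
    sol[unknown] = sp.solve(eq, unknown)[0]
print(sol)   # in this toy model: b3 = -log(2)/2, and b5 follows at the next order
```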
As explained in [23], the tachyon vacuum contribution can be replaced with $\Psi_{TV} \to \frac{2}{\pi}\, c(0)|0\rangle$. The amplitude then becomes(10) a simple correlator, evaluated using the conformal map that defines the identity string field, $f_I(z) = \frac{2z}{1 - z^2}$. Consistently, the zeroth orders on the two sides match, which confirms that $b_0 = 0$ (4.21).
First order
As an extra starting check, let us look at the first order, where we have to compute the Ellwood invariant of $\Psi_1$. [Footnote 10: From now on we will write $V_m$ instead of $V^{(0,0)}_m$ to denote the lifted closed string state associated to the spinless matter primary $V_m$.]
In the last step the correlator is written on a cylinder $C_1$ of width one, without any conformal factor because the conformal weight of all the insertions is zero. Acting with the map from the cylinder to the disk, this two-point function becomes the two-point function on the disk $D$, which equals the amplitude computed from BCFT (4.8), and so the corresponding coefficient in the $\tilde\lambda/\lambda$ relation is $b_1 = 1$.
Second order
At second order we have to compute the Ellwood invariant of $\Psi_2$. This amplitude is depicted in Fig. 1.
The action of the propagator on the double insertion of $cj$ follows the regularization (3.3), so this state can be written in a Schwinger-parametrized form (4.28). The $B_0$ ghost acts on $c(w)$ as a contour integral of the $b$ ghost, and the amplitude reduces to a parametrized correlator on the cylinder, where we have used the obvious rotational invariance of the $bc$ CFT on the cylinder.
Using the Wick theorem (which is reviewed in Appendix A), and in particular (A.17), we obtain the second-order invariant. Here we have used our analytic continuation which, as explained in Sect. 3, amounts to computing the integral in the region of the complex $\epsilon$ plane where it converges ($\mathrm{Re}\,\epsilon > 1$) and then analytically continuing to $\epsilon \to 0$. In doing this we have also taken the liberty of ignoring convergent terms proportional to $\epsilon$, since we are only interested in the $\epsilon \to 0$ limit. Computing also the other convergent integral, we obtain again a perfect match with the BCFT results.
Third order
At this level the amplitude we have to compute is the Ellwood invariant of $\Psi_3$, where $\Psi_3$ is defined in (2.28). This amplitude is depicted in Fig. 2. Explicitly we find a double Schwinger integral, where $\epsilon_1$ is the regulator for the innermost propagator (the one inside the lower-order contribution $\hat\Psi_2$ (2.23)) and $\epsilon_2$ is the regulator for the external propagator. From the perturbative construction of the solution, it is clear that $\epsilon_1$ should be analytically continued to zero before $\epsilon_2$. Using the symmetries of the correlator in the matter and ghost sectors, and renaming $t/2 \to t$, the whole Ellwood invariant reduces to (4.40). It is useful to change variables as $x = \frac{3}{2} y$ and $s = t y = \frac{2}{3} x t$, so as to rewrite the integral as $B^{SFT}_{3,m} = 8\pi i\,[\ldots]$ (4.41). Now we apply the Wick theorem (see (A.17) of Appendix A) (4.43). The first integral contains a divergence in $j(s) j(-s)$ when $s$ approaches zero; explicitly, using (A.13), this part of the amplitude is given by (4.44). Notice that the integral in $x$ is convergent, which tells us that we could have avoided the $\epsilon_2$ regulator. This is because the external propagator acts on a state which is in the fusion of three marginal operators and therefore cannot contain the tachyon in its level expansion (4.45). Summarizing: from the BCFT side we found that the third order is proportional to $m^3$ (4.8), with no other terms. In the OSFT computation, at the third order we still get the same BCFT number proportional to $m^3$, but in addition there is another contribution coming from the peculiar renormalization implicitly defined by the propagator $B_0/L_0$. This is the first time that a discrepancy appears between the two approaches. As a consequence, the third-order coefficient in the $\tilde\lambda(\lambda)$ equation (4.1) is given by (4.46).
Fourth order
The fourth-order Ellwood invariant is given by (4.47); there are two contributions, $\hat A_{2,4}$ and $\hat A_{3,3}$, coming from the solution (2.29), see (4.48). The Ellwood invariant at this order is given by (4.49).
First term: $V_m \hat A_{2,4}$
In the first term, as before, we need to compute the commutator of the insertions and apply the propagators (4.50). The corresponding amplitude is depicted in Fig. 3. Applying the two propagators, the amplitude takes the form (4.51). Using the symmetries of the problem, translating the correlators ($\xi \to \xi + z/2$) and changing variables $w = sz$ and $x = yz$, the amplitude simplifies to (4.52).
Second term: $V_m \hat A_{3,3}$
The second term is the Ellwood invariant of $\hat A_{3,3}$, and it is depicted in Fig. 4. Explicitly, we have to compute the star product of two $\Psi_2$'s and then act with a propagator $B_0/L_0$ (4.54). Again the four different insertions of the ghosts contribute in the same way (4.55). With the change of variables $x = zt$ and $y = zs$, one obtains (4.56). The complete term at this order is given by summing the two integrals (4.52) and (4.56).
Fifth order
At the fifth order the solution is composed of three terms (4.64). The Ellwood invariant we have to compute is then given by (4.65).

First term: $V_m \hat A_{2,5}$
The first term involves the state $\hat A_{2,5}$ (4.66). The amplitude to compute is depicted in Fig. 5, and after some manipulations, involving changes of variables and conformal transformations, we get $2\pi i\, \langle V_m \hat A_{2,5}\rangle$ as a multiple Schwinger integral (4.67), where the Schwinger parameters $t_1, t_2, t_3, t_4$ are related to the integration variables as in (4.68).
Second term: $V_m \hat A_{3,4}$
The state here is $\hat A_{3,4}$ (4.69), and its Ellwood invariant (Fig. 6) is $2\pi i\, \langle V_m \hat A_{3,4}\rangle$ (4.70), where the integration variables are related to the Schwinger parameters as in (4.71). The last term involves a third state, and its Ellwood invariant becomes (Fig. 7) an analogous expression $2\pi i\, \langle V_m \cdots \rangle$, where the Schwinger parameters are related to the integration variables in a similar way. The $a_5$ coefficient is easily computed (4.75), which gives the usual exact match with the BCFT results. As far as $a_3$ is concerned, the computation closely follows the fourth order (with one more integral) and everything can be done analytically, giving the result $a_3 = 9 \log 2$. This is precisely the number needed to ensure that $b_5$ is $m$-independent (4.16), so this is a consistency check.
Let us now address the $a_1$ coefficient, which is determined by the $O(m)$ winding-number contribution in (4.75). This is generated by the term in the Wick theorem with the maximal number of contractions between the $j$'s, and computing the four-dimensional integrals (coming from the three diagrams) analytically is not possible. Therefore we proceed analytically as far as we can and then resort to numerics. The $Y$ and $Z$ integrals can be analytically computed in all three diagrams, including the subtraction of the tachyon divergence.
Rescaling the $X$ variable in the first diagram, $X \to 2X$, the $O(m)$ contribution $E_i$ from each diagram is reduced to a two-dimensional integral over $X$ and $T$, where the functions $f_i$ are known analytically. To renormalize the tachyon divergence in the $X$ integration we explicitly subtract the second-order pole in $X$ from the functions $f_i$ in the following way:
$$F_i(T) = \int dX \left[ f_i(X, T) - \frac{(f_i)_{-2}}{X^2} \right]. \qquad (4.79)$$
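The subtraction in (4.79) is the standard way to make an integrand with a double pole numerically tractable. The toy example below is our own illustration (not one of the paper's $f_i$): $g(X) = \cos(2X)/X^2$ has a $1/X^2$ pole with coefficient 1, and subtracting it leaves a finite integrand that standard quadrature handles.

```python
import numpy as np
from scipy.integrate import quad

# Stand-in for an f_i with a second-order pole at X = 0 (illustration only):
g_m2 = 1.0   # coefficient of the 1/X**2 pole of cos(2X)/X**2 near X = 0

def subtracted(X):
    # g(X) - g_m2/X**2 = (cos(2X) - 1)/X**2, finite as X -> 0 (limit is -2)
    return (np.cos(2.0 * X) - 1.0) / X**2

val, err = quad(subtracted, 0.0, 5.0, limit=200)
print(val, err)   # finite result; integrating cos(2X)/X**2 directly would diverge
```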
[Fig. 9: (a) the three residues of the simple poles corresponding to the three diagrams; (b) the sum of the three functions $F_i(T)$ is shown to be finite at $T = 0$.]
It turns out that the coefficient of the $1/X^2$ pole, which in the above formula is indicated as $(f_i)_{-2}$, is $T$-independent. This treatment leaves us with three numerical functions of $T$, which have to be integrated over the interval $[0, \frac{5}{2}]$. Surprisingly, each of these functions shows a non-vanishing $1/T$ pole, as shown in Fig. 8.

Appendix A: Correlators
We also have the basic correlators and, using the Wick theorem, (A.9). The one-point function is given as well.(11) In addition we have the following correlator for the auxiliary closed string field [23]:
Cylinder
On the cylinder, the chiral closed string field $X(z)$ has the correlators of [1], and then
$$\langle j(z)\, j(w) \rangle_{C_N} = \frac{2\pi^2}{N^2}\, \csc^2\!\left( \frac{\pi}{N}(z - w) \right). \qquad (A.13)$$
Wick theorem
In the main text we deal with correlators involving the closed-string insertion, several boundary currents $j(s_i)$, and $c$-ghost insertions (A.14). Such a correlator significantly simplifies on a cylinder $C_n$ when the bulk operator (properly dressed with the ghosts and the auxiliary sector to acquire total weight zero; see (2.15)) is placed at the midpoint $i\infty$. In particular, thanks to the rotational invariance of $C_n$, the amplitude can be brought to a standard form, and the term containing the maximal number of contractions with the closed string can be isolated. For convenience, the explicit correlators used in the main text are listed in (A.17); the only non-trivial correlator involving $\tilde X$ is the one with the winding vertex operator.
| 5,917.8 | 2017-11-01T00:00:00.000 | [ "Materials Science" ] |
CircTP63 promotes cell proliferation and invasion by regulating EZH2 via sponging miR-217 in gallbladder cancer
Background Gallbladder cancer (GBC) is the most common biliary tract malignancy, and patients with GBC have a poor prognosis. CircRNA TP63 (circTP63) has been implicated in cell proliferation and invasion during the progression of several tumors. This study aims to investigate the clinical significance and functional role of circTP63 expression in GBC. Methods The expression of circTP63 in GBC tissues and cells was detected by qRT-PCR, and the association between circTP63 expression and the prognosis of GBC patients was analyzed. CCK8 assay, flow cytometry analysis, transwell assay and in vivo studies were used to evaluate cell proliferation and invasion abilities after circTP63 knockdown in GBC cells. Luciferase reporter assays and RNA pull-down assays were used to determine the correlation between circTP63 and miR-217 expression. Western blot analysis was also performed. Results In the present study, we showed that circTP63 expression was upregulated in GBC tissues and cells. Higher circTP63 expression was associated with lymph node metastasis and short overall survival (OS) in patients with GBC. In vitro, knockdown of circTP63 significantly inhibited cell proliferation, cell cycle progression, migration and invasion abilities in GBC. Besides, we demonstrated that knockdown of circTP63 inhibited the epithelial-mesenchymal transition (EMT) process in GBC cells. In vivo, knockdown of circTP63 inhibited tumor growth in GBC. Mechanistically, we demonstrated that circTP63 competitively binds to miR-217, promotes EZH2 expression and finally facilitates tumor progression. Conclusions Our findings demonstrated that circTP63 sponges miR-217, regulates EZH2 expression and finally facilitates tumor progression in GBC. Thus, targeting circTP63 may be a therapeutic strategy for the treatment of GBC.
Introduction
Gallbladder cancer (GBC) is one of the most common digestive tract tumors worldwide [1]. GBC is characteristically diagnosed at advanced stages due to the absence of specific signs and symptoms, and only a small proportion of GBC patients are suitable for surgical resection [2,3]. Most gallbladder cancer patients have an extremely poor prognosis, and the 5-year overall survival rate is less than 5% [4]. The non-surgical therapies for GBC patients primarily comprise chemotherapy, radiotherapy and targeted therapy. Recently, molecular targeted therapeutics, including fibroblast growth factor receptor (FGFR), MEK, ERBB2 and PI3-kinase inhibitors, have been explored, which provides hope for gallbladder cancer treatment [5]. Therefore, elucidating the mechanisms of GBC occurrence and development and validating novel molecular targets to improve therapeutic effects for GBC patients are needed.
Circular RNAs (circRNAs) are a type of endogenous non-coding RNA that lack a 5′-cap and a 3′-polyA tail and are implicated in a variety of biological functions [6,7]. Recently, RNA sequencing has revealed that circRNAs are involved in diagnosis, prognosis, development and drug resistance in several tumors. CircRNAs have been found to regulate cell apoptosis, proliferation, migration and tumor metastasis by regulating gene expression [8]. For example, hsa_circRNA_100269 expression is downregulated in gastric cancer, and upregulated hsa_circRNA_100269 inhibits cell growth and tumor metastasis by inactivating the PI3K/Akt axis [9]. Upregulated circBACH2 is found in triple-negative breast cancer and contributes to cell proliferation, invasion and migration of triple-negative breast cancer [10]. circCCDC66 expression is upregulated in thyroid cancer and promotes cell proliferation, migratory and invasive abilities and glycolysis through the miR-211-5p/PDK4 axis [11]. Hsa_circ_0068515, designated circTP63, has been reported in lung squamous cell carcinoma (LUSC) and correlates with larger tumor size and higher TNM stage in LUSC patients; upregulated circTP63 has also been shown to promote cell proliferation by functioning as a ceRNA to upregulate FOXM1 in LUSC [12]. Another study shows that circTP63 enhances estrogen receptor-positive breast cancer progression and malignant behaviors through the miR-873-3p/FOXM1 axis [13]. In hepatocellular carcinoma (HCC), circTP63 is significantly upregulated in HCC tissues and cell lines, and circTP63 overexpression promotes tumor progression by sponging miR-155-5p and upregulating ZBTB18 expression [14]. However, its role in GBC progression remains unknown.
In this study, we first demonstrated that circTP63 expression was significantly upregulated in GBC tissues and cells. Upregulated circTP63 expression was notably associated with a short survival rate of GBC patients. Functionally, knockdown of circTP63 inhibited cell proliferation, cell cycle progression and cell invasion abilities in GBC and suppressed tumor growth in vivo. Besides, we demonstrated that knockdown of circTP63 inhibited the EMT process in GBC cells. Mechanistically, we showed that circTP63 could competitively bind to miR-217 and promote EZH2 expression, finally facilitating tumor progression. Thus, these results provide a better understanding of the regulatory role of circRNAs in GBC progression and could improve the diagnosis and therapy of GBC.
Patient tissue samples
The study was performed in accordance with the Declaration of Helsinki and the guidelines of the Human Ethics Committee of Xinhua Hospital. A total of 39 snap-frozen GBC tissues and paired adjacent normal tissues were acquired from patients diagnosed with GBC at Xinhua Hospital between March 2009 and March 2016. None of the enrolled patients had received preoperative therapy, and tissue samples were collected and frozen in liquid nitrogen immediately after surgical resection. The information for the GBC patients is shown in Table 1. All participants signed informed consent before this study.
Cell lines and culture
Three human GBC cell lines (NOZ, GBC-SD, and SGC-996) and the normal human intrahepatic biliary epithelial cell line HIBEC used in the present study were purchased from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). Cells were cultured in DMEM (Gibco, Invitrogen, USA) supplemented with 10% fetal bovine serum (FBS) (Gibco, Invitrogen, USA) and 0.5% penicillin/streptomycin (Gibco, Invitrogen, USA). Cells were maintained at 37 °C in a humidified atmosphere containing 5% CO2.
CCK8 assays
NOZ and SGC-996 cells were seeded in 96-well plates (3 × 10^3 cells per well). At 1-5 days after transfection, 10 μl of CCK-8 solution (Dojindo Laboratories, Kumamoto, Japan) was added to each well of the plate. The cells were then incubated for 2 h in the incubator. Finally, the absorbance was detected at 450 nm using a microplate reader (BioTek Instruments, Inc., Winooski, VT, USA).
Flow cytometry analysis
Transfected cells were harvested, washed and then fixed with 70% ethanol at −20 °C overnight. Next, the cells were digested with 100 μg/ml RNase A, stained with 20 μg/ml propidium iodide (PI; Beyotime, Shanghai, China) at 37 °C for 30 min, and incubated in the dark at 4 °C for 30 min. The cell cycle was examined by flow cytometry using the FACSCalibur system (BD Biosciences, San Jose, CA, USA).
Cell migration and invasion assays
Cell migration and invasion assays were performed in 24-well transwell chambers with 8 μm pore polycarbonate filters (BD Falcon, USA; Millipore, Billerica, MA, USA), coated without (migration) or with (invasion) Matrigel. 1 × 10^5 cells were seeded in the upper chamber in serum-free medium, while the lower chamber was filled with medium containing 10% fetal bovine serum (FBS) (Gibco, Invitrogen, USA). At 48 h after transfection, cells on the lower side of the membrane were stained with 1% crystal violet for 30 min at room temperature. Cell numbers were counted under an Olympus microscope in five randomly selected fields (magnification 200× or 100×). All assays were independently performed in triplicate.
Nuclear-cytoplasmic fractionation
Cytoplasmic and nuclear RNAs were isolated using NE-PER Nuclear and Cytoplasmic Extraction Reagents (Thermo Scientific, USA) following the manufacturer's protocol. The fractions were then analyzed by qRT-PCR, with GAPDH and U6 used as controls.
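The paper does not spell out how relative expression was computed from the qRT-PCR data; a common choice, assumed here purely for illustration, is the 2^-ddCt method with GAPDH or U6 as the reference. A minimal sketch with hypothetical Ct values:

```python
def relative_expression_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt relative expression (an assumed method; the paper does not state it)."""
    d_ct_sample = ct_target - ct_ref            # normalize to GAPDH (cytoplasm) or U6 (nucleus)
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: circTP63 in a GBC sample vs. a normal control
print(relative_expression_ddct(24.1, 18.0, 26.5, 18.2))  # > 1 means upregulation
```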
Western blotting assays
Total protein was extracted using RIPA buffer (Beyotime, Beijing, China). Equal amounts of total protein were separated by SDS-polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred onto PVDF membranes (Millipore, Billerica, MA, USA). The membranes were blocked with 5% non-fat milk and incubated with primary antibodies against E-cadherin (1:1000, Cell Signaling Technology, Houston, TX, USA), Vimentin (1:1000, Cell Signaling Technology, Houston, TX, USA), EZH2 (1:1000, Cell Signaling Technology, Houston, TX, USA) and GAPDH (1:2000, Abcam) overnight at 4 °C. Next, the secondary antibodies were added for 1.5 h, and each protein band was detected by the ECL detection system (Amersham Biosciences, Buckinghamshire, UK).
Luciferase reporter assays
The wild-type (WT) circTP63 or EZH2 3′-untranslated region (UTR) sequence containing the miR-217 target site, as well as the corresponding mutated (MUT) sequence, was amplified and cloned into the luciferase reporter plasmid psiCHECK-2 (Promega, Madison, WI). Cells were collected and lysed for luciferase detection 48 h after transfection. Luciferase activities were measured using the Dual-Luciferase Reporter Assay System (Promega, USA). The relative luciferase activity was normalized to the Renilla luciferase activity.
Biotin-coupled RNA pull down
The 3′-end biotinylated miR-217 or control RNA was designed and synthesized by GenePharma (Shanghai, China). NOZ cells were transfected with 50 nM of biotin-labeled miRNAs. Streptavidin-coupled Dynabeads (Invitrogen) were washed, resuspended in the buffer and then incubated with the biotin-labeled miRNAs. After incubation at room temperature for 10 min, the coated beads were separated with a magnet for 2 min. The pulled-down RNA was extracted with Trizol reagent, followed by qRT-PCR analysis.
In vivo xenograft experiments
Xenograft experiments (n = 5 per group) and metastasis experiments were performed using 3-week-old BALB/c nude mice. All animal protocols were approved by the Institutional Animal Care and Use Committee at Xinhua Hospital. 1 × 10^5 transfected NOZ cells were subcutaneously injected into the flank. Tumor volume was evaluated every week as tumor volume (mm^3) = (length) × (width)^2 / 2. After 4 weeks, the mice were sacrificed and tissues were processed for further histological analysis. According to the AVMA Guidelines for the Euthanasia of Animals, all mice were euthanized with an intraperitoneal injection of a threefold dose of barbiturates. After that, we removed the tumors immediately and measured their length, width and weight. No mice died accidentally during feeding.
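The volume formula above is easy to apply to serial caliper measurements; a minimal helper (the measurements below are hypothetical, not the study's data):

```python
def tumor_volume_mm3(length_mm, width_mm):
    # Formula used in the xenograft experiments: volume = length x width^2 / 2
    return length_mm * width_mm ** 2 / 2.0

# Hypothetical weekly (length, width) caliper readings in mm:
weekly = [(4.0, 3.0), (6.5, 5.0), (9.0, 7.0), (12.0, 9.5)]
print([round(tumor_volume_mm3(l, w), 1) for l, w in weekly])  # growth-curve values
```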
Immunohistochemistry (IHC)
Immunohistochemistry was performed on tumor tissues using an HRP-conjugated secondary antibody for staining and a DAB kit (ZSGB-BIO, China) for color development. The antibodies used for immunohistochemical staining were as follows: E-cadherin (Cell Signaling Technology, Houston, TX, USA), Vimentin (Cell Signaling Technology, Houston, TX, USA) and Ki-67 (Cell Signaling Technology, Houston, TX, USA). The slides were scored based on the intensity of the staining and the percentage of cells stained. Slides were visualized at ×200 magnification for scoring the staining intensity: no color, 0 points; light yellow, 1 point; yellow, 2 points; brown, 3 points. For each slide, 5 high-magnification (×400) fields were randomly selected to count the positive-cell ratio: less than 10%, 0 points; 10% to 25%, 1 point; 25% to 50%, 2 points; 50% to 100%, 3 points. The two scores were added up as the final score.
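The composite score described above can be written as a small function; a sketch of the scoring rule exactly as stated (intensity 0-3 plus a 0-3 bin for the positive-cell fraction):

```python
def ihc_score(intensity, positive_fraction):
    """Composite IHC score: staining intensity (0 = no color, 1 = light yellow,
    2 = yellow, 3 = brown) plus a binned positive-cell fraction, summed."""
    if positive_fraction < 0.10:
        pct_points = 0
    elif positive_fraction < 0.25:
        pct_points = 1
    elif positive_fraction < 0.50:
        pct_points = 2
    else:
        pct_points = 3
    return intensity + pct_points

print(ihc_score(2, 0.40))  # yellow staining with 40% positive cells -> final score 4
```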
Statistical analysis
All statistical analyses were performed using GraphPad Prism (GraphPad Software, Inc., La Jolla, CA, USA). The data are presented as mean ± SD and were compared by Student's t test or ANOVA with Tukey's test. A P value of less than 0.05 was considered statistically significant.
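The same comparisons can be reproduced outside GraphPad; a sketch using SciPy for the two-group comparison and the lifelines package for the log-rank test on Kaplan-Meier data (all numbers below are hypothetical, not the study's):

```python
import numpy as np
from scipy.stats import ttest_ind
from lifelines.statistics import logrank_test

# Hypothetical relative-expression values, tumor vs. adjacent normal:
tumor = np.array([2.1, 3.4, 2.8, 4.0, 3.1])
normal = np.array([1.0, 1.3, 0.9, 1.2, 1.1])
print(ttest_ind(tumor, normal))          # Student's t test

# Hypothetical survival data (months, event observed) for high/low circTP63 groups:
t_high, e_high = [10, 14, 20, 25, 30], [1, 1, 1, 0, 1]
t_low, e_low = [22, 30, 36, 40, 48], [1, 0, 1, 0, 0]
result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(result.p_value)                    # log-rank test for the Kaplan-Meier curves
```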
CircTP63 expression is upregulated in GBC and correlates with poor prognosis
To explore the clinical significance of circTP63 expression in GBC, we measured circTP63 expression in 39 pairs of GBC tissues and adjacent normal tissues by qRT-PCR. The results showed that circTP63 expression was dramatically upregulated in GBC tissues compared with adjacent normal tissues (Fig. 1A). Moreover, circTP63 expression was higher in the three GBC cell lines than in HIBEC cells (Fig. 1B). The 39 patients were divided into circTP63 higher-expression and lower-expression groups according to the median expression value, and the association between circTP63 expression and clinicopathological characteristics was analyzed. The results displayed that circTP63 expression was associated with lymph node metastasis (Table 1, P < 0.05). Kaplan-Meier analysis and the log-rank test showed that patients with higher circTP63 expression had a poorer overall survival rate than those with lower circTP63 expression (Fig. 1C). We next examined the subcellular localization of circTP63; the results demonstrated that circTP63 was predominantly distributed in the cytoplasm rather than the nucleus in NOZ and SGC-996 cells (Fig. 1D, E). Total RNA isolated from NOZ and SGC-996 cells was exposed to RNase R, and circTP63 expression showed no apparent change after RNase R treatment, consistent with its circular structure (Fig. 1F).
Downregulation of circTP63 inhibits cell proliferation, migration and invasion in GBC
Two specific circTP63 siRNAs were used to knock down the expression of circTP63 in GBC cells. The qRT-PCR results showed that the expression level of circTP63 was markedly reduced, whereas TP63 mRNA was not changed, after circTP63 knockdown in NOZ and SGC-996 cells (Fig. 2A, B). Cell proliferation was assessed by CCK8 assay, and the results showed that knockdown of circTP63 suppressed the cell proliferation ability compared with the control group in NOZ and SGC-996 cells (Fig. 2C). In addition, flow cytometry analysis demonstrated that circTP63 knockdown significantly decreased the number of cells in S phase and increased the number in G1 phase compared with the control group in NOZ and SGC-996 cells (Fig. 2D-G). Moreover, transwell assays were performed to detect cell migration and invasion abilities, and the results showed that cell migration and invasion were impaired by circTP63 silencing compared with the control group in NOZ and SGC-996 cells (Fig. 3A-D). The expression of the epithelial marker E-cadherin was notably increased, while the expression of the mesenchymal marker Vimentin was notably decreased, after cells were transfected with si-circTP63-1 or si-circTP63-2 compared with the control group in NOZ and SGC-996 cells (Fig. 3E, F). Together, these results showed that circTP63 downregulation suppressed GBC cell growth and the EMT process.
Downregulation of circTP63 inhibits cell growth in vivo in GBC
To evaluate the role of circTP63 expression in vivo, we constructed xenograft tumors in nude mice by injecting NOZ cells stably expressing sh-circTP63 or the control vector generated by lentiviral transduction. The results showed that circTP63-knockdown tumors had a slower growth rate and reduced tumor volume and weight compared with those expressing the control vector (Fig. 4A-C). Immunohistochemical staining of Ki-67 in the xenograft tumors demonstrated that tumor tissues in the sh-circTP63-1 or sh-circTP63-2 group had fewer Ki-67-positive cells than those in the control group (Fig. 4D). Furthermore, in the metastatic lung nodules formed 4 weeks after tail vein injection, immunohistochemical staining demonstrated that the expression of the epithelial marker E-cadherin was notably increased, while the expression of the mesenchymal marker Vimentin was notably decreased, in the circTP63 knockdown group compared with the control group; moreover, the number of nodules was also reduced after knockdown of circTP63 (Fig. 4E, F, G). These results suggested that circTP63 knockdown could inhibit tumor growth and EMT in vivo.
circTP63 sponges miR-217 in GBC cells
Recently, more studies have reported that circRNAs can act as sponges for miRNAs, thereby reducing the regulation of miRNAs on their target genes [15]. A search for miRNAs with complementary base pairing to circTP63 using the online software tool CircInteractome (http://circinteractome.nia.nih.gov) showed that miR-217 could form complementary base pairing with circTP63 (Fig. 5A, left). A luciferase assay demonstrated that miR-217 mimic repressed the luciferase activity of circTP63-WT (wild type), while it had little effect on that of circTP63-MUT (mutant type) in NOZ cells (Fig. 5A, right). We then detected the expression of miR-217 after knockdown of circTP63; the results indicated that miR-217 expression was significantly upregulated after knockdown of circTP63 in NOZ and SGC-996 cells compared with the control group (Fig. 5B). Furthermore, qRT-PCR analysis showed that miR-217 expression was significantly lower in GBC tissues than in adjacent normal tissues (Fig. 5C, left). circTP63 expression was negatively correlated with miR-217 expression in GBC tissues by Pearson correlation analysis (Fig. 5C, right, r = −0.423, P < 0.05). miR-217 expression was also lower in the three GBC cell lines than in normal HIBEC cells (Fig. 5D). In addition, we demonstrated that circTP63 was pulled down and enriched with 3′-end biotinylated miR-217 in NOZ cells compared with the control group (Fig. 5E). Functional assays were used to explore the association between circTP63 and miR-217 in NOZ cells. CCK8 assays showed that miR-217 inhibitor significantly promoted cell proliferation ability, an effect that was rescued by co-transfection with si-circTP63-1 in NOZ cells (Fig. 5F). Transwell invasion assays showed that miR-217 inhibitor significantly promoted cell invasion ability, which was likewise rescued by si-circTP63-1 (Fig. 5G). Besides, flow cytometry analysis demonstrated that miR-217 inhibitor significantly increased the number of S-phase cells, and this effect was also rescued by si-circTP63-1 (Fig. 5H, I). These results showed that circTP63 affects cell proliferation and invasion by regulating miR-217 expression.
CircTP63 sponges miR-217 and regulates EZH2 expression in GBC
It was reported that miR-217 can regulate EZH2 expression in gallbladder cancer [16], so we sought to explore whether circTP63 expression could affect EZH2 expression. EZH2 expression was significantly higher in GBC tissues than in adjacent normal tissues by qRT-PCR analysis (Fig. 6A), and EZH2 was predicted as a target gene of miR-217 (Fig. 6B).
[Fig. 5 legend fragment: E The bio-miR-217 or NC complex was pulled down by incubating the cell lysate with streptavidin-coated magnetic beads, and circTP63 was detected by qRT-PCR. F The cell proliferation ability was analyzed by CCK8 assays after cells were transfected with si-NC, miR-217 inhibitor or si-circTP63-1 + miR-217 inhibitor in NOZ cells. G The cell invasion ability was analyzed by transwell assays after the same transfections. H, I The cell cycle progression was analyzed by flow cytometry after the same transfections. Data are shown as mean ± SD, *P < 0.05, **P < 0.01.]
Luciferase assays demonstrated that miR-217 mimic repressed the luciferase activity of EZH2-WT, while it had little effect on that of EZH2-MUT, in NOZ and SGC-996 cells (Fig. 6C, D). We also demonstrated that EZH2 mRNA expression was downregulated in NOZ and SGC-996 cells after circTP63 knockdown, an effect rescued by co-transfection of miR-217 inhibitor with si-circTP63-1 (Fig. 6E, F). Western blot analysis showed that EZH2 protein expression was downregulated in NOZ and SGC-996 cells transfected with si-circTP63-1 or si-circTP63-2, and this was rescued by co-transfection of miR-217 inhibitor with si-circTP63-1 (Fig. 6G, H). Thus, these results indicated that circTP63 sponges miR-217 and regulates EZH2 expression in GBC. In our previous study, we demonstrated that EZH2 expression is upregulated in GBC and is a key downstream target of lncRNA MINCR, which regulates cell proliferation, invasion and apoptosis in GBC cells [15]. In the present study, we revealed a novel regulatory pathway in which circTP63 sponges miR-217 and regulates EZH2 expression in GBC.
Discussion
Gallbladder carcinoma is highly malignant and carries an extremely dismal prognosis for patients.
Recently, increasing numbers of studies have begun to explore new therapeutic methods derived from the molecular mechanisms of GBC [18]. Accumulating evidence regarding the multifunctionality of circRNAs makes them ideal targets and markers for the prognosis, diagnosis and development of new treatments of cancer [19].
[Fig. 6 legend fragment: E, F The relative mRNA expression of EZH2 was detected after NOZ or SGC-996 cells were transfected with si-NC, si-circTP63-1, si-circTP63-2, miR-217 inhibitor and si-circTP63-1 + miR-217 inhibitor by qRT-PCR analysis. G, H The relative protein expression of EZH2 was detected after the same transfections by western blot analysis, respectively. Data are shown as mean ± SD, *P < 0.05.]
For example, circular RNA circ-MTO1 expression is upregulated in GBC tissues and serves as a novel potential diagnostic and prognostic biomarker for gallbladder cancer [20]. Huang et al. reported that circular RNA circERBB2 promotes gallbladder cancer proliferation in vitro and in vivo; furthermore, circERBB2 regulates the nucleolar localization of PA2G4, thereby forming a circERBB2-PA2G4-TIFIA regulatory axis to modulate ribosomal DNA transcription and GBC proliferation [21]. Our previous study, based on RNA sequencing of GBC tissues, reported that circFOXP1 promotes GBC progression and the Warburg effect by regulating PKLR expression, suggesting a potential target for GBC treatment [22]. However, studies of circTP63 in GBC progression remain scarce.
In this study, our results demonstrated that circTP63 expression was upregulated in GBC tissues compared with adjacent normal tissues. We also detected that circTP63 expression is higher in GBC cell lines. Furthermore, clinical data analyzed by Kaplan-Meier analysis and the log-rank test showed that patients with higher circTP63 expression had a poorer overall survival rate than those with lower circTP63 expression. In our previous studies, microRNAs and long non-coding RNAs were investigated as potential molecular biomarkers for GBC prognosis [15]; the present clinical results indicate that a circRNA, circTP63, could likewise serve as a prognostic marker for GBC.
Previous studies have identified circTP63 as a vital regulator in several tumors. For example, circTP63 expression is elevated in lung squamous cell carcinoma (LUSC) tissues and is correlated with larger tumor size and higher TNM stage in LUSC patients; functional studies showed that circTP63 promotes cell proliferation both in vitro and in vivo by competitively binding to miR-873-3p and regulating the level of FOXM1 [12]. In breast cancer, Deng et al. found that circTP63 enhances estrogen receptor-positive breast cancer progression and malignant behaviors through the miR-873-3p/FOXM1 axis [13]. Our results showed that circTP63 knockdown inhibited cell proliferation and cell cycle progression. Furthermore, we demonstrated that circTP63 knockdown inhibited cell migration, cell invasion ability and the EMT process. These results indicated that circTP63 affects GBC cell growth and the EMT process.
Next, we explored the underlying molecular mechanisms of circTP63 in GBC. CircRNAs are reported to exert their functions as 'microRNA sponges' that competitively bind miRNAs. In this study, we performed bioinformatic analyses to select miRNAs sharing common binding sites with circTP63. The results showed that miR-217 shares a binding site with circTP63. We then found that miR-217 reduced the luciferase activity of the circTP63-WT reporter compared with the control, but not that of the circTP63-MUT reporter. After circTP63 knockdown, miR-217 expression was upregulated in GBC cells. In addition, we demonstrated that circTP63 was pulled down and enriched with 3′-end biotinylated miR-217 in GBC cells compared with the control group. These results indicated that circTP63 could interact with miR-217 in GBC.
EZH2 has been found to be upregulated in GBC tissues in previous studies, and overexpression of EZH2 is associated with invasion, metastasis and poor prognosis of gallbladder adenocarcinoma [23]. The long noncoding RNA MEG3 regulates LATS2 by promoting the ubiquitination of EZH2 and inhibits proliferation and invasion in gallbladder cancer [24]. Our previous study also showed that EZH2 expression is higher in GBC tissues and that overexpression of EZH2 enhances GBC tumor progression; we further showed that the lncRNA MINCR/miR-26a-5p/EZH2 axis is involved in cell proliferation, invasion and apoptosis in GBC cells [17]. In the present study, we found a novel regulatory pathway, circTP63/miR-217/EZH2, that affects GBC cell proliferation and invasion ability in vitro. As a limitation, our results only demonstrate the regulatory role of circTP63 in gallbladder cancer cells at the molecular level. In the future, we hope to confirm the functional role of circTP63 in knockout animal models. Besides, expanded clinical samples are also necessary for further verification of its clinical role.
Conclusions
In conclusion, our study is the first to explore the biological significance of circTP63 in GBC. We demonstrated that circTP63 expression was significantly upregulated in GBC tissues and that higher circTP63 expression predicted poor prognosis in GBC patients. CircTP63 downregulation inhibited GBC cell proliferation and metastasis. Moreover, we found that circTP63 knockdown inhibited EZH2 expression by sponging miR-217. Therefore, circTP63 inhibition might serve as a potential therapeutic target for GBC patients in the future.
| 5,545.4 | 2021-06-07T00:00:00.000 | [ "Biology", "Medicine" ] |
Diffusion in time-dependent random environments: mass fluctuations and scaling properties
A mass-ejection model in a time-dependent random environment with both temporal and spatial correlations is introduced. When the environment has a finite correlation length, individual particle trajectories are found to diffuse at large times with a displacement distribution that approaches a Gaussian. The collective dynamics of diffusing particles reaches a statistically stationary state, which is characterized in terms of a fluctuating mass density field. The probability distribution of density is studied numerically for both smooth and non-smooth scale-invariant random environments. Competition between trapping in the regions where the ejection rate of the environment vanishes and mixing due to its temporal dependence leads to large fluctuations of mass. These mechanisms are found to result in the presence of intermediate power-law tails in the probability distribution of the mass density. For spatially differentiable environments, the exponent of the right tail is shown to be universal and equal to −3/2. However, at small values, it is found to depend on the environment. Finally, spatial scaling properties of the mass distribution are investigated. The distribution of the coarse-grained density is shown to possess some rescaling properties that depend on the scale, the amplitude of the ejection rate and the Hölder exponent of the environment.
Introduction
Many situations found in nature take place in an environment whose spatial fluctuations occur at length scales so small that they contribute to macroscopic processes only through an averaged effect. The environment is then usually modeled by introducing some random disorder from which effective properties are derived. Examples include the Kraichnan velocity ensemble for turbulent transport [1] and the Sherrington-Kirkpatrick spin-glass model for magnetization [2]. There are many applications where the environment fluctuates on much slower timescales than the processes of interest (such as diffusion, transport or wave propagation). This leads one to consider quenched disorder, as in the case of directed polymers [3,4], of wave propagation in random media [5], of heat transport in open-cell foams [6] and of viscous permeability in porous materials [7]. In most instances, one of the main goals is to derive an effective diffusivity, conductivity or permeability of the medium (see, e.g., [8]).
As to diffusive processes in quenched random media, mathematicians and physicists have largely studied their discrete version, known as random walks in random environment (RWRE) [9]. One usually considers a lattice with fixed random transition probabilities between sites and studies the behavior of a random walk on it. One of the important questions of interest has been to determine under what assumptions such RWRE are transient or recurrent (i.e. whether they escape to infinity or indefinitely come back to their starting position). For a time-independent random environment with no space correlations, it has been shown that randomizing the environment slows down the diffusive properties of the random walk [10]. Furthermore, under some precise assumptions, Sinai [11] proved that one-dimensional (1D) symmetric nearest-neighbor random walks in an uncorrelated environment are sub-diffusive and scale as $X_t \sim \log^2 t$ when $t \to \infty$. Derrida and Pomeau [12] demonstrated that when the assumption of symmetry is relaxed, the random walk escapes as $X_t \sim t^\alpha$, with $0 < \alpha < 1$. Bricmont and Kupiainen [13] showed that the upper critical dimension of random walks in a time-independent random environment is 2, so that for $d > 2$ RWRE are diffusive. More recently, a great deal of effort was devoted to proving central-limit theorems or large-deviation principles for quenched disorder [14][15][16]. Such works generally assume that the environment has no (or very short) spatial correlations and that it is time independent. Very little is known for time-dependent environments. One then deals with annealed statistics, where the averages are performed over the fluctuations of the random medium. In such settings, and under some rather general assumptions, it was shown that random walks are always diffusive and that central-limit theorems generally hold [17,18].
In this work, we are interested in those situations where the timescales and length scales of diffusion are comparable to or longer than those at which the environment fluctuates, and thus we are interested in time-dependent random media with both temporal and spatial correlations. A clear instance where this occurs is turbulent transport whose modeling is a key issue in industrial and environmental science. The classical models generally used in engineering and meteorology are based on an 'eddy-diffusivity' approach (see, e.g., [19]), which in general requires a clear scale separation between the turbulence and large-scale variations of the mean velocity. Homogenization techniques can then be used to show that the averaged concentration field follows an effective advection-diffusion equation with an advecting velocity and a diffusion coefficient tensor that depend on the slow variables [20]. The large-scale mixing originating from small-scale fluctuations acts only through the diffusive term, which, by the maximum principle, cannot be responsible for the creation of large concentrations. The concentrations observed in compressible flows can thus come only from the effective advection term. As we will see later in this paper, the situation can be rather different when one is interested in other types of compressible transport.
The paper is organized as follows. In section 2, we introduce a time and space continuous model of diffusion in a random environment that can be interpreted either as a limiting process of Langevin dynamics or as the continuous limit of a discrete ejection model. The model corresponds to an Itō differential equation with multiplicative noise and no drift. Section 3 is devoted to the study of individual trajectories of the Itō diffusion in the case of smooth and non-smooth environments. When the environment has a finite correlation length, particles are found to have standard diffusion properties at large times with a displacement distribution that approaches a Gaussian. In section 4, we define the density of mass and study the fluctuations of the density in smooth and non-smooth random environments. The locations where the environment ejection rate vanishes are shown to play a crucial role in the large fluctuations of density. Spatial scaling properties of the mass distribution are then investigated in section 5. We show that the distribution of the coarse-grained density possesses some rescaling properties that depend on the scale, the amplitude of the ejection rate and the Hölder exponent of the environment. Finally, section 6 presents some concluding remarks.
Description of the model
We consider the dynamics given by the following Itō stochastic differential equation:
$$dX_t = \sqrt{2\,\epsilon}\;\sigma(X_t, t)\, dW_t, \qquad (1)$$
where $W_t$ is the $D$-dimensional Wiener process and $\epsilon > 0$ is a constant amplitude parameter. The diffusion coefficient $\sigma$ is a random, space- and time-dependent field, whose statistics are independent of $W_t$ and are described below. Before specifying our settings, we motivate this model by considering two instances where (1) arises as a limiting process.
Let us first consider species whose dynamics is dissipative in the full position-velocity phase space. We assume that the trajectory of a particle obeys the Newton equation
$$\ddot{X}_t = -\mu\, \dot{X}_t + \mu\, f(X_t, t, t/\varepsilon). \qquad (2)$$
The forces acting on the particle are viscous drag and an external force $f$ that depends on space and on two different timescales. We have dropped here the vectorial notation, but all the following considerations can easily be generalized to any dimension. Such an equation describes, for instance, the dynamics of a heavy inertial particle in a velocity field that varies over two timescales. Also, one could consider a large particle embedded in a time- and space-dependent thermal bath. The fast timescale $\varepsilon$ can be interpreted as the typical time of momentum exchange between the particle and its environment. As in Einstein's original work on Brownian motion, the timescale $\varepsilon$ is much smaller than the viscous damping time $1/\mu$. In the limit $\varepsilon\mu \to 0$, the force $f$ can be approximated by a Gaussian noise with correlation
$$\langle f(x, t)\, f(x, t') \rangle_\varepsilon = 2\, \sigma^2(x, t)\, \delta(t - t'), \qquad (3)$$
where the average $\langle \cdot \rangle_\varepsilon$ is with respect to the fast time variable. The spatio-temporal variations of $\sigma^2$ can be interpreted as a non-homogeneous temperature field in the thermal bath. We next consider the limit when the response time $1/\mu$ is much shorter than the slow timescale. This introduces a new fast timescale $O(1/\mu)$ over which the particle velocity $V_t = \dot{X}_t$ fluctuates. Integrating (2), one finds a velocity correlation involving $t \wedge t' = \min(t, t')$ (4); the last equality in (4) is obtained by integrating by parts and neglecting exponentially small terms. Then, taking the limit $\mu \to \infty$ in (4) we obtain $\langle V_t V_{t'} \rangle = 2\, \sigma^2(X_t, t)\, \delta(t - t')$. The particle position thus satisfies the stochastic equation (1) (with the amplitude $\epsilon$ absorbed in $\sigma^2$). The model we consider is thus relevant to the study of Langevin dynamics in a fluctuating environment.

The second instance where (1) arises is as the continuous limit of a mass-ejection model. Let us consider a $D$-dimensional periodic lattice of fundamental size $\Delta x$ and period $L = N \Delta x$. This lattice defines a tiling on which we consider the following discrete-time dynamics. The cell indexed by $i$ contains at time $n\Delta t$ a mass $\rho_i(n)$. Then, between times $n\Delta t$ and $(n+1)\Delta t$, a fraction $0 < \gamma_i(n) < 1/(2D)$ of this mass is ejected from cell $i$ to each of its $2D$ neighbors. For a given set of ejection rates $\{\gamma_i(n)\}_{i \in \{1,\ldots,N\}^D}$, the variables $\{\rho_i(n)\}_{i \in \{1,\ldots,N\}^D}$ define a Markov chain whose master equation is given by
$$\rho_i(n+1) = \rho_i(n) - 2D\, \gamma_i(n)\, \rho_i(n) + \sum_{j \in N_i} \gamma_j(n)\, \rho_j(n), \qquad (5)$$
where $N_i$ denotes the neighbor cells of $i$. It is clear that, because of the periodic boundary conditions, the total mass $M = \sum_i \rho_i$ is conserved. Equation (5) describes a mass-ejection process in a time-dependent non-homogeneous environment determined by the ejection rates $\gamma_i(n)$. A similar model has been studied in [21] in the case when the $\gamma_i(n)$ take two values, $\gamma$ and 0, and are uncorrelated in both space and time. This model was motivated by the study of inertial-particle clustering in turbulent flows and was expected to mimic heavy-particle ejection from coherent vortical structures by centrifugal forces.
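The master equation (5) is straightforward to iterate numerically. The sketch below is our own minimal illustration (with uniformly random ejection rates rather than the correlated environment constructed later): it evolves the 1D chain and checks that the total mass is exactly conserved.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 256, 2000
rho = np.ones(N)                    # uniform initial mass on a periodic 1D lattice (D = 1)

for n in range(steps):
    gamma = 0.5 * rng.random(N)     # ejection rates 0 < gamma_i(n) < 1/(2D) = 1/2
    out = gamma * rho               # mass sent from each cell to each of its 2 neighbors
    rho = rho - 2 * out + np.roll(out, 1) + np.roll(out, -1)   # Eq. (5) in D = 1

print(rho.sum() / N)                # total mass conserved: stays exactly 1 per cell
print(rho.min(), rho.max())         # strong density fluctuations build up nonetheless
```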
Turning back to the ejection model (5), we now consider the continuous limit $\Delta x \to 0$ and $\Delta t \to 0$. To take this limit, we suppose that the ratio $\Delta x^2/\Delta t$ is kept fixed. We denote by $\epsilon\,\sigma^2(x, t)$ the continuous limit of $\gamma_{x/\Delta x}(t/\Delta t)$, where the coefficient $\epsilon$ is the typical amplitude of the ejection rate and $\sigma$ is an order-one, time- and space-continuous function. In this limit, equation (5) becomes
$$\partial_t \rho(x, t) = \epsilon\,\nabla^2\!\left[\sigma^2(x, t)\,\rho(x, t)\right]. \qquad (6)$$
This is the Fokker-Planck (forward Kolmogorov) equation associated with the Itō stochastic differential equation (1). For $t > s$, we define the (forward) transition probability density $p(x, t\,|\,y, s)$ (7), where $\langle\cdot\rangle$ designates averages with respect to the realizations of $W_t$. Then the solutions to (6) trivially satisfy, for all $t > s$,
$$\rho(x, t) = \int p(x, t\,|\,y, s)\, \rho(y, s)\, dy. \qquad (8)$$
The model described by the diffusion equation (6) can thus be reinterpreted in terms of an RWRE: indeed, the discrete version of the process given by (1) corresponds to a random walk on the $D$-dimensional lattice, with a total probability $2D\,\gamma_i(n)$ of hopping from site $i$ to one of its neighbors.
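Trajectories of (1) can be sampled directly with an Euler-Maruyama scheme; the toy environment below is a smooth, strictly positive $\sigma$ chosen only for illustration (the actual random environment is constructed in the next section).

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1.0                                  # amplitude of the ejection rate

def sigma(x, t):
    # Toy smooth, time-dependent field (illustration only; the paper's sigma is random)
    return 1.0 + 0.5 * np.sin(x - t)

n_paths, dt, T = 10_000, 1e-3, 5.0
X, t = np.zeros(n_paths), 0.0
while t < T:
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    X += np.sqrt(2.0 * eps) * sigma(X, t) * dW   # Ito step for Eq. (1): no drift term
    t += dt

print(X.mean(), X.var())   # at large times the displacement approaches a Gaussian
```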
In contrast with a standard diffusion equation derived from Fick's first law, the differential operator $\nabla^2[\sigma^2(x, t)\,\cdot\,]$ on the right-hand side of equation (6) is not positive-definite. There is no maximum principle, and the solution is expected to behave very differently. The mass is going to accumulate at the zeros of $\sigma$. Suppose indeed, in one dimension, that $\sigma(x, t) \simeq C\,x$ in the vicinity of $x = 0$. At leading order, the flux reads $J = -\partial_x(\sigma^2 \rho) \simeq -2 C^2 x\, \rho$ (up to the constant factor $\epsilon$), which is positive for $x < 0$ and negative for $x > 0$, leading to a permanent mass flux toward $x = 0$. As we will see later, the zeros, their density and their lifetimes play a crucial role in the statistical properties of the density field.
We now specify how the statistical properties of the random environment are prescribed in our model. We consider $L = 2\pi$, with $\sigma(x, t)$ $2\pi$-periodic in space. The environment is then entirely determined in terms of its Fourier series, which we write as
$$\sigma(x, t) = \sum_k a_k\, \chi_k(t/\tau_k)\, e^{i k x}, \qquad (9)$$
where the $a_k$ are positive real amplitudes and the $\tau_k$ are scale-dependent characteristic times. The Fourier modes $\chi_k$ satisfy $\chi_{-k}(t) = \bar\chi_k(t)$ and $\chi_0 \equiv 0$. They are independent Gaussian processes with unit variance and unit correlation time. We choose them as complex Ornstein-Uhlenbeck processes that solve the stochastic differential equation [22]
$$d\chi_k(s) = -\chi_k\, ds + \sqrt{2}\, dB_k(s), \qquad (10)$$
where the $B_k$ are independent 1D complex Wiener processes. We now prescribe scale-invariance properties for the random environment. We want $|\sigma(x + r, t) - \sigma(x, t)| \sim |r|^h$, where $h > 0$ controls the regularity of the ejection rate; when $h < 1$, it is the spatial Hölder exponent of the ejection rate. This amounts to assuming that the Fourier-mode amplitudes behave as $a_k = \frac{1}{2}\, k^{-1/2 - h}$. Temporal scale invariance is set by assuming that $\tau_k = k^{-\beta}$ with $\beta > 0$. The exponent $\beta$ relates time dependence to spatial scale invariance; the case $\beta = 0$ corresponds to $\sigma$ having a unique correlation time, and $\beta \to \infty$ to white noise. As we shall see below, when $h < 1$, dimensional analysis motivates the choice $\beta = 2 - 2h$.
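A realization of the environment (9)-(10) can be synthesized by advancing each complex OU mode with its exact update rule. Below is a minimal sketch with the amplitudes $a_k = \frac12 k^{-1/2-h}$ and times $\tau_k = k^{-\beta}$; the mode truncation and time step are our own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, kmax, h, beta = 512, 64, 0.5, 1.0             # beta = 2 - 2h here
x = 2 * np.pi * np.arange(N) / N
k = np.arange(1, kmax + 1)
a_k = 0.5 * k ** (-0.5 - h)                      # spatial scaling of Eq. (9)
tau_k = k ** (-beta)                             # scale-dependent correlation times

def cnoise():
    return (rng.standard_normal(kmax) + 1j * rng.standard_normal(kmax)) / np.sqrt(2)

chi = cnoise()                                   # stationary start, unit variance

def advance(chi, dt):
    decay = np.exp(-dt / tau_k)                  # exact OU update for Eq. (10)
    return chi * decay + np.sqrt(1.0 - decay**2) * cnoise()

def sigma_field(chi):
    # sigma(x,t) = sum_{k>0} a_k chi_k exp(ikx) + c.c., real by construction
    return 2.0 * np.real((a_k * chi) @ np.exp(1j * np.outer(k, x)))

for _ in range(100):
    chi = advance(chi, dt=1e-2)
print(sigma_field(chi)[:5])
```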
The numerical results presented in this work are obtained in two different ways. The density evolution is obtained by solving equation (6) with a second-order finite-difference discretization of the Laplacian and a semi-implicit Euler temporal scheme. This numerical scheme ensures the conservation of the total mass with high accuracy. In this work, numerical resolutions vary from N = 512 to N = 8192 collocation points, and the time step is chosen small enough to resolve well the fastest timescale of the problem. For particle trajectories, the stochastic equations (1) and (10) are solved by a standard stochastic Euler scheme, and ensemble averages are obtained by Monte Carlo methods. In both cases, we expect the error made in the solution to act as a numerical diffusion with a constant proportional to the time step. Figure 1(a) shows the temporal evolution of the density and of the ejection rate σ²(x, t) for a typical realization in dimension D = 1. Figure 1(b) shows a snapshot of the random environment σ and of the positions of a set of particles obeying equation (1) in D = 2. It is apparent in figure 1 that the high-density zones are located in the vicinity of the zeros of σ(x, t). From the right panel of figure 1(a) we observe that the zeros of σ follow random paths. Their role in the statistical properties of the mass density and in the diffusion properties of particles will be discussed further in this work.
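The following sketch shows one semi-implicit step of the kind described above for D = 1. The dense linear solve is used only for readability (a banded cyclic solver would be used in practice); mass conservation holds because the columns of the update matrix sum to one.

```python
import numpy as np

def semi_implicit_step(rho, sigma2, eps, dt, dx):
    """One step of  d rho / dt = eps * d^2/dx^2 ( sigma^2 rho )  on a
    periodic grid, with the stiff right-hand side taken at the new time
    level (semi-implicit Euler).  The spatial operator is the standard
    second-order finite difference."""
    n = rho.size
    lap = (np.roll(np.eye(n), 1, axis=1) - 2.0 * np.eye(n)
           + np.roll(np.eye(n), -1, axis=1)) / dx**2   # periodic Laplacian
    A = np.eye(n) - dt * eps * lap @ np.diag(sigma2)
    return np.linalg.solve(A, rho)   # total mass conserved to round-off
```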
The distribution and time evolution of these zeros strongly depend on the parameter h characterizing the environment. Let us first consider the case of large h. In figure 1(a), it is apparent that the zeros typically appear in pairs and then diffuse and separate until they merge together or with another zero. Without loss of generality, we can assume that the ejection rate has only two modes and takes the simple form

σ(x, t) = σ_r(t) cos x + σ_i(t) sin x,   (11)

with σ_r + i σ_i = 2χ_1. Using the Itō formula it is possible to show that σ can be equivalently rewritten as σ(x, t) = A_t cos(x − Φ_t), where the stochastic processes A_t and Φ_t satisfy coupled stochastic equations (12) driven by two uncorrelated Wiener processes B^A_t and B^Φ_t. It is clear that the amplitude A_t fluctuates around Ā = 1 and that its correlation time is of the order of 1. It follows from equation (12) that, for typical values of A_t, the phase Φ_t, and thus the positions of the zeros of σ, diffuse on a timescale of the order of 1.
When h < 1, the random environment presents scaling properties and a very different behavior is expected. We have by construction that the second-order structure function of σ behaves like

S₂(r) = |σ(x + r, t) − σ(x, t)|²‾ ∼ |r|^{2h},   (13)

where the overline stands for the ensemble average with respect to fluctuations of the environment (i.e. with respect to the Ornstein-Uhlenbeck processes χ_k). In this work, we make the choice of relating space and time correlations by a dimensional argument. The characteristic time τ_k introduced in equation (9) behaves as a power law, so that the correlation time of σ at scale ℓ is τ_C(ℓ) = (ℓ/L)^β. According to equation (13), the diffusion timescale associated with the spatial scale ℓ behaves as τ_D(ℓ) ∼ ℓ^{2−2h}/ϵ. The ratio Ku(ℓ) = τ_C(ℓ)/τ_D(ℓ) ∝ ϵ ℓ^{β+2h−2} defines a dimensionless space-dependent parameter that is usually called the Kubo number. When Ku(ℓ) ≫ 1, the environment looks as if frozen. When Ku(ℓ) ≪ 1, it fluctuates in an almost time-uncorrelated manner. When β ≠ 2 − 2h, the Kubo number depends on the scale ℓ and this breaks any possible scale invariance of the mass distribution. When β = 2 − 2h, we have Ku(ℓ) = Ku ∝ ϵ, so that scale invariance is possible. Here, we will focus on the latter case and describe the mass concentration properties in terms of the dimensionless parameter Ku ∝ ϵ. Note that the choice of having a scale-independent Kubo number is common in the framework of turbulence and is expected to be relevant to the problem of diffusion of inertial particles in turbulent flows.
Finally, let us make a couple of remarks on the dependence upon another parameter of the environment, namely the Hölder exponent h. Using Parseval's theorem, it is possible to show that σ is a Gaussian variable whose increments have the variance given by equation (13). Therefore σ(·, t) is a fractional Brownian motion of exponent h [23]. For h = 1/2 it corresponds to standard Brownian motion. For h > 1/2 the increments of σ are not independent and have positive covariance; the zeros of σ are then finite in number and isolated. For h < 1/2, the covariance of increments is negative, and therefore we expect the number of zeros to become infinite and the zeros to accumulate. These different behaviors of the random environment will affect the general properties of diffusion, as we will see in the following sections.
As the Ornstein-Uhlenbeck processes χ_k are stationary, the fluctuations of the ejection rate σ² are a stationary and homogeneous random field. Hence we expect that, at sufficiently large times, the density distribution reaches a statistically stationary state, and numerical simulations indicate that this is indeed the case. The work reported in this paper is mainly concerned with the statistical properties of the density field in this large-time asymptotics. Most of the results presented in this work are in dimension D = 1.
Diffusive properties
We now turn to studying the diffusive properties of the solutions X(t) to (1). One has

X(t) − X(0) = √(2ϵ) ∫₀^t σ(X_s, s) dW_s,

so that

⟨[X(t) − X(0)]²⟩ = 2ϵ ∫₀^t ⟨σ²(X_s, s)⟩ ds,

where the angular brackets denote the average over trajectories. At times much larger than the correlation time of σ² along particle trajectories, the integral becomes a sum of independent random variables, which obeys the law of large numbers. We thus have

⟨[X(t) − X(0)]²⟩‾ ≈ 2 D(ϵ, h) t,   (16)

where the overline denotes the average with respect to the environment. The effective diffusion constant D(ϵ, h) involves an average of the environment along particle trajectories, which is equivalent to an average weighted by the mass density ρ. As the trajectories are expected to spend a long time in the vicinity of the zeros of σ, we expect this Lagrangian average to be smaller than the full Eulerian spatial average. Displacement fluctuations of order less than √t are expected to be described by a central-limit theorem. However, larger fluctuations should obey a large-deviations principle with a rate function that might not be purely quadratic, as it is non-trivially related to the Lagrangian properties of the environment. To study these fluctuations, we perform Monte Carlo simulations of equation (1).
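A Monte Carlo estimate of the mean-square displacement can be obtained with a standard stochastic Euler scheme, as in the sketch below; the frozen environment σ(x) = sin x is used only as a simple test case with isolated zeros.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_square_displacement(sigma, eps, n_traj=2000, dt=1e-3, n_steps=20000):
    """Estimate <(X_t - X_0)^2> for the Ito equation
    dX = sqrt(2*eps) * sigma(X, t) dW (no drift), averaged over n_traj
    independent trajectories started at the origin."""
    x = np.zeros(n_traj)
    msd = np.empty(n_steps)
    for n in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal(n_traj)
        x += np.sqrt(2.0 * eps) * sigma(x, n * dt) * dW
        msd[n] = np.mean(x**2)
    return msd

msd = mean_square_displacement(lambda x, t: np.sin(x), eps=1.0)
t = np.arange(1, 20001) * 1e-3
D_eff = 0.5 * np.polyfit(t[10000:], msd[10000:], 1)[0]  # slope/2 at large t
```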
We first consider the case when the ejection rate σ² is a smooth function of space (i.e. h > 1). In this configuration there is a finite (and small) number of isolated zeros, separated by a distance of order L/2 = π. A number of simulations of equation (1) have been performed for different values of ϵ, using 32 modes and h = 2. Figure 2 shows the mean-square displacement of particles averaged over 1000 realizations of the diffusion and 20 000 realizations of the environment. As expected from equation (16), linear growth is observed at large times. However, the diffusion constant D(ϵ) depends non-trivially on ϵ, as seen in the inset of figure 2. The non-monotonic behavior of D(ϵ)/ϵ can be explained by two competing phenomena. In the limit ϵ → 0 (or equivalently Ku → 0) the environment changes fast compared to the characteristic diffusion time, which is of order 1/ϵ. In this limit, we can consider that the particle trajectories sample the environment homogeneously, so that the Lagrangian average in (16) can be replaced by the spatial Eulerian average, D(ϵ, h)/ϵ = σ̄² = ζ(1 + 2h). For h = 2, we obtain D(ϵ, 2)/ϵ = ζ(5) ≈ 1.03, which is in agreement with the value observed in the inset of figure 2. As ϵ increases, trajectories spend a longer time at the zeros of σ, decreasing the value of D(ϵ)/ϵ. In the limit ϵ → ∞ (i.e. Ku → ∞), the environment can be considered completely frozen and we expect the mass density solving equation (6) to be approximately given by ρ(x, t) ∼ 1/σ²(x, t). From equation (16), we obtain that D(ϵ, h) = (ϵ/L) ∫ σ²ρ dx ≈ (ϵ/L) ∫ dx = ϵ. Hence D(ϵ)/ϵ → 1 for ϵ → ∞, as seen in the figure. In this limit the mass is completely concentrated at the zeros of the ejection rate, and the diffusion is carried out by the motion of the latter, which follows a dynamics similar to equation (12).
The normalized probability distribution functions (PDFs) of the displacements are shown in figure 3 for different times and different values of ϵ. At early times and for low values of ϵ, the PDFs present exponential tails that are also observed at intermediate times for larger values of ϵ. In all cases, the PDFs approach a Gaussian distribution at very large times. Note that for sufficiently large values of ϵ, some oscillations can be observed on the PDF tails at intermediate times. As we will now see, this is due to trapping by the zeros of σ(x, t). To emphasize this point, we compare the PDF of the displacement for different values of ϵ chosen at the time t* such that ⟨[X(t*) − X(0)]²⟩ = (L/2)² = π², i.e. the time when the typical distance traveled by the particles reaches a length of the order of the separation between two zeros (of the order of L/2 for smooth environments). The PDF of (X(t*) − X(0))/(L/2) is displayed in figure 4(a) for various values of ϵ. It is apparent that the bumps appear at multiples of L/2 for large ϵ. With a high probability, a particle is initially located close to a zero of σ. When it travels away, there is a very good chance that it stops at the neighboring zero, leading to a quasi-multimodal distribution.
To better quantify the large-time behavior of the displacement, we compute its higher-order moments and compare them to those corresponding to a Gaussian distribution. We thus define, for even p,

C_p(t) = ⟨|X(t) − X(0)|^p⟩ / ( ⟨|X(t) − X(0)|²⟩^{p/2} 2^{p/2} Γ[(p+1)/2]/√π ) − 1,

where Γ[z] denotes the Gamma function. C_p = 0 for all p ≥ 2 if X(t) − X(0) is a Gaussian variable. C₂ = 0 by construction and C₄ is the kurtosis. C_p can be interpreted as a pth-order deviation from a Gaussian distribution. Data suggest that all the C_p display a 1/t behavior at large times. The dependence on the amplitude ϵ and on the order p is well represented by a factorized functional form C_p(t) ∝ A(ϵ)/t. The temporal evolution of C_p/A(ϵ) for different values of ϵ and p, and the function A(ϵ), are displayed in figures 4(b) and (c), respectively. The data also show the dependence of the diffusion constant normalized by D₀ = ϵ σ̄² = ϵ ζ(1 + 2h): the importance of the zeros on the diffusion constant increases as h decreases.
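The moments C_p can be estimated from displacement samples as sketched below. The normalization by the Gaussian absolute moment is one natural reading of the definition above and should be treated as our convention.

```python
import numpy as np
from math import gamma, sqrt, pi

def gaussian_abs_moment(var, p):
    """E[|X|^p] for X ~ N(0, var):  var^{p/2} 2^{p/2} Gamma((p+1)/2)/sqrt(pi)."""
    return var**(p / 2) * 2**(p / 2) * gamma((p + 1) / 2) / sqrt(pi)

def C_p(samples, p):
    """Normalized p-th order deviation from Gaussianity: C_2 = 0 by
    construction, and C_p = 0 for all even p if the samples are Gaussian."""
    var = np.mean(samples**2)
    return np.mean(np.abs(samples)**p) / gaussian_abs_moment(var, p) - 1.0
```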
Density fluctuations
So far, we have studied the diffusion properties of particles described by the stochastic equation (1). We now turn to studying the statistical properties of the mass density ρ that evolves according to the diffusion-like equation (6). As we have previously observed, the spatial fluctuations of ρ are correlated with the distribution of zeros of the random environment σ(x, t). More precisely, we observe from equation (6) that the flux of mass at x is directly given by

J(x, t) = −ϵ ∇[σ²(x, t) ρ(x, t)].

The flux hence vanishes at the zeros of σ(x, t), so that, if they were not moving, they would act like sinks concentrating all of the mass. However, due to their diffusion, the zeros are not able to concentrate mass indefinitely and the density saturates in their neighborhood. In parallel, as the total mass is conserved, this concentration process leads to the creation of voids in the regions where σ is of order 1.
In the following, we consider the case D = 1. Under some assumptions of homogeneity and ergodicity of the dynamics, the PDF of the mass density can be written as the space-time average

P(ρ̄) = lim_{T→∞} (1/(T L)) ∫₀^T ∫₀^L δ(ρ(x, t) − ρ̄) dx dt.   (22)

This will be the main quantity studied in the following sections. As we will see, it strongly depends on the parameters ϵ and h. The definition (22) in terms of a space-time average will be particularly useful in first attacking the stationary case and then developing phenomenological arguments on its behavior in the non-stationary case.
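In practice, the space-time average in (22) can be estimated by histogramming snapshots of the density field collected in the stationary regime; a minimal sketch:

```python
import numpy as np

def density_pdf(rho_history, n_bins=120):
    """Estimate P(rho) from definition (22): pool the density field over
    space and (late) time and histogram with logarithmic bins, which
    resolve both the small-mass power law and the large-mass tail.
    rho_history : array of shape (n_times, n_cells)"""
    data = rho_history.ravel()
    bins = np.logspace(np.log10(data.min() + 1e-12),
                       np.log10(data.max()), n_bins)
    pdf, edges = np.histogram(data, bins=bins, density=True)
    centers = np.sqrt(edges[1:] * edges[:-1])   # geometric bin centers
    return centers, pdf
```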
Smooth random environments
Let us first consider the limit Ku → ∞, i.e. when the ejection amplitude ϵ is infinitely larger than the inverse of the diffusion time of the zeros of σ. Note that, in the limit of a time-independent environment, no stationary state can be achieved. Mass will then concentrate around the zeros, and at t = ∞ the density will become atomic, supported at these points. For sufficiently large but finite times, as almost all the mass is concentrated around the zeros of σ, the ejection rate can be Taylor-expanded in the vicinity of these points. Without loss of generality, let us assume that a zero appears at the origin x = 0 at t = 0 and does not move. In this limit, equation (6) reduces to

∂_t ρ = ϵC² ∂²_x (x² ρ),   (23)

where C = dσ/dx|_{x=0}. Rescaling time allows us to set ϵC² = 1. Without loss of generality, we also consider an infinite domain and an initial condition with compact support: ρ(x, 0) = 1 for |x| ≤ 1 and 0 elsewhere. Equation (23) can be analytically integrated by using the method of characteristics. For x > 0 the change of variable u(y, t) = e^{−2t} ρ(x = e^{y−3t}, t) leads to the homogeneous heat equation, which can be easily solved by using the corresponding Green function. The solution for x < 0 is obtained by symmetry. One then obtains an explicit solution (24) in terms of the complementary error function erfc(z) = (2/√π) ∫_z^∞ e^{−s²} ds. The time-averaged distribution P_T(ρ̄) defined in equation (22) can be rewritten as a time integral along the level curve x_ρ̄(t), where x_ρ̄(t) is such that ρ(x_ρ̄(t), t) = ρ̄; this level curve exists only for t > (ln ρ̄)/2. The contour lines of ρ(x, t) are displayed in figure 6(a). The high-density zones are concentrated in a narrow region of space, which justifies the saddle-point approximation used below. With the change of variables t → λ = 2t/ln ρ̄, and introducing µ(λ; ρ̄) = −ln x_ρ̄(t)/ln ρ̄, after some algebra equation (25) becomes a Laplace-type integral with exponent F(λ; ρ̄) = λ + µ(λ; ρ̄) − 1/(2λ), where in the last step the asymptotics erfc⁻¹(z) ≈ √(log(1/z)) for z ≈ 0 has been used (recall that λ > 1). Using a saddle-point approximation of the integral in equation (26) for ln ρ̄ ≫ 1, we obtain at leading order (dropping the bar over ρ)

P(ρ) ∝ ρ^{−2}.   (27)

This scaling is clearly observed in figure 6(b), where equation (6) is integrated using the stationary environment σ(x) = sin x.
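For completeness, the change of variables can be verified directly; the sign convention in the exponential prefactor of the substitution is fixed here by requiring the transformed equation to be exactly the heat equation.

```latex
% With s = ln x (x > 0) one has
\partial_x^2\!\left(x^2\rho\right)
  = e^{-2s}\left(\partial_s^2 - \partial_s\right)\!\left(e^{2s}\rho\right)
  = 2\rho + 3\,\partial_s\rho + \partial_s^2\rho .
% Writing rho(x,t) = e^{2t} u(y,t) with y = ln x + 3t, i.e.
% u(y,t) = e^{-2t} rho(e^{y-3t}, t), gives
\partial_t\rho = e^{2t}\left(2u + 3\,\partial_y u + \partial_t u\right)
\quad\Longrightarrow\quad
\partial_t u = \partial_y^2 u ,
% the homogeneous heat equation, solvable with the Gaussian Green function.
```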
Note that a clear power-law scaling is also seen in figure 6(b) at small values of the mass density. To explain this scaling, let us consider a cell of length Δx located at x₀, far away from a zero of σ(x), where the density reaches its minimum. At leading order, the mass ejected from this cell between t and t + dt is proportional to J ≈ −ϵ ρ(x, t) ∂_x σ²(x)|_{x=x₀}, as ∂_x ρ ≈ 0 at this point. Considering that ∂_x σ²(x) is constant near x₀ yields an exponential decay of the mass in the cell. Introducing in equation (22) a density of mass with an exponential decay, ρ(x, t) ∼ ρ₀ e^{−ϵσ₀²t} (with σ₀² an effective rate), directly leads to

P(ρ) ∝ 1/ρ for ρ → 0,

which is in agreement with figure 6(b). As we will soon see, the situation is rather different in the non-stationary case, where the motion of the zeros of the ejection rate limits the process of mass concentration. Now, the coefficients σ_r(t) and σ_i(t) of equation (11) are the Ornstein-Uhlenbeck processes defined in equations (9) and (10). The PDF of ρ for different values of ϵ is presented in figure 7. The time needed for the density to accumulate near the zeros of the ejection rate σ² is of the order of ϵ^{−1}.
Hence, for small values of ϵ (i.e. Ku ≪ 1), the fast temporal variations of σ(x, t) do not leave enough time for the system to accumulate mass. It is apparent in this case that most of the mass is distributed around the mean mass ⟨ρ⟩ = 1 (see the ϵ = 0.1 curve, represented by orange stars). However, for large but finite values of Ku, accumulation is fast enough and a resulting ρ^{−3/2} scaling is observed. This scaling can be derived by considering that for large Ku the system rapidly relaxes to a quasi-stationary solution in a time of the order of 1/ϵ and stays there for a time of the order of the correlation time of σ. Far enough from a zero, the environment behaves like σ(x) ≈ C x and its time evolution can be neglected. The density converges there to a quasi-stationary solution, which is such that ∂²_x(σ²ρ) ≈ 0, so that ρ ≈ σ^{−2} ∝ 1/x². Closer to a zero, the time variations of its location become faster than the concentration of mass and the density saturates to a finite value ρ_max (see the left-hand side of figure 8). The transition between these two regimes occurs at a distance δx from the zero which, by continuity, satisfies δx ∼ ρ_max^{−1/2}. The length δx is of the order of the distance traveled by a zero during a timescale equal to that of mass concentration. Hence, if we assume that a zero diffuses, one has δx ∼ ϵ^{−1/2}, so that the typical value of ρ_max is of the order of ϵ. The distribution of density thus has a crossover at ρ ∼ ϵ. When ρ ≪ ϵ, that is, for values much below the plateau at ρ_max, the behavior is dominated by the divergence of the quasi-stationary density profile. Introducing ρ(x) ∼ 1/x² in equation (22) straightforwardly leads to the algebraic behavior P(ρ) ∝ ρ^{−3/2} for 1 ≪ ρ ≪ ϵ.

Figure 8. Left: sketch of the density field in the vicinity of a zero of the ejection rate σ² located at x = 0; the density grows as x^{−2} and saturates to ρ_max at a distance δx ∼ ρ_max^{−1/2} from the zero. Right: sketch of the diffusive time evolution of a zero of σ², which has remained at a distance less than δx for a time T before the reference time t = 0.
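The exponent −3/2 follows from a one-line change of variables, recorded here for convenience:

```latex
% Quasi-stationary profile near a zero:  rho(x) ~ x^{-2}  =>  x(rho) ~ rho^{-1/2}.
% The uniform space average in (22) then gives
P(\rho)\,\mathrm{d}\rho \;\propto\; \mathrm{d}x
\quad\Longrightarrow\quad
P(\rho) \;\propto\; \left|\frac{\mathrm{d}x}{\mathrm{d}\rho}\right|
        \;\propto\; \rho^{-3/2},
\qquad 1 \ll \rho \ll \epsilon .
```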
Note that this value of the exponent in the large-density intermediate tail is very robust. It has also been observed for larger numbers of modes and with an environment of the form σ(x, t) = √(cos(x − ct)) (data not shown). The behavior of the PDF of density at ρ ≫ ϵ is related to the large fluctuations of ρ_max and thus to the events when a zero does not move much during a time greater than ϵ^{−1}. More precisely, if at a fixed time we observe a large value of ρ_max, this means that the zero has not moved by a distance larger than ρ_max^{−1/2} during a time interval of length T larger than ϵ^{−1} (see the right-hand side of figure 8). When zeros diffuse, this probability relates to the first-exit-time distribution of a diffusive process. The probability density of the first time T at which the Wiener process exits the interval |x| < δx behaves as ∼ exp(−C T/δx²) (see, e.g., [24]). We thus obtain

P(ρ_max) ∼ exp(−C T ρ_max).

For ρ ≫ ϵ the leading-order behavior is given by T = ϵ^{−1}. This leads to the following exponential cutoff for the PDF of mass density:

P(ρ) ∼ exp(−C ρ/ϵ) for ρ ≫ ϵ.

This exponential behavior is confirmed numerically, as can be seen from the lin-log plot in the inset of figure 7.
Non-smooth random environment
We now turn to the case of a time-dependent non-smooth ejection rate σ. Figure 9 presents the PDF of ρ for a number of runs with different values of ϵ and h. Note that for large ϵ a power-law behavior is observed at small masses; see the inset of figure 9(b). These tails can be understood using a phenomenological argument similar to the one used in [21]. Consider an extreme event leading to a very low density in a cell where only ejection of mass has taken place for some time.
For such a configuration we expect an exponential decay of the density, and thus a time of the order of T ∝ −ϵ^{−1} ln ρ to reach a small density ρ (where, as before, σ₀² is an effective rate). The probability of having this configuration depends only on the properties of the environment. Let p < 1 be the probability of having such a configuration. As the environment de-correlates over a time of order 1, the number n of independent realizations of the environment that is needed for the full process to take place during a time T is proportional to T, and thus n = −C ϵ^{−1} ln ρ, where C is a constant that depends on h. Therefore, the probability of this complete event is

p^n = exp(n ln p) = ρ^{−C ln p/ϵ}.

We thus obtain that the probability of such an extreme event is ∼ ρ^β, with β = −C ln p/ϵ > 0. Note that this is actually a lower bound on the small-mass tail, which is valid as long as p ≠ 0 and C ≠ 0. Obtaining an analytic expression for the exponent β is a theoretically challenging problem that is beyond the scope of this work. The exponent β can be obtained by fitting the left tail of the mass density PDF inside the range where the power law is observed. The fits are presented as dashed lines in the inset of figure 9(b), and the dependence of β on the Hölder exponent h is displayed in figure 10(a) for ϵ = 10.
Note that the width of the PDFs in figure 9 clearly depends on the value of the Hölder exponent h, as is apparent in figure 10(b), which represents the variance of the density, δρ² = ⟨(ρ − ⟨ρ⟩)²⟩. Note that neither the exponent β nor the variance δρ² behaves monotonically as a function of h. This can be qualitatively understood in the following way. When the value of h is decreased, the number of zeros of σ(x, t) increases. However, their spatial distribution also depends on the Hölder exponent h. On the one hand, for h > 1/2 the space increments of σ are positively correlated. This implies that there is no accumulation of zeros. In this configuration, the mass is transported from the non-vanishing zones of σ(x, t) to the nearest zeros. Therefore, as h decreases, more and more zeros are present, creating large masses and void zones and thus increasing the fluctuations. On the other hand, for h < 1/2 the covariance of increments is negative and there are finite-size regions containing a large number of zeros. σ(x, t) vanishes many times in such zones, so that the diffusion there is very weak and mass is trapped. This makes the transfer inefficient and reduces the probability of having extremely high or low mass concentrations. In simpler terms, a large number of zeros increases the fluctuations as long as the zeros are not too dense. We then expect a change of behavior near h = 1/2; this is in agreement with figures 10(a) and (b), especially for the largest value of ϵ, where the properties of the system are expected to depend more strongly on the distribution of the zeros of the ejection rate.
Scale invariance of the mass density field
In this section we study the scaling properties of the density field. For this, we introduce the coarse-grained mass density

ρ_ℓ(x, t) = (1/ℓ) ∫_{x−ℓ/2}^{x+ℓ/2} ρ(y, t) dy.

Note that by homogeneity we have ⟨ρ_ℓ⟩ = ⟨ρ⟩. The PDF of ρ_ℓ is computed using definition (22) and is displayed in figure 11 for different values of h and ϵ = 1. The narrowest curves in each figure correspond to the largest (non-trivial) scale ℓ = L/2, and the curves monotonically increase in width with decreasing scale, down to the smallest scale of the runs. For spatially smooth environments, the coarse-grained density of mass becomes invariant when decreasing the scale, as is apparent in figure 11(a) for h = 2. This indicates that the density field is spatially smooth and that the zeros are isolated. The collapse is faster for small densities. This is due to the accumulation of mass in only a few very small clusters and the creation of large void zones. These large voids dominate the coarse-grained density statistics up to their typical size, that is, for ℓ < L/8, as seen from figure 11(a). In contrast, the typical size ℓ_clust of the mass clusters is rather small. When averaging over scales larger than ℓ_clust, this length scale cuts off the largest mass fluctuations: the coarse-grained density cannot exceed a value of the order of ρ_max ℓ_clust/ℓ, and this cutoff decreases with ℓ, as can be observed in figure 11(a).
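Numerically, the coarse-grained field and its variance can be computed by simple block averaging, as sketched below:

```python
import numpy as np

def coarse_grain(rho, m):
    """Block average of the density over windows of m cells (ell = m*dx);
    by homogeneity the mean is unchanged, <rho_ell> = <rho>."""
    n = rho.size - rho.size % m
    return rho[:n].reshape(-1, m).mean(axis=1)

def variance_vs_scale(rho_history, window_sizes):
    """Variance of rho_ell as a function of window size, used to probe
    the power-law scaling  <(delta rho_ell)^2> ~ ell^(-zeta)  for h < 1."""
    return [np.mean([np.var(coarse_grain(rho, m)) for rho in rho_history])
            for m in window_sizes]
```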
This argument does not apply to non-smooth environments, where the distribution of zeros is highly non-trivial. The scale invariance is then broken, as can be observed in figures 11(b) and (c). This is also apparent in figure 12(a). However, one observes that for h < 1 and ℓ ≪ L the variance presents a power-law scaling δρ_ℓ² ∼ ℓ^{−ζ}, which suggests a self-similar behavior of the density. The exponent ζ(h; ϵ) is displayed in the inset of figure 11(c).
The scale invariance of the density field can be understood in the neighborhood of the zeros of σ. Let us assume that the ejection rate vanishes at x = x₀. Then, in the neighborhood of x₀, we have σ²(x₀ + δx, t) ∼ δx^{2h}. Supposing that the density behaves as a power law in the vicinity of x₀ and rescaling space as δx → λ δx, one can easily see that the rescaled density field solves the same equation with a rescaled ejection-rate amplitude. Hence, we expect that if the coarse-grained density ρ_ℓ presents a self-similar property, then the PDF of ρ_ℓ obtained with a given value of ϵ will coincide at large values with that of ρ_{λℓ} corresponding to an ejection rate of amplitude λ^{2−2h} ϵ. This scale invariance is apparent in figure 12(b), where different PDFs of ρ_ℓ, with different values of ϵ and ℓ such that ϵ ℓ^{2h−2} = const, are compared for h = 1/2 and collapse at large densities.
Conclusions
To summarize, we have introduced a diffusion model based on a simple discrete mass ejection process where the ejection rates are random variables with temporal and spatial correlations. The model can be interpreted in terms of random walks in a time-dependent random environment. We have considered space-periodic environments where the temporal dependence of each Fourier mode is given by independent Ornstein-Uhlenbeck processes and both the amplitudes of the modes and their correlation times present some prescribed spatial scaling properties. This allowed us to consider smooth and non-smooth environments with fast and slow temporal dependence. No other assumptions were made on the environment and we expect such environments to display generic properties and to be representative of sufficiently general situations.
The model was studied analytically and numerically. The corresponding particle dynamics is given by an Itō differential equation with a multiplicative noise and no drift. We observed that random trajectories diffuse at large times. Also, the probability distribution of their displacements tends to a Gaussian at large times, and the deviations from this asymptotic behavior decrease as t⁻¹. To take into account the fluctuations of mass due to the randomness of the environment, we introduced and studied the probability distribution of the density of mass. We obtained some analytical results on the density of mass in the case of a stationary environment and showed that it displays a power-law tail at large masses with exponent −2. In the general case, we observed a competition between a trapping effect due to the vanishing ejection rates of the random environment and the mixing due to its time dependence, which leads to large fluctuations in the density of mass. These fluctuations were studied for both smooth and non-smooth random environments. In the smooth case, we showed that the PDF has an intermediate power-law behavior, as in the stationary case but with an exponent −3/2, followed by an exponential cutoff. Finally, we studied the spatial scaling properties of the mass distribution by introducing a coarse-grained density field. For smooth random environments the coarse-grained density was found to be scale invariant. For non-smooth environments, we showed that at large masses it possesses scaling properties that depend on the coarse-graining size, the typical amplitude of the ejection rate and the Hölder exponent of the environment.
The overall dynamics of the model that was introduced in this paper contains several space scales and timescales. Mass rapidly accumulates near those regions with a vanishing ejection rate, and slowly moves following diffusion of the zeros. Depending on the properties of the environment, there is also a clear separation of length scales: small mass fluctuations at small scales and large ones at the scale of the distance between two vanishing ejection rates. This scale separation demonstrates the need to determine, by standard homogenization techniques, a large-scale effective diffusion tensor and possibly an effective transport term. We leave this topic for future work.
Extending our approach to dimensions higher than 1 is another possible direction for the future. The ejection rate will then vanish on complicated sets that, depending on the regularity, might display fractal properties. Varying the Hölder exponent of the random environment, its amplitude and the observation scales could then lead to a rather rich collection of different regimes. For instance, much more effort should be made to confirm whether or not a power-law is still present in the mass density probability distribution.
To conclude, we stress again that the presence of zeros in the environment has crucial effects on both the diffusive properties and the mass density statistics. They are responsible for mass accumulation but, at the same time, constitute barriers to transport. This duality implies that the fine statistical details of such systems depend crucially on the features of these zeros, in particular on the local structure of the ejection rate in their vicinity. Note that the presence of an extra drift in the dynamics would completely alter the role of the zeros and would prevent mass accumulation. In this work, we have decided to focus on ejection rates that can be written as the square of a generic Gaussian random function. However, this choice may not always be relevant and, for instance, it might sometimes be more appropriate to write the ejection rate as the exponential of a random function. This would drastically change most of the results reported in this paper.
"Physics",
"Environmental Science"
] |
Conformal blocks beyond the semi-classical limit
Black hole microstates and their approximate thermodynamic properties can be studied using heavy-light correlation functions in AdS/CFT. Universal features of these correlators can be extracted from the Virasoro conformal blocks in CFT₂, which encapsulate quantum gravitational effects in AdS₃. At infinite central charge c, the Virasoro vacuum block provides an avatar of the black hole information paradox in the form of periodic Euclidean-time singularities that must be resolved at finite c. We compute Virasoro blocks in the heavy-light, large c limit, extending our previous results by determining perturbative 1/c corrections. We obtain explicit closed-form expressions for both the 'semi-classical' h_L²/c and 'quantum' h_L/c corrections to the vacuum block, and we provide integral formulas for general Virasoro blocks. We comment on the interpretation of our results for thermodynamics, discussing how monodromies in Euclidean time can arise from AdS calculations using 'geodesic Witten diagrams'. We expect that only non-perturbative corrections in 1/c can resolve the singularities associated with the information paradox.
Introduction and discussion
To make predictions about the thermodynamic behavior of a system, we usually study a statistical ensemble of states codified by a partition function. In this standard 'macroscopic' approach, the entropy function S(E) plays a key role, counting the number of states e^{S(E)} with energy E and determining the phase diagram of the theory as a function of the temperature. For example, the Cardy formula [1, 2] for S(E) predicts the asymptotic density of states in CFT₂, thereby counting the number of black hole states in quantum gravity theories in AdS₃ [3, 4].
We have taken a rather different 'microscopic' approach to thermodynamics in AdS/CFT [5-7], studying the correlation functions of light probe operators in the background of a heavy CFT microstate. Intuitively, we expect that there should be very little difference between observables computed in a thermal density matrix and those computed in a pure state randomly chosen from the canonical ensemble. Via the operator/state correspondence, we can infer thermodynamic properties from a 4-pt correlator by comparing

⟨O_H(∞) O_L(1) O_L(z) O_H(0)⟩ ≈ ⟨O_L(1) O_L(z)⟩_{T_H} ∝ (πT_H / sinh(πT_H t))^{2h_L},   (1.1)

where O_H is a heavy operator, and the last equality holds in CFT₂. We obtain precisely this relation [8] by approximating the left-hand side with the Virasoro vacuum conformal block, computed at large central charge c in the limit h_H ∝ c ≫ h_L. In the lightcone OPE limit [9, 10], this will be a good approximation for any CFT₂ without additional conserved currents; more generally it provides an interesting universal contribution to the correlator capturing gravitational effects in AdS₃. Thus the thermodynamic properties of high-energy states in CFT₂ at large c are built into the structure of the Virasoro algebra.
In this work we will study 1/c corrections to the Virasoro conformal blocks and their implications for thermodynamics. These will include both semi-classical corrections at higher orders in h_L/c and genuine 'quantum' corrections. We use the terminology 'semi-classical' and 'quantum' because these correspond, respectively, to the gravitational backreaction of the light probe and to gravitational loop effects in AdS₃. In the remainder of this introduction we will discuss how our results relate to the black hole information paradox, and then we provide a summary of the results.
Chaos can also be studied by taking a limit of CFT 4-point correlators [11][12][13], with a universal bound expected for large central charge theories [14]; the implications of our results for chaos will be discussed in a forthcoming work.
The information paradox and the vacuum block
The black hole information paradox has many guises. In its most visceral and pressing form, it requires understanding the correct description of physics near black hole horizons, and in particular, the question of whether the semi-classical description can survive as a good approximation while simultaneously allowing for unitary evolution [15][16][17][18]. Such problems remain extremely perplexing and important, but they are difficult (or impossible?) to formulate as a precise question about CFT observables, and progress on this front may require qualitatively new 'observables' [19][20][21].
A more straightforward manifestation of the information paradox can be formulated directly in terms of CFT correlators. In the background of a large AdS-Schwarzschild black hole, the two-point correlation function with Lorentzian time-separation t_L decays exponentially at large time [22]. This means that information dropped into the black hole at an initial time never comes out. A CFT living on a non-compact space or a CFT at infinite central charge may also have thermal correlators that decay exponentially for all times, as can be seen explicitly by analytically continuing the right-hand side of equation (1.1) for the case of CFT₂ on the thermal cylinder. However, for a CFT living on a compact space with finite central charge and at a finite temperature, correlators cannot decay exponentially for all times, as this would signal loss of information concerning a perturbation to the thermal density matrix.
We add another layer to the story by studying the correlators of light operators in the background of a heavy pure state. This makes it possible to probe the pure quantum state of a one-sided BTZ black hole, instead of an ensemble of e^S black holes. In the thermodynamic limit we expect the relation of equation (1.1) to hold, leading to a sharp Euclidean-time signature of information loss. Thermal 2-pt correlators are periodic under t_E → t_E + β. This periodicity leads to additional singularities in equation (1.1) from periodic images of the O_L(z)O_L(1) OPE singularity, which occur in the Euclidean region at z = z̄ = 1 − e^{n/T_H} for any integer n. Although these singularities are obligatory for thermal 2-pt correlators, they are forbidden in the 4-pt correlators of a CFT at finite central charge c [23]. So these singularities are a sharp signature of information loss in the large central charge limit, analogous to the bulk-point singularity [24-26], a signature of bulk locality.
In the case of either exponential decay in t L for thermal 2-pt correlators or periodicity in t E for pure-state 4-pt correlators, it would be most interesting to have a bulk computation resolving the paradox. Unfortunately, we do not have a non-perturbative definition of the bulk theory, and in fact, the bulk theory may be precisely defined only via a dual CFT.
In this paper we will focus on Euclidean time periodicity and its manifestation in the Virasoro conformal blocks. We expect that unitarity can only be restored by non-perturbative effects in 1/c, and in particular that perturbative 1/c corrections should not violate the thermal periodicity t_E → t_E + β of the large c heavy-light correlators. These expectations are primarily based on the expectation that 1/c corrections correspond to loop effects around the infinite-c gravity saddle, which is an AdS black hole background with fixed Euclidean-time periodicity, and thus such corrections should at most produce perturbative corrections to β. Roughly speaking, unitarity restoration should rely on contributions from different saddles and therefore involve effects of order e^{−S} ∼ e^{−O(c)}. Such non-perturbative effects will be addressed more directly in future work.
We will compute 1/c corrections to the Virasoro blocks and study their behavior in Euclidean time. We find that the 1/c corrections to the vacuum block do violate periodicity, with a non-trivial monodromy under t_E → t_E + β. Intriguingly, there appear to be two relevant time scales, of order t ∼ c and t ∼ e^{O(c)}, as the correlator has non-trivial dependence on both t and log t.
However, we do not believe that these effects have any immediate connection with the resolution of the information paradox. Conformal blocks have unphysical monodromies in the Euclidean plane that cancel when they are summed to form full CFT correlation functions. The monodromies we find in the 1/c expansion of Virasoro blocks seem to play a similar role to the more banal monodromies of global conformal blocks. In section 3 we explain how these monodromies can arise from AdS computations of the blocks in terms of 'geodesic Witten diagrams' [27][28][29]. The case of both global and Virasoro blocks can be given a parallel treatment, which suggests that the Euclidean-time monodromies of the 1/c corrections are likely to disappear in the full correlators.
Summary of results
In previous work we showed that in the heavy-light semi-classical limit, the vacuum conformal block can be written as

V₀ = (1 − w)^{h_L(1 − 1/α)} (w/α)^{−2h_L},
where α ≡ √(1 − 24h_H/c), w ≡ 1 − (1 − z)^α, and z = 1 − e^{−t}. Rescaling τ ≡ iπT_H t, so that we measure distances in units of 1/T_H, and then taking T_H → ∞, we see that the full structure of the vacuum block is preserved.
Here we show that, in the large temperature limit, the first correction in a 1/c expansion of the heavy-light vacuum block takes the closed form given in equation (1.4). An important point is that the methods we use in this paper can obtain terms that are not visible at any order in the "semi-classical" part of the conformal blocks. This semi-classical part is defined as the piece of log V that scales linearly with c when the ratios δ_i ≡ h_i/c of the external dimensions to c are all held fixed. After taking the logarithm of V, the O(h_L²/c) correction term in (1.4) can be seen to survive in this limit, but the O(h_L/c) term does not, and thus goes not only beyond leading order in δ_L but beyond the semi-classical limit itself.
After this work was substantially completed, the paper [30] appeared, which uses a different method to compute an integral expression for the order-h_L²/c (semi-classical) result.
Review
Conformal blocks in 2d CFTs are contributions to four-point correlation functions from irreducible representations of the full Virasoro algebra, and as such they resum contributions from all powers of the stress tensor. These contributions are dual to those of all multi-graviton exchanges in AdS, and thus automatically encode an enormous amount of information about gravity in AdS₃. To distinguish these conformal blocks from the simpler expressions that contain irreducible representations of the global subgroup SL(2, C), we refer to the former as Virasoro conformal blocks and to the latter as global conformal blocks. The explicit form of global conformal blocks in 2d has been known for some time and is just a hypergeometric function [31]; this is in contrast with Virasoro blocks, where, despite various systematic expansions [31-37], no closed-form expression is known. In [8, 28, 30, 38-42], methods have been developed for computing the Virasoro conformal blocks in a "heavy-light" limit, where the central charge as well as the conformal weights of two "heavy" external operators are taken to be large, while the conformal weights of two "light" external operators are held fixed. The most efficient technique [38] works by using the conformal anomaly to absorb the leading-order contribution of the stress tensor in this limit into a deformation of the metric.
To be more precise, recall that the Laurent coefficients of the stress tensor depend on the coordinates being used:

L_n^{(z)} ≡ ∮ (dz/(2πi)) z^{n+1} T(z).   (2.1)

The usual Virasoro generators L_n ≡ L_n^{(z)} are the Laurent coefficients in the flat coordinate z, where the CFT lives in the metric ds² = dz dz̄. The subset of L_n with n ≤ −1 are raising operators which, when acting on a primary state, provide a natural basis for all states in a conformal block. So, one can work out the conformal blocks for a four-point function ⟨φ_H(∞)φ_H(1)φ_L(z)φ_L(0)⟩ by expanding the state created by φ_L(z)φ_L(0)|0⟩ in this natural basis,

φ_L(z)φ_L(0)|0⟩ = Σ_{{m_i,k_i}} c_{{m_i,k_i}}(z) L_{−m_1}^{k_1} ⋯ L_{−m_n}^{k_n} |h⟩,   (2.2)

where |h⟩ is the primary state of the conformal block and the c_{{m_i,k_i}} are coefficients that are fixed by conformal symmetry. Recall that primary states are defined as those annihilated by the lowering operators L_n with n ≥ 1. One way to compute the Virasoro block is to construct a projector

P_h = Σ L_{−m_1}^{k_1} ⋯ |h⟩ (N^{−1})_{{m_i,k_i},{m'_j,k'_j}} ⟨h| ⋯ L_{m'_1}^{k'_1},   (2.3)

where N is the Gram matrix of inner products of the basis states. Acting with P_h to make P_h φ_L(z)φ_L(0)|0⟩, one automatically obtains the sum over the basis in (2.2), with coefficients given by evaluating overlaps of the form ⟨h| L_{m_n}^{k_n} ⋯ L_{m_1}^{k_1} φ_L(z)φ_L(0)|0⟩. The conformal block itself is just given by ⟨φ_H(∞)φ_H(1) P_h φ_L(z)φ_L(0)⟩, the four-point correlator projected onto the irreducible representation of Virasoro built from the primary state |h⟩.
However, in the heavy-light limit, this is not a very efficient basis to use. Although the normalization factors N_{{m_i,k_i},{m'_i,k'_i}} grow with c for most contributions and thus produce a large suppression, these can be compensated in the Virasoro block by factors of the heavy operator dimension coming from the numerator ⟨φ_H(∞)φ_H(1) L_{−m_1}^{k_1} ⋯ L_{−m_n}^{k_n} |h⟩. Fortunately, there exists another natural basis that avoids this difficulty. It is easy to see that any other set of coordinates x which begins linearly in the Euclidean coordinate z at small z will again have the property that the L_n^{(x)} with n ≤ −1 act as raising operators, and thus also provides a natural basis. In [38] it was noted that the choice of coordinates w = 1 − (1 − z)^α leads to remarkable simplifications in the basis generated by L_{−n} ≡ L_{−n}^{(w)}; in particular, at leading order in 1/c, the only basis elements that contribute are those of the form L_{−1}^n |h⟩. The reason is that when one forms the projector P_{h,w} in this basis, there is no longer any enhancement from the conformal weight of the heavy operator. The simplest way to see this is to note that, due to the conformal anomaly, the overlap of the heavy operators with a single L_{−n}^{(w)} mode does not grow with h_H, and therefore factors of h_H cannot compensate for the suppression by factors of c from the norms ⟨h| L_n L_{−n} |h⟩ = 2nh + (c/12) n(n² − 1). To go to subleading orders in 1/c, we have to include some of these suppressed terms. Clearly, we have to include terms where the suppression from the norm involves only one factor of c, but there are also some contributions that must be included where the norm produces two factors of c. The reason is that in the sum over modes, factors of the form ⟨φ_H(∞)φ_H(1) L_{−n}L_{−m} |h⟩ with two L's can produce a factor of c upstairs. This is again easiest to understand by looking at correlators with the stress tensor, where this positive factor of c arises in the limit when two T's are brought together. In general, a correlator with 2k insertions of T can have at most c^k upstairs from such TT OPE singularities, and there will be a suppression by c^{−2k} coming from the norm of the physical T modes. Thus, to compute to order 1/c^k we will have to consider up to 2k factors of the L_{−n}'s.
Computation
The projector P_{h,w} for the L_{−n}^{(w)} modes is similar to the original Euclidean-basis projector P_h. Inside a four-point function, it takes the same form as (2.3), with the Euclidean generators replaced by the L_{−n}^{(w)}.
As shown in [38], this correctly acts as a projector onto the L_{−n}^{(w)} modes when the overlap factor with the light operators is just given by the analogous Euclidean overlap factor after a conformal transformation on the φ_L's; the inner-product factors are unchanged from the Euclidean basis. The only piece that changes substantially is the overlap with the heavy operators, ⟨φ_H(∞)φ_H(1) L_{−n_1}^{(w)} ⋯ L_{−n_k}^{(w)} |h⟩. Our strategy for computing these will be to compute the corresponding ⟨φ_H(∞)φ_H(1) T(w_1) ⋯ T(w_n)⟩ correlators and read off the Laurent coefficients. In the following, we will focus on the vacuum block with h_{H_1} = h_{H_2}, h_{L_1} = h_{L_2} for simplicity, and relegate the calculation of the general case to appendix A. It will be convenient to choose the insertions of the heavy operators to be at 0 and ∞ rather than at 1 and ∞; this corresponds to z → 1 − z and w → 1 − w compared to above. Correlators can be computed in w most easily by using the OPE of T(w) with the heavy operators and with itself; the notation "∼" here means "equal up to regular terms." Since w²T(w) is holomorphic in z(w), these OPEs determine the singularities and therefore the complete functional dependence of T correlators in terms of correlators without T insertions. Since the transformation from w to z is regular except at z = 0, ∞, the last OPE above, T(w_1)T(w_2), is just a rewriting of the standard TT OPE. Applying the OPE to one or two insertions of T(w), we find the expressions in (2.10). Expanding the ⟨φ_Hφ_H TT⟩ correlator at w_1 ∼ w_2, one can see that there is only a fourth-order pole at w_1 ∼ w_2 and the higher-order poles cancel, as is enforced by the TT OPE. To compute the 1/c correction to the leading-order heavy-light Virasoro blocks, the only modes we need to sum are single- and double-L modes. Calculating the overlap factors with the light operators and the inner-product factors that enter is a straightforward application of the Virasoro algebra. It will be convenient to use a basis of double-L modes that are symmetric in the indices, i.e. of the form L_{(m,n)} ≡ (L_m L_n + L_n L_m)/2. One finds the inner products by direct computation; inverting the resulting Gram matrix and expanding to O(1/c²), these factors can be substituted into the sum that defines the projector. We can take advantage of the fact that ⟨φ_Hφ_H T(w_1)T(w_2)⟩ is a generating function for the ⟨φ_Hφ_H L_{−n}L_{−m}⟩ in order to write these terms as contour integrals, as in (2.14).
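As a small consistency check of the c-counting described above, the level-2 Gram matrix in the Euclidean basis can be inverted symbolically. The matrix elements below follow from the Virasoro algebra [L_m, L_n] = (m − n)L_{m+n} + (c/12)m(m² − 1)δ_{m+n,0}; the code is an illustrative sketch, not the computation performed in the paper.

```python
import sympy as sp

h, c = sp.symbols('h c', positive=True)

# Gram matrix of the level-2 states {L_{-1}^2 |h>, L_{-2}|h>}:
#   <h|L_1^2 L_{-1}^2|h> = 8h^2 + 4h,   <h|L_1^2 L_{-2}|h> = 6h,
#   <h|L_2 L_{-2}|h>     = 4h + c/2.
gram = sp.Matrix([[8*h**2 + 4*h, 6*h],
                  [6*h,          4*h + sp.Rational(1, 2)*c]])

# The inverse Gram matrix enters the projector; expanding at large c
# exhibits the 1/c suppression of the L_{-2} (stress-tensor) direction.
inv = sp.simplify(gram.inv())
print(sp.series(inv[1, 1], c, sp.oo, 2))   # leading term ~ 2/c
```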
The sum on m in G(w_1, w_2) can be done in closed form, and we get a combination of powers, logs, and hypergeometric functions. The integration contour in (2.14) must have |w| < |w_1| < 1 and |w| < |w_2| < 1, since the sum over powers of w_1, w_2 in G converges when |w| < |w_1|, |w_2|, and the sum over powers of w_1, w_2 in ⟨φφTT⟩ converges when 1 > |w_1|, |w_2|. Starting with the contour integral over w_2, we can shrink it down as far as possible. However, the sum over m produces branch cuts that prevent one from shrinking the contour all the way down to the origin. These branch cuts in w_2 lie along the real axis between 0 and w; the discontinuities across this branch cut can be read off from the coefficients of the logarithms in G(w_1, w_2), together with the known expressions for the discontinuities of the hypergeometric functions. We therefore reduce the w_2 integral to an integral over this cut.
The former would correspond to a loop effect in AdS₃, while the latter captures effects from classical gravitational backreaction of the light probe object.⁴ Finally, the remaining sum on n converges in the region |w_1| > |w_2|, and once it is performed we can shrink the w_1 contour onto the branch cut from 0 to w. However, note that after we do this, the branch cut from log(1 − w_2/w_1) is crossed when w_1 < w_2, but not when w_1 > w_2. One also crosses a pole at w_1 ∼ w_2 in ⟨φ_Hφ_H TT⟩. However, as explained below equation (2.10), the only such singularity is ⟨φ_Hφ_H TT⟩ ∼ (c/2)/w_{12}⁴. This does not contribute to any ⟨φ_Hφ_H L_{−n}L_{−m}⟩ overlap term with n, m ≥ 2, since in a small w_1, w_2 expansion it does not have any terms with non-negative powers of both w_1 and w_2, so we can just subtract it out. Taking this into account, we finally obtain the integral representation (2.20). The primes on the correlators indicate that we are to subtract out their ∼ (c/2)/w_{12}⁴ singularities.

⁴ Note that there is no O(h_L⁰) piece. In fact, this is true at all orders in 1/c, since such a term would have to survive in the limit that h_L = 0. But in that case, φ_L would have to be the identity operator, so the vacuum "block" would be the ⟨φ_H(∞)φ_H(1)⟩ two-point function, which is just a constant normalized to 1.
The function V_{h_L} contributes to log V only at O(c⁰) in this limit and therefore goes beyond the semi-classical part of the block.
We were able to evaluate both the semi-classical and quantum 1/c corrections, which are written in closed form in equation (1.4). In what follows we will examine some interesting limits of the general result.
Small h H limit
The main reason the integrals in (2.20) are difficult to evaluate in closed form is that the correlator, written as a function of w_1, w_2, contains non-integer powers of w_1, w_2 arising from z_i = 1 − (1 − w_i)^{1/α}. In the limit that h_H/c is small, we can expand the correlator ⟨φ_Hφ_H TT⟩ around α = 1, and at O(α − 1) these become integer powers and logarithms. The resulting integrals in (2.20) can then be easily evaluated, and at leading order in O(α − 1) there is no difference between using w versus z in the resulting expression. We have checked these expressions against a direct small-z expansion up to O(z⁸) using the methods of [31].
Large T limit
It is more interesting to consider limits in which α = 2πiT becomes imaginary, since that is the regime where the heavy state develops a horizon in AdS and a temperature. The limit that is most likely to be generic is that where T is taken to ∞. In particular, as mentioned in the introduction, in this limit one can rescale distances as x → x/T to obtain the infinite-radius limit of the circle. While the two-point function on the circle at finite radius and finite temperature is equivalent to a two-point function on the torus, and is thus not a universal quantity, the two-point function on the plane at finite temperature is the universal function (1.1), independent of all CFT data except for the dimension h_L. Fortunately, T → ∞ is also a limit where the integrand of (2.20) simplifies significantly, as shown in (2.23).
Substituting this into (2.20), we obtain the result (2.24), where Ei and li are the exponential-integral and logarithmic-integral functions, respectively. Since the periodicity in Euclidean time is expected to be 1/T, in the infinite-temperature limit we want to scale t to zero with tT fixed. The variable w depends on t through w = 1 − e^{2πiTt}, so whether or not the block is periodic in tT is a question of its monodromy as w is taken around 1 in the complex plane. One can start by looking at the behavior of (2.24) around w ∼ 1, where terms logarithmic in 1 − w appear. The presence of these logarithms leads to non-trivial monodromies around w = 1, and as a result the vacuum block on its own is not periodic in time.⁵
Dependence on T
Next, we want to consider how the 1/c correction varies as a function of temperature. Note that the first several terms in the small-w expansion at large α and at small α are remarkably similar. The fact that both expansions begin as 1 + 2w follows from global conformal symmetry, but the similarity of the subsequent terms is non-trivial. It reflects the fact that each additional 'graviton' makes a suppressed contribution, so that both functions are well approximated at small w by the lowest-dimension 2-graviton global block w⁴ ₂F₁(4, 4, 8, w). As discussed in the next section, we believe that the similarity at α = 1 and α = i∞ is a consequence of the fact that the BTZ solution is simply an orbifold of AdS₃, although it would be interesting to see this explicitly.

⁵ Since 1 − w = e^{2πiTt} has unit norm, it is necessary to check the monodromy not just in a small 1 − w expansion. This is straightforward to do using (2.24), and one does indeed find a non-trivial monodromy as Tt → Tt + 1.
In figure 1 we plot the w-dependence for various values of α to show this agreement explicitly. As one can see there, it is only near α ∼ 0 (which is the minimum threshold for black holes in AdS 3 ) that the w-dependence differs significantly from either the α = 1 or α = i∞ extreme.
Expectations from thermodynamics and AdS/CFT
In the last section we computed perturbative 1/c corrections to the Virasoro conformal blocks in the heavy-light limit. Unlike the leading order vacuum block, these corrections appear to deviate from expectations from thermodynamics, or equivalently, from black hole physics in AdS 3 , as they have non-trivial monodromies in Euclidean time. In what follows we will explain this in more detail, and then show that our results do not necessarily differ from expectations from AdS/CFT. The main point is that individual conformal blocks generically have unphysical monodromies that can cancel when they are summed to compute full CFT correlators, and that these monodromies have a simple origin in AdS.
Periodicity in Euclidean time and pure state thermodynamics
Figure 1. Left, top: the small-z behavior of the 1/c correction for various values of α; normalization by (1 − α)(α + 1)(11α² + 1) scales them to agree at z ∼ 0. Right, top: same as the left, but for the (…). The endpoints α = 1 and α = i∞ are very close, but the difference becomes more significant near α ∼ 0. Left and right, bottom: same as the top, but as a function of z for real z.

Let us summarize the well-known features of field theory correlation functions in the canonical ensemble, to facilitate comparison with the pure state correlation functions and associated conformal blocks that we have studied. The thermal 2-pt function is

F₁₂(t_L, x) = (1/Z) Tr[ e^{−βH} O₁(t_L, x) O₂(0) ],
where we emphasize that t_L is a Lorentzian time coordinate. Inserting a complete set of states shows that

F₁₂(t_L, x) = F₂₁(t_L + iβ, x),

which leads to the KMS condition stating that the correlator is periodic in imaginary time, up to an exchange of the order of the operators. In relativistic QFTs, the two operators commute at space-like separations |t_L| < |x|, which means that F₁₂ and F₂₁ must be analytic continuations of each other. From the single Euclidean correlator F(t_E, x) we can obtain either F₁₂ or F₂₁ by approaching the lightcone branch cuts of F at t_L² = x² from different sides. This is the analytic structure relevant for the cases that we will be studying.
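For reference, the KMS relation quoted above follows from cyclicity of the trace in two lines, with the convention O(t_L) = e^{iHt_L} O e^{−iHt_L} and Z = Tr e^{−βH}:

```latex
F_{12}(t_L)
 = \frac{1}{Z}\,\mathrm{Tr}\!\left[e^{-\beta H}\,\mathcal{O}_1(t_L)\,\mathcal{O}_2(0)\right]
 = \frac{1}{Z}\,\mathrm{Tr}\!\left[\mathcal{O}_2(0)\,e^{-\beta H}\,\mathcal{O}_1(t_L)\right]
 = \frac{1}{Z}\,\mathrm{Tr}\!\left[e^{-\beta H}\,\mathcal{O}_2(0)\,\mathcal{O}_1(t_L + i\beta)\right]
 = F_{21}(t_L + i\beta),
% using cyclicity of the trace and
% O(t + i*beta) = e^{-beta H} O(t) e^{beta H}.
```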
In recent work [8, 38, 39] we have been comparing thermal 2-pt correlators with the 4-pt correlator in a heavy background,

⟨O_H(∞) O_L(1) O_L(z, z̄) O_H(0)⟩ ≈ ⟨O_L(1) O_L(z, z̄)⟩_{T_H},   (3.4)

where z = 1 − e^{−t+iφ}. In CFT₂ the thermal 2-pt correlator of Virasoro primary operators is uniquely fixed via a conformal mapping from the plane to the cylinder. The thermal correlator agrees precisely with the large c heavy-light Virasoro vacuum conformal block, where 2πT = √(24h_H/c − 1) is the temperature. The limit of large central charge with fixed h_H/c can be interpreted as a high-energy limit in a theory with many degrees of freedom. Thus we expect an identity such as equation (3.4), because in the thermodynamic limit, a pure state drawn from the canonical (or micro-canonical) ensemble should be very difficult to distinguish from the true thermal density matrix. In AdS/CFT, this is the statement that black holes and very high energy microstates should be nearly identical. In fact, in AdS₃ there are no approximately stable orbits around black holes, so these states are even more 'inescapable' than in higher dimensions. We expect that, order by order in the 1/c expansion, heavy-light correlators will appear thermal, and that only non-perturbatively small effects may violate the approximate KMS condition in heavy-light correlators.
We pause to note a subtlety concerning the identification in equation (3.4): we should really be comparing the full heavy-light 4-pt correlator with ⟨O_L O_L⟩_T on the torus, since both functions must be periodic in the angular φ coordinate under φ → φ + 2π. But the 2-pt function on the torus is not fixed by conformal invariance; this corresponds to the fact that Virasoro blocks other than the vacuum will contribute to the complete heavy-light 4-pt function. The vacuum does make an important universal contribution, but, for example, from an AdS₃ description there would also be double-trace O_L ∂ⁿ O_L contributions that sum up to restore the periodicity in φ at any t. One can avoid these complications by studying the lightcone OPE limit [9, 10], or by taking the limit of T → ∞ with Tt fixed, so that the φ direction is effectively non-compact when distances are measured in units of 1/T. In that large-temperature limit and at large c, the identification of equation (3.4) becomes precise.
In section 2 we computed the 1/c corrections to the heavy-light Virasoro conformal blocks and found deviations from the thermal result that could not be interpreted as a perturbative renormalization of the temperature. In the next sections we will discuss how conformal blocks can be computed from AdS in order to explain why our thermality-violating 1/c corrections should not necessarily be interpreted as a violation of the Euclidean-time periodicity seen in equation (3.4). To be precise, we need to distinguish between two different notions of thermality. The first, which is specific to 2d CFTs, is that at infinite T (or equivalently, through rescaling, in a CFT in non-compact space), the two-point function should be exactly (3.4). The second is that the two-point function should be periodic in
Euclidean time. Knowledge of the vacuum Virasoro block is sufficient to see that the first of these is violated, assuming even a mild O(1) gap in the dimensions of operators. The reason is that (3.4) makes a definite prediction for the coefficients of OPE singularities in the four-point correlator, and low-order singularities can receive contributions only from low-dimension operators. Thus, a small gap is enough to imply that the first few such singularities receive contributions from only the vacuum block, and therefore that the OPE does not match the prediction of (3.4). This is in contrast with the second, more general, criterion for thermality, which requires knowledge of the correlator at finite values of t and therefore depends on the full operator content of the theory; this will be the main focus of the following sections. However, our results do show that in the lightcone OPE limit, where the Virasoro vacuum block dominates (assuming no additional conserved currents), the form of the 1/c corrections implies that the correlator cannot be separately periodic in t ± iφ.
As a final comment, note that we can reproduce the exact canonical ensemble by summing over individual pure microstates. In this relation we must let the sum range over both Virasoro primaries and descendants, whereas in equation (3.4) we have been focusing on Virasoro primaries O_H. In a CFT_2 where ⟨O_L(1) O_L(z)⟩_T is entirely fixed by conformal invariance, this relation provides a constraint on CFT data closely related to modular invariance.
Monodromies of global conformal blocks from AdS
Local operators in QFTs commute at spacelike separation, so Euclidean CFT correlators are single-valued analytic functions of the Euclidean x_i, with singularities occurring only in the OPE limits where x_i and x_j coincide. This property also holds when CFT correlators are obtained from a quantum field theory in AdS via the AdS/CFT dictionary and the bulk Feynman diagram expansion. However, conformal blocks do not have this property. For example, consider a conformal block in the channel HH → LL, which can be computed as a sum over intermediate states, where the states |α⟩ all lie in the irreducible representation of a primary state/operator O_{Δ,ℓ} with dimension Δ and total angular momentum ℓ. Equivalently, this can be computed by expanding in the OPE limit z, z̄ → 0.
Figure 3. This figure depicts a 'geodesic Witten diagram' that can be used to compute a conformal block from AdS [27]. The lines connecting the two light operators to each other and the two heavy operators to each other are both geodesics, while the wavy line designates a propagator whose endpoints have been fixed to these geodesics. The only integrals are over the positions of the bulk-to-bulk propagator endpoints along the geodesics.

To be explicit, in the case of global 2d conformal blocks we can write the block in terms of hypergeometric functions, where Δ = h + h̄ and ℓ = |h − h̄|. The hypergeometric functions have logarithmic branch cuts around z, z̄ = 1 with non-trivial monodromies.
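As a quick numerical illustration of this monodromy (our sketch, not from the paper), one can evaluate the standard chiral SL(2) block k_h(z) = z^h ₂F₁(h, h, 2h; z), the form these blocks take for pairwise-equal external weights, on either side of the branch cut running from z = 1:

```python
# Sketch: the chiral global block k_h(z) = z^h * 2F1(h, h, 2h; z) has a
# logarithmic branch cut starting at z = 1, so approaching the cut from
# opposite sides gives different values -- the non-trivial monodromy
# discussed in the text. (Illustrative code, not the authors'.)
import mpmath as mp

def chiral_block(h, z):
    """Global SL(2) conformal block, chiral half, equal external weights."""
    return z**h * mp.hyp2f1(h, h, 2 * h, z)

h = mp.mpf(2)
eps = mp.mpf("1e-6")
above = chiral_block(h, mp.mpc(1.5, +eps))  # just above the cut (z > 1)
below = chiral_block(h, mp.mpc(1.5, -eps))  # just below the cut

print("above cut:", mp.nstr(above, 8))
print("below cut:", mp.nstr(below, 8))
print("discontinuity:", mp.nstr(above - below, 8))  # nonzero => monodromy
```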
We would now like to explain how these monodromies arise from an AdS calculation. First of all, note that since we are studying conformal blocks, not CFT correlators, we are not asking a question about standard bulk Feynman diagrams. These diagrams must be single-valued in the Euclidean region.
However, as has been shown recently [27], both global and Virasoro conformal blocks [28] can be computed from a certain simplified version of a bulk Feynman diagram, which the authors of [27] refer to as 'geodesic Witten diagrams'. To obtain a geodesic Witten diagram, we begin with a Feynman diagram for a 4-pt CFT correlator with four boundary-to-bulk propagators and a single bulk-to-bulk exchange propagator G BB (X, Y ), as pictured in figure 3. But instead of allowing X and Y to range over AdS, we confine these bulk points to geodesics when computing the diagram. The geodesics always connect the pairs of operators whose OPE limits define the conformal block.
Figure 4. This figure shows what happens when we analytically continue the external points of a geodesic Witten diagram. As z moves around the cylinder, the heavy and light geodesics must cross, and as they do, the propagator connecting them passes through its short-distance singularity. Note that in d > 2 dimensions this crossing is enforced by geometry, not by topology. This is the origin of the non-trivial monodromy of the conformal block. Similar reasoning leads to a monodromy in Euclidean time for non-vacuum heavy-light Virasoro blocks [38].
We can give a simple heuristic explanation of the origin of geodesic Witten diagrams as follows; for more rigorous derivations see [27]. A 4-pt tree-level Witten diagram computation in AdS can always be decomposed [44] (see [27,45] for recent discussions) into one 'single-trace' conformal block and an infinite sum of 'double-trace' conformal blocks, where the former corresponds to the state exchanged in the bulk-to-bulk propagator, and the latter correspond to the external states. In the limit that the external states have very large dimension, the bulk computation will be well-approximated by a geodesic Witten diagram via the geometric optics approximation for the heavy bulk states; in the same limit, the double-trace contributions decouple. Thus in general we expect that the unique 'single-trace' conformal block must correspond to the geodesic Witten diagram.
Given that conformal blocks can be computed as geodesic Witten diagrams, it is easy to discover the AdS origin of their non-trivial monodromies. The analytic continuation of figure 2 can be applied to a geodesic Witten diagram computation, which takes a schematic form pictured in figure 4, in which the bulk variables X(λ) and Y(λ′) run along the two geodesics, parameterized by λ and λ′. The expression in parentheses is the (scalar) bulk-to-bulk propagator, with σ(X, Y) the distance between the two bulk points. Crucially, as 1 − z = e^{−t+iφ} is continued in φ, we necessarily pass through a configuration where the two geodesics cross, which requires that we integrate over the short-distance singularity of the bulk-to-bulk propagator. Note that the pure vacuum conformal block has a trivial monodromy, since the relevant computation would not include a bulk-to-bulk propagator.
Informally, we might say that the geodesic Witten diagram treats the external operators as classical sources in the bulk, which 'remember' their relative orientation. When we compute standard Witten diagrams, the external operators are treated as quantum fields in AdS. The path integral sums agnostically over all their bulk trajectories, destroying any 'memory' of the classical trajectories. Cancellations between the monodromies of 'single-trace' and 'double-trace' operators encode the eradication of this classical memory.
In summary, individual conformal blocks have unphysical monodromies in φ, even though the blocks have been computed from a physical process transpiring in a spacetime that is manifestly periodic under φ → φ + 2π. Next let us consider an analogous question concerning thermal periodicities and Virasoro blocks.
Monodromies of Virasoro conformal blocks and AdS/CFT
Thermal states in CFT_2 are dual to BTZ black holes in AdS_3. As discussed in section 3.1, a simple way to recognize the temperature is from the Euclidean-time periodicity of the 2-pt correlator. This feature can be observed directly in the spinless Euclidean BTZ metric, in which α² ≤ 1, and α is imaginary in the BTZ case. The Euclidean time coordinate must be periodically identified under t ∼ t + 1/T_H to avoid a singularity at the horizon r = |α|, where we note that the temperature is T_H = |α|/(2π). We expect that this periodicity will be inherited by AdS/CFT correlators computed from perturbative Feynman diagrams in the black hole background.
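The explicit line element was lost in extraction; a form consistent with the statements above (our reconstruction, with the AdS radius set to one) is:

```latex
% Spinless Euclidean metric with AdS radius one. For real alpha this is a
% conical deficit; for imaginary alpha = 2*pi*i*T_H it is Euclidean BTZ with
% horizon at r = |alpha|, and smoothness there fixes t_E ~ t_E + 1/T_H.
\begin{equation}
  ds^2 = \left(r^2 + \alpha^2\right) dt_E^2
       + \frac{dr^2}{r^2 + \alpha^2}
       + r^2 \, d\varphi^2 ,
  \qquad T_H = \frac{|\alpha|}{2\pi} .
\end{equation}
```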
Geodesic Witten diagrams in AdS 3 have been used to obtain semi-classical Virasoro conformal blocks [28]. To leading order in the semi-classical limit, we can compute the heavy-light Virasoro blocks in the same way that we obtained global conformal blocks in section 3.2. The difference is that we evaluate the geodesic Witten diagrams in the gravitational background of the heavy operator, instead of in pure AdS.
In the last section we studied monodromies of global conformal blocks under φ → φ + 2π. We are now interested in Euclidean time periodicity, t → t + 1/T_H, for the Virasoro blocks. For the case of non-vacuum blocks, the reasoning from the last section can be copied directly, replacing φ with t. In fact the global AdS_3 metric is identical to the spinless BTZ metric in the high temperature limit, after rescaling r → r/r_+ and exchanging the roles of t and φ. So the monodromies of the non-vacuum heavy-light blocks first obtained in [38] can be understood heuristically from the 'memory' effect of the geodesic Witten diagrams [28].
The semi-classical Virasoro vacuum block computes the exponential of a geodesic length [8,29,46] in a deficit angle or BTZ background, and in both cases it has a periodicity set by α. For real α this is a periodicity in φ associated with the deficit angle, while for imaginary α = 2πiT H it is periodicity in Euclidean time. What remains is to understand the presence of a non-trivial monodromy in the 1/c correction to this vacuum block, as we found in section 2.
The geodesic Witten diagram technology has not been applied in the presence of perturbative 1/c corrections, so it is not entirely clear how to proceed. Even in the case of the large c semi-classical blocks, instead of full bulk propagators (which include a sum over images [47] in order to satisfy the correct boundary value problem), the authors of [28] used pure AdS propagators with a rescaling t, φ → αt, αφ. This led to the correct result, and it might be interpreted as a strategy for eliminating double-trace contributions, but it was not given an a priori derivation.

Figure 5. This figure shows gravitational one-loop diagrams in AdS that could contribute to heavy-light Virasoro blocks at order 1/c. In the small h_H/c limit, we expect that the two diagrams on the left should correspond with the 1/c effects in equation (2.22). More generally, the pair of diagrams on the left should be equivalent to the pair on the right with bulk propagators computed in the background gravitational field of the heavy operator.
We will proceed by discussing the most natural generalization of the geodesic Witten diagrams which leads to a single Virasoro conformal block. Some relevant diagrams are pictured in figure 5. The pair of diagrams on the right clearly have a different structure from those we have considered previously, and in particular, the simple reasoning of figure 4 no longer applies, since there are no explicit propagators connecting the deficit angle/black hole to the light operator geodesic. The third diagram from the left leads to an integral of the schematic form

$$\int d\lambda_1 \, d\lambda_2 \; G_{\partial B}\big(1, Y_1(\lambda_1)\big)\, G_{BB}\big(Y_1(\lambda_1), Y_2(\lambda_2)\big)\, G_{\mathrm{grav}}\big(Y_1(\lambda_1), Y_2(\lambda_2)\big)\, G_{\partial B}\big(Y_2(\lambda_2), z\big) \tag{3.11}$$

where the λ_i parameterize two points Y_i(λ_i) on the light operator geodesic, and the two bulk-to-bulk propagators correspond to the light operator and the gravitational field.
We can think of the bulk-to-bulk propagators in the BTZ background as the result of summing an infinite set of diagrams connecting a free AdS bulk-to-bulk propagator to a succession of graviton propagators. This justifies the expectation of a non-trivial monodromy as we rotate z on the thermal circle. We will need the relevant bulk-to-bulk propagator in a deficit angle or BTZ black hole background. Since these backgrounds are orbifolds of pure AdS_3, the propagators can be determined through the method of images. This sum over images produces a new logarithmic singularity in the propagators at the location of the deficit angle and at the black hole singularity [49]. Without the sum over images the propagators have only a short-distance singularity.
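Schematically (in our notation, not the paper's), the image sum takes the form:

```latex
% Method-of-images propagator on an orbifold AdS_3/Z (deficit angle or BTZ),
% with gamma the isometry generating the identification. Beyond the n = 0
% short-distance singularity, the sum produces the extra logarithmic
% singularity mentioned in the text.
\begin{equation}
  G^{\mathrm{orb}}_{BB}(X, Y) \;=\; \sum_{n \in \mathbb{Z}}
      G^{\mathrm{AdS}}_{BB}\!\big(X, \gamma^{n} Y\big) .
\end{equation}
```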
Thus we are led to conjecture that the internal propagators in the diagrams on the right of figure 5 should include a sum over images, so that they are sensitive to the deficit angle or black hole singularity. The integration over such propagators could then explain the monodromy of the 1/c correction to the Virasoro vacuum block under analytic continuation in Euclidean time. It would be interesting to explore this question further, and to obtain explicit agreement between our CFT 2 computation and a gravity calculation in AdS 3 .
Given that we are arguing that double-trace operator conformal blocks must be included to see the correct "thermal" properties of the heavy-light correlator, one may wonder why the vacuum correlator at leading order in 1/c did not suffer from non-periodic monodromies. The simplest way to understand this is that there is a limit where the vacuum block actually is the full correlator: in the limit of infinite T and infinite c, the contribution from double-trace operators is indeed negligible, leaving only the vacuum block to fulfill the thermal properties of the theory.
As a final comment, in section 2.5 we pointed out that the functional form of the 1/c corrections appears very similar at large temperature and at small h H , two regimes that are very different physically. We believe that from the bulk point of view, this is due to the fact that BTZ backgrounds are locally pure AdS. Diagrams such as those in figure 5 will produce very similar corrections for all values of α at small z, where the light operator geodesics in figure 5 do not extend very far into the bulk.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
"Physics"
] |
Temperature and wind impacts on sag and tension of AAAC overhead transmission line
Article history: Received 14 September 2017, Received in revised form 20 November 2017, Accepted 5 December 2017. The system that transports electricity from the point of generation to the consumer premises is termed a power system. A power system comprises three parts: generation, transmission and distribution. Among these, inefficiency in the transmission part contributes most of the losses. These losses depend on the resistance, inductance and capacitance, which are termed the constants of a transmission line. The performance of transmission lines mostly depends on these constants: if the height of a transmission line above ground is small, its capacitance effect will be larger and its performance will be degraded; conversely, if the line is strung high, its capacitance will be low but its tension will be high. Therefore, transmission lines are strung in a curve-like shape, or catenary, and the resulting dip is termed sag. Sag must be provided in a transmission line to limit tension, and both sag and tension should be kept within safe limits. This research work presents a simulation setup to calculate the sag and tension of an AAAC (All Aluminum Alloy Conductor) overhead transmission line for multiple spans under different weather conditions. Four different cases of temperature and wind are examined in detail for equal-level spans. The simulations were carried out in ETAP software, and the results showed that as the temperature rises the conductor elongates, which increases the sag, and that as the sag increases the tension of the AAAC conductor decreases.
Introduction
The system that transports electricity from the point of generation to the end user is termed a transmission system. Transmission lines and substations play a central role in the transmission system (Quintana et al., 2016). The lines transporting the electricity embody the biggest part of the power system network. Therefore, appropriate modeling of these lines is one of the key issues that needs to be taken care of while designing and erecting a transmission system. The subsequent performance of the transmission system depends on the type of transmission-line model used (Taleb et al., 2006).
Transmission lines are never strung as straight lines between supporting towers; instead they take a curved shape, called a catenary, which limits the tension in the transmission system (Oluwajobi et al., 2012). However, there is an inverse relationship between sag and tension. If the tension in a transmission line is high, the sag will be small, but there is a risk of the line breaking; conversely, if the sag is too large, more conductor length is used and the cost increases. The amount of sag also depends on the distance between two towers: the greater the distance between connecting towers, the greater the sag (Seppa, 1993).
The sag-tension calculations in the transmission system aim at fixing appropriate limits for sag and tension so as to maintain an uninterrupted power supply to consumers. The sag-tension calculation allows the conductor temperature as well as ice and wind loads to be accounted for simultaneously (Mehta and Mehta, 2005). The tension is kept within the limits set by the strength of the towers and the conductor, while the clearance associated with the sag depends on ground and line crossings; if the crossing distance is less than the required clearance, line faults may occur (Oluwajobi et al., 2012). The number of insulator strings and whether they are installed in a "V" or "I" configuration are also important for the sag-tension calculation: the insulator string itself behaves as a loaded element and therefore adds further sag to that caused by the conductor (Quintana et al., 2016). Bundled conductors must also be considered, since in various cases more than one conductor per phase is used: for extra-high-voltage systems two bundled conductors per phase are used, and substations that collect power from generating stations may use three conductors per phase. Thus, all the fundamentals of the sag-tension estimation process must be considered to ensure that the outcome matches actual conditions (Quintana et al., 2016; CIGRE, 2007).
Calculation of sag
While designing overhead lines, care must be taken to adjust the sag so that the tension in the conductors remains within a safe range. In practice, tension is governed by the conductor weight, temperature variation, ice load on the wires and the effect of wind. According to normal practice, the conductor tension is kept below 50% of its ultimate tensile strength, i.e., the minimum factor of safety for conductor tension must be two. The calculation of the sag and tension of a conductor will now be carried out for supports at equal levels (Mehta and Mehta, 2005).
When supports are at equal levels
Consider a conductor between two supports A and B at equal levels, with O as the lowest point, as shown in Fig. 2. It can be shown that the lowest point lies at mid-span.
Consider a point P on the conductor. Taking the lowest point O as the origin, let the co-ordinates of point P be x and y. Assuming that the curvature is so small that the curved length equals its horizontal projection (i.e., OP = x), the two forces acting on the portion OP of the conductor are (Kamboj and Dahiya, 2014): a) the weight wx of the conductor acting at a distance x/2 from O, and b) the tension T acting at O. Equating moments of these forces about P gives y = wx²/(2T), and evaluating at mid-span yields the sag of Eq. 4 between two level supports, which involves the weight, span length and tension of every span. In practice, these quantities must be determined mechanically and mathematically for every pair of supports, which motivates the need for a low-cost simulation setup to find sag and tension; a sketch of these relations is given below.
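The following is a minimal sketch (ours, not the ETAP implementation) of the equal-level-span relations just derived, namely the parabolic profile y = wx²/(2T) and the mid-span sag S = wL²/(8T), which is the usual form of Eq. 4; the weight and tension values in the example are purely illustrative assumptions:

```python
# Parabolic (small-curvature) sag relations for supports at equal levels.
# Units only need to be consistent: e.g. w in kgf/m and T in kgf give metres.

def conductor_height(x, w, T):
    """Height y of the conductor at horizontal distance x from the lowest
    point O, from the moment balance y = w*x^2 / (2*T)."""
    return w * x**2 / (2.0 * T)

def midspan_sag(span, w, T):
    """Mid-span sag S = w*L^2 / (8*T) for supports at equal levels
    (the usual form of Eq. 4)."""
    return w * span**2 / (8.0 * T)

if __name__ == "__main__":
    w = 0.95    # assumed conductor weight per unit length (illustrative)
    T = 1500.0  # assumed horizontal tension (illustrative)
    for span in (50, 100, 150, 200):
        print(f"span {span:3d} m -> sag {midspan_sag(span, w, T):.2f} m")
```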
Methodology
For a result-oriented analysis, the sag and tension of an AAAC transmission line are analyzed in this research for different equal span lengths under different operating conditions. The tool used for the calculation is ETAP, whose module provides analytical capability for transmission and distribution line sag and tension calculation. It is an easily available, low-cost simulation package for calculating the appropriate sag and tension in order to ensure proper operating conditions on overhead transmission lines. Four cases (1, 2, 3 and 4) are presented in the tables. In Case 1, the sag-tension of AAAC under the minimum operating temperature with no wind effect is analyzed, because in winter the temperature is at its minimum.
In Case 2, the temperature is increased from the minimum to the normal value, and the sag and tension of AAAC are calculated. In Case 3, the temperature is maximum, since in summer the temperature reaches its peak value, and the sag and tension are calculated under this maximum temperature. In Case 4, the worst condition, i.e., maximum temperature with maximum wind effect, is considered and its effect on sag and tension is checked.
These calculations are for level spans only, i.e., when both towers are at the same height. The tower height is 16 m and the spacing between conductors is 1.5 m. The conductors used in this research are AAAC (All Aluminum Alloy Conductor) because: 1. These conductors are high-strength, made of an aluminum-magnesium-silicon alloy, and have a better strength-to-weight ratio, enabling more efficient electrical characteristics. They have excellent sag-tension characteristics and superior corrosion resistance compared with other conductors. 2. Compared with traditional ACSR, AAAC is lighter in weight, has lower electrical losses, and offers comparable strength and current-carrying capacity.
Case 1
In the first case, the sag-tension under the minimum operating temperature, i.e., 5 °C, is analyzed, because in winter the line contracts and as a result the sag is low. In Table 1, four different span lengths with equal-level supports at the minimum operating temperature of 5 °C are analyzed using AAAC. When the span length is 50 m, the sag is 0.17 and the tension is 1747. As the span increases from 50 m to 100 m, the sag becomes 0.66 and the tension 1653. Similarly, for 150 m and 200 m, the sag and tension are 1.49 and 1504, and 2.65 and 1259, respectively. From Fig. 3 it can be seen that as the span length increases the sag also increases; this is because the sag grows with the square of the span length and is inversely proportional to the tension.
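As a rough consistency check (our sketch, under the assumptions that the reported sag is in metres, that ETAP's output approximately follows the parabolic relation, and without knowing the units of the reported tension), one can back out the implied conductor weight per unit length w = 8ST/L² from the Case 1 numbers; the implied w drifts somewhat across spans, which is consistent with ETAP using a fuller model (catenary shape, elastic and thermal elongation) than the simple parabola:

```python
# Back-computing the implied weight per unit length from the Case 1 values
# of Table 1, assuming the parabolic relation S = w*L^2 / (8*T). The units
# of the reported tension are not stated in the text, so w is only relative.
cases = [(50, 0.17, 1747), (100, 0.66, 1653), (150, 1.49, 1504), (200, 2.65, 1259)]
for span, sag, tension in cases:
    w = 8 * sag * tension / span**2
    print(f"span {span:3d} m: implied w ~ {w:.2f} (tension units per metre)")
```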
Case 2
In the second case, the normal operating temperature is considered for calculating the sag and tension of the transmission line. In Table 2, four different span lengths under the normal operating condition are considered using AAAC.
When the span length is 50 m, the sag is 0.22 and the tension is 1336. For 100 m, the sag is 0.87 and the tension is 1183. Similarly, for the 150 m and 200 m spans, the sag is 1.95 and 3.47, while the tension is 989 and 747, respectively. In this case, shown in Fig. 4, the rise in temperature makes the sag larger than at the minimum temperature, while the tension is lower.
Case 3
In the third case, the sag and tension under the maximum operating temperature are considered: with increasing temperature the conductor expands and lengthens, which further increases the sag.
From Table 3, four different span lengths are considered for the maximum operating condition, because the temperature does not remain constant; it varies with time, and each temperature has its own effect on the sag. In this case the temperature is maximum, so when the span length is 50 m the sag and tension are 0.27 and 1074. As the span length increases to 100 m, 150 m and 200 m, the sag is 1.08, 2.44 and 4.33, while the tension is 922, 747 and 548, respectively. From Fig. 5 it is clear that at the maximum temperature the sag is larger than at the normal temperature: the metallic body of the conductor expands with rising temperature, increasing the conductor length and hence the sag.

Case 4

In the last case, the sag and tension under the worst condition are analyzed, in which the temperature is maximum and, at the same time, there is maximum wind effect.
In Table 4, high wind speed combined with the maximum temperature is considered: the wind load on the conductor increases its apparent weight, resulting in an increase in tension. The temperature is the same as in the previous case, but the wind is added. Therefore, Fig. 6 shows that the sag is the same as in the previous case, due to the same temperature, but the tension is higher due to the wind effect.
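The statement that wind increases the apparent conductor weight is conventionally expressed through the quadrature sum below (a standard relation from sag-tension practice, not quoted from this paper):

```latex
% Effective (apparent) weight per unit length under combined loading:
% w = bare conductor weight, w_i = ice load, w_w = horizontal wind load,
% all per unit length. The wind acts horizontally, hence the quadrature sum;
% p_w is the wind pressure and d the projected conductor diameter.
\begin{equation}
  w_{\mathrm{eff}} = \sqrt{\left(w + w_i\right)^2 + w_w^2},
  \qquad w_w = p_w \, d .
\end{equation}
```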
Conclusion and future work
In this paper, four different cases were considered for the sag-tension estimation of AAAC overhead conductors in transmission lines. Various span lengths were considered under different operating conditions; the tower height was the same throughout, while the operating temperature varied. From the results the following conclusions are drawn: 1. In winter the temperature is minimum, so the AAAC line contracts; as a result the sag is low, which corresponds to a high tension, as confirmed by the results of Case 1. 2. In spring the temperature is normal, so the sag of the AAAC line is higher than in winter and the tension lower than in the previous case. In summer the temperature is maximum, so the conductor expands and lengthens, and the sag increases further; accordingly, our results show maximum sag and minimum tension in this case. 3. In Case 4 the worst condition, i.e., maximum temperature and maximum wind effect at the same time, was considered: the wind increases the apparent weight of the conductor, which increases the tension, while the maximum temperature produces the maximum sag. From the results it is clear that the wind effect causes an increase in tension.
Therefore, for the sag estimation of overhead conductors, the ETAP software is very helpful for predicting the sag-tension behavior of overhead transmission-line conductors efficiently; moreover, it is easily available compared with high-cost commercial software for sag-tension calculation.

From this paper, one can readily obtain the sag-tension values of an AAAC conductor for the different temperature cases without calculating them mathematically.

In future work, the sag-tension estimation of other conductors will also be considered in ETAP.
"Engineering",
"Environmental Science"
] |
A New Methodology for 3D Target Detection in Automotive Radar Applications
Today there is a growing interest in automotive sensor monitoring systems. One of the main challenges is to make them an effective and valuable aid in dangerous situations, improving transportation safety. The main limitation of visual aid systems is that they do not produce accurate results in critical visibility conditions, such as in the presence of rain, fog or smoke. Radar systems can greatly help in overcoming such limitations. In particular, imaging radar is gaining interest in the framework of Driver Assistance Systems (DAS). In this manuscript, a new methodology able to reconstruct the 3D imaged scene and to detect the presence of multiple targets within each line of sight is proposed. The technique is based on the use of Compressive Sensing (CS) theory and produces the estimation of multiple targets for each line of sight, their range distances and their reflectivities. Moreover, a fast approach for 2D focusing based on the FFT algorithm is proposed. After the description of the proposed methodology, different simulated case studies are reported in order to evaluate the performance of the proposed approach.
Introduction
Passenger vehicle sales have become a growing part of the worldwide economy, with an increasing trend over the last years. With that growth and the advancement of automation technology, consumers, governments and society are all demanding better safety and a reduction in the number of deaths and injuries on the roads. Car manufacturers have started implementing Driver Assistance Systems (DAS) in production models as an answer to such demands. Among those, we recall stability control systems, anti-collision systems, Antilock Braking System (ABS), traction control and Electronic Brakeforce Distribution (EBD), seat belts, airbags, shock-absorbing bumpers, anti-intrusion bars, and visual systems (VDAS) [1,2]. Most VDAS employ video cameras. They are often used as car parking systems (front or rear) and lane vision systems [3-5]. Video cameras need outside light sources: the sensor does not work in low visibility or in adverse weather conditions (e.g., fog, rain) and in the presence of smoke. These limitations can be overcome by radar technology [6]. Radar-based systems can detect targets hundreds of meters ahead, being minimally affected by fog or heavy rain, i.e., conditions that greatly limit the driver's field of vision [7]. Radar systems adopt several sensing and processing methods for determining the position and speed of the vehicles ahead [8-10]. Usually car manufacturers are very reluctant to alter the shape of the vehicles to accommodate any sensors, so designers are forced to design systems small enough to be mounted inside the car's front grille. In order to combine small dimensions and versatility, small antennas are required, and consequently signals at high frequencies are adopted. In particular, several proposed systems work at 76-77 GHz, which is a good compromise between compactness and cost. In order to produce a high-resolution radar imaging system for automotive applications, many aspects need to be taken into account, for instance the choice of the most appropriate radar imaging system, the choice of the antenna, the development of a radar and system simulator that helps in evaluating system performance by generating a controllable synthetic environment and, once the image is obtained, the post-processing stage, such as target detection and tracking.
At present, several technological solutions for automotive radar imaging systems have been developed. The systems synthesize, analogically or digitally [6], a beam scanning the area of interest to identify targets. The analog synthesis and scanning of the beam can be obtained in several ways, such as phased arrays, travelling wave antennas, and lens antennas. The best performing architecture in terms of resolution and scanning range is a phased array. This solution is also the most expensive, so it is necessary to find a compromise between price and performance. One possibility is to adequately process the signals of several antennas to synthesize a larger array [7]. Another possibility is to use a modular architecture by splitting the array into identical sub-arrays, the feed of each of which is individually controlled.
In the post-processing stage, the radars currently used in the automotive industry are based on the Ultra-wide Band structure [6,10]. Most of the algorithms, developed with the aim of cleaning/reducing noise in the data and classifying targets, are based on the statistical analysis of backscattered radar returns, followed by a statistical classification which allows identifying the category to which the observed target belongs. Both for detection and classification, statistical models are fitted to the data in order to assess their suitability and either confirm or reject the membership of a target in the proposed class.
In this manuscript, we focus on the signal processing step. In particular, we propose a novel signal processing algorithm, based on Compressive Sensing (CS) theory [11,12], for the detection and 3D imaging of targets within an observed volume, even in the case of scatterers sharing the same line of sight (LoS). This goal is achieved by detecting the presence of targets in the observed scene, estimating their positions and inferring their reflectivities. It will be shown that, by exploiting the sparsity of the solution, CS techniques prove particularly effective in solving such detection and estimation problems. The remainder of the paper is divided into four sections: the description of the acquisition model is given in Section 2, the CS-based approach is presented in Section 3, results on simulated data are reported in Section 4, and conclusions are drawn in the final section.
Methodology
Let us consider an antenna ideally located at the front of the car, lying in the (x, y) plane, and consisting of a planar array of K_N × K_M elements, where each element transmits and receives the signal. Let us denote by x_i, i = 1, . . . , K_N, and y_j, j = 1, . . . , K_M, with z = 0, the coordinates of the antenna elements. A schematic view of the antenna is shown in Figure 1.
We assume the transmission of a monochromatic signal S_T at frequency f_0. In the noise-free case and neglecting constants, the signal received by the antenna at the position (x_i, y_j) can be modeled as in [13], where G is the antenna gain, c is the speed of light and R(x_i − x, y_j − y, z) is the distance between each antenna element, with coordinates (x_i, y_j, 0), and a target placed at (x, y, z) with reflectivity γ(x, y, z). The signal S_R coherently collects all the echoes from the illuminated volume V, with proper attenuation and phase. This model assumes that all targets are point scatterers, that there is no multipath effect and that the superposition principle holds. Being a good trade-off between complexity and tractability, such assumptions are widely adopted.

In order to simplify the realization of the system, the planar antenna is synthesized as the combination of two linear arrays, one horizontal and one vertical, as shown in Figure 2. The vertical array contains the K_N transmitting elements (blue dots in Figure 2) and the horizontal array contains the K_M receiving elements (red dots in Figure 2); in this way K_N + K_M elements are used instead of K_N × K_M. The spacing between elements is λ/2. Each transmitting element of the vertical array emits a signal in a different time interval, and each echo is received by all elements of the receiving array on separate channels. In this case, the distance R of Equation (1) can be decomposed as the sum of R_T and R_R, i.e., the distance between the transmitting element and the target and the distance between the target and the receiving element, respectively.

Our aim is to estimate the 3D distribution of scatterers across the imaged volume. In order to speed up the process, only directions, i.e., Lines of Sight (LoS), containing at least one target are selected. To do this, a first processing step implementing a fast 2D focusing algorithm is performed, whose goal is to focus the acquired signal on a vertical plane at a fixed distance z_0. Developing R_T and R_R in Taylor series, truncating at the 2nd-order term, and substituting in Equation (1), we obtain an expression in which the substitution defining the phase term exp(iθ) has been made.

After applying deramping, i.e., correcting the phase term in order to remove the linear component due to the distance, the term exp(iθ) is converted to exp(iφ), obtaining the signal S_dr; the intermediate derivations are reported in the Appendix.
Moving to a multi-frequency system, the dependency of the acquired signal on frequency has to be made explicit, obtaining S_dr(x_i, y_j, f). Assuming a stepped system, a discrete number of frequencies is exploited, so the vector f = [f_0, f_1, f_2, . . . , f_N] containing the N frequencies can be defined.
After discretization, Equation (4) becomes similar to the direct 2D Discrete Fourier Transform expression, with the two exponential functions acting as the transformation kernel. Thus, the Inverse Fast Fourier Transform (IFFT) algorithm is adopted in order to invert Equation (4) and estimate the term γ(x, y, z_0) from the acquired signal, as in Equation (5). Note that the data at each considered frequency f_i produce a 2D image of the reflectivity γ(x, y, z_0). The reflectivity estimated from Equation (5) is exploited in order to detect the presence of targets within the imaged volume and their horizontal (θ) and vertical (ϕ) angles of view.
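A minimal numpy sketch of this first step (our reconstruction, not the authors' code) is given below; the array layout and the threshold rule for selecting occupied lines of sight are our assumptions, for illustration only:

```python
# First processing step: after deramping, Eq. (4) has the form of a 2D DFT
# across the antenna coordinates, so the reflectivity map at the reference
# plane z0 is estimated with a 2D inverse FFT, one map per frequency f_i.
import numpy as np

def focus_2d(S_dr):
    """S_dr: complex array of shape (K_N, K_M, N_freq), i.e. the deramped
    signal per (vertical element, horizontal element, frequency).
    Returns one focused 2D reflectivity image per frequency."""
    images = np.fft.ifft2(S_dr, axes=(0, 1))       # invert the DFT-like kernel
    return np.fft.fftshift(images, axes=(0, 1))    # centre the angular axes

def detect_los(images, k_sigma=5.0):
    """Flag (theta, phi) cells likely to contain at least one target by
    incoherently averaging over frequency and thresholding (the threshold
    rule here is illustrative, not taken from the paper)."""
    power = np.mean(np.abs(images) ** 2, axis=2)
    threshold = power.mean() + k_sigma * power.std()
    return np.argwhere(power > threshold)          # indices of occupied LoS
```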
After this first processing step, a second one is implemented in order to detect the presence of scatterers within the 3D imaged volume and estimate their coordinates. For each identified target (i.e., for each (θ, ϕ) couple of interest), the antenna beam is tilted to its direction by applying a proper phase correction term to the acquired signals. In this step, we move from the Cartesian coordinate system (x, y, z) to the spherical one (ε, θ, ϕ). In other words, while the 2D focusing works on vertical (x, y) planes, the 3D focusing considers a volume that is a cone with its vertex positioned at the center of the antenna. Considering the multi-frequency approach, a single complex value is obtained for each of the N working frequencies. Our aim is, once focused on a LoS, to detect one or multiple targets and estimate their range distances, based on the N acquisitions. The acquisition model can be written as q = Ah, where q is the N × 1 data vector collecting the focused signals at the different frequencies for the direction (θ, ϕ), A is the transformation matrix and h is a vector of the reflectivity at different distances.
In particular, the vector h contains the complex reflectivity values for different range distances ε_κ, uniformly sampled in the interval [ε_min, ε_max], as reported in Figure 3. Few targets are expected to be detected for each line of sight, thus most of the elements of h are expected to be equal to zero. In other words, h can be assumed to be a sparse vector.
Concerning the matrix A, it is defined by discretizing the acquisition model of Equation (1); its generic element is given in Equation (7), where f_i is one of the frequencies within the bandwidth and ε_j is a discretized range distance.
Given the previously reported model, our aim is to estimate the number of non-zero elements of h, i.e., how many targets are present in the selected line of sight; their positions within the vector h, i.e., the range distances of the detected targets; and their values, i.e., the reflectivities of the targets. We can refer to the estimation of the vector h as "in-depth" focusing.
In the realistic case, measurements are corrupted by noise, leading to q = Ah + w, where w is the thermal noise vector, whose elements are circularly-symmetric complex Gaussian distributed. The problem of reconstructing a sparse vector from a low number of measurements is the typical problem addressed by the CS technique. The estimation algorithm can be formulated by solving the minimization problem of Equation (8), in which the L_1-norm promotes the sparsity of the unknown vector h, while the L_2-norm minimizes the difference between the model and the acquired data. ψ is a regularization factor whose ideal value depends mostly on the SNR and has to be tuned [14,15]. In order to compute the estimate of h, i.e., the solution of Equation (8), several algorithms can be adopted [11,12,16].
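The sketch below illustrates one way to solve a problem of this form with ISTA (iterative soft thresholding); the authors only cite generic CS solvers [11,12,16], so this particular algorithm, the placement of ψ as a LASSO-style weight, and the explicit form assumed for the elements of A (a two-way propagation delay, our reading of Equation (7), whose explicit expression was lost in extraction) are all our assumptions:

```python
# Sketch of a CS solver for q = A h + w with sparse complex h, in the
# LASSO form  min_h ||q - A h||_2^2 + psi * ||h||_1  (ISTA iterations).
import numpy as np

C_LIGHT = 3e8  # speed of light [m/s]

def build_A(freqs, ranges):
    """Assumed form of Eq. (7): two-way delay phase per (frequency, range)."""
    F, E = np.meshgrid(freqs, ranges, indexing="ij")
    return np.exp(-1j * 4 * np.pi * F * E / C_LIGHT)   # (N_freq, N_range)

def soft_threshold(x, t):
    """Complex soft-thresholding: shrink magnitudes by t, keep phases."""
    mag = np.abs(x)
    return np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-30) * x

def ista(q, A, psi, n_iter=500):
    """Minimize ||q - A h||_2^2 + psi*||h||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant (sigma_max^2)
    h = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ h - q)  # gradient of the data-fit term (/2)
        h = soft_threshold(h - grad / L, psi / (2 * L))
    return h

# Illustrative use: two unit targets at 20 m and 30 m, 100 random frequencies.
rng = np.random.default_rng(0)
freqs = 77e9 + 0.5e9 * rng.random(100)
ranges = np.linspace(10, 100, 1000)
A = build_A(freqs, ranges)
h_true = np.zeros(1000, dtype=complex)
h_true[np.searchsorted(ranges, [20, 30])] = 1.0
q = A @ h_true + 0.01 * (rng.standard_normal(100) + 1j * rng.standard_normal(100))
psi = 0.1 * np.abs(A.conj().T @ q).max()   # heuristic regularization level
h_est = ista(q, A, psi)
print("detected range bins:", ranges[np.abs(h_est) > 0.1 * np.abs(h_est).max()])
```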
Results
In order to evaluate the performance of the proposed method, different simulated case studies have been implemented. We simulated, in the Matlab® environment, the received signal for different scenarios, corrupting the data with circularly-symmetric complex Gaussian random noise. A cross antenna composed of two linear arrays of 111 (horizontal, receiving, Rx) and 141 (vertical, transmitting, Tx) elements has been considered. The system band, between 77 GHz and 77.5 GHz, has been sampled following a stepped approach. Complete system details are reported in Table 1. For the reported simulations, a constant SNR of 30 dB for a target at a distance of 200 m from the antenna has been adopted. In this case, the regularization factor ψ has been empirically set equal to 0.1. The first dataset is composed of two targets in front of the antenna, i.e., (θ, ϕ) = (0, 0), with the same reflectivity and range distances of 20 and 30 m, respectively. In Figure 4, the images obtained by the 2D FFT-based focusing approach, i.e., the first step of the proposed method, are reported for different focusing distances z_0; in particular, 10, 20, 30 and 50 m have been considered. The targets are evident for all focusing range distances, even though no targets are present at 10 m and 50 m, suggesting that the approximations made in Equation (4) hold also in the case of a wrong range distance assumption. From Figure 4, it can be stated that the choice of the focusing distance z_0 does not noticeably modify the focused image, thus the parameter z_0 can be fixed a priori to any value.

Subsequently, we considered the line of sight corresponding to (θ, ϕ) = (0, 0) for computing the in-depth focusing, i.e., estimating the number of targets, their range distances and their reflectivities in front of the antenna. Several test studies have been considered with different samplings of the available bandwidth (in the 77-77.5 GHz interval). In particular, instead of uniformly sampling the bandwidth, random frequencies have been chosen. We first investigated the number of frequencies that has to be considered in order to achieve effective results. For this simulation 500, 100, 20 and 10 frequencies, randomly sampled within the 77-77.5 GHz interval, have been considered. We recall that the lower the number of adopted frequencies, the lower the global acquisition time of the system.
The unknown vector h has been assumed to cover range distances from 10 to 100 m with a spacing of 9 cm, providing 1000 positions. Note that the number of rows of the transformation matrix A is considerably lower than the number of columns: it has 1000 columns and 500, 100, 20 or 10 rows.
In order to provide a reference solution, the estimation via the L_2-norm minimization technique has also been performed. In Figure 5, results for the L_2-norm and the proposed CS techniques are reported in red and blue, respectively, in the cases of 500 frequencies (Figure 5a), 100 frequencies (Figure 5c), 20 frequencies (Figure 5e) and 10 frequencies (Figure 5g). Each line represents the estimated reflectivity for each range distance between 10 and 100 m. In particular, its value is expected to be zero where no targets are present, while a peak is associated with each detected target within the considered line of sight. In order to better appreciate the results, enlargements of the 15-35 m range are presented for all the cases in the right column of Figure 5.
In the case of 500 frequencies (Figure 5a,b), both techniques are able to detect the presence of the different targets, their reflectivities and distances (two peaks at 20 and 30 m are evident), with the L_2-norm approach characterized by a coarser resolution with respect to the proposed CS-based methodology, as its impulses are wider. Moving to the 100 frequencies case (Figure 5c,d), the proposed approach produces results very similar to the previous case, while the L_2-norm technique shows a much higher amount of estimation noise and fails in evaluating the reflectivity of the target at 30 m. In the 20 frequencies case (Figure 5e,f), characterized by a deeply undersampled bandwidth, the L_2-norm fails to detect the second target and shows several false alarms in the 10-30 m range, while the proposed approach still correctly retrieves the scatterers. In the last case, i.e., 10 frequencies sampled within the 500 MHz interval (Figure 5g,h), both techniques fail, as CS too is unable to detect the correct number of scatterers and their distances from the antenna.
The second simulated dataset is a more realistic scenario. Several scatterers have been placed within the volume of interest in order to simulate a road with cars and lampposts, providing the scenario illustrated in Figure 6. In this case, 100 frequencies have been considered. The in-depth focus via the CS approach has been applied for each line of sight, providing the detection of targets within the considered 3D volume. In this case, the L_2-norm technique provided very unsatisfactory results, and thus they have not been reported. In Figure 8 the estimated scatterers (red dots) are plotted overlaid on the reference scenario (blue dots). From the reported results, different aspects can be highlighted. The proposed method is able to detect multiple targets sharing the same LoS. The estimated positions of the targets are globally satisfactory, i.e., the red dots are correctly positioned in the 3D volume. The false alarm rate is very low, even at far range distances from the antenna. Concerning the detection rates, as expected, the performance is better in the short-range region. From Figure 8 it is evident that the number of detected targets beyond 35 m is very low compared to the nearest region. We have to underline that the considered scenario is very challenging, since in many lines of sight more than two scatterers are present. Nonetheless, at least a few scatterers have been found for each car, while, considering the lampposts, only the most distant one on the left side of the road has been completely missed.
An evaluation of the computation time of the method has been made. At present, the detection of the targets for each line of sight requires about 10 s on a Core i7 workstation with 16 GB of RAM in the case of 20 frequencies and 1000 unknowns. In the simulated scenario reported in Figure 8, 119 lines of sight have been detected, thus the simulation was completed in about 119 × 10 s, i.e., about 20 min in total. However, it has to be underlined that the whole process has been implemented in a Matlab environment and no optimization was done on the code. If we accept a resolution reduction over range, e.g., moving from 9 to 90 cm (100 unknowns), which could still be acceptable considering the application, the computational time for each line of sight reduces to 0.7 s. Moreover, to improve the code performance, massive parallelization may be implemented (all lines of sight could be processed simultaneously). Code optimization and parallelization together could lead to a two-orders-of-magnitude speedup; for example, by employing a General Purpose Graphics Processing Unit (GP-GPU) with hundreds of cores, the global processing time can be further reduced, making the method suitable for most real-time applications.
Conclusions
Visual Driver Assistance Systems (VDAS) play a fundamental role in automotive safety. However, VDAS are not reliable in poor visibility conditions. Imaging radars offer an effective alternative able to operate in any visibility condition, in particular in critical situations caused by smoke or fog. In this paper, we have presented a novel radar signal processing approach for the automotive field: a fast 2D focusing methodology based on the FFT algorithm, followed by a new processing technique based on Compressive Sensing. The proposed methodology produces a 3D map of the scatterers within the imaged volume, estimating both their positions and reflectivities. Case studies have been presented in order to show the effectiveness of the approach and the performance achieved by the proposed algorithm. In particular, it has been shown that, with respect to the widely adopted L2-norm minimization technique, the proposed methodology estimates the reflectivity of multiple scatterers within the same LoS and their range distances more effectively, while allowing strong subsampling of the available bandwidth. The reported results suggest that the proposed technique is a promising approach for automotive radar focusing. In future work, the methodology will be tested and validated on other realistic and real datasets, and studies on the detection and false alarm rates will be conducted.
"Computer Science"
] |
Survey on Antioxidants Used as Additives to Improve Biodiesel’s Stability to Degradation through Oxidation
A major problem that limits the use of biodiesel is keeping the fuel within the specified standards for a longer period. Biodiesel oxidizes much more easily than diesel, and the final oxidation products change its physical and chemical properties and cause the formation of insoluble gums that can block fuel filters and supply pipes. This instability of biodiesel is a major problem and has not yet been satisfactorily resolved. Recently, the use of biodiesel has increased considerably, but the problem related to oxidation could become a significant impediment. A promising and cost-effective approach to improving biodiesel's stability is to add appropriate antioxidants. Antioxidants are more or less effective in different biodiesel fuels, and there is no one-size-fits-all inhibitor for every type of biodiesel fuel. To establish a suitable antioxidant for a certain type of biodiesel, it is necessary to know the chemistry of the antioxidants and the factors that influence their effectiveness against biodiesel oxidation. Most studies on the use of antioxidants to improve the oxidative stability of biodiesel have been conducted independently. This study presents an analysis of these studies and discusses the factors that must be taken into account when choosing antioxidants so that the storage stability of biodiesel fuels can be improved.
Introduction
The main reasons for the increased interest in bioenergy are economic development, energy security, and independence [1,2], as well as the concern for reducing environmental pollution [3][4][5][6]. The commercialization of biofuels has been carried out in different ways, including the establishment of standards at the national or zonal levels, the initiation of demonstration projects, and the elaboration of strategies [7].
Among biofuels, biodiesel is considered the best replacement for diesel fuel [8], which is probably due to its low toxicity, good lubricity, negligible sulfur content, lower exhaust emissions, higher flash point, and possible derivation from renewable sources of raw material [5,6,9]. However, despite these advantages, large-scale use is still hampered by certain technical challenges, one of which is fuel quality [10]. The poor oxidation stability, reduced adaptability to low temperatures, and microbial degradation of biodiesel are factors that result in the degradation of the biofuel.
Oxidation of biodiesel leads to the production of hydro-peroxides and carboxylic acids [11,12] that can form insoluble sediments, which then clog filters or produce deposits on the fuel injector [13]. Many studies have shown that the oxidation reaction usually involves the unsaturated fatty acids of the methyl esters in the composition and is accelerated by air, heat, and light [14][15][16]. It has been shown that the poor tolerance of biodiesel to low temperatures in winter is due to the crystallization of saturated fatty acids in biodiesel, a process that causes clogging of fuel pipes and filters [17][18][19]. Microbial activities have also been found to lead to biodiesel deterioration [20][21][22][23][24]. For example, Beker et al. [25] showed that the microbial degradation of biodiesel produces an increase in the viscosity, acid number, and water content of biodiesel to values above the limits presented in the standard specifications.
Microbial activities could be inhibited by synthetic additives, thus preventing the microbial degradation of the biofuel [25]. However, synthetic additives are expensive, and the development of cheap, biodegradable, non-toxic, and renewable additives with good efficacy is required. Substantial achievements have been made in recent years, but there are few, if any, comprehensive reviews of these crucial studies. Herein, some of the related studies are summarized, the state of the art is established, and hypotheses and recommendations are made for further studies.
Substitution of Fossil Fuels by Renewable Sources
Depletion of fossil fuel sources requires finding the necessary substitute sources, considering the increase in the rate of energy consumption due to population growth, industrialization, and transportation demands. As a result, this increased demand for energy, together with the increase in greenhouse gas (GHG) emissions from the use of fossil fuels, constitutes a major challenge facing the world's population today [26]. This issue has determined the need to find sources of clean, low-cost energy and has led to the intensification of efforts based on research and the revision of production paths, using new and optimized techniques, to promote clean energy production [26,27]. This will support the maintenance of energy security, limit environmental pollution, and reduce the degree of climate change [28,29]. Therefore, the current global evolution must be directed toward low-carbon sources, and most of the future energy must be provided from clean and renewable sources. Figure 1 shows a diagram of the transition from fossil fuels to renewable sources; this transition could control the extreme climate effects generated by greenhouse gas emissions, while supporting the production and use of clean and sustainable energy.
Among the renewable sources mentioned in Figure 1, biodiesel could be considered a more sustainable and better candidate, considering its recyclability, regeneration ability, biodegradability, low sulfur content, and low profile of greenhouse gas emissions [30]. While other renewable energies, such as wind, solar, and hydro, can mainly be used to provide electricity, energy obtained from biomass, especially biodiesel, can provide electricity and will also contribute to meeting the demand for liquid fuel from the transport sector. Since the fossil fuel sources from which diesel is obtained are decreasing, new renewable sources are being sought as replacements and, thus, there is an increased demand for biodiesel use.
Review papers have been published on the oxidative stability of biodiesel [31], on methods to improve it [30], on the factors affecting the oxidation process [32], and on the effects of antioxidants on biodiesel stability, combustion performance, and the resulting emissions [33]. However, a combined discussion of the impact of antioxidants on oxidative stability and on the poor cold-flow properties of biodiesel, together with their effects on the engine system, has not been reported. Therefore, this review brings these aspects together, and the discussed parameters are presented in Table 1.
Factors Affecting the Stability to Oxidation of Biodiesel
Oxidative, hydrolytic, ketonic, and microbiological degradation are the common processes of deterioration of fatty acids. Oxidation is one of the primary processes by which fatty acids or their esters degrade [34]. Auto-oxidation, thermal and enzymatic oxidation, and photo-oxidation are the main oxidation processes that lead to the deterioration of biodiesel fuel quality. Among all these processes, auto-oxidation is the most common. It is a chemical process in which the fatty acids in the biodiesel composition are degraded by oxygen in the air [35]. Unsaturated fatty acids have double bonds between carbon atoms in their structure, and these double bonds can be cleaved by chemical reactions with free radicals; the cleavage reactions also involve oxygen molecules. Typically, oxidative degradation can cause the release of malodorous and highly volatile compounds, such as aldehydes and ketones. Since these reactions are free radical chain reactions, they can be catalyzed by sunlight. Factors that influence the rate of biodiesel oxidation include the chemical composition and structure of the fatty acid methyl esters (FAME), the presence of natural antioxidants, the storage temperature and exposure to moisture, air, heat, and light, and the presence of metal ions as catalysts, enzymes, and other impurities [36].
Influence of Feedstock Composition on Biodiesel Properties
Consumers, mainly those in the automotive industry, have signaled that the low oxidation stability and poor cold-flow properties of biodiesel fuels are the main causes of the degradation in biofuel quality [37,38]. Both causes are influenced by the feedstock used in the production of biodiesel, mainly by the content and composition of fatty acids in the feedstock; however, a trade-off arises between them. Biodiesel produced from raw material that contains a higher amount of saturated fatty acids (SFAs) and a reduced amount of unsaturated fatty acids (UFAs) has a high cetane number, better oxidation stability, and a high calorific value, which means that the biofuel has a better quality [39]. On the other hand, this can lead to increased viscosity at low temperatures, which causes poor cold-flow properties, in contradiction with the quality properties of the biofuel, causing fuel filter clogging and reducing the ignition efficiency in combustion engines [40]. In contrast, biodiesel containing a large amount of UFAs and a small amount of SFAs has a low cetane number, a low oxidation stability, and, therefore, a worse quality. Figure 2 shows a schematic representation of how the balance between SFAs and UFAs influences the oxidation stability and cold flow of biodiesel.
Therefore, the injection of raw biodiesel directly into combustion engines is not recommended due to these problems, as well as other disadvantages, such as poor atomization and incomplete combustion, which lead to engine clogging [41]. These operability problems constitute a danger to the safety of the equipment and increase maintenance costs. These problems have been studied by researchers, and several methods have been suggested to solve them, such as the use of antioxidants, mixing biodiesel with conventional diesel, pyrolysis, or micro-emulsification. Among these, the use of antioxidants to increase the oxidation stability and achieve a good cold flow has gained the attention of researchers [42][43][44].
The rate of degradation of biodiesel through the oxidation process depends on the nature of the fatty acids contained in the feedstock from which the biodiesel is produced. The chemical structure of the fatty acids in the raw material is linearly correlated with the oxidative instability, even if the composition of the fatty acids differs between the types of feedstock [42]. In their study, Meira et al. [44] showed that the composition of fatty acids, moisture, glycerin content, as well as the storage conditions of the biofuel (temperature and exposure to light), are some of the factors that influence the auto-oxidation process.
The oxidation process is triggered by the loss of a hydrogen atom from an allylic or bis-allylic carbon atom in the presence of an initiator; the free radical thus formed quickly reacts with an oxygen molecule to give a peroxyl radical, which subsequently propagates and forms hydro-peroxides. The hydro-peroxides formed are unstable compounds and continuously decompose, forming aldehydes and short-chain organic acids as oxidation products. These compounds progressively form insoluble gums through polymerization reactions that increase the acid value and viscosity [45,46]. Consequently, when biodiesel is oxidized, a series of changes occur that most often reduce the quality of the biofuel and, thus, engine performance is affected.
As mentioned, fatty acids are saturated (SFAs), such as palmitic, stearic, and hydroxystearic acids, or unsaturated (UFAs), such as oleic, linoleic, palmitoleic, ricinoleic, and linolenic acids [24,28]. Although SFAs have a negligible effect compared to UFAs on the biodiesel oxidation process, availability and cost contribute to the choice of the type of raw material used in each area for biodiesel production. Additionally, fatty acid esters and their compositions vary depending on the sources of raw materials (see Table 2). The variation in the composition of biodiesel has direct consequences on the quality of the biofuel, the performance of the combustion engine, and the level of exhaust emissions [32]. Feedstocks containing a high amount of SFAs showed a higher oxidative stability than those formed by more monounsaturated fatty acids (MUFAs) and polyunsaturated fatty acids (PUFAs). This could be because MUFAs and PUFAs are more prone to the oxidation process and, therefore, the inhibition effect produced by antioxidants is diminished [17].
Biodiesel Chemical Composition
There are two types of fatty acids, saturated (SFAs) and unsaturated (UFAs). Stearic, palmitic, and hydroxystearic acids are saturated, while oleic, linoleic, palmitoleic, linolenic, ricinoleic, and eicosenoic acids are unsaturated fatty acids [47]. The rate of oxidation of saturated fatty acids is very slow compared to that of unsaturated fatty acids, and their contribution to the degree of oxidation of biodiesel can be considered insignificant most of the time. Therefore, the study of the oxidation stability of the biofuel is primarily aimed at the oxidation reactions of unsaturated fatty acids. Although the iodine value (IV) is an indicator that refers to the degree of unsaturation, it cannot be an indicator of oxidative stability. For example, in their study, Knothe and Razon [48] showed that the oxidation rates of biodiesel increase with the total number of bis-allylic sites (methylene CH2 groups directly adjacent to two double bonds) in its structure, and not with the total number of double bonds or the iodine value. The allylic position equivalent (APE) and the bis-allylic position equivalent (BAPE) are the main parameters that reflect oxidation stability, and they can be calculated with the relations below [49]:

APE = 2 × (A(C18:1) + A(C18:2) + A(C18:3))  (1)

BAPE = A(C18:2) + 2 × A(C18:3)  (2)

where A represents the amount of each fatty acid compound: C18:1, oleic acid; C18:2, linoleic acid; and C18:3, alpha-linolenic acid. The BAPE value is much more representative than the APE value because a much higher oxidation rate occurs only at the bis-allylic positions [9,48]. Linoleic and linolenic acids structurally contain bis-allylic sites and are more easily subjected to oxidation than saturated fatty acids.
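Because the APE and BAPE relations above reduce to weighted sums over the fatty acid profile, they are straightforward to compute. The minimal Python sketch below illustrates the calculation; the composition values are illustrative (a roughly soybean-like profile in weight percent), not measured data.

```python
# Minimal sketch: computing the allylic (APE) and bis-allylic (BAPE) position
# equivalents from a FAME composition, following relations (1) and (2) above.
# Composition values are illustrative only, in weight percent.
composition = {"C18:1": 23.0, "C18:2": 54.0, "C18:3": 8.0}  # oleic, linoleic, linolenic

ape = 2 * (composition["C18:1"] + composition["C18:2"] + composition["C18:3"])
bape = composition["C18:2"] + 2 * composition["C18:3"]

print(f"APE  = {ape:.1f}")   # allylic position equivalent
print(f"BAPE = {bape:.1f}")  # bis-allylic position equivalent; higher -> faster oxidation
```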
In other studies [15,18,50,51], the possibility of improving the oxidation stability of biodiesel by increasing the content of saturated fatty acids in the raw material was investigated.
Presence of Natural Antioxidants in Biodiesel
The data of many studies show that even biodiesel containing highly saturated esters loses its stability in a very short time if it contains a low amount of natural antioxidants [52,53]. In attempts to increase the oxidation stability of biodiesel, the use of plant extracts and bio-oil as antioxidant additives has been reported in a large number of studies. Fernandes et al. [54] presented the effects of ethanol extract of Moringa oleifera leaves as a potential antioxidant additive for biodiesel. The results obtained showed that extracts with 98% ethanol presented greater antioxidant effects than those with 70% ethanol, having a maximum induction period of 19.3 h. This behavior was attributed to the presence of phenolic compounds and less to the presence of the polar solvent (ethanol). Ramalingam et al. [55] analyzed the effects of Pongamia pinnata leaf extract on the oxidative stability of biodiesel. They showed that the induction period significantly increased when the leaf extract dosage was increased. They reported an induction period of up to 14 h, which represented a 180% increase compared to the biodiesel sample without a leaf extract dosage. The antioxidant activity of this additive was attributed to the presence of carotenoid and chlorophyll II among the eight components determined in the extracts. In another study, Rocha et al. [56] also dosed Moringa leaf extract as an additive to increase the stability of biodiesel to oxidation. The increasing extract dosage determined a directly proportional increase in the induction period of biodiesel oxidation, and a maximum induction period of 8.75 h was reported for the 4000 ppm dosage, which represented an improvement of about 74% compared to the biodiesel sample without an extract dosage. However, the antioxidant activity of this extract was lower when compared with that of butyl hydroxyl toluene (BHT), which is a synthetic additive. The performance of the extract was also attributed to the presence of phenolic compounds, which is in agreement with some previous studies [57][58][59]. Moringa extract could, therefore, be used as an effective antioxidant additive to increase biodiesel oxidation stability. However, this plant has important economic value as a source of food and feedstock due to its nutritive and medicinal values [60][61][62][63], and this limits its possible use and commercialization as an antioxidant additive. In another study [57], the antioxidant activity of curcumin extract, which contains β-carotene, was investigated. The results obtained showed that curcumin extract exhibited high antioxidant activity, and induction periods of 6.35 h and 9.11 h were obtained using dosages of 500 ppm and 1500 ppm, respectively. Devi et al. [63] showed that ginger extract had good miscibility with biodiesel due to the low water content and high proportion of non-polar compounds contained in the extract. A dosage of 2000 ppm generated an induction period of 23.99 h, while a dosage of 250 ppm generated an induction period that was within the accepted standards. The antioxidant activity of this extract on the biodiesel's stability was attributed to the presence of higher amounts of phenolic compounds in the ethanolic extract of the ginger plant, as also reported in other previous studies [64][65][66]. Spacino et al. [67] studied the effects of ethanol extracts of herbs, such as rosemary, oregano, and basil, and their mixed formulations, as antioxidant additives on the stability of B100 soybean biodiesel to the oxidation process. The results showed that the oxidation process of the biodiesel with additives became endothermic and was not spontaneous, indicating that the herb extracts were effective. Of all these extracts, a mixture of 50% rosemary, 25% oregano, and 25% basil extract was the most effective. A biodiesel with a 1:1 dosage of rosemary and oregano extracts exhibited a maximum induction period of 10.18 h [67]. It was observed that the induction periods of biodiesel samples doped with rosemary, oregano, and basil extracts were higher than those of biodiesel samples doped with synthetic antioxidants, such as TBHQ, BHA, and BHT [68].
The possible use of different bio-oils from biomass has been investigated with regard to their antioxidant effects on biodiesel's stability. In their study, Garcia et al. [69] extracted bio-oil from pinewood, and the efficiency of bio-oil extraction and its miscibility with biodiesel were investigated. They showed that the miscibility of the bio-oil with biodiesel was improved and that the induction period increased with the increasing additive dosage. A maximum induction period of 8.8 h was obtained using isopropyl acetate. The antioxidant effects are due to the presence of phenolic compounds in the bio-oil composition. In their study, Gil-Lalaguna et al. [70] presented the antioxidant activity of pinewood bio-oil used as a biodiesel additive to improve its stability. The pinewood bio-oil was hydrothermally treated using different solvents under various parameters: temperatures of 250, 290, and 300 °C and pressures from 4 to 11.5 MPa. The biodiesel sample with a bio-oil dosage presented an oxidation stability improved by 135% when raw bio-oil was used and by 400% with a dosage of bio-oil hydrothermally treated with water as a solvent at 300 °C and 8.5 MPa. Additionally, it was shown that the biodiesel sample doped with treated bio-oil remained stable for about four months due to the presence of phenolic compounds and the increase in the concentration of catechol after the hydrothermal treatment of the bio-oil. In [71], the effects of different types of bio-oil used as additives, and of their mixture ratios, on biodiesel's stability to oxidation were presented. The bio-oils used as additives were produced by pyrolysis of pine pellets in a semi-continuous auger reactor and by pyrolysis of pine chips using a batch pyrolysis system. Various dosages were used in the biodiesel, and a good miscibility of the bio-oil with biodiesel was obtained. In the temperature range of 155 to 225 °C, the oxidation stability of the doped biodiesel samples increased. The effects of the bio-oils obtained from mallee and pine woods on biodiesel's stability were shown by Garcia-Perez et al. [72]. The results showed that less than 20 wt% of both bio-oils was miscible with the biodiesel, but mixing the bio-oils with 50 wt% of ethyl acetate improved the miscibility, and the induction periods for the biodiesel doped with the pine and mallee bio-oils increased from 10.2 to 28.1 and 26.0 h, respectively. It can be concluded that the bio-oils obtained by biomass pyrolysis could be suitable additives to improve biodiesel's stability to oxidation. A key factor for the efficient activity of bio-oils as biodiesel additives is the miscibility of the bio-oil with biodiesel. This factor depends on the type of alcohol used as a co-solvent for the bio-oil and on the amount of alcohol used [73]. For example, while ethanol dissolves only about 35 wt% of bio-oil, 1-butanol and 2-propanol produce more homogeneous mixtures compared to ethanol (1-butanol dissolves up to 60 wt% of bio-oils, and 2-propanol dissolves up to 50%). However, not many studies focus on the economic feasibility of bio-oils used as bio-additives, on the use of suitable solvents to produce good miscibility of the bio-oils with the biodiesel, or on possible modification treatments of the bio-oils to increase their miscibility without solvents. Such studies must be carried out.
Influence of Storage Conditions on Biodiesel's Stability
Besides a high oxygen content, biodiesel's quality can be affected by other factors, such as temperature, storage time, and light exposure. The rate of the oxidation reaction increases when the storage temperature is high. In their study, Xin et al. [74] showed that the oxidation reaction rate of biodiesel is much slower if the storage temperature is lowered. However, storage of biodiesel fuel at low temperatures is not used due to technical problems and, usually, biodiesel is stored at room temperature. During storage, biodiesel is not subjected to high temperatures, but when the engine is fueled, the fuel comes into contact with the spray nozzles, the piston, and the walls of the combustion chamber, which are at high temperatures, causing quick oxidation of the biodiesel and possibly producing deposits on them and causing clogging. The rate of sludge deposition depends on the fuel type and the temperature range, and the rate of sludge formation decreases when the vapor pressure of the fuel increases and the partial oxygen pressure decreases. The storage time is another parameter that influences the stability of biodiesel. Ashraful et al. [75] reported that coconut oil methyl ester deteriorated after a storage time of 12 weeks at a constant temperature and in a humid environment. Additionally, the density, viscosity, and total acid number (TAN) of the fuel increased, with gum formation and sludge deposition at the end of the experiment. Batista et al. [76] determined toxic compounds, such as 2,4-decadienal and acrolein-type unsaturated aldehydes (2-heptenal and 2-octenal), during storage of soybean biodiesel in the dark, under an air environment, for a period of 5 years. In their study, Christensen and McCormick [77] studied the stability of pure biodiesel over a storage period of 12 months and of B5 and B20 mixtures over a period of 3 years. They showed that the addition of different antioxidants was effective in restoring the biodiesel's stability, and that at least 6 months of storage is possible if antioxidants are dosed to a 6 h induction time. Many studies showed a proportional correlation between the increase in the total acid number (TAN) and a long storage time, as well as high temperatures and exposure to light [78]. The study by Rajendran et al. [79] showed that an increase in the TAN value took place when various rapeseed FAME were stored in the dark long term (52 weeks). These results were in agreement with the data presented by Yang et al. [80], who stored other kinds of commercial biodiesel, such as animal fat, soybean, and canola FAMEs, for one year under fixed conditions (tight steel tanks, volume of 10 L, dark room, at −4 °C). Figure 3 shows the increasing TAN value during storage for various biodiesel types [81,82].
After 12 months of storage, the highest TAN value corresponds to Jatropha biodiesel, while the lowest corresponds to soybean biodiesel. These data emphasize, once again, that the oxidation stability is influenced by the raw material used for biodiesel production, as described in Section 2.1.
Presence of Metal Contaminants
The composition of biodiesel fuels may include transition metals in small concentrations. Some metals, such as copper, iron, zinc, and nickel, can accelerate the auto-oxidation rate, even at low concentrations. The most common metals detected in biodiesel are Cu, Fe, Ni, and Cr, and these metals can cause the decomposition of peroxides into free radicals. Metal ions in lower or higher oxidation states catalyze the decomposition of hydro-peroxides, as shown in Equations (3) and (4):

M(n+) + ROOH → M((n+1)+) + RO* + OH−  (3)

M((n+1)+) + ROOH → M(n+) + ROO* + H+  (4)

through which free radicals such as RO* and ROO* are produced [83].
Analyzing Equations (3) and (4), it can be seen that the metal ions are regenerated while producing radicals such as RO* and ROO*, which cause the reduction of the induction period (IP) [84]. The catalytic effect of metals can be canceled if deactivators such as chelating compounds are used. These form complexes with the metal ions and make them unavailable to initiate the oxidation reaction.
The Influence of Other Factors
Exposure to light is another factor that determines biodiesel degradation. Usually, the rate of the oxidation reaction increases when the biodiesel fuel is exposed to light through infiltration, which causes the deterioration of the quality of the biodiesel. The effects of light on the oxidation stability of biodiesel are influenced by the types of fatty esters in the composition of the biofuel, as well as by the types of antioxidant additives used [78]. The penetration of light into the depth of the biofuel leads to the degradation of large portions of the fatty acid monoalkyl esters (FAME). Pigments such as chlorophylls and pheophytins cause photo-oxidation when exposed to light, while in the dark they act as antioxidants, stopping oxidation [85]. On the other hand, biodiesel that has an increased content of mono-, di-, or tri-glycerides may absorb a larger amount of water and may be exposed to hydrolytic degradation. During this process, biodiesel is converted into alcohols and free fatty acids [86]. The hydrolytic degradation process is accelerated by a high initial content of acids, especially free fatty acids, but also by the presence of moisture and heat. The water content in biodiesel decreases the calorific value, increases the corrosion rate, and creates a favorable environment for the multiplication of microbes [86], which can favor the degradation of the biofuel. The microorganisms found in biodiesel cause fuel degradation, and their amount may vary greatly. Most researchers mention that almost all variants of microorganisms, such as bacteria, fungi, and yeasts, may be found in diesel fuel and biodiesel [87]. Different species have different degradation mechanisms, thus producing different results. Microorganisms' diversity, growth rates, and patterns can be affected by fuel constituents, such as carbon and energy sources. Microbes are also commonly found in fuel storage tanks, transport systems, and fuel supply chains. Biofilm formation is triggered by microbial growth in storage tanks and pipes, which can block filters and pipelines, as well as increase pump and injection system wear. Fuel contamination shortens the filter's life and can result in fuel starvation, engine problems, and possible damage to the fuel injection equipment [87]. This confirms that the biodegradation of hydrocarbons is an integral part of microbial life.
Remediation Methods
Problems related to the low oxidation stability of biodiesel have attracted a lot of attention from both researchers and users. Attempts have been made to solve them through physical, chemical, or genetic moderation methods. A variety of techniques to stop oxidation have been analyzed, such as vacuum technology, low-temperature storage, the use of inert gas for packaging, reducing the partial pressure of oxygen in contact with biodiesel, enzyme deactivation, and the use of antioxidants. Therefore, the desired properties of biodiesel, such as high oxidative stability, good cold-flow behavior, and reduced combustion emissions, could certainly be obtained by modifying the fatty acid composition through the methods mentioned above. For example, Lanjekar and Deshmukh [37] showed that physical methods such as fractionation or winterization produce a biodiesel with low oxidation stability, which makes them non-viable methods because the biodiesel produced requires the addition of antioxidant additives. Mixing a biodiesel with another biodiesel made from different raw materials affects the properties of the biofuel due to the variation of fatty acids in the composition, while mixing with diesel is quite expensive and unsustainable [88]. On the other hand, the addition of branched-chain fatty alkyl esters has negative effects on the cold-flow properties, forming crystals and sediments that determine the clogging of fuel lines and the blockage of filters [89]. Chemical methods, such as hydrogenation, are not viable, primarily due to the formation of biodiesel with poor cold-flow properties, leading to the occurrence of flow restriction [90].
The modification of the structure by changing the location of the unsaturation closer to the ester head group, reducing the number of double bonds, removing hydroxyl groups, and converting cis unsaturation to the trans position are also techniques used to improve the stability of biodiesel to oxidation [91]. Sundus et al. [92] also analyzed in detail the methods used to improve biodiesel's stability to oxidation. In order to avoid contact of water with biodiesel, they showed that the use of a membrane-based method for the purification of biodiesel can be effective. Preventing or slowing down the biodiesel oxidation process with the use of antioxidant additives is the most cost-effective method compared to other techniques. The antioxidant donates an electron or a hydrogen atom to the free radical and, thus, the oxidation reaction is neutralized. A universal antioxidant that increases the oxidation stability of all types of biodiesel has not been found. Given the diversity of available antioxidants, it is necessary to find the most suitable antioxidant additives to increase the oxidation stability of each type of biodiesel.
Antioxidants
Antioxidants are chemical compounds that slow down or stop the auto-oxidation process by delaying the production of oxidants or by interrupting the multiplication of free radicals through various reactions in the auto-oxidation chain [93]. The antioxidant efficiency of an additive added to biodiesel is influenced by several factors, such as the type and number of antioxidants present, the solubility in biodiesel, the chemical structure, the potential for redox reactions, the profile of fatty acids in the FAME, the position of the hydroxyl groups, etc. [94]. The addition of antioxidant additives to biodiesel is a good method to reduce or eliminate degradation due to their ability to delay or stop the oxidation processes [94].
Natural antioxidants are usually extracted from plants and are mainly phenolic compounds that have the ability to delay auto-oxidation by inhibiting active oxygen-containing species through various mechanisms [95]. Many phenolic, flavonoid, or carotenoid substances are found naturally in plants, fruits, and vegetables, including gallic acid, curcumin, tocopherols, ascorbic acid, lycopene, vanillin, cinnamic acid, etc. [96]. Although these natural antioxidants have obvious advantages, not enough studies have been carried out to investigate their potential for increasing the oxidation stability of biodiesel [97] (Table 3). On the other hand, the use of synthetic antioxidant additives to stop the oxidative degradation of biodiesel and other fuels has been presented on a large scale [98]. The main phenolic synthetic antioxidants used to increase the stability of biodiesel to oxidation are tertiarybutylhydroquinone (TBHQ), butylated hydroxytoluene (BHT), butylated hydroxyanisole (BHA), propyl gallate (PG), octyl gallate (OG), dodecyl gallate (DG), ethoxyquin (EQ), and pyrogallol (PY) [99] (Table 4).
Natural antioxidants are more suitable for increasing the stability of biodiesel, as they stabilize oxygen-containing compounds in biodiesel much more easily than synthetic antioxidants. Synthetic antioxidant additives have shown increased efficiency, especially for compounds distilled from petroleum, such as gasoline and blends with a low level of biodiesel. It should be emphasized that synthetic antioxidants have certain disadvantages, such as high toxicity, low thermal stability, partial solubility, and high volatility, as well as a high production cost, and some of these disadvantages can be avoided by using natural antioxidants [95]. On the other hand, it should be mentioned that the chemical structures of antioxidant compounds largely contribute to their stability and determine the nature of their reaction mechanism [100]. Tables 3 and 4 show the standard physico-chemical properties, such as molecular mass, solubility, and melting point, as the main factors that establish the structure-antioxidant activity relationship. Antioxidant additives with a higher molecular weight and multiple hydroxyl groups (polyphenols) that are easily soluble in biodiesel have been shown to be effective in improving biodiesel's stability [93].
Natural Antioxidants
The use of natural antioxidants as fuel additives is of great interest, primarily for health reasons, since their toxicity is much lower. Plants that have a high content of phenolic compounds can have an antioxidant effect, and vegetable oils contain natural antioxidants, such as chlorophylls, polyphenols, tocopherols, carotenoids, tocotrienols, ascorbic acid salts, lignin, etc., which fulfill the role of protectors in the oxidation process of fatty acids. A large part of these antioxidants can be decomposed or destroyed during the transesterification or refining processes [101,102]. Considering this aspect, biodiesels obtained from unrefined vegetable oils have a higher amount of natural antioxidants in their composition and, consequently, a greater stability to degradation by oxidation, but they do not meet other conditions necessary for use as fuel [103]. Phenolic compounds extracted from plants, such as tocopherols, lycopene, carotenoids, astaxanthin, canthaxanthin, zeaxanthin, caffeic acid, gallic acid, ferulic acid, vanillin, sinapic acid, p-coumaric acid, cinnamic acid, eugenol, sesamol, vanillic acid, resveratrol, etc., have antioxidant properties and are currently widely produced and marketed. Additionally, plant extracts of rosemary, sage, thyme, cloves, oregano, allspice, cinnamon, marjoram, artichoke, eucalyptus, turmeric, etc., have been used as effective antioxidants for food products [104]. However, except for tocopherols, very few studies have been conducted on the use of natural antioxidants to stop the degradation of biodiesel fuels. It was found that tocopherols show antioxidant activity only if their concentration is approximately equal to their concentration in vegetable oil; at a higher concentration, they could act as an oxidant agent [27,105]. Further, many studies have shown that tocopherols, compared to synthetic antioxidants, have a limited antioxidant effect on biodiesel fuels, being more effective for diesel than for biodiesel produced from vegetable oils or their esters [101][102][103]. In addition, tocopherols are compounds that oxidize easily in air, and they are only stable in an inert environment (in the absence of air). The β-carotene compound is usually found in palm oil and represents a potential antioxidant. Its activity is influenced by the partial pressure of oxygen it encounters; for example, at a higher oxygen level, it acts as an oxidant and favors the oxidation process [104]. Compounds present in small quantities, such as citric acid and polyphenols, also contribute to increasing the stability by trapping metal ions that favor the oxidation process. The oil also contains ascorbic acid, which can act as an antioxidant with a secondary role of reducing the formation of hydro-peroxides [106]. Bassil et al. and Damasceno et al. [107,108] tested the antioxidant effect of caffeic acid (CA) at a concentration of 1000 ppm in soy methyl ester and showed that CA could have antioxidant activity even after a period of three months. In addition to their role as hydrogen donors, the free radicals generated from caffeic acid form a dimer that has antioxidant properties and provides additional protection [109]. Fernandes et al. [54] demonstrated that the ethanolic extract of Moringa oleifera leaves has better antioxidant properties than the tertiarybutylhydroquinone (TBHQ) present in biodiesel obtained from soybeans. Moser [110] studied the antioxidant effect of myricetin, a flavonoid obtained by extraction from Moringa oleifera seeds, in soybean oil methyl ester, showing that myricetin has better antioxidant activity than α-tocopherol. Additionally, Serqueira et al. [111] investigated the activity of tetrahydro-curcuminoid (a natural antioxidant obtained from curcumin) in biodiesel fuels produced from cottonseed and residual cooking oils and concluded that the antioxidant activity of tetrahydro-curcuminoid is higher than that of butylated hydroxytoluene (BHT). Medeiros et al. [112] showed that the natural extract of rosemary in ethanol has a better antioxidant performance than, for example, the synthetic antioxidant TBHQ, with the study referring to the stability of biodiesel obtained from cotton seeds. Furthermore, Spacino et al. [113] discovered an important antioxidant activity of a mixture consisting of alcoholic extracts of rosemary and oregano in a 1:1 ratio when added to soy methyl ester. In their study, Deyab et al. [114] observed that ethanol extracts from rosemary leaves substantially reduced the corrosion rate of aluminum present in biodiesel. More recently, other researchers [115] have demonstrated that extracts obtained from lignocellulosic bio-oil present significant antioxidant properties, and their addition at 4% by weight to biodiesel improves the stability to degradation by oxidation by 475%. However, currently, natural antioxidants do not show high commercial success, primarily because of their higher costs.
Antioxidant Mechanism
The oxidation process causes the quality degradation of the biofuel and can proceed through auto-oxidation, photo-oxidation, and thermal or enzymatic oxidation. However, auto-oxidation is considered the most common way through which the oxidation process occurs, mainly due to the predominant amount of unsaturated fatty acids (UFAs) present in the feedstock [116]. Therefore, in this paper, the discussion of the mechanism governing the oxidation stability of biodiesel focuses on auto-oxidation, because the other forms have smaller effects.
Auto-Oxidation
The main difference between biodiesel and diesel is that biodiesel contains oxygen in its composition, while diesel does not. A schematic representation of the molecular structures of biodiesel and diesel is presented in Figure 4.
The presence of oxygen in the composition of biodiesel has the effect of reducing the ignition delay time. It can improve the combustion environment and ensure a more complete combustion of the fuel, which determines a reduction of the amounts of CO, PM, and other exhaust emissions. In many studies [117][118][119], it has been reported that the high oxygen content of biodiesel can substantially reduce the emissions of exhaust gases and PM from diesel engines. However, a high oxygen content enhances combustion, which results in higher combustion temperatures and increases the likelihood of combining with nitrogen, resulting in increased NOx emissions. In addition to this disadvantage, biodiesel is also subject to degradation through auto-oxidation, which causes its quality to deteriorate.
Auto-oxidation is an auto-catalytic reaction caused by molecular oxygen, and its speed can be increased by exposing biodiesel to air, high temperatures, or light, which leads to the formation of polymeric products that damage the quality of biodiesel. Therefore, the composition of fatty acids (saturated and unsaturated), the structural form, and the processing and storage conditions determine the intensity of the auto-oxidation reactions [120]. Usually, free radicals are intermediates formed during auto-oxidation; these are groups of atoms that carry an odd number of electrons or contain one or more unpaired valence electrons and exist freely for a fairly short time. The free radical auto-oxidation mechanism is generally considered the fastest degradation pathway, and it involves a series of initiation, propagation, and termination reactions, as shown below:

RH + I → R* + IH  (initiation)  (5)

R* + O2 → ROO*  (propagation)  (6)

ROO* + RH → ROOH + R*  (propagation)  (7)

where I represents an initiator (air, high temperature, or light) that removes an H atom from the unsaturated fatty acid structure and generates free radicals (R*), according to Equation (5). RH represents an organic substrate, such as the fatty acid monoalkyl esters (FAME), which can be oxidized. Antioxidants can slow down or inhibit the initiation reaction. The free radicals formed in the initiation reaction are very reactive and quickly combine with available O2, resulting in peroxy radicals (ROO*), as shown by Equation (6).
The peroxy radical (ROO*) is less reactive than the R* radical; however, it is reactive enough to remove another H atom from the RH structure, forming another radical R*, according to Equation (7). This new R* radical can then react with molecular oxygen (O2), according to Equation (6), producing a new peroxy radical (ROO*), and thus the chain reaction is propagated. The primary or propagation chain-breaking antioxidants (AOH) can react with the peroxy (ROO*) and R* free radicals, resulting in inactive products, so that the propagation reaction does not continue and is interrupted, according to Equations (8)-(11):

ROO* + AOH → ROOH + A*  (8)

R* + AOH → RH + A*  (9)

ROO* + A* → ROOA  (10)

A* + A* → A-A  (11)

The resulting products are stable (inactive), and they do not initiate further oxidation reactions [121].
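To make the chain character of this scheme concrete, the short Python sketch below numerically integrates a toy version of Equations (5)-(8) with and without an antioxidant. All rate constants and concentrations are arbitrary illustrative values, not taken from any cited study; they are chosen only to show how scavenging ROO* suppresses hydro-peroxide build-up.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch only: a toy kinetic model of the radical chain above
# (Equations (5)-(8)), integrated numerically. Rate constants are arbitrary,
# chosen to show the qualitative effect of an antioxidant AOH, not fitted data.
ki, kp1, kp2, ka = 1e-6, 1.0, 0.1, 10.0   # initiation, propagation, inhibition rates
O2 = 1.0                                   # oxygen held constant (air-saturated)

def chain(t, y):
    RH, R, ROO, ROOH, AOH = y
    r_init = ki * RH                       # (5) RH + I -> R*
    r_p1 = kp1 * R * O2                    # (6) R* + O2 -> ROO*
    r_p2 = kp2 * ROO * RH                  # (7) ROO* + RH -> ROOH + R*
    r_inh = ka * ROO * AOH                 # (8) ROO* + AOH -> ROOH + A*
    return [-r_init - r_p2,                # RH consumed
            r_init - r_p1 + r_p2,          # R*
            r_p1 - r_p2 - r_inh,           # ROO*
            r_p2 + r_inh,                  # ROOH (hydro-peroxides)
            -r_inh]                        # AOH (antioxidant consumed)

for aoh0 in (0.0, 0.01):                   # without and with antioxidant
    sol = solve_ivp(chain, (0, 500), [1.0, 0.0, 0.0, 0.0, aoh0], rtol=1e-8)
    print(f"AOH0={aoh0}: ROOH formed = {sol.y[3, -1]:.4f}")
```

Even this crude model reproduces the qualitative behavior described above: without AOH, each initiated radical sustains a long propagation chain, while a small AOH dose terminates the chains early and keeps the hydro-peroxide level close to the initiation-limited floor.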
Structure and Molecular Mass of the Antioxidants
Antioxidants act differently from one another, depending on the type and positions of the substituents attached to the aromatic ring of the compound. Figure 5 shows the molecular structures of some common antioxidants. Substituents such as alkyl or alkoxyl (CH3O) groups are electron donors, and their presence in the ortho or para positions of the aromatic nucleus of phenols is preferable in the antioxidant structure. Alkyl groups in the ortho and para positions favor the stabilization of the phenoxyl free radical generated by the antioxidant and, consequently, prevent secondary reactions. Additionally, alkyl groups in the ortho position constitute a steric hindrance and prevent the unwanted pro-oxidation reaction from taking place [122]. On the other hand, substituents such as COOH, halogens, NO2, and branched para-alkyl groups in the α position are electron acceptors, and they have been found to reduce the effectiveness of antioxidants [122]. In most cases, steric hindrance reduces the rate of electron release and, therefore, the antioxidant activity is diminished. The tertiary butyl groups in the structures of the BHT and BHA antioxidant molecules represent a stronger steric hindrance than that produced by the same group in the molecular structure of TBHQ, which has the effect of reducing the antioxidant activity [123]. The antioxidant activity is also influenced by the number of hydroxyl groups in the structure of the molecule. Antioxidants of the gallate series have a longer-chain structure and possess more polyhydroxy groups (Figure 5), which makes their antioxidant activity higher than that of monohydroxy antioxidants, such as BHT or BHA. However, the antioxidant activity begins to decrease for antioxidants with more than three hydroxyl groups in their structure. In addition, the polyhydroxy substituents make the substance partially soluble in water, which leads to a decrease in the concentration of the antioxidant in the respective compound [124].
Peltzer et al. [125] showed that antioxidants that do not contain oxygen in the para position have an efficacy that increases linearly with their concentration, while antioxidants that contain para-oxygen lose activity with increasing concentration. The presence of polar compounds in biodiesel, such as small amounts of methanol and ethanol, influences antioxidant performance, because the antioxidants, whether present in the biodiesel composition or added as additives, form hydrogen bonds with the alcohol group, slowing down the reaction of the antioxidants with free radicals [126].
Antioxidants used to ensure the degradation stability of biodiesel must be able to migrate freely throughout the mass of the biofuel to reach the large number of auto-oxidation initiation sites generated during storage and prevent the reaction from propagating. An antioxidant with a low molecular mass can disperse well in biodiesel and can much more easily reach the initiation sites of the auto-oxidation reaction. However, antioxidants with a low molecular mass are quite volatile and are lost through evaporation, which is why they are not used for the long-term storage of biodiesel [127]. On the other hand, high-molecular-mass antioxidants contain more hydrogen atoms that can be donated and have a reduced volatility, which makes them more effective for long-term biodiesel stabilization. However, antioxidants with an increased molecular mass have lower mobility, which leads to their uneven distribution in the mass of the biodiesel; thus, the concentration of these antioxidants should be ~10 times higher than the actual concentration of the free radicals that initiate the auto-oxidation reaction [127]. The performance of natural antioxidants increases as the alkyl chain becomes longer (increasing the molecular mass), until a limit is reached, after which further extension of the alkyl chain length produces a rapid loss of antioxidant activity [124]. In his study, Ingendoh [128] analyzed the antioxidant performance of lower-molecular-weight BHT and higher-molecular-weight Bis-BHT by adding them to soy biodiesel, showing that Bis-BHT is more effective than BHT. Therefore, higher-molecular-weight antioxidants are preferred for the long-term stabilization of biodiesel against oxidation.
Concentration Requirements for Antioxidants' Use
To be effective, antioxidants must be present above a minimum concentration in the biodiesel mass, from which their activity increases linearly with increasing concentration. However, there is a final concentration value beyond which the antioxidant activity does not improve; in most cases, a higher concentration of phenolic antioxidants acts as a pro-oxidant and intensifies the degradation reaction [129,130]. The concentration interval between the minimum (limit) value and the final saturation value is known as the optimal interval, and this range is different for each type of antioxidant. Zhong and Shahidi [131] showed that the relationship between antioxidant activity and antioxidant concentration is not linear but follows a parabolic curve, and that the minimum and maximum limits are not the same for different antioxidants; the values are higher for polar antioxidants. Chen and Luo [129] reported that the minimum critical concentration of antioxidants for a highly unsaturated biodiesel (more than 90%) is about 100 ppm to obtain a noticeable increase in the induction period (IP). The minimum limit concentration increases with the temperature, and at higher temperatures a larger amount of antioxidant is required to obtain good biodiesel degradation stability. For example, Lapuerta et al. [132] studied the effect of temperature on the induction period of biodiesel derived from soybean oil, animal fats, and used cooking oil, using BHT as an antioxidant, and showed that at the higher temperature (130 °C), the antioxidant concentration required to reach the 8 h target was quite high (>25,000 ppm). The amount of antioxidant added to biodiesel is strongly influenced by the raw material used to produce the biodiesel and by the production technology. Natural antioxidants are much more sensitive to concentration, and at higher concentrations they show pro-oxidant effects. For example, the optimal range of the tocopherol concentration is 0.043-0.13% by weight, and at higher concentrations (>0.2%) it shows a pro-oxidant effect [34]. In addition, in [110], it was shown that the optimal concentration of α-tocopherol in soy methyl ester is in the range of 600-700 ppm, above which no further antioxidant activity was observed. In general, the use of higher concentrations of antioxidants in biofuels should be avoided, because it could lead to an increase in the ignition delay during combustion and is reflected in an increase in costs.
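As a minimal numeric illustration of the parabolic dose-response described by Zhong and Shahidi, the sketch below models the induction-period gain as zero below an assumed 100 ppm threshold and as a downward-opening parabola above it. The coefficients, and therefore the resulting optimum near 1000 ppm, are invented purely for illustration; real optimal intervals differ per antioxidant and feedstock.

```python
# Toy parabolic model of antioxidant activity (induction-period gain) versus
# dose. Coefficients and the 100 ppm minimum limit are assumed values only.
def activity(c_ppm, a=-4e-6, b=8e-3, c_min=100.0):
    """IP gain in hours for a dose of c_ppm; zero below the minimum limit."""
    if c_ppm < c_min:
        return 0.0
    x = c_ppm - c_min
    return a * x ** 2 + b * x  # rises, saturates, then declines (pro-oxidant)

doses = range(0, 3001, 250)
for d in doses:
    print(f"{d:5d} ppm -> IP gain ~ {activity(d):5.2f} h")
best = max(doses, key=activity)
print(f"Optimum near {best} ppm under these assumed coefficients.")
```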
Effects of Antioxidants on Biodiesel Cold-Flow Properties
Poor cold-flow properties, characterized by the cloud point (CP), pour point (PP), and cold filter plugging point (CFPP), strongly influence biodiesel's quality parameters; these characteristics of biodiesel are much less satisfactory than those of diesel [19,133]. In low-temperature conditions, crystallization of biodiesel can occur, which affects the proper functioning of the engine. This is due to increased viscosity, higher density, and poor atomization and vaporization, which affect the engine fuel system by clogging fuel filters and blocking fuel inlet pipes and nozzles [134]. Therefore, tracking the changes in fuel quality parameters is decisive, especially before the biofuel is exposed to certain environmental conditions. The low-temperature properties of biodiesel must be known to estimate the longevity and performance of the biofuel and to avoid CFPP problems and poor low-temperature filterability [90]. If the low-temperature properties of the biofuel are neglected, crystals can form in its mass, and as a result the viscosity, flow capacity, filtration, and volatility are affected, which negatively influences the ease of starting the engine [39]. In their study, Islam et al. [135] reported that low-temperature flow is much worse for biodiesel that contains a higher amount of saturated fatty acids (SFAs). Improving the cold flow of biodiesel can be achieved using antioxidant additives, which is considered a conventional method that is cost-effective and sustainable. These antioxidant additives improve the cold flow of biodiesel through co-crystallization with fuel crystals and thus stop further crystal growth [136]. In their study, Muniz et al. [137] showed that polymeric and phenolic compounds are good PP depressants and effective antioxidant additives, respectively. It is considered that the chemical structures of these additives consist of a series of hydrocarbons that can co-precipitate with the hydrocarbon chain of the fuel and thus prevent the development and solidification of wax crystals [138]. When nucleation is interrupted, the three-dimensional crystal structure that forms is narrow and long-pointed, thereby preventing biodiesel filters from becoming blocked and slowing down crystal growth, thus avoiding solidification. Anoop et al. [38] observed that the CP, PP, and CFPP of biodiesel from coconut (COB) were quite high, and that the biofuel was prone to crystal formation due to its large quantity of SFAs. By reaching an IP of 6.0 h, in accordance with EN 14214, the biodiesel presented better oxidative stability but showed a poor cold flow. The addition of ginger and pepper extracts increased the IP by just 3%, and the increase was steadily higher with garlic. The use of antioxidant plant extracts in different concentrations (i.e., adding 0.4% wt to 1.0% wt) reduced the CFPP by 2 °C, but a much smaller decrease in the CP and PP of biodiesel was seen, as reported in [38,139]. According to the data presented, it can be emphasized that plant antioxidants are more effective at increasing the oxidation stability of biodiesel than at maintaining suitable biodiesel cold-flow properties.
Sustainability of Using Antioxidants to Enhance Biodiesel Stability and Research Trends
The use of antioxidant additives to increase the oxidation stability and improve the cold flow of biodiesel dates back many years, and many researchers [140][141][142] have reported that the use of antioxidants is a cost-effective and sustainable way to improve the stability and performance of biodiesel. The most used antioxidants have been synthetic, and among them pyrogallol (PY) has proved to be the most effective and most frequently used for increasing the oxidation stability of biodiesel, followed by PG, TBHQ, BHA, and BHT, as reported in [143]. However, safety issues in the use of synthetic antioxidants, especially those associated with their toxicity and high volatility, are among the disadvantages that have led to their gradual elimination, especially in the food industry [144]. In addition, many of these antioxidants have been found to be active only at concentrations greater than 1000 ppm, and their cost is high. Moreover, their partial solubility leads to deposits in the engine, which subsequently clog the filters and block the fuel lines, increasing maintenance costs. Therefore, it can be said that the use of this class of antioxidants is not sustainable or economically viable [145], and considering natural antioxidants for biodiesel stabilization would be a better option. In this context, many natural antioxidants, such as polyphenols, carotenoids, flavonoids, and amines, have been studied as additives to increase the stability of biodiesel, owing to the availability of hydrogen atoms for the elimination of free radicals, high solubility, renewability, and low cost [146]. However, many of these natural antioxidants do not show good performance in terms of improving the IP of biodiesel [147]. Additionally, no important progress has been made in exploring the potential of natural antioxidants for the improvement of biodiesel cold flow. Another problem is that the excessive use of natural edible products as antioxidant additives poses a high threat to food security by intensifying the conflict between food and fuel. In this context, few studies have presented the use of non-edible sources of antioxidants to increase the oxidation stability and improve the poor cold flow of biodiesel. Therefore, the attempt to make this resource economically viable must be based on strategies supported by research in the field.
Conclusions
The substitution of oil-derived fuels with renewable biofuels has received special attention recently, especially through the implementation of energy policies based on energy obtained from biomass. The unsustainability of petroleum fuels is primarily due to their higher toxicity and their emissions with a high content of greenhouse gases. Therefore, from the reviewed literature, it appears that biodiesel is a promising alternative to mineral diesel, but poor oxidation stability and poor cold flow are still major issues that limit its widespread use.
Biodiesel has a reduced oxidative stability, and this creates problems such as deposits in the engine, clogging of the fuel filter, and blockage of the supply pipes. Additionally, during long-term storage, biodiesel quality degrades due to the oxidation process. Many attempts have been made to increase the long-term storage stability of biodiesel, and important progress has been achieved. However, this problem is not yet completely solved, and biodiesel-fueled engines experience maintenance problems that incur high costs. Except for BHT, most phenolic antioxidants, both synthetic and natural, are expensive; therefore, new high-performance, safe, ecological, and much cheaper antioxidant additives must be developed. The following conclusions emerged from this study:
• Antioxidants are chemical compounds that slow down or stop the auto-oxidation process by delaying the production of oxidants or by interrupting the multiplication of free radicals through various reactions in the auto-oxidation chain.
• The performance of antioxidants is influenced by the source from which the biodiesel fuel was obtained. Pyrogallol, for example, stood out as the most effective antioxidant for biodiesel produced from raw material with a high content of free fatty acids (FFAs).
• Biodiesel fuels that contain only a small amount of natural antioxidants, such as carotenoids and tocopherols, have a reduced oxidation stability.
• The selection of antioxidants is based on their antioxidant performance, which must be high and which depends on good solubility, good efficiency at low concentrations, non-toxicity, and a long shelf life.
• Antioxidants with a higher molecular weight ensure better stability of biodiesel over longer storage times because they contain more hydrogen atoms available for donation.
• Antioxidants containing polyhydroxyl groups in their structure show higher performance compared with monohydroxyl antioxidants, such as BHT and BHA; however, no improvement in activity was observed for antioxidants with more than three hydroxyl groups in the molecular structure.
• Polar and partially fat-soluble antioxidants are more effective in maintaining the stability of biodiesel than fat-soluble antioxidants.
• The temperature, viscosity, and pH of biodiesel significantly influence the effectiveness of antioxidants. Antioxidants containing a greater number of aromatic rings and longer aliphatic chains show a higher resistance to heat; antioxidants with a high pH have a lower concentration of metal ions and improve the stability of biodiesel; and the viscosity of biodiesel greatly influences the uniform distribution of antioxidants in the biodiesel mass.
• The use of higher concentrations of antioxidants in biofuels should be avoided, because it could lead to an increase in the ignition delay during combustion and is reflected in an increase in costs.
• Plant antioxidants are more effective at increasing the oxidation stability of biodiesel than at maintaining suitable biodiesel cold-flow properties.
To increase the potential of using antioxidants as a promising method to improve biodiesel's stability, this review suggests that other important aspects need to be studied, including an in-depth investigation and detailed analysis of the effects of antioxidant concentrations on engine exhaust emissions; knowledge of the evaporation rate of biodiesel mixed with antioxidants in a heated engine; the influence of antioxidants on the reduction of NOx, smoke, and hydrocarbon (HC) emissions; and the influence of antioxidants on reduced biodiesel consumption and activation temperatures. Currently, no studies have been identified that address these aspects, highlighting a need for future research.
Figure 1. Schematic transition diagram from fossil fuels to renewable sources.
Figure 2. The influence of the interaction between SFAs and UFAs on the oxidation stability and cold flow of biodiesel.
Figure 4. Schematic representation of the molecular structures of biodiesel and diesel.
Figure 5. Structures of some common antioxidants.
Table 2. The influence of the fatty acid composition on the biodiesel oxidation stability index (induction period, IP). | 14,981.4 | 2023-11-24T00:00:00.000 | [ "Environmental Science", "Chemistry" ] |
Progress of Endogenous and Exogenous Nanoparticles for Cancer Therapy and Diagnostics
The focus of this brief review is to describe the application of nanoparticles, including endogenous nanoparticles (e.g., extracellular vesicles, EVs, and virus capsids) and exogenous nanoparticles (e.g., organic and inorganic materials), in cancer therapy and diagnostics. In this review, we mainly focus on EVs; a recent study demonstrated that EVs secreted from cancer cells are associated with malignant alterations in cancer. EVs are expected to be used for cancer diagnostics through analysis of their informative cargo. Exogenous nanoparticles are also used in cancer diagnostics as imaging probes because they can be easily functionalized. Nanoparticles are promising platforms for drug delivery system (DDS) development and have recently been actively studied. In this review, we introduce nanoparticles as a powerful tool in the field of cancer therapy and diagnostics and discuss current issues and future prospects.
Introduction
Cancer is a disorder that is hard to cure because it is essentially a rebellion of the body's own cells, making it difficult to target only cancer cells in therapy. The basic approach against cancer is early detection, followed by chemotherapy and/or physical removal of the tumor if possible. Physical removal can be achieved by surgical operation or radiation therapy; both of these methods are highly invasive and require information about the tumor location. Chemotherapy is a less invasive therapy; however, it is only effective against certain types of tumors. In addition, chemotherapy has non-negligible risks of adverse effects [1,2]. Most anticancer drugs are designed to kill cancer cells that exhibit high proliferation, and this characteristic causes adverse effects in healthy proliferative cells. Immunotherapy is another therapeutic method that has already been applied against cancer; however, like chemotherapy, it is only effective against certain types of cancer [3]. This may be attributed to the fact that cancer cell survival is related to immunosuppression, with regulatory T cells protecting cancer cells from the immune system. To achieve effective and minimally invasive cancer therapy, early diagnostics and high-accuracy imaging technologies are essential; treatment must be initiated as early as possible, and the tumor location and temporal size changes should be monitored to analyze the therapeutic effect. However, current analysis technologies, such as fecal examination and projection radiography, cannot detect tumors until they reach a certain size. Thus, when a tumor is discovered, surgery is often the only option. These hurdles hinder the development of minimally invasive cancer therapies. To overcome them, the use of nanoparticles is expected to be helpful. Figure 1 illustrates the technologies introduced in this review. The history of nanoparticle research and development is long, and the application range of nanoparticles covers not only the medical field but also industrial fields and the space industry. In the field of cancer diagnostics and therapy, nanometer-sized particles called extracellular vesicles (EVs), such as exosomes, have attracted attention because they have been reported to be associated with the malignant transformation of cancer [4,5]. Nanoparticles formed in vivo, such as EVs, are called "endogenous nanoparticles," and their functions have been reported to be related to various diseases, homeostasis, and cancer. On the other hand, classical nanoparticles, which are formed by chemical synthesis from organic and/or inorganic materials, are called "exogenous nanoparticles." Highly functional exogenous nanoparticles have been reported as synthesis precision and analysis technologies have improved, and these nanoparticles have been applied in various fields [6,7]. In this review, we mainly focus on EVs and summarize the research results in cancer diagnostics and therapies. A common issue in the application of endogenous and exogenous nanoparticles is the delivery of the formed nanoparticles to the target location. For example, in terms of adverse effects, it is important to reduce the agent dose as much as possible while increasing drug delivery to the tumor. Research approaches using nanoparticles in these delivery technologies are also discussed. Finally, we discuss future prospects and issues in cancer diagnostics and therapies using nanoparticles.
Types of Extracellular Vesicles
EVs, also known as microparticles or lipid vesicles, are particles with a lipid bilayer or multilayer structure that are secreted from cells. All living cells secrete EVs, which are generally categorized into several groups based on their origin, such as exosomes, microvesicles, and apoptotic bodies (Figure 2, Table 1) [8]. EVs carry various types of cargo, such as proteins, deoxyribonucleic acid (DNA), ribonucleic acid (RNA), lipids, and metabolites [9], and they are decorated by surface molecules, which are crucial for the targeting of recipient cells. EVs may be important for intercellular communication and can modify the state of recipient cells with their cargo or surface molecules. Because various types of EVs are secreted from the same cell, they are heterogeneous in size, composition, and origin [10,11]. Exosomes are the most studied EVs and originate from endosomes; early endosomes develop into late endosomes and form multivesicular bodies containing numerous luminal vesicles. Microvesicles are secreted from cells through budding of the plasma membrane. Apoptotic bodies, with diameters ranging from 50 nm to a few micrometers, are generated through the disassembly of cells during apoptosis. These EVs are currently gaining attention in various fields of science owing to their growing significance in diagnosis and therapy. The liquid biopsy of EVs in body fluids is an emerging diagnostic tool. Apart from circulating tumor cells in the bloodstream, EVs are also targets for liquid biopsy [12]. Because EVs are present in all body fluids and carry nucleic acids, a minimally invasive diagnosis is possible. Monitoring tumor progression or deciding on optimal care is possible by investigating DNA errors derived from tumor cells in EVs. In addition to tumor-derived DNA, surface or luminal proteins or microRNA (miRNA) cargo in EVs may reflect tumor progression and thus may be useful biomarkers for liquid biopsy. Exosomes and microvesicles can be targets for liquid biopsy, and although their origin is different, their composition is similar. EVs secreted by cancer cells are thought to play a role in tumor formation, transformation, and metastasis. Recent advances in liquid biopsy are discussed in detail in the following sections.
Therapeutic Use of EVs
The therapeutic use of EVs has been proposed because they can modify the state of target cells. In particular, EVs derived from immune cells can prime early T cells, differentiate mature T cells, and support the development of effector functions, such as antigen presentation and the activation of immune cells [13]. These discoveries have drawn attention to the functions of EVs in immunity, and immunotherapies exploiting these functions are anticipated. Wolfers et al. showed that EVs derived from cancer cells possess cancer antigens and that these EVs could induce a cancer antigen-specific immune response and anticancer efficacy [14]. This research raised expectations for cancer vaccine therapy based on antigen presentation [15]. Vaccine therapy using cancer cell-derived EVs has an advantage over conventional immunotherapy: it does not require the identification of cancer antigens. However, although vaccine therapy using cancer-derived EVs has shown anticancer efficacy, the therapeutic effect is still insufficient; therefore, many researchers are striving to elucidate the mechanisms by which EVs act on human immunity [13]. Immunotherapy using EVs with anti-inflammatory properties is another type of immunotherapy that differs from vaccine therapy. EVs derived from mesenchymal stem cells (MSCs) are attractive for therapeutic use because of their low immunogenicity and ability to enhance injury recovery [16,17]. MSC-derived EVs have been reported to possess antioxidant, anti-inflammatory, and anti-apoptotic properties [18,19]. Furthermore, recent studies have shown that they protect cardiomyocytes from ischemia. Thus, EVs can function as drugs.
In addition to using EVs as drugs, EVs may deliver cargo into target cells, and the use of EVs as drug delivery system (DDS) carriers is gaining interest in the medical field. When using EVs as therapeutic DDS carriers, it is necessary to load the drug to be delivered into the vesicles [20,21]. This can be achieved by preloading the therapeutic nucleic acid into the cells producing the vesicles or by introducing the drug into purified EVs. In the former case, although the mechanism by which nucleic acids and other inclusions are incorporated into exosomes has not been fully elucidated, various methods have been developed [22,23]. In the latter case, where cargo is encapsulated into purified EVs, there is still a significant barrier to achieving efficient drug encapsulation [24]. Hydrophobic drugs, such as anticancer drugs, can be loaded passively through hydrophobic interactions with the lipid bilayer [25]. However, owing to the hydrophobic lipid bilayer, hydrophilic drugs such as nucleic acids require a technique to permeate the membrane. Electroporation and sonication have been used to create pores in EV membranes [26,27]. However, it has been noted that excessive physical stimulation may induce the aggregation of EVs, thereby altering their morphological characteristics, and that changes in the surface potential of the membrane may increase cytotoxicity. Various attempts have been made to increase drug-loading efficacy, such as the use of mesoporous materials, the use of acoustofluidic devices, or the dimerization of drugs [28][29][30].
For the delivery of EV cargo into target cells, especially when the drug of interest is encapsulated inside EVs and is unable to penetrate the lipid bilayer by itself, the drug must be released from the inside of EVs to exert its biological function after cell entry.
Since EVs are mostly internalized by endocytosis, membrane fusion between the endocytic membrane and the EV membrane must take place. Whether EVs exhibit membrane fusion activity is a subject of considerable debate [31][32][33][34]. Several reports have demonstrated that EV-mediated cargo delivery is an inefficient process in which less than 0.2% of recipient cells functionally receive the RNA cargo from EVs in vitro [35], while engineering EVs with virus-derived fusogenic proteins significantly enhances cargo delivery [36,37]. These studies strongly suggest that engineering EVs to enhance cytoplasmic delivery is the most critical issue for the DDS application of EVs. Because exogenous nanoparticles can be promising DDS carriers for cancer therapeutics, various nanoparticles loaded with anticancer drugs have been developed. Drugs such as paclitaxel, doxorubicin, and vincristine have been loaded into liposomes and used clinically as anticancer drugs [38,39]. Beyond anticancer drugs, reagents that generate heat under external energy input can be incorporated into nanoparticles and used for hyperthermia therapy. Magnetite is a colloid of Fe3O4 iron oxide that can be used as a contrast agent for MRI; magnetite generates heat when stimulated with alternating magnetic fields [40]. Agents for photothermal therapy, such as metal nanoparticles or polymers, can also be delivered to tumor sites by nanoparticles [41]. Various nanoparticle formulations, such as liposomal daunorubicin and doxorubicin, have been in use since the 1990s [42], and efforts have since been made to widen the variety of drugs and increase the efficacy of these medicines [43]. The global market for nanomedicine is growing rapidly, with an estimated business of USD 293.1 billion in 2022 [44,45].
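The heating performance of magnetic nanoparticles for hyperthermia is commonly summarized by the specific absorption rate (SAR), estimated calorimetrically from the initial heating slope of a suspension under an alternating field as SAR = C × (dT/dt) / m_Fe. A minimal sketch of that calculation follows; the heating slope, sample mass, and iron content are invented example numbers, not data from reference [40].

```python
# Back-of-envelope calorimetric SAR estimate for magnetic hyperthermia,
# SAR = C * (dT/dt) / m_Fe. All input numbers are illustrative assumptions.
C_WATER = 4.18          # J/(g*K), heat capacity of a dilute aqueous suspension

def sar_w_per_g_fe(sample_mass_g, fe_mass_g, dT_dt_k_per_s):
    """Initial-slope calorimetric SAR, normalized to the iron mass."""
    return C_WATER * sample_mass_g * dT_dt_k_per_s / fe_mass_g

sar = sar_w_per_g_fe(sample_mass_g=1.0, fe_mass_g=0.005, dT_dt_k_per_s=0.03)
print(f"Estimated SAR: {sar:.0f} W/g Fe")  # ~25 W/g under these assumptions
```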
Biofabrication of EVs
As all cells secrete EVs, cultured cells are the best candidates for fabricating EVs for therapeutic use. In general, the medium used for mammalian cell culture contains serum, which includes animal-derived EVs; thus, the use of a defined medium without a serum component is indispensable [46]. The source cells of EVs are chosen according to the intended therapeutic use, but even EVs fabricated from the same cell type are heterogeneous in physical properties, such as size, density, and shape, as well as in the composition of cargo and surface molecules [11,47,48]. These heterogeneities may affect the efficacy of EVs when applied as therapeutic drugs and their properties as carriers when applied in DDSs. EVs can be isolated using various methods, such as differential centrifugation, density gradient centrifugation, ultrafiltration, affinity chromatography, precipitation, and size exclusion chromatography [49,50]. The complete separation of EVs from other biosubstances of similar size, such as lipoproteins and protein aggregates, is currently difficult. Various EV separation kits are available, which have been reported to have higher separation efficiency than conventional ultracentrifugation methods [51,52]. Size exclusion chromatography has also been used to separate exosomes from protein aggregates, reportedly with less contamination than other methods [53]. The use of a microfluidic device or tangential-flow filtration may further reduce contamination.
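Among the isolation methods listed above, differential ultracentrifugation runs are often planned with the rotor's clearing factor (k-factor), where the pelleting time follows t = k/s. The sketch below applies this standard relation; the rotor radii, speed, and the order-of-magnitude sedimentation coefficient assumed for exosomes are illustrative values, not a validated protocol.

```python
# Rough pelleting-time estimate for differential ultracentrifugation using
# the standard clearing-factor relation t = k / s. Rotor geometry and the
# exosome sedimentation coefficient are illustrative assumptions.
import math

def k_factor(rpm, r_min_cm, r_max_cm):
    """Clearing factor: k = 2.53e11 * ln(r_max / r_min) / rpm^2."""
    return 2.53e11 * math.log(r_max_cm / r_min_cm) / rpm ** 2

rpm = 100_000                                    # typical exosome pelleting speed
k = k_factor(rpm, r_min_cm=4.0, r_max_cm=8.0)    # assumed rotor radii
s_exosome = 100.0                                # Svedberg units, assumed order of magnitude
t_hours = k / s_exosome
print(f"k-factor ~ {k:.1f}; estimated pelleting time ~ {t_hours:.2f} h")
```

In practice, longer spins are used because EV sedimentation coefficients vary widely and pelleting is rarely complete, which is one reason the kit- and chromatography-based methods above are attractive.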
Both biochemical and physical properties are used to evaluate the quality of EVs. Biochemical analysis uses proteomic, genomic, and lipidomic approaches [54]. Western blot analysis is a common biochemical approach for bulk EV samples, by which EV-related proteins, such as tetraspanins (CD9 and CD63), have been confirmed [55]. The morphological characteristics of EVs can be evaluated using electron or atomic force microscopy [56,57]. Conventional optical imaging is difficult because EVs are often smaller than the wavelength of visible light; however, fluorescence microscopy of labeled EVs enables visualization of their presence [58,59]. The size and surface charge of EVs can be estimated using methods such as dynamic light scattering or nanoparticle tracking analysis and zeta potential analysis, respectively [60][61][62]. No single method can fully quantify the characteristics of EVs; therefore, a combination of approaches is required.
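For the sizing methods just mentioned, dynamic light scattering and nanoparticle tracking analysis both measure a diffusion coefficient, which is converted to a hydrodynamic diameter through the Stokes-Einstein relation d = kB·T / (3πηD). A minimal sketch of that conversion, assuming water at 25 °C and an example diffusion coefficient rather than a real measurement:

```python
# Stokes-Einstein sizing as used conceptually in DLS/NTA: d = kB*T/(3*pi*eta*D).
# The diffusion coefficient below is an assumed example, not measured data.
import math

KB = 1.380649e-23        # Boltzmann constant, J/K

def hydrodynamic_diameter(D_m2_s, temp_k=298.15, eta_pa_s=0.00089):
    """Hydrodynamic diameter in meters (water at 25 C assumed by default)."""
    return KB * temp_k / (3.0 * math.pi * eta_pa_s * D_m2_s)

D_example = 4.4e-12      # m^2/s, assumed for a ~100 nm vesicle
d = hydrodynamic_diameter(D_example)
print(f"Estimated hydrodynamic diameter: {d * 1e9:.0f} nm")  # ~110 nm
```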
EV-Based Liquid Biopsy for Cancer Diagnostics
As mentioned in the previous section, EVs also play important roles in the intercellular communication involved in tumor development and are expected to be a promising source of biomarkers for biofluid (e.g., blood and urine)-based cancer diagnostics, called liquid biopsy [63]. For instance, the surface proteins of exosomes reflect the origin and alterations of the parent cells and are therefore expected to be cancer biomarkers. Examples include prostate cancer antigen 3 [64] and survivin for pancreatic cancer [65], CD24 for breast cancer [66] and ovarian cancer [67], and CD9 and CD147 for colorectal cancer [68]. Nucleic acids contained in exosomes, such as miRNAs, messenger RNAs (mRNAs), and long noncoding RNAs, are also promising biomarkers for cancer diagnostics [69,70]. Among these, exosomal miRNAs have been intensively investigated because of their relatively high stability against enzymatic degradation [71]. Exosomal miR-23b-3p, miR-10b-5p, and miR-21-5p have been reported as prognostic biomarkers for non-small cell lung cancer [72], whereas miR-141 has been reported as a biomarker for prostate cancer [73]. Various other examples can be found in previous reviews [74].
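Exosomal miRNA levels measured by qRT-PCR are typically reported as fold changes via the standard 2^(-ΔΔCt) (Livak) calculation. The sketch below shows only that final arithmetic step; the Ct values, and the pairing of miR-21-5p with a reference gene, are hypothetical examples, not data from the studies cited above.

```python
# Worked example of the standard 2^(-DDCt) relative quantification used with
# qRT-PCR miRNA panels. All Ct values are invented for illustration.
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Livak method: normalize target to reference, then case to control."""
    d_ct_case = ct_target_case - ct_ref_case
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_case - d_ct_ctrl)

# Hypothetical cycle thresholds (patient plasma vs healthy control):
fc = fold_change(ct_target_case=24.1, ct_ref_case=20.0,
                 ct_target_ctrl=26.5, ct_ref_ctrl=20.2)
print(f"miR-21-5p fold change vs control: {fc:.1f}x")  # ~4.6x here
```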
Due to the heterogeneity of samples, as well as the small quantity of target analytes, sensitive and selective analysis of EVs in a facile and inexpensive way has been a challenge in the field of diagnostics. In addition to widely used methods, such as enzyme-linked immunosorbent assay for protein markers as well as quantitative real-time polymerase chain reaction (qRT-PCR) and next-generation sequencing for nucleic acid markers, highly sensitive and selective EV analysis methods using nanoparticles have recently been investigated [75,76]. Nanoparticles are known to exhibit specific functions derived from their nanometer size, which is an intermediate between atomic and bulk scales. For instance, semiconductor nanoparticles, which are often called quantum dots (QDs), exhibit fluorescence derived from the quantum-confinement effect, whereas metal nanoparticles show strong optical absorbance at specific wavelengths owing to localized surface plasmon resonance (LSPR). The following sections introduce the use of these synthetic, functional nanoparticles for EV analysis (Figure 3).
Use of Synthetic Nanoparticles for Analyzing Exosomal Surface Proteins
Antibody surface modification is the main strategy for targeting synthetic nanoparticles to cancer-related exosomal surface proteins. Aptamers, which are short sequences of artificial DNA or RNA that bind a specific target molecule, have been another choice owing to their smaller size and lower cost compared with those of antibodies. Binding of synthetic, functional nanoparticles to target exosomal surface proteins through these targeting moieties enables their effective capture and sensitive detection via nano size-derived functions.
For example, LSPR-based detection of exosomal surface proteins utilizes the strong optical absorbance of plasmonic nanoparticles at a specific wavelength; aggregation shifts this absorbance to longer wavelengths, changing the solution color from red toward blue. Jiang et al. non-covalently coated gold nanoparticles (AuNPs) with a panel of aptamers that specifically bind exosomal surface proteins, i.e., CD63, EpCAM, PDGF, PSMA, and PTK7 [77]. After the aptamer-coated AuNPs were mixed with target exosomes in a high-salt solution, binding of the aptamers to the target exosomal surface proteins induced aptamer displacement from the AuNP surface, leading to aggregation of the AuNPs and a shift in the LSPR-derived absorbance. By constructing an aptamer-coated AuNP-based detection panel for the above five cancer protein markers, the authors demonstrated surface molecular profiling of exosomes isolated from various cancer cell lines.
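Colorimetric AuNP aggregation assays of this kind are commonly read out as the ratio of absorbance at the aggregate band to that at the dispersed LSPR band, often around 650 nm and 520 nm for AuNPs. A minimal sketch of that readout follows; the wavelengths, absorbance values, and decision cutoff are illustrative assumptions, not values from reference [77].

```python
# Sketch of the common A650/A520 readout for AuNP aggregation assays.
# Absorbance values and the 0.4 cutoff are invented for illustration.
def aggregation_index(a650, a520):
    """Higher A650/A520 indicates red-shifted, aggregated AuNPs."""
    return a650 / a520

samples = {"buffer blank": (0.08, 0.95),
           "exosomes, marker present": (0.52, 0.61),
           "exosomes, marker absent": (0.10, 0.90)}
for name, (a650, a520) in samples.items():
    idx = aggregation_index(a650, a520)
    call = "aggregated" if idx > 0.4 else "dispersed"   # assumed cutoff
    print(f"{name:28s} A650/A520 = {idx:.2f} -> {call}")
```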
When metal nanoparticles bind to molecules, Raman scattering from the molecules is significantly enhanced, by up to 10^10-10^11 times, enabling their sensitive detection at the single-molecule level. This phenomenon is called surface-enhanced Raman scattering (SERS) and has also been used for exosomal surface protein detection. Li et al. developed Au-core/Ag-shell nanoparticles (Au@Ag NPs) modified with antibodies against the exosomal cancer protein marker migration inhibitory factor as SERS tags [78]. Exosomes were first captured on an antibody-immobilized substrate and then labelled with Au@Ag NPs, resulting in the specific detection of pancreatic cancer-derived exosomes via SERS with a detection limit of ca. 9.0 × 10^-19 M. The developed SERS assay enabled the classification of pancreatic cancer patients and healthy individuals, metastasized tumors and metastasis-free tumors, and tumor node metastasis P1-2 stages and the P3 stage.
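The enhancement quoted above is conventionally estimated with the enhancement-factor definition EF = (I_SERS / N_SERS) / (I_ref / N_ref), comparing per-molecule SERS intensity with normal Raman intensity. A back-of-envelope sketch with invented intensities and molecule counts:

```python
# Conventional SERS enhancement-factor estimate,
# EF = (I_SERS / N_SERS) / (I_ref / N_ref). Example numbers are invented.
def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    """Per-molecule SERS signal relative to normal Raman."""
    return (i_sers / n_sers) / (i_ref / n_ref)

ef = enhancement_factor(i_sers=5.0e4, n_sers=1.0e3,   # few molecules at hot spots
                        i_ref=2.0e3, n_ref=4.0e11)    # bulk reference measurement
print(f"Estimated enhancement factor: {ef:.1e}")      # ~1e10 here
```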
In addition to LSPR and SERS, other nano size-derived functions of synthetic nanoparticles have also been utilized for the analysis of exosomal surface proteins [79][80][81][82]. For example, the magnetic property of superparamagnetic iron oxide nanoparticles has been used for the capture and isolation of exosomes with specific target surface proteins [82], whereas the peroxidase-mimetic activity of iron oxide nanoparticles has been used for the detection and analysis of exosomal surface proteins [83].
Use of Synthetic Nanoparticles for Analyzing Exosomal miRNAs
While exosomal protein markers are mainly expressed on the cell surface, exosomal nucleic acid markers, including miRNAs, are encapsulated inside exosomes. Analysis of exosomal miRNAs usually requires RNA extraction, followed by amplification and detection of target miRNAs using the qRT-PCR method [84]. Recent studies have also examined the direct detection of miRNAs in a single exosome, which requires membrane fusion or membrane penetration of the detection probes together with their detection using a single vesicle imaging system [85,86]. To target these exosomal miRNAs using synthetic nanoparticles, the main strategy is to modify their surfaces with single-stranded DNAs (ssDNAs) which have a complementary sequence to the target miRNAs. The ssDNA-modified nanoparticle surface can bind to target miRNAs via double helix formation, resulting in sequence-specific targeting.
AuNPs exhibit strong quenching of a wide range of fluorophores via Förster resonance energy transfer (FRET) when the fluorophores are close to the particle surface. Together with their large surface area for loading signal amplification agents, AuNPs have been used as a platform for the fluorescent detection of biomolecules, including exosomal miRNAs [87]. Zhai et al. reported an Au nanoflare-based fluorescent probe for detection of the exosomal breast cancer miRNA marker miR-1246 [88]. They prepared ssDNA-modified AuNPs and hybridized them with Cy3-modified ssDNA that was partially complementary to the ssDNA on the AuNP surface. In this initial state, because the Cy3 is close to the AuNP surface, its fluorescence is quenched by FRET. After application to a plasma sample, the Au nanoflare probe penetrates the exosomal membrane to target the internal miRNAs. Because the ssDNA on the AuNP surface was designed to be complementary to miR-1246, exosomal miR-1246 could bind to the ssDNA on the AuNP surface via toehold-mediated strand displacement, releasing the Cy3-modified ssDNA from the AuNP surface and activating the fluorescent signal. By measuring plasma miR-1246 levels using this probe, successful identification of breast cancer patients from healthy controls was demonstrated.
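The sharp distance dependence that makes such nanoflares switch between dark and bright states can be illustrated with the generic Förster expression E = 1 / (1 + (r/R0)^6). For AuNP quenching specifically, a surface-energy-transfer model is often more accurate, so the Förster radius and distances below are illustrative values only.

```python
# Distance dependence of Förster-type quenching, E = 1 / (1 + (r/R0)^6).
# R0 and the separations are generic illustrative values, not fitted data.
def fret_efficiency(r_nm, r0_nm=6.0):
    """Energy-transfer (quenching) efficiency at donor-quencher distance r."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (2.0, 6.0, 12.0, 20.0):   # donor-quencher separations, nm
    print(f"r = {r:4.1f} nm -> quenching efficiency {fret_efficiency(r):.3f}")
```

At small separations the efficiency approaches 1 (fluorophore dark on the particle), and it collapses within a few Förster radii, which is the qualitative basis of the on/off switching described above.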
In addition to direct labelling, amplification of fluorescent signals is a promising strategy for sensitive miRNA detection. Degliangeli et al. reported an AuNP-based fluorescent amplification and detection platform for the cancer-related miRNA markers miR-21 and miR-203 [89]. AuNPs were coated with fluorescein mercuric acetate (FMA)-modified ssDNA, which could bind to target miRNAs via duplex formation. On adding the ssDNA-modified AuNPs to samples with duplex-specific nuclease (DSN), target miRNAs could bind to the ssDNA on the AuNP surface via duplex formation. Subsequently, activation of the FMA fluorescent signal was induced by DSN-mediated degradation of the formed duplex. As the target miRNAs are released after DSN-mediated duplex cleavage, they can be reused for further fluorescent activation cycles, resulting in the amplified fluorescent detection of miR-21 and miR-203. A similar amplification strategy with an enzyme-free catalytic DNA reaction, in which QDs were used as the fluorescent agent instead of small molecular fluorophores, was also reported [90].
In addition to the above strategies, SERS and electric signals have been used for synthetic nanoparticle-based miRNA detection [91,92]. Further challenges in this field include the development of a detection platform for multiple miRNA panels. Recently, cancer diagnostics using several tens of miRNAs (typically around 20 types) as a panel has been recognized as a promising approach [93,94]. The development of novel synthetic nanoparticle-based platforms that can achieve facile, sensitive, and multiple detections of exosomal miRNA markers is expected to further accelerate the future clinical translation of this miRNA panel-based diagnostic approach.
Exogenous Nanoparticle-Based In Vivo Diagnostic Imaging
As mentioned above, EVs are prospective targets for cancer therapy and diagnostics. While the liquid biopsy-based analysis of EVs provides information on the origin and characteristics of a cancer, the size and location of the tumor are also important information for cancer therapy and diagnostics. In particular, for non-invasive cancer therapy methods, such as anticancer drugs and radiation, this monitoring information is important for evaluating the treatment effect. This information is also essential for the efficiency and safety of procedures in the surgical therapeutic field. To achieve accurate monitoring of tumor size and location, exogenous, synthetic nanoparticles can play important roles. In this section, we introduce synthetic nanoparticle-based in vivo imaging technologies for cancer therapy and diagnostics.
To visualize formed tumors, molecular imaging has attracted much attention as a powerful technology for understanding cancer biological phenomena and medical applications as non-invasive diagnostic techniques. In particular, nanoparticulated imaging agents have been explored as versatile probes for molecular imaging because they have characteristic functions derived from their size and can be modified with ligands that are well-suited to target specific biomolecules. Numerous functionalized nanoparticles have been developed and proposed as imaging agents for various imaging modalities such as magnetic resonance imaging (MRI), X-ray computed tomography (CT), positron emission tomography (PET), and fluorescent imaging. Regardless of the modality, nanoparticulated imaging agents have demonstrated outstanding performance enabled by fine-tuning the nanoparticle size, surface properties, composition, and other characteristics.
MRI, CT, and PET imaging technologies have been widely used as powerful tools for noninvasive diagnostic imaging because of their excellent penetration depths. Paramagnetic gadolinium (Gd) complexes have commonly been explored as MRI contrast agents, mainly for vascular visualization and brain tumor detection. Nanoparticulated Gd contrast agents improve the circulation time and target specificity; however, their relatively low relaxivity and high dose requirement are potential issues. Superparamagnetic iron oxide nanoparticles are expected to improve contrast efficacy compared with conventional Gd-based nanoparticles, and this can be further tuned by adjusting the size and composition of the nanoparticles [95]. Their use can also help avoid the potential side effects caused by ionized Gd and the high requisite doses of Gd-based contrast agents. Recently, iron oxide-based superparamagnetic nanoparticles such as "Feridex" and "Resovist" have been approved by the Food and Drug Administration in the United States for liver cancer detection; however, a higher spatial resolution is required for a more accurate diagnosis. Furthermore, PET has emerged as a clinical imaging modality because of its advantages, including excellent penetration depth and quantitative capability. PET imaging via radiolabeled nanoparticles has improved PET contrast efficiency and accelerated the quantitative evaluation of drug delivery to tumors because of the reduced risk of radioisotope detachment [96].
Fluorescence imaging is also a powerful modality for molecular imaging because of its high spatial and temporal resolution. Although many fluorescence imaging techniques have been proposed as advanced molecular imaging modalities with higher spatial resolution, these technologies face detection limits due to the restrained brightness and inevitable photobleaching of small fluorescent molecules. To overcome these limitations, many studies have focused on developing nanoparticle-based fluorescent probes. Quantum dots (QDs) consisting of semiconductor nanocrystals have been reported as effective imaging probes for visualizing cellular membrane proteins and intracellular components [97]. Although their brightness and long-term stability make QDs candidates for further applications, such as 3D confocal imaging and in vivo targeted real-time imaging, cytotoxicity due to the inherently toxic heavy metals in their cores (e.g., cadmium and selenium) has been a controversial issue. Therefore, QDs composed of less toxic semiconducting nanocrystals (e.g., indium phosphide and silicon) have attracted considerable attention as bright and biocompatible probes for fluorescence bioimaging [98]. In addition to low-toxicity QDs, polymer dots based on conjugated semiconducting polymers have been investigated as alternative fluorescent probes that avoid the cytotoxicity caused by ionized heavy metals [99].
These imaging modalities have various advantages and disadvantages. For instance, MRI, CT, and PET have high penetration depths, but their spatial resolution is limited to the millimeter scale. In contrast, fluorescence imaging has high spatial resolution at the subcellular scale, but its penetration depth is limited to a few centimeters. To benefit from each advantage, multimodal imaging has generated considerable research interest as a means of increasing diagnostic accuracy using complementary information from different imaging modalities [100]. The fabrication of nanoparticulated imaging probes allows them to exhibit multifunctional characteristics; therefore, the strength of each imaging modality can be integrated into a multimodal nanoparticle. For instance, 64Cu-labelled and RGD (Arg-Gly-Asp) peptide-conjugated iron oxide nanoparticles were developed as PET/MR dual-modality imaging probes for tumor integrin expression, whereas silica nanoparticles loaded with the near-infrared fluorescent (NIRF) dye ZW800 and labelled with Gd ions and 64Cu were developed as a PET/MR/optical imaging probe for tumor-draining sentinel lymph nodes [100]. In addition to diagnostic imaging, these multifunctional nanoparticles can also serve to monitor the therapeutic efficacy of cancer treatment (theranostics) [101]. For example, croconaine dye-based nanoparticles were developed for photoacoustic/fluorescent imaging-guided photothermal therapy [102], while SPIO- and olaparib-loaded exosomes extracted from hypoxic cells were investigated for magnetic particle imaging and therapy of the hypoxic region in tumors [103]. Furthermore, multimodal nanoparticles have also improved the accuracy and quality of in vivo cellular tracking, contributing greatly to animal cell-based diagnostics and therapy. For instance, Huang et al. developed a mesenchymal stem cell (MSC)-based multifunctional theranostic platform for the targeted delivery of MSCs to glioblastoma and multimodal imaging, using hyaluronic acid-coated mesoporous silica nanoparticles carrying a green fluorescent dye (FITC), an NIRF dye (ZW800), Gd3+, and 64Cu [104]. In vivo multimodal imaging with optical imaging, magnetic resonance imaging, and PET successfully revealed the feasibility of tumor tropism-facilitated delivery of this multifunctional MSC platform with improved tumor accumulation. Nanoparticle-based multimodal imaging improves the properties of MSC-based and immune cell-based theranostics for cancer. A dual-modal PET/NIRF nanoparticle-based imaging probe for labelling chimeric antigen receptor (CAR) T cells achieved long-term whole-body immune cell tracking in a mouse model of carcinomatosis [105]. This type of nanoparticle-based multimodal cellular tracking technique is crucial for advancing cell-based therapy, enabling investigation of the fate of administered cells and the therapeutic effect, and such systems are expected to lead to the next generation of theranostics for future clinical applications. The cores of QDs often contain heavy metals such as selenium, cadmium, or lead, which are toxic to the human body. To reduce toxicity, these metal cores are covered with non-toxic polymers; however, contamination during the manufacturing process or leakage from defective QDs cannot be ignored. The Gd used for MRI is also toxic and bioaccumulative and thus must be used with a chelating compound to reduce toxicity and ensure rapid elimination from the body.
Nanoparticle-Based Biological Drug Delivery System (DDS)
As previously mentioned, various nanoparticles have been developed and applied in cancer therapy, diagnostics, and imaging. As therapeutic methods using biomolecules (e.g., artificial recombinant proteins and nucleic acids) are anticipated in the field of cancer therapy, developing innovative technology to deliver these materials to a specific place in the body (DDS technology) is essential. DDS is a key technology for improving the therapeutic effect of drugs by optimizing the physicochemical properties of active pharmaceutical ingredients (APIs), improving pharmacokinetics, and targeting specific cells [106]. Compared with conventional small-molecule drugs, the bioavailability and efficacy of biopharmaceuticals, such as recombinant proteins manufactured in animal cells, inherently rely on DDS technology for several reasons. First, the molecular size of therapeutic biomolecules is considerably larger than that of small molecules; hence, biomolecules rarely penetrate the cell membrane and reach the inside of the cells where the drugs function. Thus, the intracellular delivery of biopharmaceuticals is essential for achieving the therapeutic effect. Second, some biopharmaceuticals are prone to enzymatic degradation in the body. For example, therapeutic nucleic acids, especially RNA, are easily degraded by the abundant nucleases in the body. These biomolecule-derived APIs should be protected by extensive chemical modification or by delivery vesicles [107]. Therefore, DDS technology is highly desirable for material protection and targeting; hence, the optimization of DDS is a pivotal step in the development of biopharmaceuticals.
DDS is mainly achieved using two methodologies: chemical and biological approaches. Chemical approaches involve the use of chemical substances or synthetic nanoparticles to deliver drugs. Pharmacokinetics can be improved via chemical conjugation of the targeting moiety to the APIs or attachment of the polymers (e.g., polyethylene glycol) to APIs.
Synthetic nanoparticles, such as liposomes, inorganic materials, and polymer materials, have been used to encapsulate APIs for delivery. In contrast, biological approaches use biomolecules or biological vectors to achieve the efficient delivery of APIs. Biomolecules and biological vectors are nanometer-sized particles, similar to other synthetic nanoparticles. Nanoparticles such as viral vectors are powerful tools for delivering the genetic information of therapeutic molecules in the form of DNA or RNA. Currently, adeno-associated virus (AAV) vectors are used for gene therapy, adenovirus vectors are used as prophylactic vaccines against infectious diseases, and lentivirus or retrovirus vectors are used for ex vivo gene therapy, such as CAR-T cell therapy. Virus-like particles (VLPs) mimic the delivery mechanism of viral vectors; moreover, because they lack genetic material, they are safer than viral vectors and do not insert genes into target cells [108]. More recently, EVs, which are secreted by all cell types, have been widely studied as novel biological vectors for DDS [109].
It is well known that biotechnology and bioengineering are central disciplines when developing a biological DDS. Using technical knowledge from these disciplines, the above biological DDS could improve existing biologics and develop novel therapeutic modalities. Engineering of existing viral vectors is an emerging topic, and several studies have shown that in vivo tropism and functionality of viral vectors can be designed. Ogden et al. analyzed the fitness of the AAV vector using a mutant capsid library and found that certain mutations in the capsid protein affected the biodistribution of recombinant AAV vectors. This machine-guided design is a powerful tool for the identification of mutant AAV vectors with favorable properties for in vivo gene therapy [110]. Mihara et al. demonstrated that a targeting moiety, a macrocyclic peptide, can be inserted into the surface-exposed loop of the AAV capsid protein, changing the tropism of the AAV vector [111]. The in silico design of proteins may also be an attractive strategy for developing a new protein-based DDS [112]. These technological advancements in biological nanoparticle-based DDS have evolved with the changing cancer therapy approach.
In addition to functional modification, mass-scale production is a critical issue in the development of biological DDSs. Although production processes for biologics are well established, the production and purification of biological nanoparticles, such as viral vectors and EVs, remain challenging because of the complexity of the purification and scale-up processes [113]. Generally, the production yield of biological nanoparticles is substantially lower than that of conventional biologics (e.g., therapeutic immunoglobulins) because of their inherent properties and the lack of efficient production technologies. There is also an inherent trade-off between the yield and purity of the final product when producing biologics. Therefore, the development and optimization of an efficient production process is key to reducing the manufacturing costs of biological nanoparticles and achieving affordable therapeutics.
Nanoparticle-based biological DDSs, such as those using biomolecules and biological vectors, are a promising platform for the delivery of cancer therapeutics. Owing to recent fundamental discoveries in biology, such as gene-editing technologies, biological DDSs could deliver novel types of therapeutics to target sites in the body. Therefore, biotechnology-based biological DDS is the foundation for next-generation cancer therapeutics.
Future Prospects
This brief review summarizes the current approaches to cancer therapy and diagnostics using biological nanoparticles generated by cells and other organisms (endogenous nanoparticles) and artificial nanoparticles generated by chemical synthesis and other methods (exogenous nanoparticles). For endogenous nanoparticles, we introduced EVs and DDS technology using virus vectors, which have attracted particular attention in recent years for cancer therapy and diagnostics.
Recently, gene therapy for cancer treatment has been highly anticipated with the improvement of genome editing technologies such as CRISPR/Cas9. One treatment strategy using genome editing is to attack target cancer cells by returning genome-edited T cells to the body; this genome editing is performed in vitro. RNA interference technology has also attracted expectations for cancer treatment. This technology specifically suppresses gene expression with a short double-stranded RNA (siRNA) that degrades its sequence-specific target mRNA [114]. To enhance the treatment effect, siRNA should be delivered to targeted cancer cells because the inhibitory effect of RNA interference on gene expression is restricted to cells where siRNA is present.

The expectations for gene therapy peaked in the 2000s; however, they collapsed after unfortunate accidents, which might be attributed to DDS [115][116][117]. To avoid such accidents, DDSs need to be improved in terms of their function, and a general understanding of endogenous nanoparticle safety is needed. Many points remain unclear regarding endogenous nanoparticles, including their intracellular formation process, extracellular dynamics, and even their biological significance, which is an essential issue in the development of cancer diagnostic and therapeutic techniques based on these particles. Adverse effects, such as allergic reactions, have been reported in clinical trials with EVs, showing the need for further investigation [118]. Furthermore, because of their high manufacturing cost in exchange for safety and high medicinal effect, endogenous nanoparticles face a bottleneck for clinical application. On the other hand, the manufacturing cost of exogenous nanoparticles is low, and it is easy to add various functions to them. We introduced synthetic exogenous nanoparticles for imaging. In addition to conventional methods using gene-editing technologies and protein engineering, the modification and functionalization of endogenous nanoparticles using the manufacturing technologies of exogenous nanoparticles have recently been developed [119,120]. Moreover, research articles on endogenous-exogenous hybrid nanoparticles, constructed as complexes with nanocarriers or inorganic particles, have significantly increased [121,122].

As mentioned in this review, nanoparticles have great potential as medical materials to eradicate cancer. On the other hand, despite intensive research and success in treating tumors in mouse models, only a few non-targeted nanoparticle formulations, such as Abraxane and Doxil, have been clinically approved to date [123]. A recent review suggested that the average delivery efficiency of previously reported nanoparticles into solid tumors was 0.7% and did not change much over the past 10 years [124]. It has also been reported that, because the diffusion of nanoparticles in tumors is strongly hindered by the dense ECM network, most nanoparticles extravasated from blood vessels cannot reach the tumor core and stay near the blood vessel wall [125]. To overcome these issues for actual clinical translation, the tumor delivery strategy of nanoparticles must be revisited based on a deep understanding of their interaction with the biological environment.
In addition, applying DDS technology to diseases other than tumors has also attracted much attention: examples include antibody delivery into the brain for neurodegenerative diseases [126] and antisense oligonucleotide delivery for RNA interference in muscular dystrophy [127]. Through these investigations, future research in cancer therapy and diagnostics using functional nanoparticles manufactured by synthesis or biotechnology is likely to produce many novel insights, which could also lead to the development of therapeutics that control cell behaviors.
Conflicts of Interest:
The authors declare no conflict of interest.
"Biology"
] |
Interleaved Multi-Contact Peripheral Nerve Stimulation to Enhance Reproduction of Tactile Sensation: A Computational Modeling Study
Peripheral nerve stimulation (PNS) is an effective means to elicit sensation for rehabilitation of people with loss of a limb or limb function. While most current PNS paradigms deliver current through single electrode contacts to elicit each tactile percept, multi-contact extraneural electrodes offer the opportunity to deliver PNS with groups of contacts individually or simultaneously. Multi-contact PNS strategies could be advantageous in developing biomimetic PNS paradigms to recreate the natural neural activity during touch, because they may be able to selectively recruit multiple distinct neural populations. We used computational models and optimization approaches to develop a novel biomimetic PNS paradigm that uses interleaved multi-contact (IMC) PNS to approximate the critical neural coding properties underlying touch. The IMC paradigm combines field shaping, in which two contacts are active simultaneously, with pulse-by-pulse contact and parameter variations throughout the touch stimulus. We show in simulation that IMC PNS results in better neural code mimicry than single contact PNS created with the same optimization techniques, and that field steering via two-contact IMC PNS results in better neural code mimicry than one-contact IMC PNS. We also show that IMC PNS results in better neural code mimicry than existing PNS paradigms, including prior biomimetic PNS. Future clinical studies will determine if the IMC paradigm can improve the naturalness and usefulness of sensory feedback for those with neurological disorders.
I. INTRODUCTION
Tactile sensation allows a human to detect, discriminate, and identify external stimuli and respond to stimuli appropriately [1], [2]. Tactile sensation can be lost through disorders such as limb amputation, nerve injury, spinal cord injury, and stroke. Losing tactile sensation creates numerous functional consequences, including impaired performance in tasks that require focus or fine motor skills [3]. To mitigate these problems, researchers are developing neuroprostheses that use peripheral nerve stimulation (PNS) to restore tactile sensation. PNS of residual somatosensory nerves in the arm can elicit "artificial" touch sensations in the hand. Current PNS paradigms typically provide one or more sensory percepts on the hand, where input from a force sensor directly scales a PNS parameter, such as pulse amplitude (PA), pulse width (PW), or pulse frequency, to influence the perceived intensity of the evoked percept [4], [5], [6], [7]. However, many sensations produced by PNS are described as unnatural tingling or paresthesia, which can be bothersome to participants [8], [9].
One key issue with PNS that may contribute to these perceptual deficits is that the neural activity created by PNS largely does not resemble the complex, dynamic firing patterns in natural touch [10]. Recent work has begun to develop biomimetic PNS paradigms that attempt to mimic aspects of the biological neural code of natural touch. These biomimetic paradigms have demonstrated preliminary success in improving sensation naturalness and enhancing object detection reaction times, at least for some participants [11], [12]. However, clinical implementation of biomimetic paradigms can be challenging, as it can be difficult to select specific stimulation parameters to evoke important aspects of the natural neural code. Prior studies in both humans and non-human primates have shown that neuron type, neuron location, neural firing rate, and neural population size over time are four important aspects of the neural code underlying the perception of touch stimuli [13], [14], yet no existing biomimetic paradigm mimics all four of these coding properties [11], [12], [15], [16], [17], [18], [19], [20]. In addition, computational models of neural activation from electrical stimulation can help to predict neural activation patterns resulting from stimulation and allow us to systematically select stimulation parameters to achieve a given pattern of neural activity [21], [22], [23], [24].
We hypothesize that developing interleaved multi-contact (IMC) biomimetic PNS paradigms will enhance our ability to activate natural tactile codes and ultimately provide useful and intuitive sensory feedback to users of sensory neuroprostheses. We used TouchSim, a model that determines the response of mechanoreceptive afferents to mechanical stimuli applied to the hand [10], to identify the neural code to mimic with PNS (Fig. 1). We used an electrical activation model that predicts activation of peripheral afferents in response to nerve stimulation [42], in conjunction with non-gradient-based optimization algorithms, to select PNS stimulation parameters. We then used this tool chain to create several PNS paradigms constructed under different touch criteria, and used neural response reproduction accuracy as a metric to compare the performance between paradigms. We demonstrate that interleaved multi-contact PNS approaches improve neural response reproduction accuracy over current biomimetic paradigms. The paradigms presented here can be implemented in future clinical trials of PNS to determine how the sensations produced are perceived and utilized in sensorimotor tasks.
II. METHODS
Our goal was to reproduce neural responses to natural touch stimuli using PNS. We first used TouchSim to generate a set of neural responses to touch stimuli applied to the index finger that we aimed to replicate with PNS (Fig. 1a-b). We then coupled optimization with a model of neural activation from electrical stimulation applied through a multi-contact Composite Flat Interface Nerve Electrode (C-FINE) (Fig. 1c). This process resulted in a time series of parameters for C-FINE stimulation, called a playlist, that approximately reproduced the original neural responses (see Fig. 2 for an overview of our approach).
A. TouchSim Neural Activation Profiles (TNAPs)
TouchSim was used to generate neural responses to natural touch stimuli applied to the index finger [10]. Briefly, the TouchSim model places low-threshold mechanoreceptive afferents in the palmar surface of the hand based on typical human innervation densities, and then predicts how the neurons will fire in response to spatiotemporal mechanical stimuli presented to the skin. We used TouchSim to simulate 1704 afferents in the index finger: 562 slowly-adapting type I (SA), 947 rapidly-adapting type I (RA), and 195 rapidly-adapting type II (PC).
TouchSim data was generated for 1 s indentation stimuli on the index finger that consisted of symmetrical ramp up and ramp down periods, flanking a sustained period (Fig. 1a). Two data sets were generated: 1) stimuli with a fixed depth and a ramp duration that varied from 0.05 s to 0.5 s in increments of 0.05 s, and 2) stimuli with a fixed ramp duration and a depth that varied from 1 mm to 6 mm in increments of 0.5 mm. We chose this range of ramp durations to span the possible values of a 1 s stimulus with symmetrical ramp on and ramp off periods. We chose this range of depths based on the values over which TouchSim was validated [10]. In total, we simulated 20 touch stimuli: 11 with varying depths and 10 with varying ramp durations (one stimulus was shared between the two sets).

Fig. 2. Optimization-based biomimetic interleaved multi-contact (IMC) peripheral nerve stimulation for tactile sensation. Blue boxes: TouchSim generated neural activation patterns for a set of mechanical touch stimuli (indentations), which were then split into 1-ms touch neural activation profiles (TNAPs). The TouchSim reachable set is the collection of all unique TNAPs for all modeled touch stimuli. Green boxes: Particle swarm and pattern search optimization were used to select stimulation parameters to reproduce each TNAP. The optimization runs the electrical activation model to generate electrically-stimulated NAPs (ENAPs) from stimulation parameters, calculates the reproduction error, and iterates until a minimum reproduction error is reached. Purple boxes: A lookup table stores the association between each TNAP and the optimal stimulation parameters to approximate it. Lookup tables using a fixed contact for all TNAPs were created using each electrode contact (1C) (n=15), along with sets of two-contact (2C) pairs (n=21). Playlists were generated by ordering stimulation parameters from the lookup tables in series to create a sequence of ENAPs that approximated a given touch stimulus. Playlists were generated for each 1C and the 2C non-IMC paradigms for illustration purposes, though these were not used for further analyses. Orange boxes: Error was compared across non-IMC lookup table results, and the contact, or pair of contacts, resulting in the lowest error for a given TNAP was selected for the 1C and 2C IMC paradigms, respectively. Playlists were generated for the 1C and 2C IMC paradigms by ordering stimulation parameters in series to create a sequence of ENAPs that approximated a given touch stimulus. Rectangles represent process steps, and rounded rectangles represent outputs.
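As a concrete sketch of the stimulus set described above, the following Python snippet enumerates the 20 unique (depth, ramp) combinations. The fixed depth of 3 mm and fixed ramp of 0.25 s are our assumptions, chosen so that exactly one stimulus is shared between the two sets, consistent with the text.

```python
import numpy as np

# Hypothetical enumeration of the touch-stimulus grid described above.
# FIXED_DEPTH_MM and FIXED_RAMP_S are assumptions, not stated in the paper.
FIXED_DEPTH_MM = 3.0
FIXED_RAMP_S = 0.25

ramp_set = {(FIXED_DEPTH_MM, round(r, 2))
            for r in np.arange(0.05, 0.501, 0.05)}   # 10 stimuli
depth_set = {(round(d, 1), FIXED_RAMP_S)
             for d in np.arange(1.0, 6.01, 0.5)}     # 11 stimuli

stimuli = sorted(ramp_set | depth_set)
print(len(stimuli))  # 20: the (3.0 mm, 0.25 s) stimulus appears in both sets
```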
TouchSim output the spike times for each modeled neuron (Fig. 1b). The neural response to each stimulus was parsed into 1 ms bins, for a total of 1000 bins spanning the 1 s stimuli. Each bin contains a neural activation profile (NAP), or a row vector with binary values representing whether activation occurred for each of the 1704 modeled index finger neurons (Fig. 1d). Each NAP is considered to be independent from other NAPs, such that no dependencies exist between temporally adjacent NAPs. However, a series of NAPs can be ordered in a sequence called a playlist to reproduce the neural response to a specific stimulus over time.
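To make the NAP bookkeeping concrete, here is a small self-contained sketch (not the authors' code) that bins spike times into the binary 1-ms profiles described above; the function name and interface are illustrative.

```python
import numpy as np

def spikes_to_naps(spike_times, n_neurons=1704, duration_s=1.0, bin_s=0.001):
    """Bin spike times into binary neural activation profiles (NAPs).

    spike_times: list of arrays, one per neuron, of spike times in seconds.
    Returns an (n_bins, n_neurons) boolean array; row t is the NAP for bin t.
    The rows, taken in order, form the 'playlist' target for one stimulus.
    """
    n_bins = int(duration_s / bin_s)
    naps = np.zeros((n_bins, n_neurons), dtype=bool)
    for neuron, times in enumerate(spike_times):
        bins = np.floor(np.asarray(times) / bin_s).astype(int)
        bins = bins[(bins >= 0) & (bins < n_bins)]
        naps[bins, neuron] = True
    return naps
```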
The unique TouchSim-generated NAPs (TNAPs) among all 20 touch stimuli were found, creating the TouchSim reachable set, or the set of NAPs we must be able to approximate with PNS to recreate the set of touch stimuli (Fig. 2).
B. Electrical Neural Activation Profiles (ENAPs)
To determine the optimal stimulation parameters for PNS, our optimization algorithm used a biophysical electrical activation model to determine which neurons in the median nerve were activated from each PNS pulse. PNS was delivered to the median nerve through a 16-contact C-FINE (Fig. 1c). Of these contacts, one was a designated return path and the other 15 contacts were used for cathodic stimulation.
The electrical activation model consisted of a previously developed finite element model (FEM) of a human median nerve [22] and a linear approximation method to determine neural firing. The FEM was created in ANSYS Maxwell 3D using ultrasound images of a median nerve that were taken during C-FINE implantation surgery for a previous clinical study [6], [19]. The electrical and geometrical properties of the multi-contact C-FINE were simulated in the FEM [27], and voltage fields within the cuff-nerve complex were generated based on the propagation of a current through the tissue from C-FINE stimulation [18]. All stimulation pulses were modeled as square cathodal pulses. Neurons were randomly placed within the modeled fascicles based on typical human fiber densities and with fiber diameters selected from normal human distributions, yielding a total of 7334 neurons in the median nerve [43], [44].
A linear approximation method was then used to determine neural activation of these axons in response to PNS [42]. Briefly, the linear approximation method is an algebraic function that predicts activation of a myelinated neuron given inputs of stimulation PW, fiber diameter, and extracellular voltage at each node of Ranvier. The approximation was previously generated [42] by fitting a curve to the boundary between neural activation and non-activation produced by hundreds of iterations of the MRG active neural model simulated in NEURON [45].
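The fitted boundary from the cited work is not reproduced in this paper, so the sketch below only illustrates the shape of such a predictor. Both the driving term and `threshold_fn` are placeholders, not the published approximation.

```python
import numpy as np

def is_activated(node_voltages_mV, pulse_width_ms, fiber_diameter_um,
                 threshold_fn):
    """Illustrative activation predicate (NOT the fitted function of [42]).

    threshold_fn(pulse_width_ms, fiber_diameter_um) stands in for the
    published curve separating activation from non-activation.
    """
    v = np.asarray(node_voltages_mV, dtype=float)
    # Assumed driving term: the peak second spatial difference of the
    # extracellular voltage along the nodes of Ranvier (the classic
    # "activating function" heuristic for myelinated fibers).
    driving = np.max(np.diff(v, n=2)) if v.size >= 3 else 0.0
    return bool(driving >= threshold_fn(pulse_width_ms, fiber_diameter_um))
```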
The electrical activation model determined neural firing due to stimulation pulses delivered through the C-FINE. Each stimulation pulse was defined by the stimulating contact(s), PW, and PA. For a given stimulation pulse, the PW was held constant across all active contacts but the PA could vary across the 15 contacts. The electrical activation model output a vector with binary values representing whether each neuron was activated by the pulse of PNS.
Of the 7334 neurons in the electrical activation model of the median nerve, we selected 1704 neurons grouped within three fascicles to be in the index finger in order to match the number of neurons in the TouchSim model [10]. The fascicles selected to contain index finger afferents were assumed to be adjacent to one another based on prior studies [46], [47]. Any neuron in the median nerve model that was not assigned to be in the index finger was removed. We then paired each remaining neuron in the nerve model with a neuron in the TouchSim model of the index finger. To produce this nerve-finger mapping, we considered the distributional characteristics of neuron type reported in the anatomical literature [48], [49], [50] and based the location assignment on an assumed somatotopy of the fascicles, such that neurons closer together in the finger were grouped together in fascicles. Since neurons in the TouchSim model had a specific position in the finger and an afferent type (SA, RA, PC), we assumed this same type and position identity for the corresponding neuron in the nerve model. Thus, when the electrical activation model was run, it produced an electrically-activated NAP (ENAP) of neurons in the index finger with the same dimensions as a TNAP, and each neuron in the ENAP had a neuron type, index finger location, and fascicle location (Fig. 1e).
C. Optimization
We used optimization to minimize the reproduction error between the TNAPs in the TouchSim reachable set and the ENAPs created with PNS. We defined the reproduction error between a TNAP and an ENAP, or the objective function of the optimization problem, to be the mismatch of four critical elements of the neural code of tactile sensation: neuron type, neuron location, firing rate, and population size. We calculated this mismatch as follows: First, each neuron modeled by TouchSim and the electrical activation model had a type (SA, RA, PC) and location in the index finger (proximal, middle, distal segment). Thus, there were a total of nine possible type-location combinations, and each neuron fell into one of these nine categories. Each NAP could then be represented as a 9-dimensional vector, where each component contained the count of active neurons in each of the 9 type-location categories. The firing rate was implicit because of the timing of the TNAP within the stimulus, and the population size was represented by the overall error vectors.
To quantify the error (E) between a TNAP and an ENAP, we took the absolute value of the difference between the 9-dimensional vector generated from the TNAP (T) and that generated from the ENAP (M), and then summed the components, as in Equation (1):

E = Σ_{i=1..9} |T_i − M_i| (1)
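A minimal sketch of this error computation, assuming neuron types (SA/RA/PC) and finger segments (proximal/middle/distal) are each coded as integers 0-2; the function names are ours, not the authors'.

```python
import numpy as np

def nap_to_counts(nap, types, segments):
    """Collapse a binary NAP into the 9-vector of active-neuron counts.

    nap: boolean array (n_neurons,); types/segments: int arrays in 0..2.
    """
    counts = np.zeros((3, 3), dtype=int)
    np.add.at(counts, (types[nap], segments[nap]), 1)
    return counts.ravel()

def reproduction_error(tnap, enap, types, segments):
    """Equation (1): E = sum_i |T_i - M_i| over the 9 categories."""
    T = nap_to_counts(tnap, types, segments)
    M = nap_to_counts(enap, types, segments)
    return int(np.abs(T - M).sum())
```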
We used a hybrid of particle swarm and pattern search optimization algorithms to determine the optimal stimulation parameters used to approximate the TNAPs via PNS (Fig. 2). For each TNAP being approximated, particle swarm optimization was run first, and then pattern search optimization was run using the particle swarm optimization results. We limited PW to 0-0.2 ms and PA to 0-0.5 mA. Our optimization results were verified for a subset of TNAPs with an enumeration method. See [51] for more information on the optimization parameters and enumeration method.
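The following is an illustrative, hand-rolled version of this hybrid optimizer for a single-contact pulse. The hyperparameters are placeholders, and `err` stands for the Equation (1) error returned by the electrical activation model; the study's own optimizers are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
LO = np.array([0.0, 0.0])   # [PW in ms, PA in mA]
HI = np.array([0.2, 0.5])   # bounds quoted in the text

def pso(err, n_particles=30, iters=50):
    """Global particle-swarm stage (illustrative hyperparameters)."""
    x = rng.uniform(LO, HI, (n_particles, 2))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([err(p) for p in x])
    gbest = pbest[np.argmin(pcost)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, LO, HI)
        cost = np.array([err(p) for p in x])
        improved = cost < pcost
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        gbest = pbest[np.argmin(pcost)]
    return gbest

def pattern_search(err, x, step=0.01, iters=100):
    """Local compass-search refinement of the swarm's best point."""
    dirs = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
    for _ in range(iters):
        candidates = np.clip(x + step * dirs, LO, HI)
        costs = [err(c) for c in candidates]
        best = int(np.argmin(costs))
        if costs[best] < err(x):
            x = candidates[best]
        else:
            step /= 2   # shrink the pattern when no move improves
    return x

# Usage: params = pattern_search(err, pso(err)), where err maps
# [PW, PA] to the Equation (1) error via the electrical activation model.
```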
We then repeated this process for each TNAP in the reachable set. Although some TNAPs might appear multiple times within a given stimulus, or multiple times across all modeled stimuli, we only ran the optimization once per TNAP in the reachable set to reduce total computation time. We stored the results of a given optimization in a "look-up table," such that each entry included the target TNAP and its associated stimulation parameters that resulted in the closest matching ENAP (Fig. 2).
D. Lookup Tables
We repeated the process described in Section C with four sets of constraints, resulting in four stimulation paradigms. First, we allowed the optimization to select PA and PW for each pulse, but required that the same single C-FINE contact be selected for each TNAP in the reachable set. We repeated this for each of the 15 C-FINE contacts, creating 15 single-contact lookup tables.
Second, we generated an interleaved pattern of stimulation we call the "IMC" paradigm in which only a single contact could provide stimulation at a time, but the active contact could switch on a pulse-by-pulse basis. The single-contact IMC lookup table was created by selecting among the 15 single-contact lookup tables to find the contact (and stimulation parameters) that led to the lowest error for each TNAP (Fig. 2).
Third, we generated two-contact stimulation paradigms in which two contacts could be active simultaneously. We generated 21 non-IMC two-contact lookup tables using all two-contact pairs of the following C-FINE contacts: M1, M2, M3, M4, M13, M14, and M15. These non-IMC patterns used the same pair of electrodes for all TNAPs in the reachable set. We chose these contacts because they resulted in the lowest error (see Results).
Fourth, we generated a two-contact IMC lookup table using an interleaved pattern in which two contacts could activate simultaneously, but the active contact pair could switch on a pulse-by-pulse basis. The two-contact IMC lookup table was created by selecting amongst the 21 two-contact lookup tables generated as described above.
In summary, these lookup tables were generated (Fig. 2):
• 15 single-contact non-IMC lookup tables (one for each of the 15 cathodal C-FINE contacts)
• 1 single-contact IMC lookup table that switches between the 15 C-FINE contacts on a pulse-by-pulse basis
• 21 two-contact non-IMC lookup tables (one for each pair of M1, M2, M3, M4, M13, M14, and M15)
• 1 two-contact IMC lookup table that switches between the selected 21 two-contact pairs on a pulse-by-pulse basis
E. Playlists
To demonstrate the implementation of the IMC approach in conveying touch stimuli via PNS, we created PNS pulse trains, called a playlist, to reproduce a typical touch stimulus encountered in everyday tasks: an indentation stimulus with a depth of 3 mm and a ramp duration of 250 ms (Fig. 1a). Whereas a lookup table holds the C-FINE stimulation parameters for the entire TouchSim reachable set, a playlist holds the pattern of C-FINE stimulation parameters that, when applied to the nerve in sequence, mimic the time course of this indentation stimulus as closely as possible. Thus, each playlist contains an ordered sequence of 1000 sets of stimulation parameters, one for each pulse in a 1000 Hz pulse train to create the 1000 TNAPs in the 1 s touch stimulus. We constructed four versions of this playlist by pulling the stimulation parameters from each of the following lookup tables: 1) the single-contact non-IMC lookup table constructed for contact M1 (Fig. 3a), 2) the single-contact IMC lookup table (Fig. 3b), 3) the two-contact non-IMC lookup table constructed for contacts M1 and M2 (Fig. 3c), and 4) the two-contact IMC lookup table (Fig. 3c-d). Contacts M1 and M2 were chosen for the non-IMC playlists based on their frequency of occurrence in the IMC lookup tables (Fig. 3e) [51]. Note that the two-contact IMC paradigm had the option to use single-contact PNS to create an ENAP if it was determined that single-contact PNS would result in a lower reproduction error than two-contact PNS. The optimization algorithm selected single-contact stimulation for 39 % of ENAPs (Fig. 3d). Note that only the two-contact IMC playlist was used for further analyses; the non-IMC and single-contact IMC playlists are only depicted to explain the differences across paradigms.
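A playlist can then be assembled by a simple lookup pass over the 1000 TNAPs of a stimulus. The keying scheme below (raw bytes of the binary vector) is our assumption, not the paper's implementation.

```python
# `lookup` maps a hashable key for each unique TNAP to its optimized
# stimulation parameters (one entry per TNAP in the reachable set).
def make_playlist(tnaps, lookup):
    """Order per-pulse parameters for a 1 s, 1000 Hz pulse train."""
    return [lookup[tnap.tobytes()] for tnap in tnaps]

# Hypothetical entry format:
# lookup[key] = {"contacts": (1, 2), "PA_mA": (0.12, 0.08), "PW_ms": 0.05}
```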
F. Analyses
1) Comparison of Non-IMC and IMC Methods:
We compared reproduction accuracy between the non-IMC and IMC PNS paradigms to determine any benefits of the interleaved multi-contact approach. We hypothesized that using IMC PNS would improve reproduction accuracy compared to non-IMC PNS.
For this analysis, we compared the single-contact IMC lookup table to each of the 15 non-IMC lookup tables. We compared lookup tables rather than playlists so that each TNAP had equal weight in the error calculation, since each TNAP is sampled only once in a lookup table, whereas a single TNAP could occur multiple times within a playlist. Thus, the reproduction error across a lookup table more accurately represents the ability of the stimulation paradigm to replicate TNAPs associated with touch in general, whereas a playlist more accurately represents the ability of the paradigm to replicate specific tactile stimuli.
We subtracted the reproduction error of every ENAP in each of the 15 non-IMC lookup tables from the error of the corresponding ENAP in the IMC lookup table to compare the magnitude of the reproduction error difference between the IMC and non-IMC paradigms. Then, we calculated the prevalence of ENAPs that produced a smaller, larger, or equal reproduction error with the IMC lookup table compared to each of the 15 non-IMC lookup tables. We also performed Wilcoxon rank sum tests to determine whether there was a significant difference (α = 0.05) between the IMC lookup table reproduction errors and each of the 15 non-IMC lookup table reproduction errors.
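As a sketch of this comparison, using SciPy's rank-sum test (which corresponds to the Wilcoxon rank sum test named above); `err_a` and `err_b` are per-TNAP reproduction errors from two lookup tables.

```python
import numpy as np
from scipy import stats

def compare_tables(err_a, err_b, alpha=0.05):
    """Per-TNAP error comparison between two lookup tables (a vs. b)."""
    diff = np.asarray(err_a) - np.asarray(err_b)
    prevalence = {"smaller": float(np.mean(diff < 0)),
                  "equal": float(np.mean(diff == 0)),
                  "larger": float(np.mean(diff > 0))}
    _, p = stats.ranksums(err_a, err_b)   # Wilcoxon rank sum test
    return prevalence, p, bool(p < alpha)
```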
2) Comparison of Single-Contact and Two-Contact IMC PNS:
We then compared the reproduction accuracy between the single- and two-contact IMC PNS paradigms to determine any benefits of using two-contact stimulation. We hypothesized that using two-contact IMC PNS would improve reproduction accuracy compared to single-contact IMC PNS.
For this analysis, we compared the single-contact IMC lookup table and the two-contact IMC lookup table. We subtracted the reproduction error of every ENAP in the single-contact IMC lookup table from the error of the corresponding ENAP in the two-contact IMC lookup table to compare the magnitude of the reproduction error difference between the single-contact and two-contact IMC paradigms. Then, we calculated the prevalence of ENAPs that produced a smaller, larger, or equal reproduction error with the two-contact IMC lookup table compared to the single-contact IMC lookup table. We also used the Wilcoxon rank sum test to test for a significant difference (α = 0.05) between the reproduction errors of the single-contact and two-contact IMC lookup tables.
3) Comparison of IMC Paradigm and Existing PNS Paradigms:
Finally, we sought to compare our novel IMC paradigm to existing PNS paradigms reported in the literature. We compared each existing PNS paradigm to the two-contact IMC paradigm, as this IMC paradigm performed the best in our previous analyses (see Results). For this analysis, we compared playlists rather than lookup tables, as existing PNS paradigms in the literature are presented in terms of their ability to replicate touch stimuli rather than their ability to replicate TNAPs in general. We hypothesized that the IMC paradigm would more closely mimic the neural code of tactile sensation, resulting in lower reproduction error, than existing PNS paradigms.
For the non-biomimetic PNS paradigms, we selected stimulation parameters from Table I. The fixed-pulse paradigm [4], [6], [8] had fixed stimulation parameters across the entire stimulus duration (Fig. 5b). We fixed PW at PW_25% = 0.05 ms and PA at PA_TNAP = 0.091 mA. The force-based paradigm [5], [7] varied PA based on the indentation depth throughout the stimulus (Fig. 5c). Thus, to reproduce our target stimulus, the PA ramped up linearly with the force exerted on the finger, then held constant, then ramped down. We fixed PW at PW_25% = 0.05 ms. We set the upper limit of PA to PA_TNAP = 0.091 mA and the lower limit to PA_th = 0.07 mA. The sinusoidal paradigm [6], [7], [8] varied PW over time based on the amplitude of a 1-Hz sine wave (Fig. 5d). We fixed PA at PA_25% = 0.125 mA. We set the upper limit of PW to PW_TNAP = 0.035 ms and the lower limit to PW_th = 0.025 ms.
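For illustration, the three non-biomimetic parameter trains could be generated as follows, using the values quoted above for the 3 mm, 250 ms ramp stimulus. The piecewise-linear depth profile is our reconstruction of the indentation time course.

```python
import numpy as np

t = np.arange(1000) / 1000.0          # pulse times in s (1000 Hz train)
ramp = 0.25                           # 250 ms symmetric ramps (assumed)
depth = np.minimum.reduce([t / ramp, np.ones_like(t), (1.0 - t) / ramp])

# Fixed-pulse: constant PW and PA for every pulse.
fixed = {"PW_ms": np.full(1000, 0.05), "PA_mA": np.full(1000, 0.091)}

# Force-based: PA tracks indentation depth between threshold and max.
force = {"PW_ms": np.full(1000, 0.05),
         "PA_mA": 0.07 + (0.091 - 0.07) * depth}

# Sinusoidal: PW follows a 1 Hz sine between its threshold and max values.
sine = {"PA_mA": np.full(1000, 0.125),
        "PW_ms": 0.025 + (0.035 - 0.025) * 0.5 * (1 + np.sin(2 * np.pi * t))}
```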
The two biomimetic paradigms [11], [18] attempted to match the neural population size of the TNAP with electrical stimulation, but in different ways. Biomim 1 chose stimulation parameters by directly matching the overall neural population size for each stimulation pulse, while Biomim 2 linearly scaled the PA of the stimulation based on a percentage of the maximum neural population size. For Biomim 1 [18], we fixed PA at PA_25% = 0.125 mA. We then used the electrical activation model to find a PW that activated an ENAP with the closest match in neural population size to that of the TNAP at each millisecond of the neural response (Fig. 5e).
For Biomim 2 [11], we specifically mimicked the neural population size component of HNM2. We first calculated the neural population size of the TNAP at each millisecond and found the maximum neural population size (NPS_max) of any TNAP across the stimulus, which was 58 neurons for the chosen indentation stimulus. We then used Equation (2) to calculate PA for PNS at each millisecond of the stimulus. We fixed PW to PW_25% = 0.05 ms, and we set PA_max in (2) to PA_TNAP = 0.091 mA. The PA calculated by (2) was rounded to three decimal places to match the 0.001 mA step size used in the other paradigm simulations (Fig. 5f).
PA = (NPS / NPS_max) × PA_max (2)

As described above, the IMC paradigm uses optimization algorithms to select optimal PWs and PAs for each TNAP in a stimulus over the range of 0 ms to PW_max = 0.2 ms and 0 mA to PA_max = 0.5 mA, respectively (Fig. 5a). Note that the IMC paradigm is the only paradigm that varied PW, PA, and contact on a pulse-by-pulse basis. The other paradigms only vary at most one stimulation parameter at each millisecond and use the same contact for the entire stimulus (Fig. 5b-f).
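Equation (2) reduces to a one-line scaling rule; the sketch below also applies the 0.001 mA rounding mentioned above (function name is ours).

```python
def biomim2_pa(nps, nps_max=58, pa_max=0.091):
    """Equation (2): PA = (NPS / NPS_max) * PA_max, with 0.001 mA rounding."""
    return round((nps / nps_max) * pa_max, 3)
```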
To calculate reproduction error, we first used the electrical activation model to generate neural responses for each existing PNS paradigm (Fig. 5). We then calculated the reproduction error of each existing paradigm at each millisecond of the stimulus, using the error metric in Equation (1). We also calculated the reproduction error in number of neurons broken down by afferent type (SA, RA, PC) for each paradigm.

TABLE I STIMULATION PARAMETERS USED FOR PNS PARADIGMS
Then, we subtracted the reproduction error of every ENAP in each of the existing PNS paradigm playlists from the error of the corresponding ENAP in the IMC playlist to compare the magnitude of the reproduction error difference between the IMC paradigm and the existing paradigms. We also determined the prevalence of ENAPs that had a smaller, equivalent, or larger reproduction error when using the IMC paradigm compared to each of the existing paradigms. Finally, we used the Wilcoxon rank sum test to test for a significant difference (α = 0.05) between the IMC paradigm and each existing paradigm.
III. RESULTS
A. The IMC Method More Accurately Reproduced Neural Activity Than the Non-IMC Method
First, we compared the reproduction error of each of the 15 single-contact non-IMC lookup tables to the single-contact IMC lookup table. While all paradigms in this analysis used our modeling and optimization approach to select stimulation parameters, only the IMC paradigm involved interleaving stimulation contacts on a pulse-by-pulse basis. The single-contact IMC lookup table had significantly lower reproduction errors than all 15 single-contact non-IMC lookup tables (Wilcoxon rank sum test, p < 0.001) (Fig. 4a). The single-contact IMC reproduction error was less than or equal to the 15 single-contact non-IMC reproduction errors for all ENAPs (Fig. 4c). Thus, we conclude that IMC PNS outperformed non-IMC PNS, and we used IMC PNS for all remaining analyses.
While no single-contact non-IMC ENAPs had lower error than the IMC ENAPs, non-IMC paradigms with contacts M1-M3 and M13-M15 had a higher proportion of ENAPs with error equivalent to that of the IMC paradigm (16-57 %), compared to contacts M4-M12 (1 %) (Fig. 4c). That is, contacts M1-M3 and M13-M15 generally reproduced TNAPs more accurately than contacts M4-M12. This is why these contacts were selected to generate the two-contact IMC approach.
B. Two-Contact IMC PNS More Accurately Reproduced Neural Activity Than Single-Contact IMC PNS
After determining that IMC PNS outperformed non-IMC PNS, we then compared the single-contact and two-contact IMC lookup table reproduction errors to determine the influence of field shaping on ENAP reproduction accuracy. The two-contact IMC lookup table had significantly lower reproduction error than the single-contact IMC lookup table (Wilcoxon rank sum test, p < 0.001), though the numerical difference in error was very small (Fig. 4b). The two-contact IMC paradigm had smaller, equivalent, or larger error than the one-contact IMC paradigm for 22%, 77%, and <1% of ENAPs, respectively (Fig. 4d). Thus, we conclude that two-contact IMC PNS more closely mimicked touch neural responses than single-contact IMC PNS, and we used two-contact IMC PNS for all remaining analyses.
C. The IMC Paradigm More Accurately Reproduced the Neural Code of Tactile Sensation Than Existing PNS Paradigms
We compared the two-contact IMC paradigm playlist to three non-biomimetic paradigm playlists (fixed-pulse, force-based, and sinusoidal) and two biomimetic paradigm playlists (Biomim 1 [18] and Biomim 2 [11]) for a representative indentation stimulus delivered to the index finger (Fig. 1a). The stimulation parameters for each paradigm and the neural activation pattern resulting from each paradigm are shown in Fig. 5a-f. The natural neural activation for this touch stimulus as predicted by TouchSim is shown in Fig. 1b for comparison.
We calculated the reproduction error for each paradigm and found that across all paradigms, the reproduction error was highest for RA neurons, intermediate for SA neurons, and lowest for PC neurons (Fig. 5, right column). As expected, this demonstrates that the relative contribution of each afferent type to the overall reproduction error correlates with its proportion in the neural population [1]. We then compared the reproduction errors of each of the existing non-biomimetic and biomimetic PNS paradigms to the IMC paradigm. The two-contact IMC paradigm had significantly smaller reproduction errors than all five existing paradigms (Wilcoxon rank sum test, p < 0.001) (Fig. 6a). While the two-contact IMC paradigm had lower error than any existing paradigm in the majority of all ENAPs, the existing biomimetic paradigms performed better than the existing non-biomimetic paradigms (Fig. 6b). The two-contact IMC paradigm had less reproduction error in a higher proportion of ENAPs when compared to the non-biomimetic existing paradigms (98-100 %) than when compared to the biomimetic existing paradigms (41-59 %). In addition, the two-contact IMC paradigm was more likely to have an equivalent reproduction error to the existing biomimetic paradigms (41-59 % of ENAPs) compared to the existing non-biomimetic paradigms (0-3 % of ENAPs) (Fig. 6b). Still, the two-contact IMC ENAPs rarely had reproduction errors larger than the existing paradigm ENAPs (<1 %). In fact, this only occurred for one ENAP in one existing paradigm (Biomim 2). Thus, we conclude that the two-contact IMC paradigm performed better than the existing PNS paradigms we simulated, including the previously reported biomimetic paradigms.
D. Discussion
1) Interleaved Multi-Contact Stimulation Most Closely Approximates Natural Neural Activity: We showed that IMC PNS more accurately reproduced the neural response to touch stimuli than non-IMC PNS, demonstrating that interleaved multi-contact biomimetic stimulation outperforms non-interleaved single-contact stimulation. Importantly, the non-IMC PNS paradigms were also biomimetic, since they were generated using the same optimization approach to approximate natural neural activity as the IMC PNS, yet they still resulted in higher error than the interleaved approach. We believe that the critical factor was that the IMC paradigm was able to recruit numerous different populations of neurons throughout the stimulus, whereas traditional biomimetic (i.e., non-interleaved) approaches can only scale the size of a single recruited population. Prior interleaved stimulation approaches to reduce fatigue in motor neuroprostheses were similarly founded on the concept of activating smaller sub-groups of neurons at different times, although these prior approaches did so by cyclically repeating a set of contacts rather than attempting biomimicry [36], [37], [38], [39]. While a prior study reported an interleaved surface stimulation approach for sensory feedback [41], this approach was based on the idea of modulating the activation threshold of a single neural population by applying stimulation through two contact pairs with sub-millisecond delays, rather than activating multiple sub-groups of afferents by successive pulses.
Our results also show that the two-contact IMC paradigm resulted in significantly smaller reproduction error than the single-contact IMC paradigm. This is likely because the two-contact approach produced field shaping to enable selective activation of even more specific groups of neurons than can be achieved with single-contact stimulation [25], [26], [27], [28]. While no prior biomimetic PNS paradigm applied stimulation through multiple contacts simultaneously, these approaches are beginning to be evaluated for biomimetic brain stimulation using groups of up to four microelectrode contacts simultaneously [35]. Assuming the initial clinical results from intracortical microstimulation will extend to PNS, multi-contact biomimetic PNS may be able to improve force discriminability and dynamic range of evoked touch percepts. However, one drawback to multi-contact stimulation is that it is likely to require higher power consumption, since more electrical charge could be delivered throughout the stimulus.
Given that our paradigm benefited from the ability to apply interleaved stimulation via contacts distributed around the nerve, as well as the ability to stimulate with multiple contacts at a time, further studies should examine the impact of contact number, spacing, and size on the implementation and outcomes of IMC biomimetic stimulation. These findings could influence future multi-contact electrode designs.
2) The IMC Paradigm Outperforms Prior PNS Paradigms: We compared the performance of our novel IMC paradigm to several existing PNS paradigms previously reported in the literature. We found that the IMC paradigm outperformed every simulated existing paradigm, including both biomimetic and non-biomimetic approaches, by producing lower reproduction error.
In natural touch, tactile sensation produces a dynamic neural response such that the size and composition of the recruited neural population varies throughout the stimulus [1]. We found that prior biomimetic paradigms [11], [18] resulted in smaller reproduction error than the non-biomimetic fixed-pulse, force-based, and sinusoidal paradigms. The prior biomimetic paradigms have better performance than the non-biomimetic paradigms because they attempt to replicate some aspect of the neural code, such as neural population size and firing rate. However, they do not consider neuron type or neuron location, which likely contributes to their lower performance as compared to the IMC paradigm, which did account for these neural codes in addition to neural population size and firing frequency. In addition, the IMC paradigm allows pulse-by-pulse switching of the stimulating contact(s), as well as the use of multi-contact stimulation for a given pulse, to enable selective recruitment of specific neural populations throughout the stimulus, while the prior paradigms only involved a single electrode contact for a given stimulus.
However, the importance of lower reproduction error in creating more natural or more informative sensation has not yet been confirmed in clinical tests. Further, it is unknown how low reproduction error needs to be for the evoked neural activity to be "close enough" to natural activity to yield a natural-feeling sensation. Future clinical testing of the IMC paradigm and direct comparisons to other paradigms would allow a better understanding of the relationship between reproduction error and sensation experience. Additionally, our results are based on neural activation patterns predicted from a biophysical model, which has not yet been validated. Neurophysiological studies utilizing neuron-specific recording techniques, such as microneurography, are needed to confirm that each stimulation paradigm produces the predicted neural firing patterns in vivo.
3) Future Improvements to the Optimization Approach: In the single-contact analyses, electrode contacts M1-M3 and M13-M15 had smaller errors than contacts M4-M12 and were preferentially selected for the IMC paradigm. This might be explained by the proximity of these contacts to the fascicles of interest. Interestingly, the optimization algorithm selected near contacts rather than far contacts even though our error metric did not penalize the recruitment of off-target neurons (i.e., neurons in the median nerve that innervate hand regions other than the index finger). Future research should examine the effect of including a penalty for off-target neural recruitment in the optimization function [51].
Another factor impacting contact selection was the bias of the optimization algorithm toward lower-numbered contacts. In cases where reproduction error was equivalent across two or more contacts, the optimization algorithm would select the lower-numbered contact. For example, if an ENAP had the same error when using contacts M1 and M15, contact M1 would be selected by the algorithm. In scenarios where multiple contacts output the same reproduction error for an ENAP, the contact selection process could be improved by adding a secondary optimization term after reproduction error is minimized, such as selecting the contact that produces the minimum charge rather than the contact with the smallest numerical label.
Regarding the use of field shaping in the IMC approach, we hypothesize that enabling the optimization algorithm to select more than two contacts at a time (i.e., three or more simultaneous multi-contact PNS) has the potential to further reduce reproduction error. Our optimization algorithm can easily be expanded to account for additional stimulating contacts, but with a tradeoff in computation time. Another benefit of the field shaping approach is that the algorithm can still choose to use fewer contacts for a given pulse, and thus may select a single contact if it would result in a lower reproduction error.
4) Adapting the IMC Paradigm for Real-World Use: Ultimately, our goal is to implement the IMC paradigm in real-time sensory encoders used to deliver sensory stimulation that augments functional task performance for people with neurological disorders or disabilities. For example, an upper limb prosthesis user could receive IMC stimulation to convey naturalistic, informative haptic feedback from pressure sensors on the prosthetic fingertips.
First, we will need to test the IMC paradigm in human participants to characterize its perception. To implement the IMC paradigm in clinical studies, we should develop patient-specific electrical activation models based on the participant's fascicular structure, neuroanatomy, and electrode configuration. We would then couple neural activation predictions from the novel IMC paradigm with clinical assessments of the perceived location of sensations produced by single-contact stimulation to identify target fascicles in the nerve corresponding to specific hand locations. We would also need to scale the stimulation parameters in the model to real stimulation parameters based on clinical assessments of sensory detection threshold and maximum limits. To use the IMC paradigm during functional task performance, we would need to reconfigure the optimization approach to select stimulation parameters on a pulse-by-pulse basis in real time based on an incoming sensor signal. In contrast, in the approach reported here, optimization was used offline to select stimulation parameters for an idealized indentation stimulus. The optimization algorithm would need to be sped up and must be able to account for sensor noise or artifact. Moreover, it would be ideal if the optimization algorithm could select stimulation parameters and electrode contact in a single step, rather than sequentially as reported here (see Fig. 2). This would increase the difficulty of solving the optimization problem, increasing computational complexity and time.
In addition, we must map TNAPs to parameters of natural stimuli that can be measured with real-time sensors. To approach this problem, we could use our existing dataset to correlate TNAPs with specific depths in the ramp-up, hold, and ramp-down phases of the indentation stimuli. This would be challenging, however, as multiple TNAPs produce the same depth in each phase of our TouchSim-generated neural responses. The most important features, or in this case neurons, that contribute to a specific sensation must be selected and reproduced from all TNAPs that produce the same depth at a given point in the stimulus. The TNAPs that came beforehand must also be considered, as current TNAPs depend on previous TNAPs rather than being independent at each millisecond.
IV. CONCLUSION
In conclusion, we present a new approach to biomimetic stimulation. The IMC paradigm is based on biophysical models of neural activation and can approximate the neural response to tactile stimuli with high accuracy. The IMC paradigm's key innovation is that it includes pulse-by-pulse updating of electrode contact and stimulation pulse parameters to selectively recruit different small sub-fascicular afferent populations throughout the stimulus time course. The IMC paradigm can also include multi-contact field shaping approaches, which leads to better performance than single-contact stimulation. Overall, the IMC mimics critical neural coding properties better than existing PNS paradigms, including previously reported biomimetic PNS paradigms. Future studies will test the IMC paradigm in human subjects to determine whether it leads to improvements in the perceptual experience and information content of generated sensations. If human subject testing yields improved perceptual performance, the IMC paradigm will then be adapted for real-time use in sensor-based neuroprosthetic devices. We envision that the IMC paradigm will provide natural sensory feedback to those who have lost tactile sensory capabilities, greatly improving their functionality and overall quality of life.
Fig. 1. (a) Example indentation stimulus with a 3 mm depth and a 250 ms ramp duration. The star (*) denotes the timing of the example TouchSim neural activation profile (TNAP) displayed in panel d. (b) Raster plot of the TouchSim-generated neural response to the touch stimulus in panel a. Color indicates afferent type. (c) 16-contact Composite Flat Interface Nerve Electrode (C-FINE) placed around a nerve. (d) Layout of all low-threshold mechanoreceptive afferents in the index finger (grey) showing neurons active in the starred TNAP (purple). (e) Cross section of a portion of the nerve with surrounding C-FINE showing a subset of fascicles. Neurons active in the example electrically-activated NAP (ENAP) are highlighted in purple (grey neurons are inactive).
Fig. 3. Stimulation parameters selected by the optimization algorithm for example non-IMC and IMC paradigms. (a) PA and PW for the first 50 ms of a single-contact (1C) non-IMC playlist for the example touch stimulus in Fig. 1a. Note that for non-IMC PNS, the same contact is used for every pulse. (b) PA and PW for the 1C IMC playlist for the same stimulus. Color denotes the stimulating contact for the pulse (legend in panel e). Note that for IMC PNS, the stimulating contact can change on a pulse-by-pulse basis throughout the stimulus. (c) Contact usage for an example two-contact (2C) non-IMC playlist and the 2C IMC playlist. With 2C non-IMC PNS, the same two contacts are used for each pulse. With 2C IMC PNS, two contacts can be active during each pulse, and the two-contact pair can vary across pulses. (d) Selection frequency of 1C (39 %) and 2C (61 %) across pulses in the 2C IMC playlist for the touch stimulus. (e) Breakdown of contact usage in the 1C non-IMC, 2C non-IMC, 1C IMC, and 2C IMC playlists shown in panels a-c.
Fig. 4. Reproduction error comparisons for lookup table ENAPs across optimization-based biomimetic paradigms. Significant differences (Wilcoxon rank sum test, p < 0.001) are denoted with a star (*). Note that outliers were removed from boxplots. (a) Comparison of the magnitude of reproduction error between each of the 15 single-contact (1C) non-IMC paradigms and the 1C IMC paradigm. Positive error indicates that IMC error was greater than non-IMC error, while negative error indicates that IMC error was less than non-IMC error. (b) Comparison of the magnitude of reproduction error between the 1C IMC and two-contact (2C) IMC paradigms. Positive error indicates that 2C error was greater than 1C error, while negative error indicates that 2C error was less than 1C error. (c) Percentage of ENAPs in the 1C non-IMC look-up tables that had error greater than, equal to, or less than the corresponding IMC ENAPs. Note that the IMC paradigm never had a larger reproduction error than any of the non-IMC paradigms for any ENAP. (d) Percentage of ENAPs in the 1C IMC look-up table that had error greater than, equal to, or less than the corresponding 2C IMC ENAPs.
Fig. 5. Stimulation playlists reproducing the touch stimulus shown in Fig. 1a. Left: Stimulation parameters selected for each PNS paradigm. PA is shown in grey on the left axis and PW is shown in purple on the right axis. Middle: Raster plots of neural responses to each stimulation pulse train. Color denotes neuron type (SA, RA, PC). Right: Reproduction error magnitude for each paradigm broken down by neuron type (SA, RA, PC). Each row is a different stimulation paradigm: (a) Two-contact IMC PNS. (b) Fixed-pulse PNS. (c) Force-based PNS. (d) Sinusoidal PNS. (e) Biomimetic PNS Approach 1 (Biomim 1). (f) Biomimetic PNS Approach 2 (Biomim 2).
Fig. 6. Comparison of reproduction error between the interleaved multi-contact (IMC) approach and existing paradigms (EP) of peripheral nerve stimulation. (a) Reproduction error magnitude between the two-contact IMC playlist and each EP playlist. Positive error indicates that IMC error was greater than EP error, while negative error indicates that IMC error was less than EP error. (b) Percentage of pulses in the playlist in which the IMC error was equal to, less than, or greater than the EP error. Significant differences (Wilcoxon rank sum test, p < 0.001) are denoted with a star (*). Note that outliers were removed from boxplots. EP key: B: Fixed-pulse PNS. C: Force-based PNS. D: Sinusoidal PNS. E: Biomimetic PNS approach 1 (Biomim 1). F: Biomimetic PNS approach 2 (Biomim 2).
"Engineering",
"Medicine"
] |
SEXUAL CYCLE OF WHITE BREAM, BLICCA BJOERKNA (ACTINOPTERYGII, CYPRINIFORMES, CYPRINIDAE), FROM THREE SITES OF THE LOWER ODER RIVER (NW POLAND) DIFFERING IN TEMPERATURE REGIMES
Background. One of the largest European populations of white bream, Blicca bjoerkna (Linnaeus, 1758), can be found in the estuary of the Oder River. This fish is not only very abundant in this area but also attains sizes that have no match in other areas of central Europe. In the search for clues behind such reproductive success we decided to study the annual development cycle of gonads of white bream from three sites in the lower Oder River, north-western Poland, differing in temperature regimes (depending on their position and the distance from the discharge outlet of the Dolna Odra Power Plant). Materials and methods. White bream individuals were obtained from the three sites as bycatch of commercial fishing in 2009 and 2010. Three sites were sampled: (1) the Oder River above the power plant, (2) the warm-water canal with post-cooling water discharged from the power plant, and (3) Lake Dąbie, 20 km below the warm-water canal. The fish age was determined as 2+ through 9+. In total, 506 females and 190 males were designated for histological analyses. The analysis of the annual cycle of gonad development was performed in both sexes using histological methods. A standard paraffin technique and Heidenhain's iron hematoxylin staining were used. Results. In the Oder River, spawning of white bream lasted from early May to late June. In Lake Dąbie, it extended until the beginning of July. The spawn was laid in 2 or 3 portions. In the warm-water canal spawning began one month earlier, in April. The bream males from the thermally unaffected Oder River were ready for reproduction approximately one month earlier than the females and maintained their reproductive potential similarly to the females. Males from the warm-water canal became sexually mature two months earlier (February) than those from the river above the power plant. Conclusion. In waters with elevated temperature, gametogenesis of white bream occurs without problems and the fish exhibit a typical pace of growth depending on the temperature. In the perspective of climate warming, white bream will be able to maintain its status of a common species in the natural waters of the region.
INTRODUCTION
One of the largest and most viable European populations of white bream, Blicca bjoerkna (Linnaeus, 1758), can be found in the estuary of the Oder River (on the Polish-German border). This fish is not only very abundant in this area but also attains sizes that have no match in other areas of central Europe. Specimens weighing 1 kg are not an exception, while in other European bodies of water they rarely exceed 25 cm (FL) (Lammens et al. 1992) or 300 g (Specziár et al. 1997, Kopiejewska and Kozłowski 2007). In the search for clues behind such welfare and reproductive success we decided to study the annual development cycle of gonads of white bream from three sites in the lower Oder River, north-western Poland, differing in temperature regimes. A good place for such observations was the Dolna Odra thermal power plant, situated some 100 km from the Baltic Sea. Thermal post-cooling effluents from the power plant affect the river temperature, but this effect diminishes with the distance from the discharge outlet.
White bream, Blicca bjoerkna, is a common cyprinid species present in the waters of almost the entire Europe (Tadajewska 2000), although its economic significance is rather minor. Its ubiquity, however, suggests that it is a very important component for the proper functioning of ecosystems (Kompowski and Neja 1999). White bream is a multiple spawner (Brylińska and Żbikowska 1997, Lefler et al. 2008). The available studies of the species consider its reproductive behaviour (Poncin et al. 2010), hormonal oogenesis regulation (Rinchard and Kestemont 2003), analysis of gonads from selected months (Kopiejewska and Kozłowski 2007), as well as the full-year gonad development cycle only in females, but under ambient thermal conditions (Rinchard and Kestemont 1996, Lefler et al. 2008). A broad analysis of ecological conditions favourable for the reproduction of white bream has also been conducted (Janáč et al. 2010).
The temperature of the aquatic environment is one of the most important factors affecting fish development (Brett 1979, Herzig and Winkler 1986, Jobling 2003). The water temperature also affects the characteristics associated with fish reproduction, such as sex determination, gametogenesis dynamics, gamete quality, fertility, fertilisation effectiveness, age of sexual maturity, as well as the duration of the reproductive season (Breton et al. 1980, Billard 1986, Jafri 1989, Sandström et al. 1995, Alavi and Cosson 2005, Lahnsteiner and Mansour 2012, Domagała et al. 2013). Post-cooling water discharged from power plants into natural waters increases water temperature and alters the living conditions of fish. In fish living under these conditions, accelerated gametogenesis was usually observed (Mattheeuws et al. 1981, Lukšjenė and Svedäng 1997, Lukšjenė et al. 2000). A negative effect on gonad development and abnormalities of oogenesis were also observed (Luksjene and Sandström 1994, Lukšjenė and Svedäng 1997, Lukšjenė et al. 2000). Unusual changes in gonads, such as oocyte atresia in early vitellogenesis, were observed in fish inhabiting endorheic bodies of water into which post-cooling water is discharged (Lukšjenė et al. 2000). Also, in locations where post-cooling water enters open waters, spawning abnormalities were observed in such species as Perca fluviatilis Linnaeus, 1758; Rutilus rutilus (Linnaeus, 1758); and Esox lucius Linnaeus, 1758 (see Lukšjenė et al. 2000).
No studies of the effect of temperature on the sexual cycle of either sex of white bream have been performed to date. The aim of this study was to analyse the annual gonad maturation cycle of white bream in three locations differing in thermal regimes (depending on their position and the distance from the discharge outlet of the Dolna Odra Power Plant): the Oder River, the warm-water canal, and Lake Dąbie. This study was intended to elucidate the dynamics of the reproductive cycle of white bream from north-western Poland and the potential effect of elevated temperature on the cycle.
MATERIAL AND METHODS
The Oder River estuary extends more than 100 km inland from the Baltic Sea from the point where the river bifurcates, forming two major branches (eastern and western) and a multitude of interconnecting canals. The shallow, flow-through Lake Dąbie is located midway to the sea, not far from the Szczecin Lagoon, which is separated from the Baltic by two large post-glacial islands: Wolin and Usedom.
The warm-water canal of the power plant is relatively deep and fast flowing. Its mean water temperature is 6-8ºC higher than that of the river upstream, and in the warmest months the water temperature reaches 26-30ºC (Kondratowicz 2005, Domagała and Pilecka-Rapacz 2007). Water temperature at the first and the third site was similar. The characteristics of the temperature in the studied period in the three locations are presented in Fig. 1. White bream were caught as bycatch by commercial fishermen who used gillnets at the three aforementioned sites between September 2009 and August 2010, from one to four times a month at each site. Altogether, 506 females and 190 males were designated for histological analyses. Once numerous samples were collected, ovaries for a detailed assessment of maturation stage based on the value of the gonadosomatic index (GSI) were selected (Table 1). The fish age was determined as 2+ through 9+ based on the analysis of rings on the collected scales. The total length (TL) and standard length (SL) of each fish specimen were measured to the nearest 1 mm, and each fish was weighed on an electronic scale to the nearest 0.1 g. Then, the gonads were dissected, fixed in Bouin fluid, and weighed to the nearest 0.1 mg. Two parameters were calculated: the Fulton condition factor (K) and the gonadosomatic index (GSI), using the respective formulas K = 100 × W_b / TL^3 and GSI = 100 × W_g / W_b, where: TL is the total length of fish [cm], W_g is the gonad weight [g], and W_b is the total fish weight [g].
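To make the two indices concrete, here is a minimal sketch of K and GSI as reconstructed above; the function names and the example fish are illustrative, not taken from the study's data.

```python
# Minimal sketch of the two condition indices reconstructed above.
# Function names and example values are illustrative, not the study's data.

def fulton_condition_factor(total_weight_g: float, total_length_cm: float) -> float:
    """Fulton's condition factor: K = 100 * W_b / TL^3."""
    return 100.0 * total_weight_g / total_length_cm ** 3

def gonadosomatic_index(gonad_weight_g: float, total_weight_g: float) -> float:
    """GSI = 100 * W_g / W_b: gonad weight as a percentage of body weight."""
    return 100.0 * gonad_weight_g / total_weight_g

# Example: a hypothetical 25 cm, 300 g female carrying 45 g of ovaries
print(fulton_condition_factor(300.0, 25.0))  # ~1.92
print(gonadosomatic_index(45.0, 300.0))      # 15.0
```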
For histological analysis, gonad fragments 0.5 cm in length were cut out from the middle part of the gonad and processed using a standard paraffin technique. Between 50 and 100 sections were made from each gonad. Five-μm-thick sections were regressively stained with Heidenhain's iron hematoxylin. The histological slides were analysed and measured under a Nikon Eclipse 80i light microscope, and photographic documentation was made using the NIS Elements 3.20 program and a Nikon DS-5Mc-U2 digital camera with a 5-megapixel resolution.
There were three principal reasons for doing histological evaluation of female gonads of white bream:
• Determining their maturity, using the 6-degree scale applied in Sakun and Buckaâ (1963);
• Determining the size of oocytes in particular stages of development and in individual months;
• Identifying the number of degenerating oocytes.
The following histological measurements were made:
• The most developed oocytes from the most developed gonads in a given month (5 females from each site, 30 oocytes measured from one gonad);
• Oocytes from each stage of development from all sites (10 gonads representing each stage, 30 oocytes measured from one gonad);
• Oocyte size at vitellogenesis in the stages of vacuolisation, yolk accumulation, and completion of vitellogenesis.
The guidelines provided by Wallace and Selman (1981) were used for oocyte identification. The measurements were made to the nearest 0.01 mm. The oocyte diameter was calculated from measurements of the longest and shortest diameters of the mid cross sections of the oocyte (Hunter and Goldberg 1980). The sexual cycle of males was described using a modified 6-stage scale proposed by Sakun and Buckaâ (1963). The division into early and late substages of stages II (II_E and II_L) and III (III_E and III_L) was introduced, as well as overlapping of the gonadal cycles designated as substages VI-I and VI-II, in which spermatogonia start forming adjacent to the tubule wall.
Statistical analysis. The nonparametric Kruskal-Wallis test and following multiple comparisons were used to compare the characteristics of the fish: body length and weight, condition coefficient (K), gonadosomatic index (GSI), and oocyte size. All analyses were performed at the significance level of 0.05 using the Statistica v.10 software (StatSoft, Inc.).
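As an illustration of the statistical procedure above, the sketch below runs a Kruskal-Wallis test with SciPy on hypothetical GSI samples for the three sites; the sample sizes and distribution parameters are placeholders, not the study's data.

```python
# Illustrative Kruskal-Wallis comparison of three sites; data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gsi_oder = rng.normal(12.8, 4.3, 50)    # hypothetical female GSI, Oder River
gsi_dabie = rng.normal(11.9, 4.7, 50)   # hypothetical female GSI, Lake Dabie
gsi_canal = rng.normal(9.6, 1.2, 30)    # hypothetical female GSI, warm-water canal

h, p = stats.kruskal(gsi_oder, gsi_dabie, gsi_canal)
print(f"Kruskal-Wallis H = {h:.1f}, p = {p:.4f}")  # compare p with alpha = 0.05
```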
RESULTS
Principal somatic parameters of fish. The caught fish were at the age of 2 and above. The number of the sampled white bream males was much lower than that of the females. In some months, only single individuals were caught (Table 1). The length, weight, and condition factor of the studied white bream differed significantly between sexes (Kruskal-Wallis test, H = 134.4, df = 5, P < 0.05; H = 166.4, df = 5, P < 0.05; and H = 148.4, df = 5, P < 0.05, respectively). Body length and weight of the females caught at individual sites were significantly higher than those of the males collected at the same sites (P < 0.05). The length, weight, and condition factor of the females from the three locations studied did not differ significantly (P > 0.05). Also in the males caught in the three locations, these features did not show statistically significant differences between samples (P > 0.05). The condition factor of the males from natural waters was lower than that of females (P < 0.05), while the K value of the males from the warm-water canal was similar to that of both sexes of the fish caught in the other studied locations (P > 0.05) (Table 2). The smallest maturing female was 14.3 cm long (SL) and was caught in Lake Dąbie. In March and April, single large females, 30-34 cm in length, with gonads that would not mature in that season were also caught in this location. The males that underwent sexual maturation in the three investigated bodies of water were longer than 8.5 cm. In parallel, in the two investigated natural habitats there were males with body sizes of 23-28 cm that did not undergo sexual maturation in the investigated reproductive season.
Variation of GSI in females. In each month of spawning, gonads were divided into 2 groups: one group comprised pre-spawning gonads, while the other comprised spawning gonads. The highest GSI values in the investigated white bream were observed in the spring. Mean monthly values are presented in a chart (Fig. 2A). However, if the fish in each location are divided according to their preparation for spawning from April to June, the females belong to one of two groups: the group with pre-spawning and spawning gonads and the group with post-spawning gonads, which is reflected in the GSI values. The average GSI values (mean ± standard deviation) in the group of females from the Oder River with pre-spawning gonads in April, May, and June were 12.79 ± 4.27, 15.86 ± 2.70, and 13.44 ± 1.31, respectively, while in the post-spawning group, the values in May and June were 4.66 ± 1.26 and 2.31 ± 2.47, respectively. GSI of the pre-spawning group from Lake Dąbie in these months was 11.88 ± 4.68, 14.32 ± 1.26, and 8.85 ± 1.87, respectively, whereas GSI in the post-spawning group was 4.66 ± 1.61 and 2.63 ± 1.26, respectively. GSI of the white bream caught in the warm-water canal in April and May just before spawning was 9.63 ± 1.20 and 18.25 ± 1.25, respectively, while GSI after spawning was 0.89 ± 0.50 and 0.75 ± 0.89, respectively. The highest GSI value of 23.0 was observed in single females caught in May in the Oder River and Lake Dąbie. GSI was the lowest in July, as the gonads of all females were in the post-spawning state, and the mean values in the white bream from the Oder River and Lake Dąbie were 1.20 ± 0.78 and 1.19 ± 0.73, respectively. In the subsequent months, GSI gradually increased, and a pronounced increase in its value occurred in the white bream from all sites between August and November, with the highest values just before spawning (March-May).
Variation of GSI in males.
During the year, gonad weight and the gonadosomatic index fluctuated along with the sexual cycle (annual changes in GSI are shown in Fig. 2B). High GSI values were observed in white bream from the Oder River and Lake Dąbie in April. Mean GSI in the individuals from the Oder finalizing their spermatogenesis (stages III_L to V) in that month was 7.18 ± 1.53 (mean ± standard deviation), ranging from 2.59 to 9.03, while for individuals caught in Lake Dąbie in that month, the mean index was 5.26 ± 1.88, ranging from 1.5 to 8.4. In April, in addition to the individuals finalizing their spermatogenesis, individuals at stages I, II_L, and III_E were caught. In the warm-water canal, mature male gonads at stage IV were found as early as in February, with a GSI of 7.98 in this particular male. In May and June, individuals ready for spawning in natural waters had a lower GSI than in April. A large reduction in the values of the parameter was recorded in the Oder population in July (mean GSI = 0.35 ± 0.36), with individuals at substage II_E characterized by a GSI of 0.76 ± 0.18 and those at stage I by 0.08 ± 0.05. From September, gonad weight and the GSI values slowly increased until the following spring, i.e., the next reproductive season (Fig. 2B).
Fig. 2. GSI of white bream females (A) and males (B) from the study areas: Δ Oder River; □ Lake Dąbie; ○ warm-water canal. Values marked with different letters (a, b) show significant differences between the features in the study area in a particular month (P < 0.05, Kruskal-Wallis ANOVA); due to the low number of males caught, the statistic was not performed for that sex; mean ± SD.
Reproductive cycle of females-Oder River. In mid-July, white bream females had post-spawning gonads with regenerating oocytes at stages II and III (onset of vitellogenesis). The gonads contained degenerating oocytes from the previous pool that had not been laid in the spawning season (Fig. 3A). In August, all gonads were at stage III, with multiple lipid droplets. In September, gonads also contained oocytes at stage III, but these were larger than the oocytes from the preceding month. In the gonads from August and September, degenerating oocytes not released during spawning were still present (Fig. 3B). In October, the majority of females had gonads with stage IV oocytes. However, the remaining females caught in that month still had gonads at stage III. Between November and March, all females had gonads at stage IV. In these gonads, oocytes in previtellogenesis were dispersed among the more mature vitellogenic oocytes (Fig. 3C). In mid-April, egg yolk deposition in oocytes and the finalization of vitellogenesis occurred. In the first days of May, migration of the nuclei toward the poles of oocytes, which reached their maximum size, i.e., stage V, was observed. Also, in early May, degenerating oocytes (Fig. 3D) were observed in post-spawning gonads filled with another portion of stage III oocytes in advanced vitellogenesis (Fig. 3E). Stage V was observed in white bream females until mid-June. In the second half of that month, all white bream caught had post-spawning gonads characterized by the atresia of oocytes that had not been laid during spawning. In these gonads, oocytes at stage III that could possibly still mature in the same season and be laid in a subsequent portion were present (Fig. 3F). Spawning of the white bream from that location occurred in portions, beginning in the first half of May and ending in late June.
Detailed percentage of females with gonads at each maturity stage is shown in Fig. 4A.
Reproductive cycle of females-Lake Dąbie. In early July, the gonads of some white bream females contained oocytes in advanced vitellogenesis just before laying. Other females had probably already deposited their last portion of eggs. Their gonads contained the envelopes of laid oocytes, degenerating oocytes not laid during spawning, as well as the next pool of oocytes at stage II or early stage III (single lipid droplets). Females caught in mid-July had gonads with degenerating post-spawning oocytes, probably constituting the last portion, as the other oocytes were at stage II and would not mature in that season. In late July, females whose gonads contained mature oocytes, some of which were in the process of degeneration, were caught. These females probably did not manage to deposit the final portion of eggs. At the end of that month, females with gonads containing stage III oocytes and degenerating post-spawning oocytes were also caught. From September to July, the gonads of the females from that location had a similar appearance to the gonads of the females from the Oder River. The spawning of white bream in this location ended one month later than in the Oder River and was divided into at least 2 clearly distinguishable portions.
Some gonads developed asynchronously: apart from oocytes in vitellogenesis, approximately 50% of the gonad section was occupied by oocytes in previtellogenesis and at the onset of vitellogenesis.
Detailed percentage of females with gonads at each maturity stage is shown in Fig. 4B. Reproductive cycle of females-warm-water canal. In August, females with stage II gonads containing degenerating oocytes not laid during spawning were caught. In September, all gonads were at stage III and contained open envelopes, the remnants of the deposited oocytes. Between October and March, gonads were at stage IV. In the gonads of one female caught in February, one-third of the gonad section consisted of degenerated oocytes, while the remaining part consisted of stage IV oocytes. In April, most females had gonads with oocytes in vitellogenesis, but in single females, additional groups of oocytes in previtellogenesis could be observed. At the beginning of May, the females had gonads in advanced vitellogenesis or post-spawning gonads. Therefore, spawning in this body of water could have taken place at the beginning of April. White bream specimens with post-spawning gonads containing multiple oocytes in vitellogenesis (the following portion) were caught in a small, shallow, separated gulf, where the conditions were appropriate for spawning. Other white bream specimens, due to the strong water current in the middle part of the Canal, probably moved below the Canal for spawning. Based on the collected material, we may assume that two portions of eggs were deposited in the warm-water canal, the first portion in April and the second portion in early May. In June and July, no mature white bream females were caught in this body of water, and therefore it is difficult to determine the date of the completion of spawning. Detailed percentage of females with gonads at each maturity stage during the calendar year is shown in Fig. 4C. Reproductive cycle of females-oocyte size. The white bream females from the warm-water canal had the smallest oocytes during gonad recovery after spawning (Table 3). The females from this location also had the largest oocytes at the end of vitellogenesis compared to those from the other two sites (Fig. 5).
Fig. 5.
Monthly distribution of the diameter of the most developed oocytes of white bream, Blicca bjoerkna, from the study areas: Δ Oder River; □ Lake Dąbie; ○ warm-water canal. Values marked with different letters (a, b, c) show significant differences between the features in the study area in a particular month (P < 0.05, Kruskal-Wallis ANOVA); mean ± SD.
Reproductive cycle of males-Oder River. In the lower Oder River, the investigated white bream males remained ready for reproduction from mid-April to late June. The highest percentage of males ready for spawning was observed in May. After spawning, in July, male gonads were at stage I (Fig. 6A) or early stage II. In the latter case, the formation of cysts with type B spermatogonia began in the tubules. The cysts contained up to 10 cells. In the following months, from August to January, the gonads of all caught males reached substage II_L (Fig. 6B). In the autumn and winter, the number of type B spermatogonia in cysts increased from the 20-30 cells observed in cyst cross-sections in August to 50 in January. Sporadically, individuals remaining at stage I were recorded. In February, some individuals reached early substage III_E. In the testes of these males, primary spermatocytes appeared. In the gonads of other males, no initiation of meiosis by germ cells was observed, and the gonads were at substage II_L. In March, the histological image of the gonads was similar to that in the preceding month. In April, individuals at very different stages of maturity, from substage II_L to stage V, were observed. In the individuals at substage III_L, the first spermatozoa appeared (Fig. 6C). At stage IV, spermatozoa filled both the lumen of the seminiferous tubules and the efferent ducts, while less developed cells underwent maturation in (still) numerous cysts at the tubule wall (Fig. 6D). In the individuals at stage V of maturity, spermatogenesis in the seminiferous tubules was almost completed, and the lumen of the tubules was filled with spermatozoa (Fig. 6E). In May, maturing individuals were at stage IV or V. In that month, the highest percentage of males ready for spawning was recorded, but there were also a few individuals at substage II_E. In June, one-third of the males were still ready for reproduction (stage V), while the remaining males were at stage I or substage II_L. A small quantity of spermatozoa from the late reproductive season was found in one individual. In July, gonads containing spermatozoa were no longer observed, and the males were at stage I or early substage II_E. Detailed percentages of males with gonads at each maturity stage during the calendar year are shown in Fig. 7A. Reproductive cycle of males-Lake Dąbie. Similarly to the males from the Oder River, those caught in Lake Dąbie remained ready for reproduction in the same period, i.e., from early April through late June. In May, this site was characterized by the highest percentage of males with spawning gonads (stages IV and V). However, this site was also characterized by a slightly earlier (in February) occurrence of the first spermatozoa in the gonads (substage III_L) than in the river above the power plant. Individuals at stage IV were recorded at this site as late as in June. In the white bream specimens from Lake Dąbie, the dynamics of gonad recovery after spawning were similar to those of the white bream specimens from the Oder River. Post-spawning gonads were observed occasionally (stage VI-II)
(Fig. 6F), while the unreleased spermatozoa remaining from the completed cycle were present only in very small quantities until the end of December. The post-spawning substage VI-II was found in only one individual from this site in June. In this month, gonads at substage II_E or stage I were also found. Detailed percentages of males with gonads at each maturity stage during the calendar year are shown in Fig. 7B. Reproductive cycle of males-warm-water canal. The study material from that location was obtained with difficulty. At this site, single males with spawning gonads (stage IV) were recorded as early as in February, two months earlier than in the waters supplying the Canal and in Lake Dąbie. In the subsequent months, individuals at stages IV and V of gonad development were also caught. In June and July, no white bream males were caught at this site. After the reproductive season, the level of development of gonads in the white bream from the Canal corresponded to stages I and II_L. Detailed percentages of males with gonads at each maturity stage during the calendar year are shown in Fig. 7C.
DISCUSSION
White bream, along with common roach and common bream, constitutes the most significant group of freshwater fishes in the central part of Europe (Tadajewska et al. 1997, Molls 1999, Tátrai et al. 2003, Matondo et al. 2007, Wiśniewolski et al. 2009). Cyprinids, including white bream, are a flexible group that adapts well to eutrophic conditions (Persson et al. 1991, Lammens et al. 1992, Olin 2009), although they tend to avoid acidified waters (Leuven et al. 1987). They are characterized by a high fertility and flexibility of reproduction (Barthelmes 1983). In the investigated sites, more females than males were caught; therefore, the results for males were based on a lower number of specimens. In other locations, females also quantitatively dominated over males, with a female-to-male ratio of 1 : 0.35 in Lake Kuş (Balık et al. 1999, cited after Yılmaz et al. 2012) and 1 : 0.53 in Lake Sapanca (Hamalosmanoğlu 2003, cited after Yılmaz et al. 2012), although other authors recorded more equal proportions of the sexes: 1 : 1.07 in Lake Sapanca (Gürsoy, unpublished*) and 1 : 0.98 in Ladik Lake (Yılmaz et al. 2012, 2015). The white bream specimens we analysed were in a good condition, similar to those caught in other locations (Okgerman et al. 2012, Yılmaz et al. 2012). The females in the study were larger and heavier than the males, as in other studies (Okgerman et al. 2012).
White bream in Poland reaches its sexual maturity at the age of 2 to 7 years and the length of 5.5 to 20.2 cm (Tadajewska 2000). According to other authors, mature individuals are aged 3+ (14.7 cm) or 4+ (18.6 cm) (Koli 1990, Okgerman et al. 2012). Some males become mature as early as at the age of 2, while females at the age of 3-5 (Okgerman et al. 2012). The smallest maturing males we caught were 8.5 cm long (SL) (age 2+), while females with mature gonads were aged 3+ and 10 cm long. In the two investigated natural habitats, there were males with body sizes of 23-28 cm that did not undergo sexual maturation in the investigated reproductive season.
The breeding season of white bream is extended in time, since the eggs are laid several times, in 2 or even 3 portions (Tadajewska 2000). However, Kopiejewska and Kozłowski (2007) concluded that some white bream females have an undetermined type of reproduction with one or several portions of eggs laid. The egg laying lasts from 35 to 75 days (Kopiejewska 1996) and from 21 to 52 days according to Spivak (1987). According to Mann et al. (1984), the number of portions of laid eggs depends on the abundance of fish in a given body of water and on geographic latitude.
The spawning period in the majority of the analysed white bream specimens from the natural waters of the Oder River and Lake Dąbie was similar, although slightly longer in Lake Dąbie, where it lasted from early May to late June or even early July. Ovaries with ripe oocytes were seen as late as in July, although these were single cases. In other locations in Europe: northern Russia (Berg 1949), Belgium (Rinchard and Kestemont 1996, 2003), and Germany (Spratte and Hartmann 1997, Molls 1999), spawning usually occurs in a similar period: it begins in May and ends in June. The earliest spawning period was observed in Turkey: in mid-April in Lake Kuş (Balık et al. 1999, cited after Yılmaz et al. 2012) and in late April in Lake Sapanca (Okgerman et al. 2012). Spawning occurring only in May was reported in Russia (Slastenenko 1956, cited after Yılmaz et al. 2012) and Austria (Hacker 1979), while the latest ending date, July, was observed in Finland (Koli et al. 1990) and Turkey (Okgerman et al. 2012).
In the investigated white bream, early vitellogenesis took place in August, and from that month, a steady advance of gonad recovery was observed, similarly to the white bream from Hungary (Lefler 2010) and some white bream from the Meuse River (Rinchard and Kestemont 1996). According to Rinchard and Kestemont (1996), the long spawning period and low temperatures in the autumn cause delays in the growth of oocytes in vitellogenesis of white bream. According to these authors, cyprinids that spawn in portions are characterized by a long rest period of the gonads. Kopiejewska and Kozłowski (2007) also reported that vitellogenesis in white bream gonads starts in the spring. These changes are visible in the gradual increase in GSI and the increasing diameter of the largest oocytes. In most white bream females, vitellogenesis occurs in the spring (Trâpicyna 1975, Kopiejewska 1996, Kopiejewska and Kozłowski 2007). The presence of oocytes of trophoplasmatic growth in the autumn and winter may indicate high adaptive abilities of the species (Kopiejewska and Kozłowski 2007). Prior to the spawning period, the variability of the pool of younger oocytes in the gonads of the analysed white bream females decreased, which was also observed by other authors (Kopiejewska and Kozłowski 2007, Rinchard and Kestemont 1996). The size of oocytes at the different stages of development was similar to that reported for the white bream from the Meuse River (Rinchard and Kestemont 1996), and the diameter of the most developed oocytes was smaller than that reported by Lefler (2010) (Table 4). The smaller size of oocytes in completed vitellogenesis in the females from the warm-water canal may be a result of the influence of temperature on the endocrine system and the rate of vitellogenin uptake into the oocyte (Tyler and Sumpter 1996).
The stages of the reproductive cycle of cyprinid fish, including white bream, have been determined mostly in females (Kopiejewska 1996, Rinchard and Kestemont 1996, Kopiejewska and Kozłowski 2007, Lefler et al. 2008). No histological studies on the sexual cycle of the testis have been published. The male gonads of the investigated white bream during winter were at stage II, with some individuals starting meiosis as early as in February. In other cyprinids, gonads enter stage III earlier, even in the autumn (Mattheeuws et al. 1981, Billard 1986). Another feature of the cycle was the quick clearing of the seminiferous tubules of spermatozoa not expelled during spawning. The analysed white bream females from all sites had the highest GSI (above 20) before and during spawning. The white bream specimens of the Meuse River had the highest GSI (14.5) just before spawning (Rinchard and Kestemont 1996), while those from Lake Sapanca had a GSI slightly above 15 (Okgerman et al. 2012). In the spawning season, the GSI values of the investigated fish decreased systematically, which is related to the depositing of consecutive portions of eggs and has also been observed in white bream specimens from other locations (Rinchard and Kestemont 1996). According to our observations, the following group of oocytes is ready for laying after approx. 3 weeks. Taking into account gonad weight, GSI, and the histological structure of gonads, it seems that the investigated white bream specimens deposit at least 2 portions of eggs in one season. In the analysed fish, the period of gonad recovery after spawning lasted a few weeks in July, similarly to fish of the same species in the Danube, Lake Balaton, and the Meuse River (Rinchard and Kestemont 1996, Lefler 2010). The repeated rapid increase in GSI starting from August was a result of an intense growth of oocytes, which in turn was a result of nutrient accumulation in early vitellogenesis. This process was also described by Lefler (2010), who demonstrated that the high GSI values in white bream specimens from the Danube in August (3.8) are a result of oocyte growth as an effect of increasing progesterone levels. The completion of exogenous vitellogenesis in white bream occurred before spawning, and the vitellogenic activity of the liver decreased during spawning. This phenomenon is contrary to that observed in other multispawner cyprinid species, e.g., Alburnus alburnus (Linnaeus, 1758) (see Rinchard and Kestemont 2003).
In the studied white bream males from the Oder River, the highest GSI values were recorded at the beginning of the reproductive season, similarly to the white bream investigated by Okgerman et al. (2012). The maximum GSI observed by these authors was 10.7. In our study, the mean value of the index was 7.1, while it reached 11.3 in individual animals.
In Poland, white bream spawning occurs in water at 9.6-19.4ºC, but the temperature preferred by this species is 20-24ºC (Tadajewska 2000). In Lake Sapanca, breeding takes place at 13.7-28.5ºC (Okgerman et al. 2012). According to Spivak (1987), in the Kahovka Reservoir, spawning of white bream occurs at the temperature of 12-23ºC. The second portion of eggs is laid in July at 18-29ºC (Lefler 2010). White bream tolerates a wide range of temperatures for spawning. Moreover, the larvae of white bream are able to survive at higher temperatures than those of other cyprinids (Matondo et al. 2007). Therefore, the species was able to adapt to the environment of the warmed water of the studied warm-water canal and proceed with spawning. Spawning in the warm-water canal began one month earlier (April) than in local waters of ambient temperature, while the males produced spermatozoa as early as in February (two months earlier). A similar offset in the sexual cycle has been observed in roach, Rutilus rutilus (Linnaeus, 1758), from the Meuse River, where the water temperature is increased (by a mean of +3ºC) by water discharged from the Tihange Nuclear Power Plant (Mattheeuws et al. 1981). In tench, Tinca tinca (Linnaeus, 1758), kept in artificially heated water (exceeding the ambient temperature by 6ºC), spermatogenesis began earlier too, and the spawning period was considerably longer (by approx. 2 months) (Breton et al. 1980). During the presently reported study, in the hottest months of the year, no mature white bream were caught in the warm-water canal. Perhaps in that period, adult individuals moved to the waters below that site. Therefore, it is difficult to determine the date of the completion of spawning. An additional indicator of successful spawning in the warm-water canal is the large abundance of juvenile individuals of this species observed near the shore of a small bay inside the canal. White bream juveniles of a total length in the range of 5.5-8.0 cm, 4.0-8.0 cm, and 5.5-7.0 cm were caught at the end of April, the end of May, and the beginning of July, respectively. The sizes of the youngest bream indicate that the earliest breeding took place at the beginning of April and the latest occurred at the end of June.
White bream is a common species in the lower Oder River. Spawning at that site occurs from early May to early July. The habitat of the warm-water canal, into which warm water from a power plant is discharged, is attractive to white bream, although hydrological features such as the fast current and substantial depth of water limited the presence of the fish and the intensity of their reproduction. No anomalies or disorders of oogenesis and spermatogenesis were found in the Canal. In that location, spawning of white bream occurs earlier than in the neighbouring thermally unaffected bodies of water. In the perspective of climate warming (Souchon and Tissot 2012), it may be expected that white bream will be able to maintain its status of a common species in the open waters of central Europe, since in waters with elevated temperature, its gametogenesis occurs without problems and the fish exhibit a typical pace of growth depending on the temperature. | 8,684.2 | 2015-01-01T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Calculating the water dissipation of buildings in urban areas based on global nighttime light data
Urban water dissipation is increasing gradually as urbanization progresses. Urban water dissipation mainly includes the dissipation of water in buildings and natural water evapotranspiration. Previous studies have mainly focused on calculating natural evapotranspiration in urban areas and have overlooked the dissipation of water in buildings under the influence of strong human-related water use activities. In this paper, the concept of building water dissipation (BWD) was proposed to describe the phenomenon that water dissipation occurs inside buildings. Moreover, a BWD calculation model was established and applied to calculate global building water dissipation. To reveal the specific water dissipation inside buildings, it is necessary to obtain the urban building floor area first. This paper proposed a new method to calculate the urban building floor area based on global nighttime light data obtained from NPP-VIIRS. Taking the floor area results into the BWD calculation model, the global building water dissipation in urban areas was found to be 127 billion m³ in 2015. The vast building water dissipation that occurs in urban areas mostly results from rapidly developing economies and intense human activities. The results provide a basic understanding of the nexus between water resources and the energy-heat island effect in urban areas.
Introduction
The sustainable utilization of urban water resources is vital to the sustainable development of residents' lives and the urban social economy. Since the urban population has grown rapidly in recent years, many studies have focused on water resource consumption 1 and urban drainage systems 2 by analyzing urban water cycle systems 3 and the impacts of urban expansion on water security 4. The urban water cycle is one of the foremost contemporary research themes and provides the foundation for urban water resource management 5. Cities are typical natural-social dualistic water cycle areas 6 and consist of a natural water cycle with the principal process of "precipitation-infiltration-evapotranspiration-runoff" and the social water cycle with the process of "supply-consumption-dissipation-discharge" 5-8. Water dissipation and its mechanism are the key components in the "intake-conveyance-use-dissipation-drainage-recycling-reuse" process of water in urban areas. However, water dissipation has been ignored in previous studies of urban evapotranspiration and the urban water cycle 9. Urban water dissipation primarily means that water vapor conversions occur in the process of various indoor water use activities; this is a difficult and challenging component in the research of the urban water cycle 10. According to the characteristics of water dissipation, urban surfaces can be divided into five types: building interiors, building roofs and hardened ground, soils, vegetation, and water surfaces. Among these types, the water dissipation that occurs inside buildings mostly reflects the social aspects of water dissipation. Water dissipation occurs on the urban surfaces of the other four types; this is classified as urban surface evapotranspiration and reflects the natural water dissipation component in cities 11. "Evapotranspiration" (ET) and "dissipation" constitute the water vapor conversion process of the hydrological cycle and are the primary sources of water vapor in urban areas 9. Moreover, several methods have been used to estimate ET, including the water balance method 12, the meteorological method 13, and the energy balance remote sensing model 14.
With a rapid increase in the urban population and economic development, building water dissipation has an increasing trend 15 . Previous studies have concentrated on the natural side of the urban hydrological cycle and have shown that most urban evapotranspiration comes from water surfaces or green land types, such as trees, grasslands, and other vegetation-coverage areas 11 . However, previous studies have ignored the water dissipation that occurs on the social side, which accounts for a large proportion of the city's water consumption. In this paper, the concept of building water dissipation (BWD) was proposed to refer to the water dissipation caused by human activities in buildings.
Building water consumption accounts for a large share of the total water use in cities. Buildings are major underlying surfaces in cities because their roofs exist as impervious surfaces, while indoor areas are vital places for water dissipation in the social water cycle 10. Significant water use activities occur inside buildings to support urban citizens' daily lives. The BWD is mainly influenced by the number of building floors 16-17, the level of economic development 18, and human activities 19. The indoor water vapor produced by human activities will travel outside through doors, windows, and other channels to participate in the atmospheric water cycle. Nevertheless, this component of water dissipation has been ignored in previous studies of urban evapotranspiration and the urban water cycle 9.
Nighttime light (NTL) observations from remote sensing products provide us with temporal and spatial human activity measurements 20. Many studies have applied nighttime light data to analyses of urban construction [21-23], social and economic situations 24, environmental circumstances 25, climate change 26, etc. The nighttime light intensity can also reflect the level of urbanization. The higher the light intensity in a region is, the more intense the human activities in the region are. Different artificial light source types, such as streetlights and residential and vehicle lights, have specific time signatures. This feature makes it possible to estimate the amount of brightness contributed by each light source 27. The artificial lights of buildings play a significant role in urban nighttime light.
At present, the two most widely used nighttime light data sources are the Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) and the National Polar-orbiting Partnership/Visible Infrared Imaging Radiometer Suite (NPP/VIIRS) 28; these products are provided by the National Oceanic and Atmospheric Administration (NOAA) and the National Geophysical Data Center (NGDC) 29, respectively. The DMSP-OLS data can be used to map urban areas in global regions 28-30 to study the urbanization process [31,32]. Judging by the area of nighttime light, urban areas are clearly expanding globally. According to changes in the nighttime light intensity value, the urban-rural boundary can be identified 33, and the temporal and spatial changes in the urbanization pattern can be analyzed. Nighttime light data can also be used to extract urban commercial areas 21 and to analyze urban economic development. These data are often applied to assess spatial distribution information such as population density 34. More recently, the day/night band (DNB) data of the Visible Infrared Imager Radiometer Suite (VIIRS) of the Suomi National Polar-Orbiting Partnership have improved the quality of nighttime light data over that of the DMSP/OLS data, with a higher spatial resolution, broader radiometric measurement range, more accurate radiometric calibration, and better geometric quality 35. Moreover, the DNB dataset has a longer time series, which makes up for the defect that DMSP data are only updated until 2013. Compared with the DMSP-OLS nighttime light data, the VIIRS data can reflect human habitation and socioeconomic activities more clearly 36 and can be used to extract urban land areas and urban built-up areas 28, estimate urban building densities, and simulate population densities 34.
This study proposes the concept of building water dissipation, which is closely related to human daily life and industrial production. The mechanism and calculation of urban water dissipation have great significance in analyzing the fluxes involved in the urban water cycle. This paper mainly calculated the building water dissipation in the global region based on NPP-VIIRS nighttime light (NTL) data. The dataset from the year 2015 was selected in this study. The calculation method of the urban water dissipation was proposed based on the floor areas of urban buildings, as identified using global nighttime light data. Methods such as statistics and bionics were used to study the process of BWD and to measure the relevant calculation parameters.
Methods
Building water dissipation.
Building water dissipation is a process that accompanies water consumption. Based on the principle of bionics, this paper compares building areas to concrete forests composed of urban building trees 9-10. Figure 1 shows the main type of water dissipation that occurs inside a building: the conversion of the phase of water from liquid to gas. Then, with air circulation, the water vapor inside a building is released into the air through the doors, windows, vents, and other pathways, much as through the leaf pores of trees, participating in the hydrological cycles in urban areas 9-10.
The internal water dissipation links of urban buildings mainly include water used for drinking, showering, cooking, flushing, etc. For instance, water vapor evaporation occurs in the process of showering, steam is released in the process of cooking, water vapor evaporation occurs in the process of drying wet clothes, and evaporation occurs from the scrubbing of surfaces such as floors, glass surfaces, walls, desktops, etc.
Correction and Treatment of NTL.
To correct oversaturation in nighttime light (NTL) data, it is necessary to find a suitable lighting threshold value with which to identify and remove oversaturated areas. The more developed a city is, the higher its nighttime light brightness is. In this study, four developed cities, Beijing, Shanghai, Guangzhou and Shenzhen, were chosen to ascertain the maximum lighting values of urban areas in China. The data showed that the maximum lighting values of Beijing, Shanghai, Guangzhou and Shenzhen were 256, 202, 264 and 195, respectively. The nighttime light value of 264 was selected as the maximum lighting value in China. This selected value can almost cover the range of the nighttime lighting in urban areas throughout the whole country. Pixels representing oversaturated light values in other areas of China were removed. The nighttime light image of China was resampled to a pixel size of 500 m × 500 m and then reprojected to the Albers projection coordinate system. As shown in Figure 2, a lighting value of 15 was chosen as a threshold value to distinguish urban and rural areas; thus, the nighttime lighting range of 15-264 represents urban building areas in China. The geographic coordinate system of the NPP-VIIRS nighttime light data is GCS_WGS_1984, and the image grid deforms with a change in latitude. First, the global image data were projected to the Albers projection coordinate system and then resampled to a pixel size of 500 m × 500 m 28. Since the NPP-VIIRS sensor has a higher sensitivity than the DMSP-OLS sensor, weak nighttime light appears in some areas in the high-noise images. It is thus necessary to denoise the global nighttime light image. The brightness value of nighttime light data is closely related to the level of urban economic development. In this paper, several developed cities were selected, such as New York, Los Angeles, London, and Beijing; their maximum lighting values were 260, 317, 428 and 256, respectively. As shown in Figure 3, 428 was selected as the global maximum brightness value, and this threshold was used to eliminate the excessive brightness values present in some regions of the world.
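The masking logic described above can be sketched as follows; the array stands in for a resampled NPP-VIIRS grid, and setting rejected pixels to zero is an assumption about how "removed" pixels are represented.

```python
# Sketch of the saturation correction and urban masking described above.
import numpy as np

URBAN_MIN = 15     # threshold separating urban from rural/noise pixels
GLOBAL_MAX = 428   # maximum credible urban brightness (taken from London)

def mask_urban(ntl: np.ndarray) -> np.ndarray:
    """Keep pixels in [URBAN_MIN, GLOBAL_MAX]; zero out noise and oversaturation."""
    cleaned = np.where(ntl > GLOBAL_MAX, 0.0, ntl)       # drop oversaturated pixels
    return np.where(cleaned >= URBAN_MIN, cleaned, 0.0)  # drop dim rural pixels

ntl = np.array([[3.0, 20.0, 500.0], [120.0, 14.0, 264.0]])
print(mask_urban(ntl))  # [[0, 20, 0], [120, 0, 264]]
```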
Calculation of the building floor area.
To calculate the urban building floor area, the nighttime light comprehensive coefficient α is introduced. It is assumed that the higher the light brightness is, the higher the building density is, i.e., the larger the building floor area is. Moreover, the cumulative value of the building floor area is proportional to the cumulative brightness value. The calculation formula can be expressed as follows:
A = α × P × Σ (V_i × C_i), for i = 1, ..., n    (1)
where A is the total building floor area of the city in km²; P is the pixel size, and the value used in this paper is 0.25 km²; V_i is the nighttime light value; C_i is the number of pixels with that value; and n is the number of nighttime light value categories. The maximum nighttime light values represent the brightest parts of a city and are usually located in the centers of downtown areas. The minimum nighttime light values represent areas with fewer buildings. The comprehensive light coefficient α is 0.015. To verify the accuracy of the urban building distribution represented by the nighttime light data, Xiamen was introduced as the sample city in this study, and the vector boundary of the Xiamen administrative region was used to extract the lighting image data of Xiamen city, as shown in Figure 4. According to the extracted light data, the lighting data values 15 to 196 were selected to represent lights within the urban buildings of Xiamen city, and a coverage comparison was made between this area and the urban area extracted from the Google satellite mixed map of Xiamen city. The coverage results are shown in Figure 5. It can be seen that the coverage precision between the two areas is high; thus, it can be concluded that the coverage represented by nighttime light values above 15 is equal to the distribution scope of urban buildings in the study area.
Figure 6. The average daily water dissipation per unit building area differs regionally because the level of economic development varies among different regions of the world.
According to the lighting value of Xiamen city, the global lighting data were analyzed. Lighting areas with values of 15-428 were selected for the analysis and calculation of the building areas, as shown in Figure 7.
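A minimal sketch of formula (1), assuming the urban pixels have already been tallied into a {light value: pixel count} histogram; the histogram below is hypothetical, chosen only so the result lands near the Lanzhou-sized figure reported later.

```python
# Sketch of formula (1): A = alpha * P * sum(V_i * C_i) over light-value classes.
ALPHA = 0.015      # comprehensive light coefficient
PIXEL_KM2 = 0.25   # 500 m x 500 m pixel size

def building_floor_area(value_counts: dict[int, int]) -> float:
    """Total floor area (km^2) from a {light value: pixel count} histogram."""
    return ALPHA * PIXEL_KM2 * sum(v * c for v, c in value_counts.items())

counts = {15: 600, 60: 200, 150: 40, 264: 5}  # hypothetical city histogram
print(f"{building_floor_area(counts):.1f} km^2")  # ~106.2 km^2
```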
Calculation of the global BWD.
Artificial water dissipation, also known as enhanced evaporation, mainly refers to the internal water dissipation processes that occur in urban buildings and to artificial water sprinkling on hardened roads. With the acceleration of urbanization, the proportion of building water dissipation is gradually increasing.
The higher the degree of economic development is, the greater the building water dissipation is. In this paper, the following formula was used to calculate the building water dissipation:
B_D = 365 × (D_f / 100) × (A × 10^6)    (2)
where B_D is the water dissipation that occurs in buildings in one year, in m³; D_f is the average daily water dissipation per unit building area, in cm/d, determined with reference to the economic aggregate, the reference index being the per-capita GDP of each continent; the factors 1/100 and 10^6 convert D_f to m/d and A to m². The meanings of the other symbols are the same as those previously described.
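The reconstructed formula (2) can be checked for plausibility as follows; the inputs are the paper's reported global floor area and a Beijing-like D_f, with the unit conversions made explicit.

```python
# Sketch of formula (2) with explicit unit conversion:
# D_f [cm/d] -> m/d, A [km^2] -> m^2, integrated over 365 days.
def building_water_dissipation_m3(df_cm_per_day: float, floor_area_km2: float) -> float:
    return 365 * (df_cm_per_day / 100.0) * (floor_area_km2 * 1e6)

# Rough check against the paper's figures: A ~ 0.281 million km^2 and a
# Beijing-like D_f of 0.12 cm/d give ~1.2e11 m^3, the order of the reported
# 127 billion m^3 global total.
print(f"{building_water_dissipation_m3(0.12, 281_000):.3g} m^3")
```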
It is supposed that D_f is related to the per-capita GDP level. The higher the per-capita GDP is, the greater the D_f value is. Since the economic development level varies greatly among the continents in the world, this paper used a sigmoid function to address each continent's per-capita GDP value to moderate the influences of the maximum and minimum per-capita GDP values on the D_f calculation. The function is as follows:
S(x) = 1 / (1 + e^(-x))
where x represents the per-capita GDP of each continent, in units of 10^4 dollars.
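The text does not spell out the subsequent proportional scaling step in full, so the sketch below is one plausible reading: squash each per-capita GDP with S(x) and scale a reference city's D_f by the ratio of the squashed values; the GDP figures are illustrative.

```python
# One plausible reading of the S(x)-based D_f scaling; not the paper's exact rule.
import math

def s(x: float) -> float:
    """Logistic squashing of per-capita GDP x (in units of 10^4 dollars)."""
    return 1.0 / (1.0 + math.exp(-x))

def df_for_continent(gdp_1e4usd: float, ref_gdp_1e4usd: float, ref_df_cm_d: float) -> float:
    """Scale a reference city's D_f by the ratio of squashed GDP values."""
    return ref_df_cm_d * s(gdp_1e4usd) / s(ref_gdp_1e4usd)

# Hypothetical: continent with per-capita GDP 5.6e4 USD, scaled from a
# Beijing-like reference (1.8e4 USD, D_f = 0.12 cm/d)
print(f"{df_for_continent(5.6, 1.8, 0.12):.3f} cm/d")  # ~0.139
```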
According to formula (1), Lanzhou's urban building floor area was calculated to be 104 km², and the RE was 99.5%. The proportions of the nighttime light values in Beijing and Lanzhou are shown in Fig. 8. Therefore, the comprehensive light coefficient α value of 0.015 was reasonable. The reason why the building floor area calculated from the light data was larger is that the statistical data may overlook the building floor areas in urban-rural fringe regions, while the calculation conducted using lighting data avoided this problem.
The water dissipation is equal to the water consumption minus the water drainage. By consulting the Beijing Water Resources Bulletin 40 and applying formula (2), the D_f value of Beijing was found to be 0.12 cm/d and that of Lanzhou 0.08 cm/d. The three sample cities selected in the process of calculating urban building water dissipation are representative of northern, southern, and western China. There are apparent differences in the degree of economic development among these regions, so the BWD intensity also differs. The intensity values were obtained by dividing the building water dissipation by the area of nighttime light. In 2015, the BWD intensity in Beijing was 188 mm and that in Lanzhou was 148 mm.
Results And Discussion
Table 3 shows that the global building floor areas in 2015 followed the trend of Asia > North America > Europe > South America > Africa > Oceania. Asia has the largest territory and population in the world. The total GDP of Asia ranked first in the world in 2015, and its building floor area was also the largest. Most of North America's area consists of large nations, and the economies of the United States and Canada rank at the forefront of the world. In a condition similar to that of North America, Europe contains several developed countries whose total GDP and per-capita GDP were close to those of North America in 2015.
People's living standards are relatively high in North America and Europe. Oceania's building floor area was smaller than that of developing Africa, mainly because of its small territorial area and small population. The building water dissipation in a region is related to the economic situation and human water consumption habits of that region. The per-capita GDP was regarded as the reference parameter and was used to calculate the D_f value. To ensure the rationality of the calculation results, this paper selected the per-capita GDP of countries with vigorous light intensities on each continent, as shown in Fig. 9. In the figure, the mean value is taken to represent the urban per-capita GDP of each continent, and the per-capita GDP data were obtained from the World Bank 44, as shown in Table 4. According to the statistical data of the per-capita GDP of each continent in 2015 released by the World Bank and the calculation results, the ranking was Oceania >
North America > Europe > Asia > South America > Africa. To calculate the D_f values of the six continents reasonably, this paper subtracted the S(x) values of Beijing and Lanzhou from the S(x) values of the six continents; the smaller the absolute value of the difference was, the more similar the degree of economic development was. Then, based on the D_f values of Beijing and Lanzhou, the D_f values of each continent were calculated according to the proportional relationship. The results are shown in Table 5. After the calculations, Fig. 10 shows the resulting total water dissipation of buildings on all continents in 2015: Asia > North America > Europe > South America > Africa > Oceania. In 2015, the global building water dissipation was 127 billion m³. Asia had the largest share (36.8%) of the building water dissipation due to its largest population and fast-growing economy. The building water dissipation of North America accounted for 28.0% of the global total, and approximately 88% of North American areas belong to developed countries, such as the United States and Canada. In particular, the U.S. economy ranks first in the world, with a very high human development index (HDI) and economic development level. Canada is the second-largest country globally, with a vast territory and a high level of economic development.
Europe is similar to North America in an economic sense, and the building floor area of Europe is second only to that of North America, so the building water dissipation of Europe ranked third, accounting for 22.2% of the total BWD in the world. Oceania has the lowest building floor area proportion and the smallest population, so its building water dissipation was the lowest.
The BWD intensity is also related to economic development. Figure 11 shows the BWD intensities in 2015 in cities of the six continents globally: the BWD intensity in North America was 285 mm, that in South America was 177 mm, that in Europe was 263 mm, that in Oceania was 217 mm, that in Asia was 253 mm, and that in Africa was 140 mm. Due to the intense lighting at night and the large building floor areas of North American cities, the water dissipation per unit urban area was high on this continent.
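The intensity figures above follow from dividing the annual dissipation volume by the lit urban area; a minimal sketch, with continent-scale inputs that are hypothetical but chosen to land near the North American value.

```python
# Sketch of the BWD intensity: annual volume over lit urban area, in mm.
def bwd_intensity_mm(bwd_m3: float, light_area_km2: float) -> float:
    return bwd_m3 / (light_area_km2 * 1e6) * 1000.0  # water depth in mm

# Hypothetical continent: 3.6e10 m^3 dissipated over 126,000 km^2 of lit area
print(f"{bwd_intensity_mm(3.6e10, 126_000):.0f} mm")  # ~286 mm
```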
The above values represent the average building water dissipation in cities on each continent. For buildings in cities, the population number is relatively steady [45-47], so the indoor water dissipation fluctuates minimally and is almost unchanged 10.
Conclusion
The study identified the building areas and distributions in urban areas using NPP-VIIRS nighttime light data collected in 2015. In the calculation process, Xiamen was selected as a sample city with which to calculate the comprehensive nighttime lighting coefficient α. Beijing and Lanzhou were selected as sample cities to verify the rationality of the D_f values for urban buildings with different degrees of economic development, and several main conclusions were obtained, as follows.
According to the nighttime light data of Xiamen city obtained in 2015, this paper deduced that the comprehensive light coefficient value α was 0.015, and the global urban building area was 0.281 million km².
The building floor areas of cities on each continent were mainly proportional to the cumulative light brightness value in the corresponding region. Therefore, the higher the per-capita GDP and the greater the population were, the higher the light brightness value and the larger the building floor area of the region were.
This paper identified the social duality attributes of UWD and proposed the concept of building water dissipation (BWD) and the water dissipation mechanism in buildings. The artificial water supply is the main source of water dissipation for human activities and the primary source of social water dissipation in urban areas. Building water dissipation and the use of artificial sprinklers on roads were regarded as the social side of the UWD. This study estimated the water dissipation of urban buildings globally. The results showed that the total water dissipation amount was 127 billion m³ in 2015, and the building water dissipation in Asia accounted for 36.8% of this total and ranked 1st in the world. It could be concluded that human-related water activities play an increasingly significant role in the social-natural dualistic water cycle process 48. North America and Europe have highly developed economies, and their building water dissipation accounts for 28.0% and 22.2% of the global total, respectively. The higher the degree of economic development is, the greater the water dissipation inside buildings is. The results provide a quantified assessment of urban water dissipation caused by human activities in urban areas of the world; these results are meaningful for analyzing the nexus between water and energy use in urban areas. These results also link the heat island effect with water use in urban areas based on the hydrothermal equilibrium theory.
Declarations
Global night light data representing urban building distribution.
The countries selected to calculate Df for each continent.
Note: The designations employed and the presentation of the material on these maps do not imply the expression of any opinion whatsoever on the part of Research Square concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. These maps have been provided by the authors.
Figure 10. Global floor area and building water dissipation in 2015.
Figure 11. The intensity of global building water dissipation. | 5,717.6 | 2021-04-21T00:00:00.000 | [
"Environmental Science",
"Physics",
"Engineering"
] |
Stabilization of Gob-Side Entry with an Artificial Side for Sustaining Mining Work
Hong-sheng Wang 1,*, Dong-sheng Zhang 2, Lang Liu 1, Wei-bin Guo 1, Gang-wei Fan 2, KI-IL Song 3,* and Xu-feng Wang 2 1 School of Energy Engineering, Xi’an University of Science and Technology, Key Laboratory of Western Mine Exploitation and Hazard Prevention with Ministry of Education, Xi’an 710054, China<EMAIL_ADDRESS>(L.L<EMAIL_ADDRESS>(W.-b.G.) 2 School of Mines, State Key Laboratory of Coal Resources & Safe Mining, China University of Mining & Technology, Xuzhou 221116, China<EMAIL_ADDRESS>(D.-s.Z<EMAIL_ADDRESS>(G.-w.F<EMAIL_ADDRESS>(X.-f.W.) 3 Department of Civil Engineering, Inha University, Incheon 402-751, Korea * Correspondence<EMAIL_ADDRESS>(H.-s.W<EMAIL_ADDRESS>(K.-I.S.); Tel.: +86-029-85556295 (H.-s.W.); +82-32-860-7577 (K.-I.S.)
Introduction
A concrete artificial side (AS) at a gob-side entry (GSE) is an important component of GSE retaining. The long-term stability of the concrete AS is a key issue for successful mining practice [1,2]. However, this stability can be affected by the state of stress and the deformation characteristics. The AS has to withstand mining-induced influences and long-term creep deformation (1-2 years), which can easily lead to stress or deformation failure. If the stress and deformation cannot be controlled effectively, the AS will be destabilized, and the whole gob-side entry will eventually collapse. Therefore, studying the failure criteria and the stress variation of the AS is important for the successful implementation of GSE retaining. Based on a theoretical study, a failure criterion can be implemented in practical design, which will improve the recovery rate of coal resources and ensure working safety.
The behavior of a material can be defined with a strength criterion. The Tresca yield criterion [3] can be used to define the behavior of an elastic material: when the maximum shear stress reaches a certain value, a Tresca material will yield and slide along the direction of maximum shear stress. In the Mohr criterion [4], the failure of a point in the material is determined mainly by the maximum and minimum principal stresses, and the failure of a Mohr material is independent of the intermediate principal stress. In the Coulomb criterion [5], the ultimate shear strength of the material is closely related to the cohesion and the internal friction angle. These three representative strength criteria consider only the influence of normal stress on a shear stress plane; that is, they account for the maximum and minimum principal stresses but not the intermediate principal stress, and are therefore known as single shear strength theories. Yu (1983) proposed a generalized twin shear strength theory that is appropriate for materials with different strengths in tension and compression [6][7][8][9]. The failure mode of the AS is affected by many factors [10][11][12], especially dynamic loads induced by blasting, which can cause significant damage to the AS. In this study, we determined the failure characteristics and the variation of shear stress of the concrete AS of a GSE affected by coal mining. To achieve this, a uniaxial compression failure experiment was conducted with large- and small-scale specimens, and the distribution characteristics of shear stress were obtained from a numerical simulation. Based on the results, a failure criterion was determined and implemented in a strengthening method for the artificial side.
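For reference, the single-shear criteria named above admit compact textbook statements (standard forms, not reproduced from this paper's own equations; principal stresses ordered σ1 ≥ σ2 ≥ σ3):

\[ \text{Tresca:}\quad \tau_{\max} = \frac{\sigma_1 - \sigma_3}{2} = k, \qquad \text{Mohr--Coulomb:}\quad \sigma_1 = \sigma_3 \tan^2\!\Big(45^\circ + \frac{\varphi}{2}\Big) + 2c\,\tan\!\Big(45^\circ + \frac{\varphi}{2}\Big), \]

where k is the shear yield stress, c the cohesion, and φ the internal friction angle. Note that neither expression involves the intermediate principal stress σ2, which is exactly why they are called single shear theories.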
Structural Model of the Artificial Side
A schematic diagram of a GSE is presented in Figure 1. The immediate roof above the AS prevents ground movement and rock deformation induced by GSE formation. The stiffness of the main roof and immediate roof is assumed to be much greater than that of the AS and the rigid floor that supports it. Thus, deformation occurs at the upper boundary of the artificial side, and the vertical displacement at the lower boundary is negligible. Based on these conditions, a structural model of the AS was established, as shown in Figure 2.

During the entire service period, the AS experiences three stages [1,2]. The bearing load of the AS during face mining can be divided into two parts: the dead weight of the immediate roof and main roof, and the additional load induced by rotation of the main roof [13]. Thus, high abutment pressure develops behind the working face, and the deformation of the AS increases under this pressure. Fracturing of the AS induced by the abutment pressure decreases its structural integrity and load-carrying capacity; ultimately, failure of the AS can cause overall structural instability of the GSE.
Uniaxial Compression Failure Experiment
The mixture ratio of a specimen was determined according to the specifications for the mix proportion design of ordinary concrete (JGJ 55-2011) [14], as shown in Table 1. In the mixing process, the amounts of the various materials need to be adjusted based on the moisture content of the sand and the particle size of the pebbles. Based on Table 1, small and large specimens were fabricated. The small specimen was 70 mm × 70 mm × 70 mm, and the large specimen was 1500 mm × 600 mm × 900 mm (length × width × height, respectively). The specimens were moist-cured for 28 days to delay shrinkage. One-time-concreting shaping technology was applied to cast the large specimen. The fabricated specimens are shown in Figure 3.
The uniaxial compression failure experiment on the large-scale specimen was conducted with the custom-built large-scale experimental system shown in Figure 4. This system can be used to characterize the behavior of large-scale coal and rock under high loading conditions. The maximum specimen size is 1500 mm × 600 mm × 900 mm, the maximum pressure is 20 MPa, and the strain gauges have a measurement accuracy of one microstrain. For the small-scale specimens, the experiment was conducted with a computer-controlled electronic universal testing machine.

Figures 5 and 6 present the failure patterns of the concrete AS for the different specimen sizes. An X-shaped failure pattern was found in the small-scale specimens; the angle between the failure plane and the upper and lower planes ranged from 48° to 56°, as shown in Figure 5. The failure pattern of the large-scale specimen also appeared X-shaped, as shown in Figure 6. The AS showed brittle failure after transfixed fractures formed, and then it quickly lost its load-carrying capacity. From Figures 5 and 6, it can be seen that the failure pattern appears X-shaped regardless of specimen size. This pattern is hard to explain with a single maximum shear stress (τ13); rather, the X-shaped failure pattern results from two sets of shear stresses (τ13 and τ12), as shown in Figure 7.
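Here τ13 and τ12 denote the principal shear stresses acting on planes bisecting the principal axes; their definitions are not spelled out in the extracted text, but in the standard notation (σ1 ≥ σ2 ≥ σ3) they are

\[ \tau_{13} = \frac{\sigma_1 - \sigma_3}{2}, \qquad \tau_{12} = \frac{\sigma_1 - \sigma_2}{2}. \]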
Distribution of the Maximum Shear Stress
An experiment can characterize the failure mode, but the variation and distribution of the maximum shear stress within a specimen cannot be monitored experimentally, so numerical analysis was performed using the commercial nonlinear analysis software LS-DYNA. The Drucker-Prager criterion was adopted for the specimen, and the mechanical parameters of the constitutive model are presented in Table 2. The simulation was divided into 10 load steps; a 2 MPa step load was imposed on top of the specimen until the axial deformation reached 3%. The boundary conditions of the numerical model were identical to the experimental setup, and the four sides of the specimen were free to deform. Figure 8 shows the variation of the maximum shear stress in the small specimen during the uniaxial compression failure experiment. A shallow maximum shear stress band appeared at the upper and lower parts of the specimen in the initial stage of loading. As the load increased, the shear stress level increased at the center of the specimen. The shear stresses that started from the upper and lower parts of the specimen overlapped and formed the X-shaped maximum shear stress distribution shown in Figure 8f. This distribution pattern is basically consistent with the failure pattern of the small-scale specimen in Figure 5, so it can be concluded that the distribution of the maximum shear stress determines the failure pattern of the small-scale specimen.
The variation of the maximum shear stress in the large-scale specimen during the uniaxial compression failure test is shown in Figure 9. The maximum shear stress initially appeared at the upper and lower parts of the specimen. Two internal shear stress bands developed in the middle of the specimen, and the distribution range of the maximum shear stress band extended to the peripheral parts as the load increased. The maximum shear stress band formed an X-shaped pattern, as shown in Figure 9f. This distribution pattern is consistent with the failure pattern of the large-scale specimen in Figure 6, so it can be concluded that the distribution of the maximum shear stress determines the failure pattern of the large-scale specimen as well.
Figure 10 shows the contours of the maximum shear stress and the maximum principal stress of the large-scale specimen. Based on the stress distribution pattern, the specimen was divided into four parts, as shown in Figure 11. The upper and lower parts in Figures 10 and 11 are compressive stress zones, while the left and right parts are tensile stress zones. The locations of these zones are closely related to the aspect ratio of the specimen: the area of the compressive stress zone is inversely proportional to the aspect ratio, while the area of the tensile stress zone is proportional to it [1,2]. The distribution pattern of the maximum shear stress appears X-shaped, and the maximum shear stresses in the four shear planes are identical in magnitude and opposite in direction.
Orthogonal Octahedron and Its Stress Function
The cube unit considered in this study is a space aliquot body. A cube unit is commonly used in general material mechanics and in elastic and plastic mechanics. It is composed of three pairs of mutually perpendicular sections, as shown in Figure 12a. When four sections inclined at 45 degrees in the σ1-σ3 space intersect the cube unit, the space aliquot body becomes the unit body of maximum shear stress, as shown in Figure 12b. A set of mutually perpendicular sections carrying principal shear stress is then used to cut this unit body, and the space aliquot body becomes the orthogonal octahedron shown in Figure 12c. There are two sets of principal shear stresses in the orthogonal octahedron, τ13 and τ12; thus, it can be considered a twin shear stress unit. The stress components in the orthogonal octahedron are τ12, τ13, τ23, σ12, σ13, and σ23. The twin shear stress function can be defined as Equation (1); in this function, two straight inclined lines show the effect of the intermediate principal stress σ2.
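Equation (1) is not reproduced in the extracted text; a standard form consistent with the description is Yu's twin shear yield function (presented here as a reconstruction, not as the paper's verbatim equation):

\[ f = \tau_{13} + \tau_{12} = C \quad \text{when } \tau_{12} \ge \tau_{23}, \qquad f' = \tau_{13} + \tau_{23} = C \quad \text{when } \tau_{12} \le \tau_{23}, \]

whose yield locus is piecewise linear, the "two straight inclined lines" reflecting the role of σ2.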
Twin Shear Strength Theory
For the twin-shear stress unit shown in Figure 12c, the stress components can be defined as in Equation (2). The twin shear strength theory was established based on the concept of the twin shear stress state in this unit [8]. In this theory, the material fails or yields when the influence function of the two sets of shear stresses, together with the normal stresses on the corresponding planes of the twin shear stress unit, reaches a critical state. The mathematical expression is defined as follows:
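The equations themselves are not reproduced in the extracted text; a standard statement of Yu's twin shear strength theory consistent with the surrounding description (and presumably corresponding to Equations (2)-(4)) is the following, with σ1 ≥ σ2 ≥ σ3. Stress components:

\[ \tau_{13} = \frac{\sigma_1-\sigma_3}{2},\;\; \tau_{12} = \frac{\sigma_1-\sigma_2}{2},\;\; \tau_{23} = \frac{\sigma_2-\sigma_3}{2}, \qquad \sigma_{13} = \frac{\sigma_1+\sigma_3}{2},\;\; \sigma_{12} = \frac{\sigma_1+\sigma_2}{2},\;\; \sigma_{23} = \frac{\sigma_2+\sigma_3}{2}; \]

failure condition:

\[ F = \tau_{13} + \tau_{12} + \beta(\sigma_{13} + \sigma_{12}) = C \quad \text{when } \tau_{12} + \beta\sigma_{12} \ge \tau_{23} + \beta\sigma_{23}, \]
\[ F' = \tau_{13} + \tau_{23} + \beta(\sigma_{13} + \sigma_{23}) = C \quad \text{when } \tau_{12} + \beta\sigma_{12} \le \tau_{23} + \beta\sigma_{23}. \]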
where β and C are material parameters that can be determined from the tensile strength σt, the compressive strength σc, and the ratio of tensile to compressive strength (α = σt/σc), as given in Equation (5). By substituting Equations (2), (3), and (5) into (4), the principal-stress form of the twin shear strength theory, Equation (6), is obtained. As Equation (6) shows, the twin shear strength theory reflects the influence of the intermediate principal stress σ2 on the material behavior.
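Again reconstructing the standard forms (presumably Equations (5) and (6); not the paper's verbatim equations): with α = σt/σc,

\[ \beta = \frac{1-\alpha}{1+\alpha}, \qquad C = \frac{2\,\sigma_t\,\sigma_c}{\sigma_t+\sigma_c}, \]

and in principal stresses

\[ f = \sigma_1 - \frac{\alpha}{2}(\sigma_2+\sigma_3) = \sigma_t \quad \text{when } \sigma_2 \le \frac{\sigma_1+\alpha\sigma_3}{1+\alpha}, \]
\[ f' = \frac{1}{2}(\sigma_1+\sigma_2) - \alpha\sigma_3 = \sigma_t \quad \text{when } \sigma_2 \ge \frac{\sigma_1+\alpha\sigma_3}{1+\alpha}. \]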
Reinforcement of Artificial Side
According to the experimental and numerical studies, an X-shaped failure pattern is commonly found in an artificial side, regardless of specimen size. From a theoretical point of view, this pattern is mainly induced by the combination of two sets of shear stresses. To enhance the performance of the artificial side, bolt-type reinforcement, such as an anchor bolt, bolt, or anchor bar, is suggested in this study, as shown in Figure 13.
Reinforcement Mechanism
The limit equilibrium condition of a plastic softening material can be defined as in Equation (7), where φ and c are the internal friction angle and the cohesion, and σ1 and σ3 correspond to the load-carrying capacity and the lateral confining stress, respectively. The bolt-type reinforcement develops a lateral confining effect, σ3 > 0, and from Equation (7), σ1 increases as σ3 increases. Increasing the lateral confining stress changes the state of stress on both sides of the AS from two-dimensional to three-dimensional; in the three-dimensional state of stress, the plastic capacity of the AS can be fully mobilized under excessive loading. When the AS is stable, bolt-type reinforcement constrains the lateral deformation, improves the internal friction angle φ and cohesion c, and enhances the shear strength of the artificial side. When the AS is unstable due to damage, the constraining effect induced by bolt-type reinforcement prevents broken blocks from sliding along the shear failure plane and improves the residual strength of the artificial side, so the load-carrying capacity can be sustained. Therefore, bolt-type reinforcement improves φ and c of the AS and enhances its load-carrying and anti-deformation capacity.
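Equation (7) is likewise not reproduced in the extracted text; the Mohr-Coulomb limit equilibrium condition that fits this description is

\[ \sigma_1 = \sigma_3\,\tan^2\!\Big(45^\circ + \frac{\varphi}{2}\Big) + 2c\,\tan\!\Big(45^\circ + \frac{\varphi}{2}\Big), \]

which makes explicit that the load-carrying capacity σ1 grows linearly with the confining stress σ3, with the growth rate set by φ and the intercept by c.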
Validation of Bolt-Type Reinforcement
An additional numerical study was performed to validate the effect of the reinforcement. It was assumed that nine anchor bolts were installed in a large-scale specimen. The variations of the maximum shear stress and maximum principal stress of the reinforced large-scale specimen are presented in Figure 14.
In Figure 14a, the distribution of the maximum shear stress changed: the X-shaped pattern disappeared, the maximum shear stress decreased, and the shear strength increased. The lateral deformation was also constrained significantly. In Figure 14b, the distribution pattern of the principal stress changed: the load-bearing area expanded while the tensile area shrank, and both the maximum principal stress and the maximum tensile stress decreased. Due to the installation of anchor bars, the compressive strength of the AS increased, and its load-carrying capacity was improved.
Geological Conditions
Figure 15 shows the layout of the Left No. 1 Working Face in the West No. 2 Working Area of the Jixian Coalmine. The thickness of the No. 9 seam ranged from 1.4 to 1.68 m, with an average of 1.6 m, and the seam pitch was 10°. The immediate roof was grey medium sandstone with a thickness of 1.7 m, and the immediate floor was grey fine sandstone with a thickness of 0.6 m. The main roof and floor were black siltstone with thicknesses of 2.6 m and 3.3 m, respectively. A belt transportation roadway was tunneled along the roof of the No. 9 seam and used for air intake and coal transportation; the length of the roadway was 768 m. GSE retaining technology was adopted to increase the resource recovery rate. The section of the GSE is shown in Figure 16.
Parameters of the Artificial Side
To decrease the load on the AS when the main roof fractures and rotates, a soft-hard structure was used for the AS [1,2,15]. Based on the seam thickness, the given deformation of the key block in the main roof was about 240 mm, so the thicknesses of the upper soft structure and the hard structure at the bottom were set to 240 mm and 1400 mm, respectively, giving an AS height of 1640 mm. Previous results [1,2] show that the load-bearing area and load-carrying capacity are highest when the aspect ratio is 1:1; therefore, the width of the AS was set to 1600 mm.
Construction Method of the Artificial Side
Considering the geological conditions of the working face, a rail transport and monorail system was used to transport large concrete blocks that were fabricated in advance. The concrete blocks were stacked in two layers to build up the artificial side.
Reinforcement of Artificial Side
Anchor bars were installed in the AS to improve its load-carrying capacity. The anchor bars were made of threaded steel, 18 mm in diameter and 1600 mm in length. One hundred millimeters at both ends were folded inward, and the anchor bars were fixed with round steel 14 mm in diameter. The anchor bar framework was fixed at the center of a mold, and then the concrete paste was added. The AS was wet-cured for more than 28 days at normal temperature. The anchor bar framework and the large concrete blocks are shown in Figure 17a,b, respectively.

Performance of Artificial Side

Effect of GSE Retaining for the First Mining

Mining work started on 26 September 2012 at the Left No. 1 Working Face and finished on 25 October 2013. The length of the retaining roadway was 768 m. Deformation induced by the first mining work is shown in Figure 18.
Effect of GSE Retaining for the Second Mining
The second mining work started on 17 March 2014 at the Left No. 2 Working Face and was finished recently. Even after the second mining process, the AS was preserved in intact condition except for partial cracks that appeared in the shotcreting layer; the AS withstood the influence of the second mining. Deformation induced by the second mining is presented in Figure 19.
Performance Evaluation
The AS was constructed and applied at the mining site to stabilize the entry. The AS with built-in anchor bars is still intact and has been stable for the last three years, even after the two mining works. It can be seen that the anchor bars improved the load-carrying capacity of the AS and secured the stability of the working face.
Conclusions
An AS was introduced to stabilize the entry of a mining site, and a failure criterion for the AS was developed. A theoretical solution was derived to explain the failure mechanism, and numerical and experimental studies were performed to validate the theoretical solution. An actual AS was installed at a mining site, and its performance was validated.
An X-shaped failure pattern was found in both the small- and large-scale specimens. The X-shaped failure pattern obtained from experimental testing shows good agreement with the numerical simulation result, and the failure pattern was clearly explained by the combination of two sets of principal shear stresses. Therefore, it can be concluded that the distribution of the maximum shear stress determines the failure pattern, regardless of specimen size.
Bolt-type reinforcement was introduced to enhance the load-carrying and anti-deformation capacity of the artificial side. When the AS is stable, bolt-type reinforcement constrains the lateral deformation, improves the internal friction angle and cohesion, and enhances the shear strength of the artificial side. When the AS is unstable due to damage, the constraining effect induced by bolt-type reinforcement prevents broken blocks from sliding along the shear failure plane and improves the residual strength of the AS, so that the load-carrying capacity can be sustained. In the field application, the reinforced AS remained stable even after mining operations over the last three years. It can be concluded that the anchor bars improved the load-carrying capacity of the AS and secured the stability of the working face.
Figure 2. Structural model of the artificial side. a: width, b: height, θ: rotating angle of main roof, p: load.
Figure 4. The uniaxial compression failure experiment system. (a) Schematic diagram of the experiment system; (b) Material object of the experiment system.
Figure 5. X-shaped failure pattern of the small-scale specimen.
Figure 7. Failure characteristics of a test specimen.
Figure 10. Stereogram of the iso-surface of the large-scale specimen. (a) Maximum shear stress; (b) Maximum principal stress.
Figure 11. Stress analysis of the specimen.
Figure 14. Stereogram of the stress distribution in the reinforced large-scale specimen. (a) Maximum shear stress; (b) Maximum principal stress.
Figure 15. Layout of the working face.
Figure 16. Section of the GSE.
Figure 18. Deformation due to the first mining. (a) Large deformation at 180 m behind the working face; (b) Stabilization with shotcreting.
Table 2. Mechanical parameters for numerical analysis. | 9,840.6 | 2016-07-04T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
UniPrimer: A Web-Based Primer Design Tool for Comparative Analyses of Primate Genomes
Whole genome sequences of various primates have been released due to advanced DNA-sequencing technology. A combination of computational data mining and a polymerase chain reaction (PCR) assay to validate the data is an excellent method for conducting comparative genomics, so designing primers for PCR is an essential procedure in comparative analyses of primate genomes. Here, we developed and introduce UniPrimer for use in such studies. UniPrimer is a web-based tool that designs PCR- and DNA-sequencing primers. It compares the sequences from six different primates (human, chimpanzee, gorilla, orangutan, gibbon, and rhesus macaque) and designs primers on the regions conserved across species. UniPrimer is linked to the RepeatMasker, Primer3Plus, and OligoCalc software to produce primers with high accuracy, and to UCSC In-Silico PCR to confirm whether the designed primers work. To test the performance of UniPrimer, we designed primers on sample sequences using UniPrimer and manually designed primers for the same sequences. The comparison of the two processes showed that UniPrimer was more effective than manual work in terms of saving time and reducing errors.
Introduction
The field of comparative genomics has emerged as a result of several whole-genome sequencing projects. At present, six primate whole-genome sequences (human, chimpanzee, gorilla, orangutan, gibbon, and rhesus macaque) are available at the UCSC genome browser (http://www.genome.ucsc.edu/) [1][2][3][4]. Based on these data sets, several insertions/deletions (INDELs) and copy number variations (CNVs) have been studied by comparing primate genome sequences [5][6][7][8][9][10]. However, the computational output should be experimentally verified using the polymerase chain reaction (PCR), quantitative PCR, comparative genome hybridization arrays, or single nucleotide polymorphism genotyping arrays. Among these wet-bench methods, PCR is the most popular and easily accessible technique in molecular biology. Primer selection is very important in PCR-based systems because a specific pair of primers should amplify only a single target from a whole genome; in other words, the properties of the primers determine the specificity of the PCR.
Several web-based tools for primer design, such as Primer3 [11], Primer3Plus [12], PDA [13], PRIMO [14], and PrimeArray [15], have been developed and upgraded. Along with these, OligoCalc [16] and the Oligo Analysis tool (http://www.operon.com/tools/oligo-analysis-tool.aspx/) are available to calculate the molecular weight, GC content, melting temperature, intermolecular self-hybridization, and intramolecular hairpin loop formation of oligomers or primers. These web-accessible engines are particularly useful for manually selecting PCR primers and optimizing the PCR assay. Here, we introduce a novel web-based primer design tool, UniPrimer, which compares multiple primate sequences and designs primers for the conserved sequences. Before designing primers, UniPrimer is linked to the repetitive DNA annotation utility RepeatMasker (http://www.repeatmasker.org/cgi-bin/WEBRepeatMasker/) to eliminate the building of any candidate primers containing repetitive elements. The candidate primers are also linked to OligoCalc so that users can easily and rapidly assess the properties of the designed primers. UniPrimer unites all the algorithms and applications needed for designing good-quality primers. Users save a considerable amount of time and energy that would otherwise be spent finding each of these web tools separately, submitting sequences over and over again when several primate genomes are compared, and validating primers manually one by one.

Figure 1: UniPrimer work flow chart. The flowchart represents the overall process of UniPrimer. The sequential steps are denoted in red alphabetical order and numbers. The Start and End symbols, pictured as circles, indicate "submit entry" and "receive product", respectively.
UniPrimer is an easy-to-use tool that combines the preeminent primer-designing algorithms available on the internet so far, and it is accessible at http://biosw.dankook.ac.kr/UniPrimer/.
Sources.
UniPrimer incorporates the search results of popular web-based tools and software to produce its output. The following is a list of the programs utilized in this study, with brief introductions.
BLAT (see [17]). BLAT is a popular and powerful homology search tool used to look up the location of a sequence in a genome or to determine the exon structure of an mRNA. It is designed to quickly find DNA sequences of 92% or higher similarity with lengths of 40 bases or more, and it also searches protein sequences of 80% or greater similarity.
BLAT's speed and sensitivity surpass other tools of its kind; this algorithm is much faster and more accurate.
RepeatMasker. (http://www.repeatmasker.org/cgi-bin/WEB-RepeatMasker/) It is a program that screens DNA sequences for repetitive elements and low complexity DNA sequences and delivers a detailed annotation of the repeats that are present in the query sequences. Currently, it is more popular than other similar programs and summarizes repetitive elements found in the primate genomic DNA sequences.
Primer3Plus (see [12]). Primer3Plus is a web interface for Primer3, a program for designing PCR primers as well as hybridization oligos and sequencing primers. While designing primers, it takes into account many criteria, such as PCR product size, oligonucleotide melting temperature, and GC content, all of which are user-specifiable. As a result, the user can obtain as accurate a primer design as possible.
OligoCalc (see [16]). It is a web-based oligonucleotide properties calculator that computes single or double-stranded DNA and RNA properties, including molecular weight, solution concentration, melting temperature, estimated absorbance coefficients, self-complementarity, and hairpin loop formation.
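As an illustration of the kinds of checks OligoCalc performs, here is a minimal sketch (these are simplified stand-ins, not OligoCalc's actual algorithms): GC content, the basic Wallace-rule melting temperature, and a crude self-complementarity test.

COMP = str.maketrans("ACGT", "TGCA")

def gc_percent(primer: str) -> float:
    # Percentage of G and C bases in the primer.
    p = primer.upper()
    return 100.0 * sum(b in "GC" for b in p) / len(p)

def wallace_tm(primer: str) -> float:
    # Wallace rule, Tm = 2(A+T) + 4(G+C): a rough estimate for short oligos.
    p = primer.upper()
    return 2 * sum(b in "AT" for b in p) + 4 * sum(b in "GC" for b in p)

def may_self_anneal(primer: str, min_run: int = 4) -> bool:
    # Crude proxy for hairpin/self-annealing: does any min_run-long
    # stretch of the primer also occur in its own reverse complement?
    p = primer.upper()
    rc = p.translate(COMP)[::-1]
    return any(p[i:i + min_run] in rc for i in range(len(p) - min_run + 1))

print(gc_percent("ATGCGTACGTTAGC"), wallace_tm("ATGCGTACGTTAGC"))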
UCSC In-Silico PCR. (http://genome.csdb.cn/cgi-bin/hgPcr/) It searches a sequence database with a pair of PCR primers. It is fast because an indexing strategy is used for the search.
Table 1 shows the development environment for UniPrimer. It was developed on Java Server Pages based on Apache Tomcat 6.0, and the user interface was written in HTML and jQuery. It is compatible with recent versions of the Mozilla, Safari, and Chrome browsers and is accessible from any computer with internet access.

Work Flowchart of UniPrimer (see Figure 1)
User Interface. The first column depicts the user interface and options, followed by columns of processing steps. Only the first-column contents are visible to the user.

BLAT (see [17]). When a user submits a sequence, UniPrimer runs BLAT (step A in Figure 1), which searches for sequences highly similar to the input. The percent identity can be either a user-defined value or the default (>93%, depending on the divergence from humans).

RepeatMasker. (http://www.repeatmasker.org/cgi-bin/WEBRepeatMasker/) In the next step (step B in Figure 1), the program identifies repetitive elements in the query sequence and returns their detailed annotation.
Primer3Plus (see [12]). Primers can be designed on any region of the query sequence (step C-1 in Figure 1). However, to obtain better PCR results, the user should take into account the outputs of BLAT (step C-2 in Figure 1) and RepeatMasker (step C-3 in Figure 1).
OligoCalc (see [16]). UniPrimer checks the self-complementarity of candidate primers using OligoCalc (step D in Figure 1). If a candidate primer contains no potential hairpin formation, 3′ complementarity, or potential self-annealing sites, it returns "NONE". Thus, the user is able to choose good-quality primers for the PCR assay.
UCSC In-Silico PCR. (http://genome.csdb.cn/cgi-bin/hgPcr/) In the final step (step E in Figure 1), UniPrimer searches for a sequence with the pair of PCR primers using UCSC In-Silico PCR. When successful, it returns the primer pair with the sequence lying between them. A more detailed review of the above steps is included in the UniPrimer interface section.
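Conceptually, an in-silico PCR search does something like the following minimal sketch (a naive scan for illustration, not UCSC's indexed implementation): find the forward primer on the template and the reverse primer's reverse complement downstream, then report the product between them.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.upper().translate(COMP)[::-1]

def in_silico_pcr(template: str, fwd: str, rev: str, max_len: int = 5000):
    # Return (start, end, product) for the first amplicon found, or None.
    t, f, r = template.upper(), fwd.upper(), revcomp(rev)
    i = t.find(f)
    while i != -1:
        j = t.find(r, i + len(f))
        if j != -1 and (j + len(r)) - i <= max_len:
            return i, j + len(r), t[i:j + len(r)]
        i = t.find(f, i + 1)
    return None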
Input. A screenshot of the input interface for UniPrimer is shown in Figure 2. The main menu is located in the top right corner (Figure 2a), where the user can find information about the tools, contacts, and a tutorial. The topmost tabs (Figure 2b) in the main screen are pages that will contain the search results for each step after a query sequence is submitted. After selecting the type of query sequence and its assembly (Figure 2c), the user pastes the target sequence directly into the text area (Figure 2d) or uploads it from a file (Figure 2e). The position option (Figure 2f) can be used to limit the query to a specific chromosome or region. It is also possible to add extra sequences at the 5′ and 3′ ends of the query sequence (Figure 2g). Next come the genome types, their assembly versions, and identity percentages (Figure 2h), where up to six genome assemblies are searched for similarity at once. The last option in the input form is selecting the target locus (Figure 2i), for which the user can either accept the default values (+500, −500) or define the positions that the PCR product should contain.
BLAT Search. The BLAT step allows a user to find orthologous sequences between a query sequence and several other genomes. A BLAT search result is shown in Figure 3. Bases identical between the query sequence and its corresponding sequences from other primates are colored black, while mismatches between sequences are combined and marked as red letters. Below the combined view, the user can find links to detailed comparisons of the query sequence with each of the selected genomes separately. It is possible to design primers on mismatching areas, but a sequence 100% conserved among multiple species gives the best PCR result. Notably, UniPrimer's BLAT search compares the sequences from more than two species at the same time, which is particularly helpful for users conducting PCR assays with more than two species.
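The idea of restricting primer design to bases conserved across all selected species can be sketched as follows (assuming pre-aligned, equal-length sequences; a simplified illustration, since UniPrimer itself works from BLAT output):

def conserved_blocks(aligned_seqs, min_len=18):
    # Yield (start, end) runs where every aligned sequence carries the
    # same base, long enough (min_len) to host a primer.
    length = len(aligned_seqs[0])
    start = None
    for i in range(length + 1):
        same = i < length and len({s[i] for s in aligned_seqs}) == 1
        if same and start is None:
            start = i
        elif not same and start is not None:
            if i - start >= min_len:
                yield (start, i)
            start = None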
RepeatMasker. RepeatMasker helps to eliminate the building of any candidate primer containing repetitive elements by masking them; primers built on unmasked areas are more successful than those built on masked areas. The output of RepeatMasker (Figure 4) is a detailed annotation of the repeat elements present in the query sequence. The annotation is shown in two formats: a table listing the repeats (Figure 4a), and the query sequence with repeats highlighted in light green (Figure 4b). On the right side of the window, the user can find options to change the target locus and the type of primer (Figure 4c). After selecting the options, the user obtains a list of designed primers by clicking "Pick primer" in the top right corner of the main screen (step C-3 in Figure 1). Under the masked sequence, separate tables describe the masked areas of each genome.
Details about the table and options are listed below. The table in Figure 4a shows the output of RepeatMasker (http://www.repeatmasker.org/cgi-bin/WEBRepeatMasker/). Exclude repeat: avoid designing primers on repeat sequences (a minimal sketch of this exclusion is given after the option descriptions below).
Include repeat in forward primer or reverse primer: one could include repeat sequences for either forward or reverse primers.
Include repeat in forward primer and reverse primer: repeat sequences may be included in both the forward and reverse primers. Additionally, the user can set the minimum percent divergence (perc. div.) of the repetitive elements that overlap a primer; a higher percent divergence leads to better results.
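A minimal sketch of the "Exclude repeat" behaviour, under the assumption that repeats are soft-masked as lowercase letters (a common RepeatMasker convention): candidate primer windows are restricted to unmasked spans. The sequence and length threshold are illustrative.

```python
import re

def unmasked_spans(soft_masked, primer_len=20):
    """Yield (start, end) spans of unmasked sequence long enough to hold a primer."""
    for m in re.finditer(r"[ACGT]+", soft_masked):  # uppercase = unmasked
        if m.end() - m.start() >= primer_len:
            yield m.start(), m.end()

seq = "ACGTTGACCTGAAGGTCACGGATTT" + "acgtacgtacgtacgt" + "CCTGAAGGTCACGGATTACGT"
print(list(unmasked_spans(seq, primer_len=15)))  # [(0, 25), (41, 62)]
```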
Pick Primer.
Pick primer is linked to Primer3Plus, which selects the primer pairs that best fit the selected parameters and orders them by quality. The Pick primer output is shown in Figure 5. The table on the left (Figure 5a) contains detailed information on all possible primers present in the query sequence, as well as the whole sequence with the BLAT and RepeatMasker results shown together: left primers are highlighted in purple, right primers in yellow, BLAT mismatches are marked in red, and masked repeats are highlighted in light green.
As in the RepeatMasker step, if the user wants more specific primers, the Pick primer step can be recalculated after customizing the additional options (Figure 5b) on the right-hand side of the page.
How to read the table result (Figure 5a) and how to adjust the options for Pick primer (Figure 5b) are described below; these options follow Primer3 [11].
Number to return: the maximum number of primer pairs to return. Setting this parameter to a large value will increase running time.
Primer Size: minimum, optimum, and maximum lengths of the primer oligo.
Primer TM: minimum, optimum, and maximum melting temperatures for the primer oligo in Celsius.
Product TM: minimum, optimum, and maximum melting temperatures for the amplicon.
GC %: minimum, optimum, and maximum percentages of Gs and Cs in any primer (a short sketch showing how the Tm and GC% filters might be applied follows this option list).
Mispriming/Repeat library: indicates which mispriming library (if any) should be used to screen for interspersed repeats or for other sequences to avoid as primer locations.
We added a mismatch base range option (beginning at the 5' end) and a number of mismatch bases option. A mismatched base at the 5' end of the primer is more tolerable than one at the 3' end for obtaining successful PCR amplification.
Mismatch base range (beginning at the 5' end): allows mismatched bases within this distance from the 5' end.
Number of mismatch bases: the number of mismatched bases allowed within the mismatch base range.
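A minimal sketch of how the Tm and GC% constraints above might be applied as filters. The Wallace rule used here (2 °C per A/T, 4 °C per G/C) is a common quick estimate, whereas Primer3 itself uses a more accurate nearest-neighbour thermodynamic model; the cut-off values are illustrative.

```python
def wallace_tm(primer):
    """Rough melting temperature in Celsius by the Wallace rule."""
    at = sum(primer.count(b) for b in "AT")
    gc = sum(primer.count(b) for b in "GC")
    return 2 * at + 4 * gc

def gc_percent(primer):
    return 100 * sum(primer.count(b) for b in "GC") / len(primer)

def passes_filters(primer, tm_range=(57, 63), gc_range=(40, 60)):
    return (tm_range[0] <= wallace_tm(primer) <= tm_range[1]
            and gc_range[0] <= gc_percent(primer) <= gc_range[1])

print(passes_filters("AGGTCCTGAAGGTCACGGAT"))  # True (Tm 62 °C, GC 55%)
```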
Final Result. The last page of UniPrimer contains the primer information picked by the user (Figure 6). The upper part of the table shows the left and right primer sequences and their melting temperatures in Celsius. Figure 7 shows a pop-up window, reached by clicking the "OligoCalc" button in Figure 6, in which potential hairpin, 3' complementarity, and self-annealing sites of the candidate primers are calculated and displayed. The lower part of the table in Figure 6 shows the genome type, the genomic position, and the expected size of the PCR product. Additionally, the user can access UCSC In-Silico PCR by clicking the "Product" button on the right side of Figure 6 (Figure 8). In the In-Silico PCR results, the primers are written in capital letters and the sequence lying between them is written in lowercase letters (Figure 8).
Performance Comparison
As mentioned before, UniPrimer is built to compare up to six primate sequences at once and design primers on the conserved regions among them. We believe that using this tool to design PCR primers saves a great deal of time and reduces possible errors, for the two reasons described below. First, the user is not required to submit the sequence over and over again to match it against each of the selected genomes. Second, since masked and mismatching areas in the query sequence are marked accordingly in the Pick primer step (Figure 5), the user is able to determine a proper region for designing good primers. To estimate performance and scalability, we used UniPrimer to design PCR primers using, as a query, a sample sequence related to Alu recombination-mediated deletions in the human genome [8]. In addition, we designed primers for the same sequence using a manual method to test the efficiency of UniPrimer. Table 2 shows the approximate time consumed by UniPrimer and by the manual method to design primers for the sample sequence. The result indicates that UniPrimer works much faster than the manual method: for the manual method, the user needs to open each web-based tool separately and submit the query sequence in each window. The loading time required to return a result depends on the length of the query sequence and the number of genomes selected by the user, and any error that occurs adds further time.
Conclusions
We developed UniPrimer, a web-based tool to design PCR- and DNA-sequencing primers. The tool finds conserved regions across different primate species and designs primers for those regions; users can then select various options to pick the best primer for their purpose. UniPrimer was developed to reduce the time required for designing PCR primers and the errors that occur during the process. We conducted a performance test to determine whether the tool works as intended, and the result showed that UniPrimer is easy to use and saves time and effort. In conclusion, we believe that UniPrimer could be a useful tool for comparative analyses of primate genomes.
"Biology",
"Computer Science"
] |
Opportunities and considerations for visualising neuroimaging data on very large displays
Neuroimaging experiments can generate impressive volumes of data and many images of the results. This is particularly true of multi-modal imaging studies that use more than one imaging technique, or when imaging is combined with other assessments. A challenge for these studies is appropriate visualisation of results in order to drive insights and guide accurate interpretations. Next-generation visualisation technology therefore has much to offer the neuroimaging community. One example is the Imperial College London Data Observatory; a high-resolution (132 megapixel) array of 64 monitors, arranged in a 313 degree arc with a 6 metre diameter, powered by 32 rendering nodes. This system has the potential for high-resolution, large-scale display of disparate data types in a space designed to promote collaborative discussion by multiple researchers and/or clinicians. Opportunities for the use of the Data Observatory are discussed, with particular reference to applications in Multiple Sclerosis (MS) research and clinical practice. Technical issues and current work designed to optimise the use of the Data Observatory for neuroimaging are also discussed, as well as possible future research that could be enabled by the use of the system in combination with eye-tracking technology.
A natural trend in many scientific disciplines is towards greater size and complexity of the empirical data sets that are collected. This may be driven by the development of entirely new research methodologies or diagnostic tests, further refinements of existing technology (e.g. greater resolution of imaging, or higher speed of sampling), or by the incorporation of multiple measurement methods to examine a single question. In the era of 'Big Data' (Katal et al., 2015;Marx, 2013) some scientists are now developing specialist techniques to handle truly enormous data sets.
This trend is certainly evident in the field of neuroimaging. Functional Magnetic Resonance Imaging (fMRI) is now the workhorse method in cognitive neuroscience and can generate impressively large and complex data sets. Recent advances in fMRI acquisition software to achieve increased spatial and temporal resolution (e.g. Moeller et al., 2010) have driven a further increase in data volumes. Large scale endeavours such as the Human Connectome Project (HCP; Van Essen et al., 2013) aim to gather a variety of different data from large cohorts. The HCP is currently acquiring data using four different MRI procedures (structural, resting-state fMRI, task fMRI, and diffusion imaging), from 1200 subjects, with a sub-set also completing magnetoencephalography (MEG) and electroencephalography (EEG) scans, and a further sub-set also completing additional scans on a high-field strength (7 Tesla) MRI scanner. With additional demographic, behavioural, and questionnaire measures, the final HCP data set will be a tremendous resource, but its sheer size will require specialist methods of data-handling and analysis.
The HCP illustrates two common features of modern neuroimaging research. First is the collection of multiple types of data from a set of subjects using a single imaging modality, most commonly MRI. These multi-paradigm and multi-modality studies are of great value in providing complementary and converging evidence to characterise healthy brain function, examine various disease states, and in drug development (Matthews et al., 2011). The challenges involved in analysing and manipulating large multi-modal datasets have been partly addressed by advances in hardware and software.
For example, the issue of fusing images from different modalities has largely been solved by modern software (e.g. Gunn et al., 2016) using automated co-registration algorithms that generally produce good-quality results. One remaining challenge is the provision of appropriate visualisation technologies that can provide an overview of a set of (sometimes disparate) results images, and can enable accurate interpretations to be made. Many specialised software tools now exist for visualising neuroimaging data (a comprehensive list, and a useful guide to visualisation, can be found in Madan, 2015); however, their utility is necessarily constrained by the user's display hardware, typically a single standard desktop computer monitor, or several. Advances in display technology have only been incompletely addressed, with most tools not optimised for larger displays, and not incorporating modern user-interface features such as touch input. This has occurred for two reasons: first, physical display hardware has only recently begun to support the higher resolutions required to display the high-fidelity data that are now routinely captured; second, the software used to display scientific data has not benefited from the revolution in distributed, cloud computing that data-processing systems such as MapReduce and Hadoop provide (Patel et al., 2012).
To address these challenges and enable high-resolution, collaborative exploration of detailed scientific data, a new generation of advanced visualisation suites is being developed (Febretti et al., 2013). One example is the KPMG Data Observatory (DO) at Imperial College London. This is a panoramic display covering a 313 degree arc with a 6 m diameter, providing an immersive and collaborative space for the exploration of data (see Figure 1 and Figure 2). The key differentiator of the space is its high resolution, which totals 132 megapixels, in contrast with the low-resolution, projector-based approach of traditional CAVE (Cave Automatic Virtual Environment) systems. The system is driven by 32 rendering nodes that enable distributed analysis and rendering of data, and the display area can be flexibly configured into either a single display surface or a number of sections displaying different information sources or applications. The key goal of the observatory is to provide a collaborative space for research teams to explore and discuss data in a visual format.
This collaborative, high-resolution approach to visualisation has much to offer the neuroimaging community. Particularly: 1) The ability to view images at full resolution without the need for interruptive actions such as zooming or panning through an image.
2) The ability for multiple practitioners to share the same, high resolution, view of data for discussion in a collaborative environment.
3) The ability to simultaneously show many views of the same or complementary data; large-scale visualisation allows complementary data to be shown simultaneously and accessed by a turn of the head, which enables easy comparison.
These benefits are of particular value to collaborative interdisciplinary groups exploring multi-modal imaging studies. One case study under exploration at Imperial College involves Multiple Sclerosis, where synthesising and visualising the results of varied imaging and clinical tests is a challenge being addressed by current work on the DO. Lesion volume change and brain volume change data from the analysis of MRI images are critical components for tracking disease progression. Images from each contrast type (T1, T2, gadolinium-enhanced) provide unique information along with complementary limitations. The ability to register and view all modalities simultaneously enables the viewer to cross-check the same regions of interest across large screens without the current need to toggle between screens or windows.
In addition, tools can be built to replicate inputs across imaging modalities, and between imaging sessions. A tool that highlights a region of interest in one modality can automatically replicate the marking of the same region across other modalities, on other sections of the DO display. Similarly, a lesion may be marked in the baseline image and have that mark replicated in a registered image from a follow-up session; a minimal sketch of the underlying coordinate mapping is given below. The ability to view changes in these images simultaneously, in the context of data collected from other tests such as OCT, VEP, functional criteria or radiological reports, enables viewing of disparate sources of information in tight context.
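The coordinate mapping underlying such replication is straightforward once the images are co-registered: a voxel index is converted to world coordinates through the source image's affine, and back into voxel space through the inverse affine of the destination image. A minimal sketch using nibabel, with small in-memory images standing in for real, co-registered scans:

```python
import numpy as np
import nibabel as nib

# Illustrative stand-ins: a 1 mm isotropic "T1" and a 2 mm isotropic "T2"
# sharing the same world-coordinate origin.
t1 = nib.Nifti1Image(np.zeros((128, 128, 128)), affine=np.diag([1., 1., 1., 1.]))
t2 = nib.Nifti1Image(np.zeros((64, 64, 64)), affine=np.diag([2., 2., 2., 1.]))

def map_voxel(voxel_ijk, src_img, dst_img):
    """Voxel -> world via the source affine; world -> voxel via the
    inverse of the destination affine."""
    world = nib.affines.apply_affine(src_img.affine, voxel_ijk)
    return nib.affines.apply_affine(np.linalg.inv(dst_img.affine), world)

lesion_in_t1 = (64, 80, 42)             # hypothetical marked lesion voxel
print(map_voxel(lesion_in_t1, t1, t2))  # [32. 40. 21.] in T2 voxel space
```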
The large display-area provides space to fit a timeline that incorporates imaging data, clinical events, treatments, written reports, and clinical test results; this gives a unique visualisation of the cause and effects of disease progression, treatments, and relapse events in MS. The flexibility and size of the display space enables novel visualisations, such as the scope to concurrently view individual results from a group of research subjects, or to view multiple sets of longitudinal data from a single subject. Clinicians and researchers can view, correlate, and cross-validate findings across heterogeneous data types within a single environment. As it is designed to be a collaborative environment for exploring images, multiple clinicians can highlight and share findings from different modalities and sources.
Environments such as the DO may one day become commonplace; currently, however, they are an expensive rarity, with only a few comparable systems existing worldwide (e.g. the 'HIPerWall' at the University of California, San Diego). This strongly limits their accessibility for many researchers and clinicians, and these constraints make the use of a tool such as the DO in current clinical practice impractical. The DO is more effectively deployed for clinical research purposes, or perhaps in consultant meetings, when high-level discussion of an individual case is required.
Technical considerations
From a technical perspective, software used within such high-resolution environments must be adapted to cope with higher pixel densities and to work across a network of rendering computers. This is rarely a straightforward change, although with the advent of new rendering systems it is becoming easier. In general, vector-based graphics systems that display neuroimaging data as a 3D mesh using rendering engines like OpenGL (e.g. Surf Ice) work better than bitmap-based display tools, which are limited by the (often poor) underlying resolution of the images themselves. Ideally, display software needs to evolve to support distributed visualisation systems able to render across a large display surface, with the scalability to support high-resolution environments; a sketch of the tiling arithmetic involved is given below. Also important will be the development of appropriate algorithms to support decision-making, for instance to highlight areas of potential interest to clinicians for review. This method of focussing attention and insight will be a critical area of development in the near future, and will need to have extremely high levels of robustness and reliability, particularly if algorithms will eventually have some input into clinical decision-making. Machine-learning platforms such as Google's TensorFlow (Abadi et al., 2016) are likely to be important components of such systems.
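One basic element of that adaptation is deciding which region of the global canvas each panel, and hence each rendering node, is responsible for drawing. A minimal sketch of the tiling arithmetic follows; the 8 x 8 grid and 1920 x 1080 panels are assumptions chosen only because they reproduce the quoted 132-megapixel total, not the DO's documented layout.

```python
def tile_viewports(cols=8, rows=8, panel_w=1920, panel_h=1080):
    """Map each (col, row) panel to its (x, y, w, h) region of the canvas."""
    return {(c, r): (c * panel_w, r * panel_h, panel_w, panel_h)
            for r in range(rows) for c in range(cols)}

vp = tile_viewports()
print(vp[(3, 2)])  # (5760, 2160, 1920, 1080): the panel in column 3, row 2
print(sum(w * h for _, _, w, h in vp.values()) / 1e6, "megapixels")  # 132.7
```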
One potential area of investigation enabled by the DO is the quantification of how images are used within a visualisation space, particularly which data and which image regions are of most interest to clinicians. The key means of doing this is via head- and eye-tracking systems, which are starting to become available within such visualisation spaces. This would provide a means of identifying patterns of behaviour in how clinicians use images to identify the most salient features. One hypothesis worthy of further investigation is how clinicians with different levels of experience explore, manipulate, and interpret a set of different images. Eye tracking can also be used to improve the user experience, ensuring that the most commonly accessed information is placed in prominent display areas.
Conclusions
Visualisation spaces such as the DO are relatively novel environments, and discovering the most effective ways of using them is still an on-going process. High-resolution spaces like the DO offer greater fidelity than previous large-scale systems, which can potentially drive greater insights. The large size of the space enables easy comparison and synthesis of multiple types of data, most obviously imaging formats, but also other clinical or research data types. Finally, the immersive collaboration space they provide can help to initiate and strengthen multi-disciplinary collaboration between clinicians, researchers, and data scientists. Large-format displays like the DO have much to offer and will likely form an important part of future research and clinical practice.
Author contributions
All authors contributed to the first draft of the manuscript, and also were involved in revisions and editing of the final version. All authors have agreed to the final content.
Competing interests
MW's primary employer is Imanova Ltd., a privately owned company specialising in contract research work for the pharmaceutical and bio-technology industries.
MY and DB have no competing interests to declare.
Grant information
The author(s) declared that no grants were involved in supporting this work.
For these reasons, this manuscript will be relevant and interesting to the readership of F1000Research and I recommend its indexing after some minor technical points have been addressed (see below).
Minor recommended changes: 1) The very first citation ("Katal et al., 2015"; paragraph 1, line 4) does not contain a hyperlink. The same is true for the citation "Bakshi et al., 2008" in the first paragraph of the Case Study section.
2) The abbreviation CAVE should be explained at its first instance in the text (5th paragraph).

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Competing Interests: No competing interests were disclosed.
"Computer Science",
"Medicine"
] |
Complete genome sequence of the filamentous anoxygenic phototrophic bacterium Chloroflexus aurantiacus
Background Chloroflexus aurantiacus is a thermophilic filamentous anoxygenic phototrophic (FAP) bacterium that can grow phototrophically under anaerobic conditions or chemotrophically under aerobic and dark conditions. According to 16S rRNA analysis, Chloroflexi species are the earliest branching bacteria capable of photosynthesis, and Cfl. aurantiacus has long been regarded as a key organism for resolving the obscure origin and early evolution of photosynthesis. Cfl. aurantiacus contains a chimeric photosystem that combines some characteristics of green sulfur bacteria and purple photosynthetic bacteria, and also has some unique electron transport proteins compared to other photosynthetic bacteria. Methods The complete genomic sequence of Cfl. aurantiacus has been determined, analyzed and compared to the genomes of other photosynthetic bacteria. Results Abundant genomic evidence suggests that there have been numerous gene adaptations/replacements in Cfl. aurantiacus to facilitate life under both anaerobic and aerobic conditions, including duplicate genes and gene clusters for the alternative complex III (ACIII), auracyanin and NADH:quinone oxidoreductase, and several aerobic/anaerobic enzyme pairs in central carbon metabolism and in tetrapyrrole and nucleic acid biosynthesis. Overall, the genomic information is consistent with the high tolerance for oxygen that has been reported for the growth of Cfl. aurantiacus. Genes for the chimeric photosystem, the photosynthetic electron transport chain, the 3-hydroxypropionate autotrophic carbon fixation cycle, CO2-anaplerotic pathways, the glyoxylate cycle, and the sulfur reduction pathway are present. The central carbon metabolism and sulfur assimilation pathways in Cfl. aurantiacus are discussed. Some features of the Cfl. aurantiacus genome are compared with those of the Roseiflexus castenholzii genome. Roseiflexus castenholzii is a recently characterized FAP bacterium that is phylogenetically closely related to Cfl. aurantiacus. Based on previous reports and the genomic information, perspectives on the role of Cfl. aurantiacus in the evolution of photosynthesis are also discussed. Conclusions The genomic analyses presented in this report, along with previous physiological, ecological and biochemical studies, indicate that the anoxygenic phototroph Cfl. aurantiacus has many interesting and some unique features in its metabolic pathways. The complete genome may also shed light on possible evolutionary connections of photosynthesis.
Background
The thermophilic bacterium Chloroflexus aurantiacus was the first filamentous anoxygenic phototrophic (FAP) bacterium (also known as the green non-sulfur bacterium or green gliding bacterium) to be discovered [1]. The type strain Cfl. aurantiacus J-10-fl was found in a microbial mat together with cyanobacteria when isolated from a hot spring near Sokokura, Hakone district, Japan. Cfl. aurantiacus can grow phototrophically under anaerobic conditions or chemotrophically under aerobic and dark conditions.
The photosystem of Cfl. aurantiacus includes the peripheral antenna complex known as a chlorosome, the B808-866 light-harvesting core complex, and a quinone-type (or type-II) reaction center [2,3]. While Cfl. aurantiacus primarily consumes organic carbon sources (i.e. acetate, lactate, propionate, and butyrate) that are released by the associated cyanobacteria in the Chloroflexus/cyanobacterial mats of its natural habitat, it can also assimilate CO2 with the 3-hydroxypropionate (3HOP) autotrophic carbon fixation cycle [4,5]. Further studies have reported the carbon, nitrogen and sulfur metabolisms of Cfl. aurantiacus [1].
During the transition from an anaerobic to an aerobic world, organisms needed to adapt to the aerobic environment and to become more oxygen-tolerant. Most of the gene products can function with or without oxygen, whereas several proteins and enzymes are known to be exclusively functional in either aerobic or anaerobic environments. Thus, gene replacements have been found in the evolution of many metabolic processes [18][19][20]. Some aspects of the genome annotation of Chloroflexi species have been discussed by Bryant, Ward and coworkers [5,21].
Several genes encoding aerobic and anaerobic enzyme pairs, as well as a number of duplicated gene clusters, have been identified in the Cfl. aurantiacus genome. In this report, we use genomic annotation, together with previous physiological and biochemical studies, to illustrate how Cfl. aurantiacus may be a good model system for understanding the evolution of metabolism during the transition from anaerobic to aerobic conditions. Some of the genomic information is compared with that of the genome of Roseiflexus castenholzii, a recently characterized FAP bacterium that lacks chlorosomes [22].
Genome properties
The genome size of Cfl. aurantiacus J-10-fl (5.3 Mb) (Table 1 and Figure 2) is comparable to that of other phototrophic Chloroflexi species: Chloroflexus sp. Y-400-fl (5.3 Mb), Chloroflexus aggregans (4.7 Mb), Roseiflexus sp. RS-1, and Roseiflexus castenholzii DSM 13941. Here, we summarize several unique features of the Cfl. aurantiacus genome, and compare some of them with other Chloroflexi species and various photosynthetic and non-photosynthetic microorganisms. The complete genome has been deposited in GenBank with accession number CP000909 (RefSeq entry NC_010175). Further information is available at the Integrated Microbial Genome database (http://img.jgi.doe.gov/cgi-bin/pub/main.cgi?section=TaxonDetail&page=taxonDetail&taxon_oid=641228485). The oriC origin is at twelve o'clock of the circular genome map (Figure 2), and, as in many prokaryotes, AT-rich repeated sequences are found at the origin of replication.
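A common computational check on the position of such an origin is cumulative GC skew, since the leading and lagging strands differ in G versus C content and the cumulative skew typically reaches its minimum near oriC. A minimal sketch on a toy sequence follows; running it on the actual CP000909 sequence would first require downloading the genome.

```python
def cumulative_gc_skew(seq, window=1000):
    """Cumulative (G - C)/(G + C) per window along the sequence; the
    minimum typically falls near the origin of replication."""
    skew, total = [], 0.0
    for i in range(0, len(seq) - window + 1, window):
        chunk = seq[i:i + window]
        g, c = chunk.count("G"), chunk.count("C")
        total += (g - c) / (g + c) if g + c else 0.0
        skew.append(total)
    return skew

# Toy sequence whose strand composition flips at 4000 bp (hypothetical).
toy = "CCCCGGAT" * 500 + "GGGGCCAT" * 500
skew = cumulative_gc_skew(toy, window=100)
print(skew.index(min(skew)) * 100, "bp: candidate origin region")  # ~3900 bp
```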
A. Photosynthetic antenna and reaction center genes
Cfl. aurantiacus has chimeric photosynthetic components, which contain characteristics of green sulfur bacteria (e.g., the chlorosomes) and purple photosynthetic bacteria (e.g., the integral-membrane antenna core complex surrounding a type II (quinone-type) reaction center). As the first FAP bacterium to be discovered, the excitation energy transfer and electron transfer processes in Cfl. aurantiacus have been investigated extensively [3]. During phototrophic growth of Cfl. aurantiacus, light energy is first absorbed by its peripheral light-harvesting antenna, the chlorosome, a self-assembled bacteriochlorophyll complex (the major bacteriochlorophyll in chlorosomes is bacteriochlorophyll c (BChl c)) encapsulated by a lipid monolayer. Energy is then transferred, through the baseplate of the chlorosome, to the B808-866 light-harvesting core antenna complex, a protein-pigment complex associated with two spectral types of bacteriochlorophyll a (BChl a) (B808 and B866). The baseplate is a complex of the chlorosome protein CsmA with BChl a and carotenoid (i.e. a protein-pigment complex) [23]. Finally, the excitons are transferred to the reaction center (RC), in which the photochemical events occur. While both purple photosynthetic proteobacteria and Cfl. aurantiacus have a type II RC [24], the Cfl. aurantiacus RC is simpler than the purple bacterial RC [2] and contains only the L- and M-subunits (PufL and PufM), not the H-subunit [25,26].
All of the genes encoding the B808-866 core complex (α-subunit (Caur_2090) and β-subunit (Caur_2091)) and RC (pufM (Caur_1051) and pufL (Caur_1052)) are present in the Cfl. aurantiacus genome. The pufL and pufM genes are fused in the Roseiflexus castenholzii genome. The arrangement of genes for the core structural proteins of the photosynthetic complexes is significantly different from that found in purple bacteria, where the puf (photosynthetic unit fixed) operon invariably contains the LH complex genes, the RC genes encoding for the L and M subunits and the tetraheme cytochrome associated with the reaction center (if present) [27].
Although proteins are not required for BChl c self-assembly in chlorosomes, various proteins have been identified in association with the lipid monolayer of the Cfl. aurantiacus chlorosome [3]. In addition to the baseplate protein CsmA, the chlorosome proteins CsmM and CsmN have been characterized [3] and were long considered the only two proteins associated with the chlorosome mono-lipid layer. Other chlorosome proteins have also been reported, either through biochemical characterization (CsmP (unpublished results in the Blankenship lab from in-solution trypsin digestion of the Cfl. aurantiacus chlorosomes) and AcsF [28]) or through genomic analysis by analogy to green sulfur bacteria (CsmO, CsmP, CsmY) [29]. Among these proteins, AcsF, a protein responsible for chlorophyll biosynthesis under aerobic and semi-aerobic growth conditions, was unexpectedly identified in chlorosome fractions during anaerobic, photoheterotrophic growth of Cfl. aurantiacus [28].
There has been some discussion as to whether AcsF is obligately associated with the chlorosomes [21,30], and the role of AcsF under anaerobic growth conditions remains to be addressed, because it is an oxygen-dependent enzyme in other systems [31,32]. Figure 3A shows the proposed pathway of photosynthetic electron transport in Cfl. aurantiacus and purple photosynthetic proteobacteria. As in the purple photosynthetic proteobacteria, a cyclic electron transport pathway is proposed in Cfl. aurantiacus. Nevertheless, some protein complexes in the electron transport chain of Cfl. aurantiacus are recognized to be substantially different from those of purple bacteria. Cfl. aurantiacus uses menaquinone as its liposoluble electron and proton carrier [33-36], whereas purple proteobacteria use either ubiquinone [37,38] or menaquinone [39] as the mobile carrier in the light-induced cyclic electron transport chain. The genetic information, analyses, and possible roles in photosynthesis and respiration of these complexes are described below.
B. Electron transport complex genes
(I) Alternative complex III (ACIII) Integral membrane oxidoreductase complexes are essential for energy metabolism in all bacteria. In phototrophic bacteria, these almost invariably include the photoreaction center and a variant of respiratory Complex III, either the cytochrome bc 1 complex (anoxygenic) or cytochrome b 6 f complex (oxygenic). No homolog of the Complex III has been identified biochemically in Cfl. aurantiacus, and no genes with significant homology to Complex III are found in the Cfl. aurantiacus genome. Previous experimental evidence indicated that alternative complex III (ACIII) complexes, identified in Cfl. aurantiacus and some non-phototrophic bacteria, function in electron transport [34,[40][41][42][43][44]. Genes encoding an ACIII have also been identified in the genome of Candidatus Chloracidobacterium thermophilum [21], an aerobic phototrophic Acidobacterium [45]. In the Cfl. aurantiacus genome, two ACIII operons have been identified: one encodes the C p (subscript p stands for photosynthesis) ACIII complex for anaerobic photosynthesis, and the other encodes the C r (subscript r stands for respiration) ACIII complex for aerobic respiration ( Table 2). The C p operon is similar to a seven-gene nrf operon in E. coli strain K-12. Hussain et al. suggested that the nrf operon in E. coli is essential for reducing nitrate to ammonia [46]. The Cfl. aurantiacus C p operon (Caur_0621 to Caur_0627) contains genes encoding two types of cytochrome c; a multiheme cytochrome c (component A, actA, Caur_0621), which has recently been identified experimentally to be a penta-heme component [44], and a mono-heme cytochrome c (component E, actE, Caur_0625), which forms a homodimer in the ACIII complex [44], a putative FeScluster-hydrogenase component-like protein (component B, actB, Caur_0622), a polysulfide reductase (component C, actC, Caur_0623), similar to NrfD and likely involved in the transfer of electrons from the quinone pool to cytochrome c, an integral membrane protein (component F, actF, Caur_0626) and two uncharacterized proteins (component D (actD, Caur_0624) and component G (actG, Caur_0627)) ( Figure 3B). The proposed C r ACIII operon contains 12 genes (Caur_2133 to 2144) encoding a putative FAD-dependent oxidase (component K, actK, Caur_2133), D-lactate dehydrogenase (component L, actL, Caur_2134), a Cysrich protein with Fe-S binding motifs (component M, actM, Caur_2135), components B (actB, Caur_2136), E (actE, Caur_2137), A (actA, Caur_2138), and G (actG, Caur_2139) in the C p ACIII operon, an electron transport protein SCO1/SenC (Caur_2140), and four subunits of cytochrome c oxidase (component J, Caur_2141 -2144). The cytochrome c oxidase (COX, or complex IV, EC 1.9.3.1) genes in the C r operon are part of complex IV, so the C r ACIII operon clustered with complex IV genes could create a respiratory superoperon ( Figure 3B). Additionally, a gene cluster encoding a putative SCO1/SenC electron transport protein (Caur_2423) and two COX subunits (Caur_2425 (subunit II) and Caur_2426 (subunit I)) is 300 genes away from the putative C r ACIII operon. Note that genes encoding components C (actC), D (actD) and F (actF) in the C p ACIII operon are absent in the C r ACIII operon. Whether these three components are required for the formation of the ACIII complex under aerobic respiratory growth will be addressed with biochemical studies.
(II) Auracyanin
Two type I blue copper proteins have been isolated and proposed to function as mobile electron carriers in the photosynthetic electron transport of photosynthetic organisms: plastocyanin in cyanobacteria, photosynthetic algae and higher plants, and auracyanin in Chloroflexus and Roseiflexus. The type I blue copper protein auracyanin, which has a single copper atom coordinated by two histidine, one cysteine and one methionine residues at the active site, is proposed to participate in electron transfer from ACIII to the reaction center in Cfl. aurantiacus [35,47-49], and it has also been recently characterized in Roseiflexus castenholzii [50]. Additionally, an auracyanin gene (trd_0373) has been identified in the genome of the non-photosynthetic bacterium Thermomicrobium roseum DSM 5159, which is evolutionarily related to Cfl. aurantiacus [51]. Two ACIII operons are proposed in Cfl. aurantiacus, and the two auracyanin proteins of Cfl. aurantiacus, AuraA and AuraB, which share 38% sequence identity, have been suggested to function with the two variant ACIII complexes [35]. AuraA, a water-soluble protein, can only be detected during phototrophic growth, whereas AuraB, a membrane-tethered protein, is synthesized during both phototrophic and dark growth [35]. It has been hypothesized that AuraA transports electrons from the C p ACIII during photosynthesis and AuraB from the C r ACIII during respiration. The auraA (Caur_3248) and auraB (Caur_1950) genes are distant from the C p operon (Caur_0621 to Caur_0627) and the C r operon (Caur_2132 to Caur_2144). In addition to auraA and auraB, two more genes encoding auracyanin-like proteins (or type I blue copper proteins) (Caur_2212 and Caur_2581) have also been found in the Cfl. aurantiacus genome. In contrast, Roseiflexus castenholzii has only one copy of the ACIII operon (a six-gene cluster, Rcas_1462 to Rcas_1467, in which the gene encoding component G of the Cfl. aurantiacus C p ACIII complex is missing) (Figure 3B), and one auraA-like gene (Rcas_3112).
(III) NADH:quinone oxidoreductase
Two operons encoding the enzymes for NADH:quinone oxidoreductase (Complex I, EC 1.6.5.3) are present in the genome. Complex I catalyzes electron transport in the oxidative phosphorylation pathway. Many bacteria have 14 genes (nuoA to nuoN) encoding Complex I, and some photosynthetic bacteria, such as the purple photosynthetic proteobacteria Rhodobacter sphaeroides and Rhodopseudomonas palustris, contain two Complex I gene clusters. In Cfl. aurantiacus, two putative Complex I gene clusters have been identified: one with all 14 gene subunits arranged in order (nuoA to nuoN, Caur_2896 - 2909), and the other with loosely arranged genes (nuoE and nuoF lie 800 genes apart), duplicated nuoM genes, and no nuoG (Table 2). It is possible that either nuoG is shared between the two putative Complex I gene clusters or an alternative gene with lower sequence similarity functions as nuoG; for example, two gene loci (Caur_0184 and Caur_2214) encode gene products that share ~24% sequence identity with NuoG, a molybdopterin oxidoreductase. To date, there have been no biochemical studies of Complex I from Cfl. aurantiacus or any FAP bacteria.
(IV) Other electron transport proteins
In addition to the electron transport proteins described above, the sequence of cytochrome c554, which is also a subunit of the reaction center of Cfl. aurantiacus, has been determined [52-54]. The sequence of the cytochrome c subunit in the Roseiflexus castenholzii RC has also been reported [55]. The gene encoding cytochrome c554 (pufC, Caur_2089) lies in an operon flanked by two genes encoding bacteriochlorophyll biosynthesis enzymes, bchP (Caur_2087) and bchG (Caur_2088), at the 5'-end, and two genes encoding the B808-866 complex (Caur_2090 (α-subunit) and Caur_2091 (β-subunit)) at the 3'-end (Figure 4C).
(a) Cobalamin
The gene replacements during the anaerobic to aerobic transition are best known in the biosynthesis of cobalamin, in which the genes of the anaerobic pathway up to cobalt insertion into the corrin ring are completely replaced in the aerobic pathway [57]. Different strategies are used to generate cobyrinate diamide, the end product of both the anaerobic and aerobic pathways: cobalt is introduced into the corrin ring at the dihydroisobacteriochlorin (early) stage of the anaerobic pathway and at a late stage of the aerobic pathway. The genomic information of Cfl. aurantiacus reveals a large cobalamin biosynthesis and cobalt transporter operon (Caur_2560 - 2580), containing genes of both the aerobic and anaerobic biosynthesis pathways, suggesting that Cfl. aurantiacus can synthesize cobalamin under various growth conditions; genes of both pathways are listed in Tables 3 and 4. The aerobic cobalt chelatase (EC 6.6.1.2), containing three subunits (CobN, CobS and CobT), is a close analog of Mg-chelatase (also containing three subunits, BchH, BchI and BchD), which catalyzes Mg insertion into the chlorin ring in chlorophyll biosynthesis. The aerobic cobalt chelatase subunits CobN and CobS are known to be homologous to the Mg-chelatase subunits BchH and BchI, respectively, and CobT has also been found to be remotely related to the third subunit of Mg-chelatase, BchD. Compared to other strictly aerobic and anaerobic photosynthetic bacteria, the aerobic anoxygenic phototrophic proteobacterium Roseobacter denitrificans carries only the cobNST genes [27], and the strictly anaerobic bacterium Heliobacterium modesticaldum has only the cbiK gene [58]. The presence of the cobNST and cbiK gene pairs in the Cfl. aurantiacus genome suggests a gene replacement in cobalamin biosynthesis by Cfl. aurantiacus under different growth conditions.
(b) Heme
The heme operon (Caur_2593 –) encodes the heme biosynthesis pathway. Downstream heme derivatives can be synthesized, respectively, by the gene products of Caur_0029 (encoding protoheme IX farnesyltransferase) and Caur_1010 (encoding a cytochrome aa3 biosynthesis protein), consistent with the spectral evidence provided by Pierson that protoheme and heme derivatives can be identified [59].
(c) (Bacterio)chlorophylls
The anaerobic to aerobic transitions are particularly intriguing in chlorophyll (Chl) biosynthesis and photosynthesis, in which molecular oxygen is lethal for photosynthetic anaerobes but is required for the life of aerobic phototrophs. In contrast to cobalamin and heme biosynthesis, no gene cluster for (B)Chl biosynthesis is recognized in the Cfl. aurantiacus genome, whereas photosynthesis gene clusters are present in purple photosynthetic proteobacteria [27,60,61] (Figure 4B) and heliobacteria [58]. The photosynthetic genes of Cfl. aurantiacus are rather spread out across the chromosome, similar to the distribution of photosynthetic genes in Cba. tepidum (Figure 4A and Additional file 1: Table S1). Both the aerobic and anaerobic genes, acsF (Caur_2590) and bchE (Caur_3676), have been identified in the Cfl. aurantiacus genome. Other anaerobic to aerobic transitions may also be found in the biosynthesis of BChls in Cfl. aurantiacus. For example, Mg insertion into the porphyrin ring is the first committed step of BChl biosynthesis, and three bchH (Caur_2591, Caur_3151, and Caur_3371), three bchI (Caur_0117, Caur_0419 and Caur_1255) and one bchD (Caur_0420) have been annotated for the Mg-chelatase (BchHID) of Cfl. aurantiacus, whereas only one of each has been found for Hbt. modesticaldum [58], Rsb. denitrificans [27] and several strictly anaerobic and aerobic bacteria. On the other hand, three bchH, and one copy each of bchD and bchI, have been identified in the green sulfur bacterium Cba. tepidum (Additional file 1: Table S1), and Eisen et al. proposed [56] that different BchH gene products may contribute to the synthesis of different isoforms of BChl (BChl a, BChl c, and Chl a can be synthesized in Cba. tepidum). In comparison, two types of BChls, BChl a and BChl c, can be synthesized by Cfl. aurantiacus. It is also possible that different bchH and bchI genes catalyze Mg-chelation of the BChl precursor under the various growth conditions of Cfl. aurantiacus.
Two bchG-like genes (bchG and bchK) encoding chlorophyll synthases that attach the tail into (bacterio) chlorophylls are present in the genome, as shown in Additional file 1: Table S1. Because the tails of BChl a (mostly phytyl-or geranylgeranyl-substituted) and BChl c (mainly stearyl-substituted) are rather distinct, it was suggested that one bchG-like gene encodes the enzyme synthesizing BChl a and the other homolog synthesizes BChl c [65]. The bchG gene sequence reported by Lopez et al. [65] was proposed to be BChl a synthase, since the encoding protein sequence is analogous to the sequence of chlorophyll synthase from Rhodobacter capsulatus. The proposed gene function was later verified [66]. The bchK gene encoding BChl c synthase was later confirmed with the bchK-knockout Cba. tepidum mutant [67]. Thus, bchG (Caur_2088) and bchK (Caur_0138) encode enzymes synthesizing BChl a and BChl c, respectively, in Cfl. aurantiacus. Although genes responsible for chlorophyll biosynthesis are rather spread out in the Cfl. aurantiacus genome, two genes responsible for BChl c biosynthesis, bchU, encoding C-20 methyltransferase [68], and bchK, are clustered with the genes encoding chlorosome proteins, and two BChl a biosynthesis genes, bchP, encoding geranylgeranyl hydrogenase [69], and bchG, are in the operon containing genes encoding cytochrome c 554 (pufC) and the B808-866 light-harvesting complex ( Figure 4C).
(II) Nucleic acids
The level of oxygen tolerance in Cfl. aurantiacus may be suggested by the presence of genes encoding ribonucleotide reductase (RNR), which is essential for DNA synthesis. Three classes of RNR have been reported: class I is diiron- and oxygen-dependent (NrdB, EC 1.17.4.1), class II is coenzyme B12-dependent (NrdJ, EC 1.17.4.2), and class III is S-adenosyl-L-methionine/[4Fe-4S] cluster-dependent (NrdG, EC 1.17.7.1). It has been suggested that the biosynthesis of dNTPs is catalyzed by NrdB, NrdJ, and NrdG in aerobic, aerobic and anaerobic, and strictly anaerobic environments, respectively. The activity of NrdJ in Cfl. aurantiacus has been reported [70]. Genes encoding NrdB and NrdJ, but not NrdG, have been found in the Cfl. aurantiacus genome, suggesting that NrdB and NrdJ produce dNTPs for Cfl. aurantiacus in response to the oxygen level (Table 3). Moreover, in the fourth step of pyrimidine biosynthesis, the conversion of dihydroorotate to orotate, dihydroorotate oxidase (EC 1.3.3.1, aerobic) and dihydroorotate dehydrogenase (EC 1.3.99.11, anaerobic) are expressed under aerobic and anaerobic conditions, respectively, and genes encoding these enzymes have been identified (Table 3). Together, the different classes of RNR and dihydroorotate oxidoreductase in nucleic acid biosynthesis also suggest adaptation or evolution from anaerobic to aerobic conditions. PFOR and KFOR, which are essential for pyruvate metabolism and energy metabolism through the TCA cycle, are commonly found in anaerobic organisms, whereas PDH and KDH are more widespread and have been found in all aerobic organisms (Table 3).
D. CO2 assimilation and carbohydrate, nitrogen and sulfur metabolisms

(I) Carbon fixation and metabolism
Genes encoding carbon monoxide dehydrogenase (coxG and coxSML) have been found in the genome, suggesting that Cfl. aurantiacus can use CO as an electron source during aerobic or semi-aerobic growth. A similar mechanism has been suggested in the aerobic anoxygenic phototrophic proteobacterium Rsb. denitrificans [27]. CO2 generated from CO oxidation can be assimilated by Cfl. aurantiacus via the autotrophic carbon fixation cycle and/or the CO2-anaplerotic pathways. Under autotrophic growth conditions Cfl. aurantiacus is known to use a unique carbon fixation pathway: the 3-hydroxypropionate (3HOP) autotrophic cycle [4,5,71-74]. Three inorganic carbon molecules are assimilated into the 3HOP cycle to produce one molecule of pyruvate (Figure 5). A similar carbon fixation pathway, the 3-hydroxypropionate/4-hydroxybutyrate (3HOP/4HOB) cycle, was reported recently in archaea (Crenarchaeota) [75-78]. Several enzymes responsible for the 3HOP and 3HOP/4HOB cycles, including the CO2-fixing enzymes (e.g., acetyl-CoA carboxylase and propionyl-CoA carboxylase), are common to the two pathways.
Additionally, genes encoding malic enzyme (tme), phosphoenolpyruvate (PEP) carboxykinase (pckA) and PEP carboxylase (ppc) have been identified, suggesting that Cfl. aurantiacus can assimilate some CO 2 and replenish the metabolites in the TCA cycle through the CO 2 -anaplerotic pathways. The active CO 2 -anaplerotic pathways have been identified experimentally in other anoxygenic phototrophs during autotrophic, mixotrophic and heterotrophic growth [82][83][84][85][86][87], and the activities of PEP carboxylase and malic enzyme have also been detected in cell extracts during photoheterotrophic growth of Cfl. aurantiacus (Tang and Blankenship, unpublished results). Moreover, all of the genes in the TCA cycle are present in the Cfl. aurantiacus genome.
In central carbon metabolism, all of the genes in the TCA cycle as well as the glyoxylate cycle have been identified. The glyoxylate cycle is one of the anaplerotic pathways for assimilating acetyl-CoA, thus lipids can be converted to carbohydrates. Glyoxylate is synthesized and also assimilated in the 3HOP cycle ( Figure 5), and is also produced by isocitrate lyase (EC 4.1.3.1) (icl, Caur_3889) and consumed by malate synthase (EC 2.3.3.9) (mas, Caur_2969) in the glyoxylate cycle. With acetate as the sole organic carbon source to support the photoheterotrophic growth of Cfl. aurantiacus, higher activities of isocitrate lyase and malate synthase have been reported [88]. Further, some FAP bacteria have been shown to assimilate glycolate from their habitat [89]. As glycolate can be converted to glyoxylate by glycolate oxidase (glcDEF, EC 1.1.3.15) and glyoxylate reductase (glyr, EC 1.1.1.26), the glyoxylate shunt and the 3HOP cycle can be employed by Cfl. aurantiacus for assimilating glycolate. Together, genes encoding central carbon metabolism, 3HOP cycle, glycolate assimilation, the glyoxylate shunt and CO oxidation are listed in Table 4.
(II) Carbohydrate metabolism
Three carbohydrate metabolism pathways are utilized by various bacteria: the Embden-Meyerhof-Parnas (EMP) pathway (glycolysis), the Entner-Doudoroff (ED) pathway, and the pentose phosphate (PP) pathway. Cfl. aurantiacus does not have genes of the ED pathway, but has genes of the oxidative PP pathway, in agreement with the activities reported for the essential enzymes (glucose-6-phosphate dehydrogenase and 6-phosphogluconate dehydrogenase) of the oxidative PP pathway [90]. Genes of the non-oxidative pathway have also been found. The gene encoding fructose-1,6-bisphosphate (FBP) aldolase (EC 4.1.2.13), catalyzing the reaction D-fructose-1,6-bisphosphate (FBP) ↔ glyceraldehyde-3-phosphate (GAP) + dihydroxyacetone phosphate (DHAP) in the EMP/gluconeogenic pathway, is missing from the genomes of Chloroflexi species (e.g., Cfl. aurantiacus, Chloroflexus sp. Y-400-fl and Chloroflexus aggregans). If Cfl. aurantiacus were unable to synthesize FBP aldolase, an active pentose phosphate pathway would be required for the interconversion of D-glucose-6-phosphate and GAP, so that glucose and other sugars could be converted to pyruvate and other energy-rich species, and vice versa. However, Cfl. aurantiacus has been reported to grow well on glucose and a number of sugars during aerobic respiration [91], and to use the EMP pathway for carbohydrate catabolism [90]. Also, higher activities of phosphofructokinase and FBP aldolase have been found in cells grown with glucose than with acetate [1,92,93]. Note that Roseiflexi species (e.g., Roseiflexus sp. RS-1 and Roseiflexus castenholzii DSM 13941), which are closely related to Chloroflexi species, have a putative bifunctional FBP aldolase/phosphatase gene [94], and genes encoding various types of aldolase have been found in the Cfl. aurantiacus genome. Taken together, Cfl. aurantiacus and other Chloroflexi species may employ either a novel FBP aldolase or more than one carbohydrate metabolic pathway. Further efforts will be needed to clarify this picture.
(III) Nitrogen metabolism and amino acid biosynthesis
Cfl. aurantiacus is known to use ammonia as the sole nitrogen source, and several amino acids (nitrogenous compounds), but not nitrate, can enhance growth. Neither nitrogenase nor nitrogen fixation has been reported in Cfl. aurantiacus [95]. Consistent with the physiological studies, genes encoding enzymes of ammonia production, such as histidine ammonia lyase (hal), tyrosine phenol-lyase (tpl), asparaginase (aspg), glutamate dehydrogenase (glud1), and glutamate-ammonia ligase (glul), but not nitrogen fixation (nifDHK) or nitrate reduction, are present in the genome. Note that Cfl. aurantiacus has genes encoding a copper-containing nitrite reductase (EC 1.7.2.1) (Caur_1570) and the α-subunit (narG, Caur_3201), but not the other subunits (narHIJ) or the catalytic subunit (nasA), of nitrate reductase. Two threonine/serine dehydratases (EC 4.3.1.19), one of which is inhibited by isoleucine and may be related to isoleucine biosynthesis, as well as other key enzymes in isoleucine biosynthesis, have been reported [96]. Consistent with the biochemical studies, two ilvA genes (Caur_2585 and Caur_3892) encoding threonine dehydratases, and genes in the isoleucine/leucine/valine biosynthesis pathway, have been identified (Table 4). The biosynthesis of isoleucine through the citramalate pathway has recently been discovered in several microbes [97,98], but no gene encoding citramalate synthase (CimA, EC 2.3.1.182), required for the citramalate pathway, has been found in the Cfl. aurantiacus genome.
(IV) Sulfur assimilation and sulfate reduction
Cfl. aurantiacus can use a variety of compounds as sulfur sources, including cysteine, glutathione, methionine, sulfide and sulfate, during photoheterotrophic or photoautotrophic growth [99,100]. When Cfl. aurantiacus uses sulfate as the sulfur source, high activity of ATP sulfurylase has been reported [99]. Sulfate is reduced to sulfide during photoautotrophic and photoheterotrophic growth for synthesizing cysteine and cofactors. Consistent with the experimental data, a complete sulfur reduction pathway with a sulfur reduction operon (Caur_0686 - 0692) has been identified (Table 4). Genes encoding two ATP sulfurylases (ATP + sulfate → adenosine 5'-phosphosulfate (APS) + PPi) can be identified: sulfate adenylyltransferase (EC 2.7.7.4, Caur_0690) and a bifunctional sulfate adenylyltransferase/adenylylsulfate kinase (Caur_2113). Pyrophosphate (PPi) produced in the ATP sulfurylase reaction is hydrolyzed to inorganic phosphate (Pi) via inorganic diphosphatase (EC 3.6.1.1) (Caur_3321). The bifunctional enzyme and/or adenylylsulfate kinase (EC 2.7.1.25, Caur_0692) converts APS to 3'-phosphoadenosine 5'-phosphosulfate (PAPS), which is reduced to sulfite and PAP (adenosine 3',5'-diphosphate) by PAPS reductase (EC 1.8.4.8, cysH, Caur_0691). While many organisms reduce APS instead of PAPS to sulfite, it is unknown whether Cfl. aurantiacus carries out this reaction, as genes encoding APS reductase (EC 1.8.99.2) have not been identified in the genome. In addition to the proposed pathway, sulfotransferase (Caur_2114) can also transfer the sulfate group from PAPS, which serves as a sulfur donor, to an alcohol or amine acceptor to generate various cellular sulfate compounds. PAP is generated as a by-product of the reactions catalyzed by PAPS reductase and sulfotransferase; it has no known function in metabolism and is likely hydrolyzed to AMP and Pi via a PAP phosphatase (not yet identified). Sulfite reductase (EC 1.8.1.2, Caur_0686) reduces sulfite to sulfide, which is incorporated into cysteine by cysteine synthase A (cysK, Caur_1341) or cysteine synthase B (cysM, Caur_3489). The overall proposed sulfur reduction and assimilation pathways are illustrated in Figure 6.
Other than using sulfide as a sulfur source during photoheterotrophic growth, Cfl. aurantiacus grows photoautotrophically in the presence of sufficient sulfide [100-102]. Under these circumstances, sulfide likely functions as an electron donor, replacing the organic carbon sources contributed by cyanobacteria. In agreement with these physiological and ecological studies, a gene encoding a type II sulfide:quinone oxidoreductase (SQR) (sqr, Caur_3894) has been found in the genome. SQRs are members of the disulfide oxidoreductase flavoprotein superfamily. Other than type II SQRs, type I and type III SQRs, with distinct sequences, structures and cofactor requirements, have also been reported [103]. Although all of the characterized SQRs catalyze the oxidation of sulfide to elemental sulfur (E(ox) + H2S → EH2(red) + S°), different types of SQR have been identified in various classes of photosynthetic bacteria [103]: type I in purple non-sulfur anoxygenic photosynthetic proteobacteria (such as Rhodobacter capsulatus) [104]; type II in Cfl. aurantiacus and cyanobacteria (such as Synechocystis PCC 6803) [103]; and type III in green sulfur bacteria [105]. In addition to being characterized in phototrophic microbes, type II SQRs have also been identified in various non-photosynthetic bacteria as well as in the mitochondria of animals, and are suggested to be involved in sulfide detoxification, heavy metal tolerance, sulfide signaling, and other essential cellular processes [103].
E. Evolution perspectives
Our paper reports numerous aerobic/anaerobic gene pairs and oxygenic/anoxygenic metabolic pathways in the Cfl. aurantiacus genome. As suggested by phylogenetic analyses [6-8] and by comparisons to the genomes and reports of other photosynthetic bacteria, one can propose lateral or horizontal gene transfers between Cfl. aurantiacus and other photosynthetic bacteria. Some proposed lateral gene transfers are listed below and illustrated in Figure 7. Although the horizontal/lateral gene transfers suggested below are important in the evolution of photosynthesis, it has not yet been generally accepted which organisms were donors and which were acceptors during these transfers, and the proposed transfers remain to be verified with more sequenced genomes and biochemical studies in photosynthetic organisms.
(I) Photosynthetic components
Chlorosomes were transferred between Cfl. aurantiacus and the green sulfur bacteria (GSBs). The GSBs have larger chlorosomes and more genes encoding chlorosome proteins [21,29]. The integral membrane core antenna complex and a type II (quinone-type) reaction center were transferred either to or from the purple photosynthetic bacteria.
(II) (Bacterio)chlorophyll biosynthesis
AcsF and BchE are proposed to be responsible for biosynthesis of the isocyclic ring of (bacterio)chlorophylls under aerobic and anaerobic growth conditions, respectively [28,31,32,[62][63][64]. The acsF gene was transferred either to or from purple bacteria and cyanobacteria, and the bchE gene either to or from heliobacteria, purple bacteria and GSBs.
(III) Electron transfer complexes
Two gene clusters of the complex I genes were transferred to (some) purple bacteria. Genes encoding auracyanin may have been transferred either to or from cyanobacteria where the type I blue copper protein plastocyanin is found. Alternative complex III (ACIII) may have evolved from or to the cytochrome bc 1 or b 6 /f complex.
(IV) Central carbon metabolism
Genes encoding pyruvate dehydrogenase and α-ketoglutarate dehydrogenase were transferred either to or from purple bacteria and cyanobacteria, and genes encoding PFOR (or pyruvate synthase) and KFOR (or α-ketoglutarate synthase) to or from heliobacteria and GSBs. GSBs may have acquired the ATP citrate lyase gene to complete the reductive TCA (RTCA) cycle for CO2 fixation, and heliobacteria obtained the gene encoding (Re)-citrate synthase for synthesizing citrate and operating the partial oxidative TCA (OTCA) cycle [84]. Since Cfl. aurantiacus operates the OTCA cycle, the RTCA cycle in GSBs may have evolved from the OTCA cycle [106].
Conclusions
The filamentous anoxygenic phototrophic (FAP) bacteria (or green non-sulfur bacteria) have been suggested to be a critical group in the evolution of photosynthesis. As the first characterized FAP bacterium, the thermophilic bacterium Chloroflexus aurantiacus is a remarkable organism. It has a chimeric photosystem that combines characteristic components of the green sulfur bacteria and the purple photosynthetic bacteria. It is metabolically versatile, and can grow photoautotrophically and photoheterotrophically under anaerobic conditions and chemotrophically under aerobic conditions. Consistent with these physiological and ecological studies, the Cfl. aurantiacus genome contains duplicated genes and aerobic/anaerobic enzyme pairs in the (photosynthetic) electron transport chain, central carbon metabolism, and the biosynthesis of tetrapyrroles and nucleic acids. In particular, duplicate genes and gene clusters for two proteins and protein complexes unique to Cfl. aurantiacus and several other FAP bacteria, the alternative complex III (ACIII) and the type I blue copper protein auracyanin, have been identified in the Cfl. aurantiacus genome. The genomic information and previous biochemical studies also suggest that Cfl. aurantiacus operates diverse carbon assimilation pathways. In contrast to the purple photosynthetic bacteria, the photosynthetic genes are rather spread out across the Cfl. aurantiacus chromosome. Overall, the genomic analyses presented in this report, along with previous physiological, ecological and biochemical studies, indicate that Cfl. aurantiacus has many interesting and some unique features in its metabolic pathways.
Gene sequencing
The genome of Chloroflexus aurantiacus J-10-fl was sequenced at the Joint Genome Institute (JGI) using a combination of 8-kb and 14-kb DNA libraries. All general aspects of library construction and sequencing performed at the JGI can be found at http://www.jgi.doe.gov/. Draft assemblies were based on 58,246 total reads. Both libraries provided 10x coverage of the genome. The Phred/Phrap/Consed software package (http://www.phrap.com) was used for sequence assembly and quality assessment [107][108][109]. After the shotgun stage, reads were assembled with parallel phrap (High Performance Software, LLC). Possible mis-assemblies were corrected with Dupfinisher [110] or transposon bombing of bridging clones (Epicentre Biotechnologies, Madison, WI). Gaps between contigs were closed by editing in Consed, custom primer walks, or PCR amplification (Roche Applied Science, Indianapolis, IN). A total of 1893 additional reactions were necessary to close gaps and to raise the quality of the finished sequence. The completed Cfl. aurantiacus genome sequence contains 61,248 reads, achieving an average of 11-fold sequence coverage per base with an error rate of less than 1 in 100,000.

Figure 7. Proposed lateral/horizontal gene transfers between Cfl. aurantiacus and other phototrophic bacteria. Proposed gene transfers are shown as double-headed arrows. Genes in Cfl. aurantiacus may have been transferred either to or from other phototrophic bacteria. Genes encoding the core antenna complex, type II reaction center (RC), pyruvate/α-ketoglutarate dehydrogenase, Complex I, AcsF and BchE may have been transferred from or to purple bacteria; pyruvate/α-ketoglutarate dehydrogenase, auracyanin and AcsF may have been transferred from or to cyanobacteria; auracyanin may have evolved from or to plastocyanin; chlorosomes, pyruvate/α-ketoglutarate synthase, and BchE may have been transferred from or to green sulfur bacteria; and pyruvate/α-ketoglutarate synthase and BchE may have been transferred from or to heliobacteria.
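As a quick plausibility check on the read counts and coverage figures in the sequencing paragraph above, the Lander-Waterman relation (coverage ≈ reads × read length / genome size) can be inverted to recover the implied mean read length. A minimal Python sketch, assuming a genome size of roughly 5.26 Mb (an assumption used only for illustration; the exact value is not stated in this section):

```python
# Sanity check on the reported assembly statistics using the
# Lander-Waterman relation: coverage = (reads * read_length) / genome_size.
GENOME_SIZE_BP = 5_260_000   # assumed genome size in bp (illustrative)
FINAL_READS = 61_248         # completed genome, from the text
FINAL_COVERAGE = 11          # fold coverage per base, from the text

# Implied average read length for the finished assembly:
read_len = FINAL_COVERAGE * GENOME_SIZE_BP / FINAL_READS
print(f"implied mean read length: {read_len:.0f} bp")  # ~945 bp

# Conversely, draft coverage from the 58,246 shotgun reads:
draft_cov = 58_246 * read_len / GENOME_SIZE_BP
print(f"implied draft coverage: {draft_cov:.1f}x")     # ~10.5x
```

Under this assumed genome size, the implied reads of roughly 945 bp and a draft coverage near 10.5x are consistent with Sanger-era libraries and with the "10x coverage" quoted for the draft assemblies.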
Annotation
The genome of Chloroflexus aurantiacus J-10-fl was annotated with the default JGI annotation pipeline. Genes were identified using two gene modeling programs, Glimmer [111] and Critica [112], as part of the Oak Ridge National Laboratory genome annotation pipeline. The two sets of gene calls were combined, using the Critica prediction as the preferred start call for genes with the same stop codon. Briefly, structural RNAs were predicted using BLASTn and tRNAscan-SE [113] with default prokaryotic settings. Protein-coding genes were identified using the gene modeling program Prodigal [114]. Genes with fewer than 80 amino acids that were predicted by only one of the gene callers and had no BLAST hit in the KEGG database at 1e-05 were deleted. Predicted gene models were analyzed using the GenePRIMP pipeline [115], and erroneous gene models were manually curated. The revised gene-protein set was searched by BLASTp against the KEGG GENES database [116] and GenBank NR using an e-value of 1.0e-05, a minimum of 50% identity, and an alignment length of at least 80% of both the query and subject proteins. These BLASTp hits were used to perform the initial automated functional assignments. In addition, protein sequences were searched against the Pfam [117] and TIGRFAM [118] databases using the HMMER2 package and trusted cutoffs for each model. Protein sequences were also searched against the COG database [119] using an RPS-BLAST search with an e-value of 1.0e-05, retaining the best hit. These data sources were combined to assert a product description for each predicted protein. Non-coding genes and miscellaneous features were predicted using tRNAscan-SE [113], TMHMM [120], and SignalP [121]. The annotated genome sequence was submitted to GenBank and loaded into the Integrated Microbial Genomes (IMG) database [122].
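The two filtering rules described above (dropping short, single-caller models with no KEGG support, and accepting BLASTp hits only above fixed e-value, identity, and alignment-coverage thresholds) are easy to express as predicates. A minimal Python sketch, assuming simple dictionaries for gene models and hits; all field names here are illustrative, not part of the actual JGI pipeline:

```python
def keep_gene_model(gene):
    """Drop models < 80 aa predicted by only one caller with no KEGG hit at 1e-05."""
    short = gene["length_aa"] < 80
    single_caller = len(gene["callers"]) == 1
    unsupported = not gene["has_kegg_hit_1e05"]
    return not (short and single_caller and unsupported)

def accept_blastp_hit(hit):
    """Keep hits used for functional assignment: e-value <= 1e-05,
    >= 50% identity, alignment covering >= 80% of query AND subject."""
    return (hit["evalue"] <= 1e-05
            and hit["identity_pct"] >= 50.0
            and hit["aln_len"] >= 0.8 * hit["query_len"]
            and hit["aln_len"] >= 0.8 * hit["subject_len"])

# Example hit that passes all three thresholds:
hit = {"evalue": 3e-40, "identity_pct": 62.0,
       "aln_len": 310, "query_len": 360, "subject_len": 355}
print(accept_blastp_hit(hit))  # True
```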
Phylogenetic analyses
The 16S rRNA gene sequences of various photosynthetic bacteria were obtained from NCBI. The 16S rRNA gene sequences were aligned using the program BioEdit [123], and the phylogenetic tree was constructed using the program MEGA 4.1 [124]. The tree is an unrooted neighbor-joining tree.
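The alignment-to-tree step described above can be reproduced with open-source tooling. A minimal sketch using Biopython in place of BioEdit/MEGA (an assumption: a pre-aligned 16S rRNA FASTA file, here named "16s_aligned.fasta", is available):

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Read a pre-aligned set of 16S rRNA gene sequences (file name is illustrative).
alignment = AlignIO.read("16s_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")       # p-distance from the alignment
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)            # unrooted neighbor-joining tree

Phylo.draw_ascii(tree)
```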
Abbreviations of phototrophic bacteria
Three-letter abbreviations for the generic names of phototrophic bacteria follow the information listed in LPSN (List of Prokaryotic names with Standing in Nomenclature), an online database curated by Professor Jean P. Euzéby (http://www.bacterio.cict.fr/index.html).
Additional material
Additional file 1: Table S1. Annotation of photosynthetic genes in Cfl. aurantiacus and Cba. tepidum.
Theonella: A Treasure Trove of Structurally Unique and Biologically Active Sterols
The marine environment is considered a vast source for the discovery of structurally unique bioactive secondary metabolites. Among marine invertebrates, the sponge Theonella spp. represents an arsenal of novel compounds, including peptides, alkaloids, terpenes, macrolides, and sterols. In this review, we summarize the recent reports on sterols isolated from this amazing sponge, describing their structural features and peculiar biological activities. We also discuss the total syntheses of solomonsterols A and B and the medicinal chemistry modifications of theonellasterol and conicasterol, focusing on the effect of chemical transformations on the biological activity of this class of metabolites. The promising compounds identified from Theonella spp. possess pronounced biological activity on nuclear receptors or pronounced cytotoxicity and represent promising candidates for extended preclinical evaluation. The identification of naturally occurring and semisynthetic marine bioactive sterols reaffirms the utility of examining natural product libraries for the discovery of new therapeutic approaches to human diseases.
Introduction
"Who finds a Theonella, finds a treasure". It is amazing that even today, worldwide, there is a deep interest in the chemistry and biology of the sponge of the genus Theonella (Lithistida, Theonellidae). In the early 1980s, Kashman managed to recognize the potential of this sponge as a source of secondary metabolites with unique chemical structures and very interesting pharmacological activities [1]. This review provides an update on the isolation and semisynthesis of sterols from different species of marine sponges of the genus Theonella (order Lithistida, class Demospongiae).
The chemical diversity found in several secondary metabolites from Theonella spp. has been ascribed in part to the presence of symbiotic microorganisms [14,15], recognized as the "real chemical factories".
Generally, the sterols isolated from Theonella spp. are characterized by potent and interesting biological activities, such as antimicrobial [17] and cytotoxic [9] activity, and in some cases by modulating activity towards metabolic nuclear receptors, mainly the pregnane X receptor (PXR) and the farnesoid X receptor (FXR). In particular, the modulation ranges from the selective antagonism on FXR of theonellasterol (1) [18] and the selective agonism on PXR of the truncated-sulfated derivatives solomonsterol A (47) and B (48) [11], to the dual modulation on FXR/PXR of other 4-methylene steroids [19]. The translational potential of these natural compounds was also validated by extensive in vivo pharmacological exploration [19,20].

4-Exo-Methylene Sterols

The main class of sterols isolated from sponges of the genus Theonella, and particularly from Theonella swinhoei and Theonella conica, are 4-exo-methylene sterols, relatively rare metabolites in nature. The biosynthetically unusual 4-exo-methylene group arises from the oxidative demethylation of the 4,4-dimethyl precursor, followed by oxidation and dehydration of the primary alcohol affording the 4-exo double bond [1].

Among 4-exo-methylene sterols, theonellasterol (1) and conicasterol (2), reported for the first time by Djerassi et al. [1], are also 24-alkylated derivatives; specifically, theonellasterol (1), with the (24S)-ethyl group, represents the biomarker of the species T. swinhoei, whereas conicasterol (2), with the (24R)-methyl group, is the biomarker of the T. conica species. As structural features, both molecules share the same tetracyclic core bearing the β-hydroxyl group at C-3, the unusual exo-methylene functionality at C-4, and the rare Δ8,14 double bond (Figure 1).

Different specimens of Theonella, collected in different geographic areas, allowed the isolation of a large library of 4-methylene sterols (Figures 2 and 3) featuring more complex functionalizations in the steroidal nucleus, such as:
The presence of additional hydroxyl groups at C-7, C-8, C-9, C-14, or C-15;
The presence of oxygenated functions at C-7 or C-15;
A keto group at C-3 in theonellasterone (23) and conicasterone (40).
The modifications in the side chains are less common and mainly regard the presence of additional double bonds.
Nuclear receptors (NRs), together with rhodopsin-like GPCRs, are well-recognized molecular targets in drug discovery and are unique to the animal kingdom [23,24]. Indeed, the presence of an ancestral NR has been demonstrated in sponges, the simplest animal organisms, and it is well recognized that there is a close relationship between the complexity of the organism and the diversification of the genes encoding for NR. Moreover, during the evolution along the metazoan tree, both changes in the structural organization of the receptors and their corresponding ligands occurred [25].
NRs are ligand-activated transcription factors that regulate the expression of genes involved in several physiological and physio-pathological processes, including reproduction, metabolism of xeno- and endobiotics, and inflammation [26]. They are characterized by a common organization of several domains, with the most conserved being the DNA-binding domain (DBD) and the ligand-binding domain (LBD). The LBD accommodates ligands and undergoes conformational changes. NRs are generally found as monomers but function as heterodimer complexes with another nuclear receptor, the retinoid X receptor (RXR), in binding to DNA. In the absence of a ligand, this complex is associated with several corepressors, while the binding of a ligand allows the release of the corepressors and the recruitment of coactivators and, consequently, the activation of the transcription machinery.
The potential of this type of pharmacological target lies in different factors, such as the ability of NRs to respond to specific small molecules, including intracellular metabolites and xenobiotics, their pleiotropic nature that allows a single receptor to influence the expression of many genes, and their involvement in the regulation of several metabolic and inflammatory diseases, including diabetes, dyslipidemia, cirrhosis, and fibrosis.
Among the NRs, the pregnane X receptor (PXR), also known as the xenobiotic sensor, is mainly involved in bile acid homeostasis and is nowadays considered a key factor in bile acid detoxification in the liver and the gut. PXR also plays important roles in various pathophysiological processes, such as lipid metabolism, glucose homeostasis, and the inflammatory response [27,28], including liver disease and inflammatory bowel diseases (IBD) [29,30]. The PXR LBD is larger than those of other NRs and is characterized by hydrophobic residues, allowing the binding of many structurally different ligands, some of them isolated from marine organisms [31].
The farnesoid X receptor (FXR) is a bile acid sensor, regulating bile acid homeostasis and lipid and glucose metabolism. FXR is highly expressed in the liver, intestine, kidneys, and adrenal glands [32,33] and is activated by bile acids, with chenodeoxycholic acid (CDCA, 41) and 6α-ethyl-chenodeoxycholic acid (6-ECDCA or OCA, 42) as the most potent endogenous and semisynthetic ligands, respectively. FXR also has an important effect on inflammation. Ligands of this receptor have become promising therapeutic agents for different diseases, such as primary biliary cirrhosis (PBC) and nonalcoholic steatohepatitis (NASH) [34].
Among the 4-methylene steroids, theonellasterol (1) represents the first example of a natural highly selective FXR antagonist [18], unlike the more promiscuous guggulsterone [35,36]. Theonellasterol (1) has been proven to antagonize FXR transactivation caused by CDCA, reversing the effect of CDCA on the expression of canonical FXR target genes including OSTα, BSEP, SHP, and MRP4. Moreover, theonellasterol (1) stabilizes the recruitment of the nuclear corepressor NCoR, thus inhibiting the expression of FXR-regulated genes.
From a chemical point of view, theonellasterol (1) profoundly differs from the endogenous ligand of FXR, CDCA (Figure 5), mainly in:
The orientation of the hydroxyl group at C-3;
The A/B ring junction, which is trans in theonellasterol (1) and cis in CDCA;
The unsaturation between C-8 and C-14 in theonellasterol (1);
The lack of the carboxylic group at C-24 and the presence of an aliphatic side chain.
Of interest, in mammals, the LBD of FXR has a curved shape suitable for binding the bent steroidal core of 5β-bile acids (Figure 5), and the identification of a flat-shaped steroidal molecule as a highly selective FXR antagonist represented a cornerstone in decoding the mechanism of FXR modulation.
Docking studies, elucidating the binding mode of theonellasterol (1) in the FXR LBD, confirmed that, even if the A/B ring trans junction causes a different spatial arrangement, the marine sterol competes with 6-ECDCA (42), establishing several hydrophobic interactions within the LBD [18]. In addition, theonellasterol (1) attenuates liver injury caused by bile duct ligation, according to the measurement of serum alanine aminotransferase levels and the extent of liver necrosis at histopathology [18]. Analysis of genes involved in bile acid uptake and excretion by hepatocytes in this model revealed that theonellasterol (1) increases liver expression of MRP4, which, in contrast, is negatively regulated by FXR agonists. In summary, these studies demonstrate that FXR antagonism in vivo is feasible and results in positive modulation of liver MRP4 in rodent models of cholestasis [18]. This highlights the potential of marine organisms as a source of novel lead compounds for the treatment of human diseases.
Further pharmacological investigation of the secondary metabolites from Theonella swinhoei, collected at the Solomon Islands, allowed the identification of several 4-exo-methylene sterols as potent agonists of PXR and modulators of FXR [19]. In 2011, a library of polyhydroxysterols (theonellasterols B-H (6-10, 13, 14) and conicasterols B-D (28-30), Figures 2 and 3) was isolated. Among these, theonellasterol G (13) increased the FXR target OSTα and simultaneously the PXR target genes SULT2A1 and MDR1, resulting in the first example of an FXR modulator and PXR agonist and, thus, a potential lead in the treatment of inflammatory bowel disease [19].
Docking calculations showed that, in addition to several hydrophobic interactions in the LBD of FXR, the β-orientation of the hydroxyl group at C-11 of theonellasterol G (13) is essential for the antagonistic activity [19]. With regard to the PXR agonistic activity, particularly crucial for activation are the interaction between the 15α-OH group and Ser247 and the presence of the ethyl group at position C-24, which engage key interactions in the LBD [19]. This study disclosed, for the first time, marine steroids as dual modulators of PXR and FXR, both involved in intestinal inflammation, paving the way towards their potential utility in the treatment of inflammatory bowel diseases.
Pursuing the systematic study of the chemical diversity of secondary metabolites from Theonella swinhoei, Sepe et al. isolated conicasterol E (31), a 7α,15β-dihydroxyconicasterol analogue. The pharmacological characterization of this sterol disclosed its activity as a dual FXR/PXR modulator, able to induce the expression of bile acid detoxification genes, such as BSEP and OSTα, without inducing SHP [37]. For the structural characterization of conicasterol F (32) and theonellasterol I (15), two other examples of dual FXR/PXR ligands, traditional NMR analysis was not enough to unambiguously establish their stereochemistry, which required the application of combined ROE-distance analysis and DFT calculations of the NMR chemical shifts [38].
By applying a chemoproteomic approach, in 2015, Margarucci and coauthors demonstrated that theonellasterone (23), a 3-oxo-4-methylene-24-ethyl steroid, in addition to its antagonistic activity on FXR, is able to interact with peroxiredoxin-1 and to reduce the enzyme cysteine overoxidation induced by H2O2 both in vitro and in living cells [39].
Unconventional Sterols as NR Ligands
Steroids isolated from sponges are often characterized by the presence of unusual structural chemical features, such as additional oxygenation on the tetracyclic nucleus and on the side chain, sulfate esterification, alkylation or truncation of the side chain, unsaturation in ring D, or secostructures with cleavage in the rings of the tetracyclic core [40]. This is the case of swinhosterols (43-45), unconventional steroids with the 4-exo-methylene and the 8,14-seco-8,14-dione functions (Figure 6). The structural modification of the basic carbon skeleton, with the cleavage of the six-membered ring C, arises from the oxidation of the double bond between C-8 and C-14 [41].

Among all the isolated molecules showing dual PXR/FXR behavior, swinhosterol B (44) was selected as a potent PXR agonist/FXR antagonist. The ability of this marine sterol to induce the expression of target genes for PXR and FXR and to counter-regulate the induction of proinflammatory cytokines in a PXR-dependent manner was demonstrated [42].

Swinhosterols A (43) and B (44), together with the already reported theonellasterol (1) and conicasterol (2), also showed antagonistic activity towards ERRβ (estrogen-related receptor), another member of the nuclear receptor family, inhibiting the expression of the canonical target gene NKCC1 induced by genistein, similarly to diethylstilbestrol, a well-known ERR antagonist. Docking studies on swinhosterols within the ERR LBD furnished the structural requirements for the interaction with the target [43].

Malaitasterol A (46), a potent PXR agonist isolated from a Solomon Islands collection of Theonella swinhoei [44], presents a profound rearrangement in its steroidal core (Figure 6). Even if the 4-methylene group is still present, malaitasterol A (46) is characterized by an unprecedented 11,12-13,14-bis-secosteroid structure, deduced from the analysis of spectroscopic data and arising from a theonellasterol-like skeleton through the breaking of bonds in the C and D rings of the steroidal nucleus. The configuration at C-15 was established by DFT 13C chemical shift calculations.

Sulfated sterols, often isolated from marine sponges, are characterized by 2β,3α,6α-tri-O-sulfate groups and different patterns of substitution in the side chain. Festa et al. [11] isolated solomonsterols A (47) and B (48) from the butanol extract of a specimen of Theonella swinhoei, the first examples of sulfated sterols of marine origin with truncated C-24 and C-23 side chains (Figure 6). These molecules, characterized by the presence of three sulfate groups (two secondary and one primary) and a truncated C-24 or C-23 side chain, were demonstrated to be PXR agonists with a potency even higher than that of rifaximin, and therefore potential leads for the treatment of human disorders characterized by dysregulation of innate immunity [11]. Docking calculations showed that PXR accommodates the solomonsterols in its LBD, establishing several favorable hydrophobic interactions, hydrogen bonds between the C2-O-sulfate group and Cys284 and between the sulfate on the side chain and Lys210, and electrostatic interactions with Ser247 (2-O-sulfate) and His407 (3-O-sulfate). All the above interactions contribute to the binding of the steroidal nucleus in the pocket of the nuclear receptor [11].

Sterols with Potential Anticancer Activity

The 4-exo-methylene sterols from Theonella have also attracted considerable attention for their cytotoxic activity.

The chemical analysis of Theonella swinhoei collected in the Philippines allowed the identification of the novel 7α-hydroxytheonellasterol (4), which showed in vitro cytotoxic activity (IC50 = 29.5 µM) greater than that of theonellasterol (IC50 > 100 µM), probably due to the presence of the additional 7α-OH group (Figure 2) [45].

In 2012, theonellasterol K (17), acetyltheonellasterol (3), and acetyldehydroconicasterol (26) were isolated from specimens of Theonella swinhoei collected from coral reefs off the coast of Pingtung in Taiwan (Figures 2 and 3), together with some already known polyhydroxylated steroids.
In 2021, Lai et al. reported the isolation of theonellasterol L (11), together with three known 4-methylene sterols, two nucleosides, and one macrolide (Figure 2). The comparison of the cytotoxic activities of the 4-methylene sterols reported in this paper with those previously reported showed that only highly functionalized derivatives, especially those with oxygenated functions at position C-14 or C-15, are endowed with cytotoxic activity [47].
Swinhoeisterols A (49) and B (50), from Theonella swinhoei collected off the coast of Xisha Island, featured the unprecedented 6/6/5/7 ring system, expanding the family of sterols with rearranged carbon skeletons (Figure 7) [12]. As a consequence of an inverse virtual screening campaign, the biological activity of sterols from Theonella spp. was also expanded, with swinhoeisterols A (49) and B (50) demonstrated to be a new chemotype of (h)p300 inhibitors, a molecular target involved in several pathologies, mainly cancers.
Encouraged by the results obtained with swinhoeisterol A (49) (IC50 3.3 µM vs. (h)p300), Zhan and collaborators reanalyzed the Xisha sponge Theonella swinhoei [13], isolating four new swinhoeisterols, C-F (51-54) (Figure 7), with swinhoeisterol C (52) showing an inhibitory effect (IC50 8.8 µM) towards (h)p300 similar to that of swinhoeisterol A (49). The biological results allowed the delineation of a structure-activity relationship (SAR), suggesting the double bond or the epoxide function at C-8/C-9 to be essential for the activity towards (h)p300. On the contrary, the presence of an additional hydroxyl group at C-7 or a Δ7 double bond, as in swinhoeisterols D (53) and E (51), leads to loss of activity.
Total Synthesis of Solomonsterols and Their Analogues
One of the main drawbacks of bioactive natural compounds is often the scarcity of isolated substances, hampering future developments. Unfortunately, even if marine natural products possess interesting and specific pharmacological activities, they often are obtained in insufficient amounts for preclinical and clinical testing.
The process of sampling rare natural compounds harvested from their natural source can be laborious, and in some cases total synthesis offers alternative access. This is the case for solomonsterols A (47) and B (48) (Figure 6), the first examples of marine sterols acting as PXR agonists [11]. The total synthesis of solomonsterols A (47) and B (48) was accomplished, furnishing the two natural compounds in large amounts for deeper pharmacological investigation and opening the way towards the development of a small library of derivatives. Structure-activity relationship studies provided information on the interaction between these leads and PXR and on their binding mode at the atomic level [49].
As depicted in Scheme 1, the key steps of the synthetic protocol are the modification of the functionalities on the A/B rings to afford the desired trans junction and the installation of the hydroxyl groups at C2-β and C3-α. The required A/B trans ring junction was obtained through tosylation and simultaneous inversion at C-3 and elimination at C-6 (intermediates 60 and 61). The introduction of the two hydroxyl groups at C-2 and C-3 on ring A was achieved through the introduction of Δ2 (intermediates 62 and 63), epoxidation of the double bond (intermediates 64 and 65), and subsequent epoxide opening providing the desired 2β,3α-diols (intermediates 66 and 67). Finally, reduction of the methyl ester on the side chain and exhaustive sulfation of the alcohol functionalities afforded the desired molecules.
These synthetic routes were completed in 10 steps (31% yield) for solomonsterol A (47) and in a total of 13 steps (10% yield) for solomonsterol B (48), affording enough material for further pharmacological evaluation.

Tested in in vivo animal models of colitis, synthetic solomonsterol A (47) modulated the expression of the cytokines TGFβ and IL10 by an NF-κB-dependent mechanism, and these findings make this compound a promising lead in the treatment of inflammatory bowel diseases (IBDs) [20]. In addition, solomonsterol A (47) was proven to be effective in attenuating systemic inflammation and immune dysfunction in a mouse model of rheumatoid arthritis [49].

However, the use of solomonsterol A (47) could cause severe systemic effects due to PXR activation in the liver. To overcome this limitation in clinical settings, a small library of derivatives was designed and prepared. Starting from the intermediates 66 and 67 (Scheme 1), sulfation of the C-2/C-3 diols followed by reduction or hydrolysis and coupling afforded the C-24 or C-23 alcohol derivatives (72 and 73) or the conjugate derivatives of solomonsterol A (74, 76, and 77) with 5-aminosalicylic acid, glycine, or taurine [51].
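As a back-of-the-envelope check on the overall yields reported for the two routes above (31% over 10 steps for solomonsterol A; 10% over 13 steps for solomonsterol B), the geometric-mean yield per step is the overall yield raised to the power 1/n. A minimal Python sketch:

```python
# Geometric-mean per-step yield implied by an overall yield Y over n steps.
for name, overall, steps in [("solomonsterol A", 0.31, 10),
                             ("solomonsterol B", 0.10, 13)]:
    per_step = overall ** (1 / steps)
    print(f"{name}: ~{per_step:.0%} average yield per step")
# solomonsterol A: ~89% average yield per step
# solomonsterol B: ~84% average yield per step
```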
Similar modifications were made on intermediate 61 to probe the pharmacophoric role played by the functionalities on ring A in compounds 78 and 81 (Scheme 2), which lack the sulfate group at C-2 and bear a 3β- or 3α-sulfate function, respectively. Starting from cholesterol, the same synthetic route used for the total synthesis of the solomonsterols afforded cholestane disulfate (84) (Scheme 3), characterized by a hydrophobic side chain [51].
Cholestane disulfate 84 (Scheme 3), a simplified analogue of solomonsterol A (47), proved to be the most promising compound from this medicinal chemistry campaign. It is a potent PXR agonist, able to increase the expression of the target gene CYP3A4 in HepG2 cells, similarly to the parent compound solomonsterol A (47). Further in vitro pharmacological evaluation demonstrated that compound 84 was able to modulate the immune response triggered by bacterial endotoxin in human macrophages and to reduce hepatic stellate cell transdifferentiation, affecting the basal expression of α-smooth muscle actin (αSMA) [51]. These effects established cholestane disulfate 84 as a new lead for the treatment of IBD [51] and liver fibrosis.
The synthesis of these compounds allowed the definition of an SAR (Figure 8). In particular, the length of the side chain bearing the sulfate group had no influence on the binding to PXR, whereas the alcohol derivatives in the side chain showed a decreased ability to induce the expression of PXR target genes, as did the absence or inversion of the sulfate group at C-2 in ring A of the steroidal nucleus.
Theonellasterol Series
Chemical modifications on theonellasterol (1), a selective FXR antagonist [18], afforded a series of semisynthetic derivatives (Figure 9) [52]. In particular, the authors investigated the effect of chemical modifications on ring A of the steroidal nucleus, regarding particularly the exo-methylene group at C-4 and the hydroxy group at C-3. The Δ8,14 bond proved poorly responsive to chemical modifications, therefore all derivatives retained this functionality. This medicinal chemistry campaign allowed the identification of compounds 87, 88, and 91 as the most promising leads and provided fundamental information, also at the atomic level using molecular docking studies, on the requirements necessary to maintain or lose activity towards FXR (Figure 10).
Conicasterol Series
Starting from conicasterol (2), which shows a significant PXR-activating effect in transfected HepG2 cells [19,42], some modifications on ring A and on the side chain afforded several 24-methyl semisynthetic derivatives (Figure 11) [21]. The combination of these chemical modifications, biological evaluation, and docking studies provided the molecular bases of the ligand/PXR interaction, useful to delineate a preliminary structure-activity relationship.

Figure 11. Semisynthetic conicasterol analogues [21].
In particular, the first series of modifications was made on the exo-methylene function at C-4, with the perspective of a total synthesis of more simplified and accessible PXR modulators inspired by the conicasterol scaffold (Figure 11). This function was reduced, affording the 4-α-Me derivative 95, or subjected to ozonolysis, giving compound 96. Oxidation of the hydroxyl group at C-3, followed by reduction of the C-4 exo-methylene functionality by catalytic hydrogenation and of the ketone group by NaBH4, allowed access to compounds 97 and 98, differing in the relative configuration of the substituents at C-3 and C-4. Finally, starting from dehydroconicasterol (25), the reduction of the double bond in the side chain afforded compound 99, useful in exploring the importance of the configuration of the methyl group at C-24.
To delineate an SAR in PXR modulation, the above conicasterol semisynthetic derivatives, together with other natural sterols such as preconicasterol (27), 24-dehydroconicasterol D (24), and 25-dehydrotheonellasterol (5) (Figures 2 and 3), were evaluated for their activity towards PXR. As a general trend, the substitution of the 4-exo-methylene functionality with a methyl group (compounds 95, 97, and 99) or the introduction of a keto group at C-4, as in 96, causes a loss of activity towards PXR, except for compound 98, featuring both substituents on ring A in α-configuration and retaining PXR agonistic activity. In addition, modifications on the side chain impacted PXR activity, with a negative effect when the 24-ethyl or 24-exo-methylene groups were present, as in 25-dehydrotheonellasterol (5) or 24-dehydroconicasterol D (24), respectively, while preconicasterol (27), bearing a cholestane-like side chain, maintained PXR agonistic activity.
Total Synthesis of Swinhoeisterol A (49) and Its Analogue (105)

In 2019, Duecker et al. reported, for the first time, the synthesis of swinhoeisterol A (49) from ergosterol by exploiting a radical framework reconstruction [53]. In 2020, the same authors described in detail the synthetic efforts (Scheme 4) towards the successful route to this unusual sterol and its analogue, Δ22-24-epi-swinhoeisterol A (105) [54]. As reported in Scheme 4, the key steps are the conversion of the ergostane skeleton into the 13(14→8)diabeo framework using a radical rearrangement of the 14-hydroxy intermediate 101, the introduction of the campestane-like side chain in derivative 106, and the installation of the 4-exo-methylene moiety via the 4-hydroxymethyl derivative 108, followed by elimination.
The investigation of these molecules reaffirms the role of natural products as essential chemical probes in today's research arsenal, shedding light on complex biological processes and biochemical pathways and aiding in the identification of new therapeutic approaches to human diseases.
Human Gait Recognition System Based on Support Vector Machine Algorithm and Using Wearable Sensors
Human gait recognition is very important for controlling exoskeletons and achieving smooth transitions. Gait information must be obtained accurately. Therefore, in order to accurately control exoskeleton movement, a multisensor fusion gait recognition system was developed in this study. The system acquires plantar pressure and acceleration signals of the human legs. In the experiment, we collected the pressure signals of both feet and the movement data of the waist, left thigh, left calf, right thigh, and right calf of five test subjects. We investigated the gaits of standing, level walking, going up stairs, going down stairs, going up a slope, and going down a slope. The gait recognition accuracies of the support vector machine (SVM), back propagation (BP) neural network, and radial basis function (RBF) neural network algorithms were compared. Different sliding window sizes for the SVM algorithm were also analyzed. The results showed that the recognition rate was highest for the SVM algorithm, with an average recognition accuracy of 96.5%. The accurate recognition of the human gait provides a good theoretical basis for the design of an exoskeleton robot control strategy.
Introduction
Lower extremity exoskeleton robots can help patients with mobility problems to walk normally, thereby greatly improving their ability to exercise. At present, most exoskeletons are energy-passive; although they are widely used, they have inevitable disadvantages, such as high metabolic energy consumption and walking asymmetry. (1) Therefore, the study of lower extremity exoskeletons has attracted the attention of an increasing number of researchers. Unlike an energy-passive exoskeleton, an active power exoskeleton can recognize the human gait, and appropriate parameters can be used to achieve safe and effective motion control. (2) Therefore, gait recognition plays an important role in the control of exoskeleton robots.
Many methods have been developed for the pattern recognition of exoskeleton movements in the lower extremities. Pattern recognition is achieved by identifying the signals and biosignals measured by mechanical sensors. Donath et al. identified the three gaits of standing, sitting, and walking by collecting data such as the joint angle and angular velocity signals of the knee and ankle joints, the sagittal moment, and the plantar pressure. (3) However, there is a half-second delay in this identification method. Young et al. developed a gait recognition method that allows the lower extremity exoskeleton to complete walking on level ground, walking up and down a slope, and walking up and down stairs. (4) The system identifies the signals collected by the mechanical sensors mounted on the exoskeleton. Although the method is simple, the recognition accuracy was only 90.9%. Huang et al. identified six motion patterns and motion transitions by measuring electromyographic (EMG) signals from the lower extremities and the signals measured by a six-degree-of-freedom pressure sensor. (5) However, this method was only suitable for offline testing.
In this paper, a new gait recognition system and an algorithm based on multisensor fusion are proposed. The foot pressure signal and the acceleration signals of the back and legs of the human body are fused using a support vector machine (SVM) algorithm, a back propagation (BP) neural network algorithm, and a radial basis function (RBF) neural network algorithm. The average recognition rate of the SVM algorithm is 95%; therefore, this method is used as the recognition algorithm of the system, and the sliding window size is optimized to achieve a higher recognition rate. The experimental results show that the proposed SVM recognition method can be applied to the lower extremity exoskeleton and accurately recognizes the human gait.
Hardware system
The inertial sensor consists of an accelerometer, a gyroscope, and a magnetometer, as shown in Fig. 1. The design uses an MPU-6000 board with a combined accelerometer and gyroscope. The magnetometer is a surface-mounted modular chip that is suitable for applications with low magnetic inductance and digital disturbances. An ATMEGA328 microcontrol unit (MCU) is used for data acquisition on the inertial measurement unit (IMU) board. (6) The data obtained by the gyroscope are used to estimate the orientation of the IMU board, and the data obtained by the magnetometer and accelerometer are used to compensate for orientation drift. The data output by the IMU board include the helix angle, the roll angle, and the acceleration along the two axes.
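The fusion scheme running on the IMU board is not detailed above; a common approach for this sensor combination is a complementary filter, in which the gyroscope is integrated for short-term accuracy and the accelerometer corrects the slow drift. The following minimal Python sketch illustrates the idea for a single axis; the function, the 0.98 blending weight, and the single-axis simplification are illustrative assumptions, not the board's actual firmware:

```python
import numpy as np

def complementary_filter(gyro_rate, accel_yz, dt, angle_prev, alpha=0.98):
    """Fuse one gyroscope/accelerometer sample into a roll-angle estimate.

    gyro_rate: angular rate about the roll axis (rad/s)
    accel_yz:  (a_y, a_z) gravity components measured by the accelerometer
    alpha:     weight of the gyroscope path (assumed value)
    """
    # Integrating the gyroscope is accurate over short intervals but drifts.
    angle_gyro = angle_prev + gyro_rate * dt
    # The accelerometer gives a noisy but drift-free absolute reference.
    angle_accel = np.arctan2(accel_yz[0], accel_yz[1])
    # Blend: the gyro dominates at high frequency, the accel corrects drift.
    return alpha * angle_gyro + (1.0 - alpha) * angle_accel
```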
To measure as much useful motion information as possible, the placement of the sensors on the subject is important, as shown in Fig. 2. Five inertial sensors were placed on the subject's waist, left thigh, left calf, right thigh, and right calf to measure the acceleration of the lower limbs in the sagittal plane. Two pressure insoles were placed in the shoes to detect the gait pattern during the standing phase and to record the plantar pressure.
Foot pressure sensor and position selection
The plantar pressure detection system consists of sensors that measure the foot pressure, a control portion that receives the signal and detects the gait phase, and a communication device that transmits/receives data. The system is shown in Fig. 3.
The pressure sensor used in the gait information acquisition system is the FlexiForce A401, shown in Fig. 4, a force-sensing resistor whose resistance is inversely proportional to the force acting on its surface. (7) The sensor has the advantages of small size and high sensing accuracy, and its pressure measuring range of 0-110 N meets the gait testing requirements. (8) The characteristics of the FlexiForce A401 pressure sensor are shown in Table 1. In the walking phase, it is important to choose suitable locations for the sensors to collect the plantar pressure data. During walking, the heel makes the initial contact and toe contact marks the end of the support phase. Therefore, the four key points selected for sensor placement are the big toe, little toe, second toe, and heel, (9) as shown in Fig. 5.
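The conversion from raw readings to force is not given above; a common read-out for a force-sensing resistor such as the A401 is a voltage divider followed by a linear conductance-to-force calibration. The sketch below follows that scheme; the divider resistance and the calibration constant k are hypothetical values that would have to be fitted with known weights:

```python
def adc_to_force(adc_counts, v_ref=5.0, adc_max=1023, r_fixed=10_000.0, k=2.5e5):
    """Convert a raw ADC reading from a voltage divider (fixed resistor in
    series with the FSR) into an approximate force in newtons.
    k is a hypothetical per-sensor calibration constant mapping
    conductance (1/ohm) to force; fit it by applying known loads."""
    v_out = adc_counts * v_ref / adc_max
    if v_out <= 0.0:
        return 0.0  # no load: the FSR resistance is effectively infinite
    # Voltage divider with the FSR on the high side:
    # v_out = v_ref * r_fixed / (r_fixed + r_fsr)
    r_fsr = r_fixed * (v_ref - v_out) / v_out
    # FSR conductance (1/r) rises roughly linearly with applied force.
    return k / max(r_fsr, 1.0)
```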
Measuring system
The system configuration diagram of the gait recognition system is shown in Fig. 6. The gait data acquisition system obtains the leg acceleration data of the human body from the inertial sensors, and the foot pressure acquisition system obtains the pressure information of the foot; the data are transmitted to the upper computer and then to the host computer through wireless transmission. The role of the host computer is to display and store the data, and to perform preprocessing and feature extraction. The extracted features are input into the gait recognition algorithms for training and testing, the recognition accuracies of several different recognition algorithms are compared, and the optimal algorithm for gait recognition is determined.
The human gait data acquisition system consists of sensor signal acquisition and signal transmission. The sensor signal acquisition part is divided into seven controller area network (CAN) bus communication nodes, namely, the sensor signal acquisition nodes of the left thigh, left calf, right thigh, right calf, waist, left foot, and right foot. Each IMU module collects the acceleration data of a group of test subjects, and the collected data are transmitted to the MCU module of the corresponding slave. When the host sends a data request through the CAN bus, the seven slaves transmit the sensor data to the MCU of the host through the CAN bus. The host transmits the data to the upper computer through the wireless serial port. The data acquisition system is shown in Fig. 7.
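As a rough illustration of the master/slave polling described above, the sketch below uses the python-can library to request one frame from each of the seven nodes. The node IDs, the one-byte request command, and the timeout are assumptions, since the protocol details are not given in the text:

```python
import can  # python-can library, used here only to illustrate the polling

NODE_IDS = range(1, 8)  # hypothetical arbitration IDs for the 7 sensor nodes

def poll_nodes(channel="can0"):
    """Request one sample from each slave node and collect the replies."""
    bus = can.interface.Bus(channel=channel, bustype="socketcan")
    frames = {}
    for node in NODE_IDS:
        # One-byte payload 0x01 as a hypothetical "send data" command.
        bus.send(can.Message(arbitration_id=node, data=[0x01],
                             is_extended_id=False))
        reply = bus.recv(timeout=0.05)  # wait up to 50 ms per node
        if reply is not None:
            frames[reply.arbitration_id] = bytes(reply.data)
    bus.shutdown()
    return frames
```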
Test subjects
Five subjects participated in this experiment (age: 26 ± 2 years, height: 170 ± 6 cm, weight: 56 ± 4 kg, shoe size: 24.5 ± 2 cm). The specific information (age, height, weight, and shoe size) of the test subjects is shown in Table 2. Each subject was asked to perform a series of exercises; all subjects were healthy, were able to walk normally, and did not have gait disorders. For consistency, the material of the shoes was the same for all subjects, and the size of the insole sensor was adapted to the size of the foot, satisfying the requirements of the experiment. The subjects did not perform strenuous exercise in the week before the experiment, and all signed a consent form.
Testing protocol
The testers wore the inertial sensors and the plantar pressure device; five inertial sensors were attached to the left thigh, left calf, right thigh, right calf, and waist. The test lasted 3 min and consisted of standing, going up the stairs, going down the stairs, going up the slope, going down the slope, and walking on level ground. The test conditions are shown in Fig. 8. The walking pace was about 1 m/s. The motions were performed in the order listed above, and each test was conducted 6 times for each subject.
The frequency of daily human activity is below 10 Hz. (10) According to the Nyquist sampling theorem, the sampling frequency of the inertial sensor and plantar pressure acquisition system was set to 25 Hz. Figure 9 shows the diagram of the directions of movement: the X-axis indicates the forward direction, the Z-axis indicates up and down movements, and the Y-axis indicates left and right movements.
After the data were collected by the MCU, they were sent to the host computer via the CAN bus. The host computer wrote the received data to a TXT file. In the experiment, some of the data were used as training samples for the classification algorithm and the rest as test samples.
Data processing
The raw data consisted of the signal obtained from the microcontroller and the unprocessed sensor data. The data processing comprised three steps:
1) After the original data were obtained, they were preprocessed using various methods, including filtering, data normalization, and principal component analysis, to minimize noise and prevent redundant data from adversely affecting the subsequent processing.
2) The gait analysis data are time series data, and the relevant data features had to be extracted to be used as inputs for the classification algorithm.
3) We used the training data in the algorithm to establish the relationship between the features and the gait type. The classifier was tuned to achieve the highest accuracy, and the test dataset was used to assess the classification accuracy.
A wavelet is an attenuating wave that is used to analyze and process the time and frequency features of a signal. (11) A wavelet transform can be used to perform a frequency analysis while retaining time information, and the details of the frequency bands in the signal can be determined. The wavelet transform effectively overcomes the limitations of the Fourier transform. Wavelet denoising is a denoising method based on the wavelet transform and has been used extensively in many applications. (12) The following three steps were performed (a sketch follows this list):
1) We selected a wavelet base type, set the decomposition level to N, and performed the wavelet decomposition of the noisy signal.
2) The high-frequency coefficients obtained after wavelet decomposition were processed through threshold quantization.
3) Wavelet reconstruction was performed using the high-frequency coefficients processed in the previous step and the low-frequency coefficients of the lowest layer of the wavelet decomposition.
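A compact Python version of these three steps, using the PyWavelets library, is sketched below. The wavelet base (db4), the decomposition level, and the universal soft threshold are common defaults assumed here, as the exact choices are not stated in the text:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Denoise a 1-D signal: decompose, soft-threshold the detail
    coefficients, then reconstruct (the three steps listed above)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```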
Sensor data
The gait data were analyzed using MATLAB software. The results of analyzing the X-axis data of the right thigh acceleration sensor are shown in Fig. 10. The movement data represent the data after preprocessing. The data of the six movements/gaits (standing, walking, going up the stairs, going down the stairs, going up the slope, and going down the slope) show that the waveforms are periodic, except for standing, in which the output of the acceleration sensor is only the gravitational acceleration. In the walking state, the acceleration in each gait cycle initially increases and then decreases with the start and end of the movement. The waveforms for going up and down the stairs and the slope are more complex and show more fluctuations.
Gait recognition based on neural network
The BP neural network is a neural network in which the signals propagate forward and the error is fed back only when there is an error in the output results. (13) A schematic diagram of the BP neural network algorithm topology is shown in Fig. 11.
X1, X2, ..., Xm are the inputs of the BP neural network algorithm; ω1 and ω2 are the connection weights; Y1, Y2, ..., Ym are the outputs of the neural network.
The RBF neural network algorithm belongs to the class of feedforward neural networks. Its basis function is a real-valued function of the distance between a given point and a central point in space. When applied to the neural network, the degree of approximation between the output and expected values of the network can be inspected. (14) A flowchart of the neural network classification and recognition process is shown in Fig. 12. Using 20-dimensional human gait data, the BP and RBF neural network algorithms were implemented with the neural network toolbox of MATLAB 2014a. There are 20 input features and 6 output classes. The model built with the MATLAB 2014a neural network toolbox achieves gait recognition as follows (a Python analogue is sketched after this list):
1) The input layer has 20 neurons.
2) The number of hidden layer nodes was set to 12 based on the number of training samples and the input and output layer dimensions.
3) In the output layer, an output of 1 means that the respective gait was detected and the other values are 0. There are 6 output classes, namely, standing [1, 0, 0, 0, 0, 0], walking on level ground [0, 1, 0, 0, 0, 0], walking up the stairs [0, 0, 1, 0, 0, 0], walking down the stairs [0, 0, 0, 1, 0, 0], walking up the slope [0, 0, 0, 0, 1, 0], and walking down the slope [0, 0, 0, 0, 0, 1].
4) A tansig-logsig function pair is used as the transfer functions from the input layer to the hidden layer and from the hidden layer to the output layer.
5) trainlm is selected as the learning function.
6) The number of learning sessions is set to 3000, the learning rate to 0.005, and the training target to 0.004.
The network was trained with 180 groups of samples and tested with 180 groups of samples. The classification accuracy and mean square error of the test results were used to assess the performance.
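The network above is configured with MATLAB's neural network toolbox; as a rough Python analogue, the scikit-learn sketch below reproduces the 20-12-6 topology and the main training settings. scikit-learn offers neither the tansig/logsig pair nor the trainlm solver, so tanh activation and the adam solver stand in for them; treat this as an approximation, not a port of the authors' model:

```python
from sklearn.neural_network import MLPClassifier

# 20 input features, one hidden layer of 12 neurons, 6 gait classes.
bp_net = MLPClassifier(hidden_layer_sizes=(12,),
                       activation="tanh",          # stands in for tansig
                       solver="adam",              # stands in for trainlm
                       learning_rate_init=0.005,   # learning rate from the text
                       max_iter=3000,              # learning sessions from the text
                       tol=0.004)                  # loose analogue of the 0.004 target

# X_train: (180, 20) feature matrix; y_train: integer gait labels 0-5
# (scikit-learn one-hot encodes the 6 classes internally).
# bp_net.fit(X_train, y_train)
# print(bp_net.score(X_test, y_test))
```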
The classification models based on the BP and RBF neural network algorithms are established in MATLAB.The accuracy of the gait recognition classification is shown in Table 3.
As can be seen from Table 3, the recognition rate of the BP neural network algorithm is obviously higher than that of the RBF neural network algorithm. The average recognition rate of the BP neural network algorithm is 93.3%, and that of the RBF neural network algorithm is 91.2%.
Gait recognition based on SVM
The support vector is the point closest to the hyperplane, and the objective is to maximize the distance from the support vectors to the hyperplane that distinguishes the two categories. The two-class SVM algorithm can be extended to a multiclass SVM algorithm. For the k-category problem, an objective function is used with the objective of maximizing the boundary distance between each category and all other categories. (15) The optimization function used in the SVM algorithm is defined as

$$\min_{w,\,b,\,\xi}\ \frac{1}{2}\lVert w\rVert^{2} + c\sum_{i=1}^{N}\xi_{i},$$

where w and b are the hyperplane parameters, c is the penalty factor, $\xi_i$ are the slack variables, $y_i$ is the class label of the i-th sample, and $\phi(x_i)$ is the feature map associated with the kernel function.
The optimization problem can be written as a constrained problem:

$$\min_{w,\,b,\,\xi}\ \frac{1}{2}\lVert w\rVert^{2} + c\sum_{i=1}^{N}\xi_{i}
\quad\text{subject to}\quad
y_{i}\bigl(\langle w,\phi(x_{i})\rangle + b\bigr) \ge 1-\xi_{i},\qquad \xi_{i}\ge 0,\ i=1,\dots,N,$$

where c is the penalty factor, $\langle w, \phi(x_i)\rangle$ is the dot product, $(x_i, y_i)$ are the input and output data, $\xi_i$ are the slack variables, and $\phi$ is the feature map induced by the kernel function. When data are not linearly separable in the original low-dimensional feature space, they must be mapped to a high-dimensional feature space; a kernel function is used to perform this transformation implicitly, thereby achieving linear separability in the high dimension. In this study, linear and polynomial kernel functions are used for the gait classification.
The SVM classifier is defined as

$$f(x)=\operatorname{sign}\Bigl(\sum_{i=1}^{N}\alpha_{i}y_{i}K(x_{i},x)+b\Bigr),$$

the linear kernel function is defined as $K(x_{i},x_{j})=\langle x_{i},x_{j}\rangle$, and the polynomial kernel function is defined as $K(x_{i},x_{j})=(\langle x_{i},x_{j}\rangle+1)^{d}$. The sliding window size is 50 and the cell size is 1 s; 180 samples are used as training samples and 180 as test samples. The absolute acceleration characteristics in the 2-dimensional time domain and the peak characteristics of the fast Fourier transform of the Z-direction angular acceleration in the 4-dimensional frequency domain are used for preliminary testing. The classification accuracies of the two kernel functions are shown in Table 4.
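For reference, a multiclass soft-margin SVM with the two kernels compared above takes only a few lines in scikit-learn (multiclass handling is one-vs-one by default). The penalty factor C = 1.0 and the polynomial degree of 3 are assumptions, as the text does not report these values:

```python
from sklearn.svm import SVC

svm_linear = SVC(kernel="linear", C=1.0)
svm_poly = SVC(kernel="poly", degree=3, C=1.0)  # degree is an assumed value

# X_train, y_train: windowed feature vectors and gait labels
# (see the feature-extraction sketch further below).
# svm_linear.fit(X_train, y_train)
# print(svm_linear.score(X_test, y_test))
```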
As observed in Table 4, the recognition rate is higher for the linear kernel function than for the polynomial kernel function. Therefore, the linear function is selected as the kernel function of the SVM for the subsequent control variable experiment. To achieve real-time gait recognition, the gait should ideally be detected within one-third of the subject's gait cycle. Therefore, the sliding window size is used as a variable to assess the accuracy of the gait recognition. The results are shown in Table 5, which shows that the recognition rate increases gradually with the sliding window size. The recognition delay time is within the range of 0-1.5 ms, all within the acceptable range. When the sliding window is 30, the average recognition rate is the highest, reaching more than 85%. To achieve gait recognition with good accuracy, a sliding window size of 30 is chosen. The recognition features are then extended from the 6-dimensional feature space to the absolute acceleration features in the 4-dimensional time domain plus the 12-dimensional frequency-domain peak features of the fast Fourier transform of the angular acceleration, resulting in 16 dimensions in total. The recognition accuracy is shown in Table 6.
As shown in Table 6, with the SVM algorithm using a linear kernel function, a sliding window size of 30, and a 16-dimensional feature space, the recognition accuracy rate is 93.94%, an excellent classification accuracy. To achieve an even higher recognition rate, we reduced the sliding window size to 15 while keeping the 16 dimensions and the other variables constant. The recognition accuracy results are shown in Table 7.
The classification accuracy is higher for the sliding window size of 15.Therefore, this window size is used for the final gait classification.A comparison is conducted between the correct and predicted labels for the classes, and the classification accuracy is calculated.The results are shown in Table 8.
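To make the windowing concrete, the sketch below extracts a small illustrative subset of the features discussed above (mean absolute acceleration plus the dominant FFT peak) from a single acceleration channel. The paper's full 16-dimensional feature set is richer; the function and its defaults are assumptions:

```python
import numpy as np

def window_features(acc, fs=25, win=15):
    """Slide a fixed-size window over one acceleration channel and emit
    a time-domain feature (mean absolute acceleration) and two
    frequency-domain features (dominant FFT frequency and magnitude)."""
    feats = []
    for start in range(0, len(acc) - win + 1, win):
        seg = acc[start:start + win]
        mean_abs = np.mean(np.abs(seg))
        # Remove the DC component so the peak reflects the gait rhythm.
        spectrum = np.abs(np.fft.rfft(seg - seg.mean()))
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        peak = int(np.argmax(spectrum))
        feats.append([mean_abs, freqs[peak], spectrum[peak]])
    return np.asarray(feats)
```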
Classification results
According to the above results, human gait recognition is achieved using the SVM algorithm and BP and RBF neural network algorithms.As shown in Table 3, the accuracy of the BP neural network algorithm is higher than that of the RBF neural network algorithm.Therefore, we only compare the SVM algorithm with the BP neural network algorithm.The classification results for the gait recognition using the SVM algorithm and BP neural network algorithm are shown in Table 9.
Table 9 shows the classification accuracies for detecting the gaits of standing, walking, going up the stairs, going down the stairs, going up the slope, and going down the slope. It is evident that when the sliding window is 15, the accuracy is significantly higher for the SVM algorithm than for the BP neural network algorithm. Both algorithms exhibit good accuracy for detecting the standing phase, with 98% accuracy for the SVM algorithm. In the walking phase, the accuracy of the SVM algorithm is 97% and that of the BP neural network algorithm is 96%. The accuracy of the SVM algorithm is also significantly higher than that of the BP neural network algorithm for going up the stairs, going down the stairs, going up the slope, and going down the slope, with accuracies above 95% for these stages. The average recognition rate of the SVM algorithm is 96.5% and that of the BP neural network algorithm is 93.3%. Therefore, the SVM-based gait recognition algorithm is very suitable for gait detection; its high recognition rate makes it well suited for controlling exoskeleton robots.
Conclusions
The accurate identification of human movements and gaits is of great significance for the control of exoskeleton robots. In this study, we developed a human gait recognition system that acquires and analyzes human leg movement data and plantar pressure information to recognize the human gait. By fusing the acceleration and plantar pressure signals, the system achieves high-precision gait recognition. The gait recognition accuracies of the SVM algorithm and the neural network algorithms were compared. When the sliding window of the SVM algorithm is 15, the gait recognition rate is 96.5%; the recognition rate of the BP neural network is 93.3%, and that of the RBF neural network is 91.2%. The recognition rate of the SVM algorithm is thus clearly higher than those of the neural network algorithms. The experimental results show that the gait recognition system based on the SVM algorithm is capable of detecting the human gait with high accuracy, demonstrating that the movement of an exoskeleton robot can be effectively controlled.
Table 2
Test subject information.
Table 3
Classification accuracy of gait recognition using the BP and RBF neural network algorithms.
Table 4
Accuracy of the classification based on different kernel functions.
Table 6
Accuracy of the SVM classification based on different feature space dimensions.
Table 7
Accuracy of the SVM algorithm classification based on different sizes of moving windows.
Table 8
Gait recognition classification accuracy using the SVM algorithm classifier.
Table 9
Classification accuracy for gait recognition using the SVM algorithm and BP neural network algorithm. | 5,037.2 | 2019-04-30T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Hidden biodiversity of Amazonian white-sand ecosystems: two distinctive new species of Utricularia (Lentibulariaceae) from Pará, Brazil
Abstract As deforestation and fire advance over pristine vegetation in the Amazon, many species remain undiscovered and may be threatened with extinction before being described. Here, we describe two new species of Utricularia (Lentibulariaceae) collected during recent fieldwork in an area of white-sand vegetation in the eastern Amazon Basin named Campos do Ariramba. Further herbarium revision revealed that both species were first collected over 60 years ago in the same area, remaining unnamed until now. The new species, named U. ariramba sp. nov. and U. jaramacaru sp. nov., are placed in U. sect. Aranella and U. sect. Setiscapella, respectively. We provide full descriptions, illustrations, photographs, a distribution map, and taxonomic discussion for both species. Additionally, we provide a preliminary list of Lentibulariaceae from the Campos do Ariramba. Both new species are assessed as Vulnerable and are each known from only a few collections, highlighting the urgency and importance of fieldwork and taxonomic revisions in the Amazon biogeographic region in order to provide essential data for the conservation of both known and still unknown biodiversity.
Introduction
Brazil is an extremely diverse country, home to the greatest floristic diversity in the world, in addition to being one of the best documented tropical countries in terms of its flora (Forzza et al. 2012; Flora do Brasil 2020 under construction). Nevertheless, Brazil leads the world in the number of new plant species described yearly (RBG Kew 2016; Cheek et al. 2020), showing that its vast territory still needs to be explored and studied if we are to attain a better understanding of the true dimension of its biodiversity.
Large remote areas of Brazil, especially those difficult to access, still lack taxonomic surveys and are mostly concentrated in the Amazon Rainforest biome (Oliveira et al. 2016; BFG 2018). Regarded as the most biodiverse rainforest in the world, this region has fewer scientific collections than other Brazilian biomes, with a strong bias of collection effort around large urban centers (Nelson et al. 1990) and along navigable rivers, while over 40% of its total area remains under-sampled and poorly studied (Schulman et al. 2007). Knowledge is even scarcer for the herbaceous plants that grow in open Amazonian vegetation, as the majority of inventories still focus on woody plants (Miranda 1993; Miranda et al. 2002, 2006; Devecchi et al. 2020). Much faster than we are able to provide suitable studies of Amazonian biodiversity, the rapid increase of deforestation reaching these unexplored areas is potentially causing the extinction of a considerable proportion of undescribed plant species (Stroop et al. 2020).
All of the above-mentioned open vegetation areas in the Amazon have oligotrophic, acidic soils (consisting of bare sandstone, ferruginous or granite escarpments, or alluvial plains of white sands) with the presence of seasonally or perennially wet to flooded areas (García-Villacorta et al. 2016). In contrast to the surrounding Amazon lowland forests, these habitats have a scattered vegetation cover, often herbaceous or at most shrubby, and comprise very exposed sites with lower vegetation cover and competition, representing "islands" within the Amazon forest (Prance 1996). These conditions, especially the low availability of nitrogen and phosphorus, favor the occurrence of carnivorous plants (Givnish et al. 2018), and indeed several species of Drosera L. (Droseraceae), Genlisea A.St.-Hil. and Utricularia L. (Lentibulariaceae) can be found in those areas (Fleischmann 2012a; Fleischmann et al. 2017).
The genus Utricularia is the most diverse of the three genera of the carnivorous plant family Lentibulariaceae (Lamiales, Eudicots), with over 240 species currently accepted, presenting centers of diversity in the Neotropics and northern Australia, where most of its species are associated with seasonally wet areas of savanna vegetation (Taylor 1989; Fleischmann 2012b, 2015; Jobson et al. 2018). In Brazil, Utricularia is represented to date by 67 species (18 of them endemic), being most diverse in the Cerrado (Central Brazilian Savanna) and the Amazon Rainforest, with 45 species each (Flora do Brasil 2020 under construction).
Utricularia comprises small to medium-sized herbs, usually associated with wetlands, that can be recognized by their atypical morphology: true roots are lacking, and the plants bear leaf-like shoots (phylloclades) and bladder-like structures of foliar origin, the utricles, which inspired the generic epithet. The inflorescences are bracteose racemes; the flowers have a bilobate calyx (except in the early-branching U. sect. Polypompholyx (Lehm.) P.Taylor and a few members of other lineages, such as U. flaccida A.DC. from U. sect. Setiscapella (Barnhart) P.Taylor, which can have a tetramerous calyx), a bilabiate personate corolla (snapdragon flower-type) with a spur, two stamens, and an ovary with free-central placentation (Taylor 1989; Jobson et al. 2003). Utricularia shows a great diversity of habitat types and life forms, occurring as aquatics (affixed or free-floating), terrestrials, lithophytes, rheophytes, and epiphytes (Taylor 1989). Taylor (1989) presented the most comprehensive revision of the genus to date, classifying the accepted species into subgenera and sections. About 30-40 species (depending on species concepts) have been described after Taylor's monograph (for compiling works, see Fleischmann 2012b, 2015; Jobson et al. 2018), and the infrageneric classification has undergone a few changes based on molecular phylogenetic data (Jobson et al. 2003; Müller and Borsch 2004). In recent years, several new species have been described or reestablished for the genus in Brazil (Bove 2008; Fleischmann and Rivadavia 2009; Souza and Bove 2011; Baleeiro et al. 2015, 2019; Gonella and Baleeiro 2018; Guedes et al. 2019; Baleeiro et al. in prep.), revealing the potential for the discovery of new species even in a genus that had been thoroughly revised taxonomically in the late 20th century (Taylor 1989). The late botanist Peter Taylor meticulously studied Utricularia for over 40 years, culminating in his elaborate monograph that considered ca. 600 published names for 214 accepted species (see Fleischmann 2012b).
During a field trip to perform a floristic inventory of the Campos do Ariramba, an area of campinarana and savanna at the westernmost point of the state of Pará, several new records of Lentibulariaceae were made, including two collections of Utricularia that did not fit any of the currently recognized species. Here we describe these two new taxa and provide comments on their taxonomy, habitat, distribution, and their conservation status. We also provide a list of the species of the family registered in the area to contribute to the knowledge of the Amazonian grassland biodiversity, still so underestimated.
Material and methods
An expedition to the Campos do Ariramba region (Municipality of Óbidos) was carried out between 5 and 10 June 2019. Specimens were collected and deposited in the herbarium MG, with duplicates sent to SPF. Specimens of the herbaria ALCB, B, BHCB, BM, DIAM, ESA, ESAL, F, HUEFS, HUFSJ, HUFU, HURB, IAN, INPA, IPA, K, M, MG, MBM, MBML, MO, NY, OUPR, P, R, RB, SP, SPF, UB, UEC, UFRN, US, and VIES were also studied as part of the ongoing taxonomic study of the family for the Flora of Brazil 2020 project. The online databases Reflora Virtual Herbarium (REFLORA 2020), SpeciesLink (INCT 2018), and Global Biodiversity Information Facility (GBIF.org 2020) were also searched for further specimens of these taxa and for other specimens from Campos do Ariramba, Jaramacaru (including orthographic variants), Óbidos and Oriximiná. The descriptions were based on live material and dry specimens, which were analyzed using a stereomicroscope. Herbarium acronyms follow Thiers (2020, continuously updated).
For SEM photography, seeds from herbarium specimens were mounted on a carbon sticker-covered SEM stub, coated with platinum for 240 sec. under vacuum in a SCD 050 sputter coater (Bal-Tec, Germany), and imaged under SEM (Leo, Germany) at 25 mm working distance and 15.00 kV.
The distribution map was generated with the software QGIS (QGIS Development Team 2020) using layers available from IBGE (2020). The TerraBrasilis platform (Assis et al. 2019), which uses the Sistema de Detecção do Desmatamento em Tempo Real (DETER) of the Instituto Nacional de Pesquisas Espaciais (INPE), was used to acquire recent data on deforestation and fire records in the area. Coordinates were obtained in the field using GPS tracking. Given that the two species are known from only a few locations each and lack population data, their conservation status was inferred based on the Area of Occupancy (AOO) criteria of IUCN (2012); the AOO was calculated employing the IUCN standard 4 km² cell in the GeoCAT Tool (Bachman et al. 2011).
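GeoCAT estimates the AOO by overlaying a grid of 2 × 2 km cells (4 km² each) on the occurrence records and counting occupied cells. A minimal Python sketch of that computation, assuming coordinates already projected to metres (e.g., UTM), is given below; with two records ca. 3 km apart falling in separate cells, it yields 8 km², matching the AOO reported below for both species:

```python
def area_of_occupancy(xy_m, cell_km=2.0):
    """Estimate the IUCN Area of Occupancy from occurrence coordinates.

    xy_m:    iterable of (x, y) pairs in a metric projection (metres)
    cell_km: grid cell edge; the IUCN standard is 2 km (a 4 km^2 cell)
    """
    cell_m = cell_km * 1000.0
    # Each occurrence is assigned to the grid cell containing it.
    cells = {(int(x // cell_m), int(y // cell_m)) for x, y in xy_m}
    return len(cells) * cell_km ** 2  # occupied cells x cell area, in km^2

# Example: two records ca. 3 km apart usually fall in two cells -> 8 km^2.
print(area_of_occupancy([(600500.0, 9800500.0), (603500.0, 9800500.0)]))
```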
Morphological terminology and description structure are adapted with modifications from Taylor (1989).
Diagnosis.
Utricularia ariramba belongs to U. sect. Aranella (Barnhart) P.Taylor, being most similar to U. costata P.Taylor, but distinguished by its taller inflorescences, 5.3-12.0 cm long (vs. 2-7 cm long), the less conspicuous nerves on the calyx lobes, the upper calyx lobe with acute apex (vs. obtuse and obscurely denticulate), the spur swollen and dorsiventrally flattened in the apical 2/3 (vs. cylindrical with apex narrowing towards the tip), the lower corolla lip trapezoid with margin entire or finely denticulate (vs. transversely oblong with margin entire or shallowly 3-lobed), and the upper corolla lip narrowly ovate with acute apex (vs. ovate, with apex rounded or subacute).
Etymology. The epithet "ariramba" is a name in apposition, referring to the Campos de Ariramba, where this new species was discovered. The word "ariramba" comes from the Tupi language "uarirámba" and refers to the birds of the Galbulidae family, which are commonly found in the area.
Phenology. The species was collected in full bloom at the end of the rainy season, in May and June.
Distribution and habitat. So far, only known from two subpopulations at the margins of the Jaramacaru River, in the Campos do Ariramba region. The area lies within the conservation unit of the Floresta Estadual de Trombetas (FLOTA Trombetas), in western Pará state, N Brazil. The species occurs on white sandy soils on a flat sandstone outcrop in campinarana (white sand vegetation).
Conservation status. Vulnerable: VU D2. Utricularia ariramba is known from only three collections, one of which was made over 60 years ago and lacked georeferenced data. The recently collected specimens were found ca. 3 km distant from each other, near the border of the FLOTA Trombetas (Fig. 1). Areas under active deforestation were observed just outside the conservation unit (Fig. 1), and during the fieldwork activities in 2019, fire was observed at a waterfall 6 km upriver from the Jaramacaru community, suggesting environmental disturbance generated by cattle farming and other human activities. In fact, large areas were recently impacted by fires less than 1 km away from the subpopulations (Fig. 1; data from Assis et al. 2019). In addition, the site is sought by tourists from nearby municipalities because of the Jaramacaru waterfall, leading to negative anthropogenic impact on the populations. Although the Campos do Ariramba are a botanically unexplored region with vast areas of habitats similar to those where the species was collected, and the population size has not been ascertained, the available data suggest that the species is restricted to a few locations. An AOO of just 8 km² was calculated for this species, and we observed threats that might impact its habitat quality and AOO in the short term and lead to a reduction in population size and area of occupancy. Therefore, we assign the species to the Vulnerable category based on criterion D2 of IUCN (2012).
[Figure 1 caption: The map shows the records of Utricularia ariramba (squares) and Utricularia jaramacaru (triangle), which are near the FLOTA limits, as well as the threats to the area, including recent fires, full deforestation, and selective deforestation of timber species.]
Taxonomic notes. Utricularia ariramba is placed in U. sect. Aranella based on its characteristic trap morphology (a single subulate dorsal appendage and a deeply bifid ventral appendage), and the presence of a clearly defined basal sac in the upper corolla lip.
Utricularia ariramba is the eleventh species of Utricularia sect. Aranella (Taylor 1989; Fleischmann and Rivadavia 2009), a section almost completely endemic to tropical South America, with a single species (U. simulans Pilger) extending to Central America, the Caribbean and tropical Africa (Taylor 1989).
Utricularia ariramba is most similar to U. costata, sharing similar bract and bracteole morphology, and the lavender (to white) corolla with darker violet venation in the lower lobe. It is distinguished by the relatively larger inflorescences 5.3-12.0 cm tall (vs. 2-7 cm tall), the less prominent nerves on the calyx lobes, the upper calyx lobe with acute apex (vs. obtuse and obscurely denticulate), the swollen spur in the apical 2/3 (vs. apex tapering towards the tip), the lower corolla lip trapezoid with margin entire or finely denticulate (vs. transversely oblong with margin entire or shallowly 3-lobed), and the upper corolla lip narrowly ovate with acute apex (vs. ovate, with apex rounded or subacute). For photos of U. costata, see Costa et al. (2016: 11, Fig. 3D, E) and Mota and Zappi (2018: 126, Fig. 3a-c).
Utricularia costata occurs in Venezuela and Brazil, where it is recorded from the states of Roraima, Pará, Mato Grosso, Goiás, Bahia, Sergipe and Alagoas (Taylor 1989, 1999; Fleischmann and Rivadavia 2009; Carregosa and Costa 2014; Costa et al. 2016; Guedes et al. 2018; Flora do Brasil 2020 under construction). In Pará, the species is recorded from the southeastern region of the state, in the Serra dos Carajás.
Variation in corolla color and spur morphology was observed in the studied specimens of U. ariramba: the corolla varies from white to lavender, and the different corolla colors are associated with different spur shapes. Both variants show a dorsoventrally flattened spur in the apical 2/3, but the white variant (represented by the type specimen) shows a higher degree of flattening, a ventral concavity (Fig. 2c), and a truncate apex (Fig. 3a), while the lavender variant (represented by the paratypes) has a spur only slightly flattened, without the concavity, and with an acute apex (Figs 2i, 3b, d). Furthermore, in the white morphotype, the apex of the lower corolla lip is reflexed (Figs 2h, 3a). Despite these differences in corolla color and shape, both morphotypes are considered conspecific, as the specimens share similar morphology in all other characters. Variation in corolla color and shape is common in other species of U. sect. Aranella (Taylor 1989; Rivadavia 2000).
Seeds were not available for study; however, Taylor (1989: 237) notes that seed morphology is rather uniform in U. sect. Aranella, hence not having great taxonomic significance (in contrast to other sections of the genus, where some species might be identified by a single seed grain alone; Taylor 1964, 1989).
Diagnosis.
Utricularia jaramacaru belongs to U. sect. Setiscapella (Barnhart) P.Taylor but is distinct from all other members of this section by the traps with reduced, denticulate appendages (vs. subulate, branched), white corolla (vs. yellow or lilac), the upper corolla lip with bilobate apex (vs. obtuse, rounded, truncate or retuse), and the lower corolla lip narrowly rhombic (vs. cuneate, trullate, rhombic to very broadly rhombic in outline).
Description. Small-sized, probably annual, terrestrial. Rhizoids 2-4, from the base of peduncle, terete, with short papillose branches, up to 1 cm long, c. 0.25 mm in diameter. Stolons numerous, capillary, sparsely branched, up to 1 cm long (in the available material), up to 0.1 mm in diameter. Leaves numerous, at the base of the peduncle and on the stolons, lamina narrowly linear, simple, the base narrowing gradually into a short petiole, apex obtuse to acute, green to reddish, 1-nerved, 2-6 × 0.2-0.5 mm. Traps numerous on the stolons and leaves, ovate, stalked, 0.1-0.2 mm long, the mouth lateral with two dorsal and very short denticulate, simple appendages. Inflorescence a bracteose raceme, erect, solitary, 60-130 mm tall. Peduncle capillary, terete, simple or eventually laterally simple-branched, glabrous, 0.2-0.3 mm in diameter, wine red. Scales numerous, peltate, ovate to narrowly ovate, inferior apex rounded to obtuse, superior apex acute, 0.5-0.9 mm long, similar to the bracts. Bracts ovate, basisolute, peltate, 0.5-0.7 × 0.4-0.5 mm, amplexicaul, the inferior apex rounded, the superior apex rounded to obtuse. Bracteoles absent. Flowers 4-13, the rhachis elongate, flexuous, without sterile bracts; pedicels ascending, capillary, terete, 3-9 mm long (longer towards the base of the inflorescence), pedicels with a mucilage droplet at their base in living specimens. Calyx lobes unequal, glabrous, nerves inconspicuous, simple, not extending to the margin; upper lobe ovate, with apex obtuse, convex, 0.9-1.1 mm long in flower, up to 1.3 mm in fruit; lower lobe obovate, with apex emarginate to rounded, convex, equal in length with the upper lobe in flower, slightly longer in fruit, up to 1.7 mm in fruit. Corolla 5 mm long, lower lip white with a pale yellow mark on the gibbose palate, spur pale yellow, upper lip pale yellow with reddish marks; upper lip oblong with apex bilobed, the basal sac with an eglandular pubescent marginal rim, the pubescence spreading towards the apex, c. 1.5 mm long; lower lip limb narrowly rhombic in outline, the base with a very prominent bilobed swelling, the apex 3-lobed, 0.3-4.5 mm; palate pubescent; spur cylindrical, apex rounded, equal to or slightly longer or shorter than the lower lip, 0.35-0.40 mm long. Filaments curved, 0.8-1.0 mm long, the anther thecae sub-distinct, anther 0.4-0.5 mm long. Ovary globose, 0.8-0.9 mm long; style very short; stigma lower lip nearly circular, upper lip obsolete. Capsule globose, c. 1.2 mm in diam., shorter than the calyx lobes, dehiscing by an elliptic ventral pore. Seeds obovoid to angulate-ellipsoid, 0.20-0.25 mm long, 0.13-0.20 mm wide, testa cells c. 0.01 mm wide, elongate, anticlinal boundaries deeply sunken and more or less straight, periclinal walls convex, smooth.
Etymology. The epithet "jaramacaru" is a noun in apposition (hence it is invariant), referring to the Jaramacarú river, where the new species was discovered. "Jaramacarú" comes from the Tupi language "iamandakarú", referring to species of the genus Cereus Mill. (Cactaceae). However, no cactus of this genus was located during the field trip undertaken by COA, RGBS, and DCZ in 2019.
Phenology. Utricularia jaramacaru was collected with flowers in April, May, and June.
Distribution and habitat. So far, only known from two very close localities near the Jaramacaru waterfall, in the Campos do Ariramba, part of the FLOTA Trombetas, western Pará, N Brazil. The species occurs on white sandy soils with outcrops of sandstone, in campinarana vegetation.
Conservation status. Vulnerable: VU D2. As described for U. ariramba, U. jaramacaru is known from only two localities (AOO = 8 km²) near the limits of the FLOTA Trombetas, and the threats to which the populations are subject are fully explained under the above species. Therefore, based on the available data, U. jaramacaru is assigned to the Vulnerable category based on criterion D2 of IUCN (2012).
Taxonomic notes. The basisolute, peltate scales and bracts, and the calyx and seed morphology (Figs 4-6) undoubtedly place this species in U. sect. Setiscapella, representing the tenth species of the section (following the species circumscriptions of Taylor 1989). However, based on morphology alone, it is not possible to assign the closest affinity of U. jaramacaru, as it bears several apomorphic characteristics, most remarkably regarding its trap and corolla morphology.
Up to now, U. sect. Setiscapella comprised nine species (Taylor 1989), of which eight have yellow corollas (regarding the phylogenetic switch from lilac to yellow corolla color in Utricularia and Genlisea, see Fleischmann et al. 2010). One exception in terms of color is U. physoceras P.Taylor, also endemic to the state of Pará, but with a larger (7-10 mm long vs. 5 mm), pink to lilac corolla. The whitish corolla of U. jaramacaru is, therefore, a second exception among the species of the section. Utricularia physoceras also shares with U. jaramacaru the short spur with rounded apex and a similar seed morphology. Utricularia physoceras occurs in the cangas (ferruginous campo rupestre) of the Serra dos Carajás (Taylor 1989; Giulietti et al. 2019), ca. 815 km to the southeast of the area where U. jaramacaru was collected. For photos of U. physoceras, see Mota and Zappi (2018: 129, Fig. 4a-e) and Giulietti et al. (2019: 369, fig. 7, bottom three images).
Traps of U. jaramacaru are unlike those of any other species of U. sect. Setiscapella in that the appendages are reduced to two denticulate structures (Fig. 4d). All other species of the section bear subulate or filiform appendages near the trap door that are sparsely to copiously branched. Reduced appendages are found in different sections of Utricularia, which suggests this is a homoplastic character in the genus. Taylor (1989) enumerates a few species with reduced trap appendages, such as U. cornuta Michx. and U. juncea Vahl. The presence of a droplet of mucilage at the insertion of the pedicel on the peduncle (Fig. 5b) is shared with U. flaccida, U. nigrescens Sylvén and U. pusilla Vahl (P.M. Gonella and A. Fleischmann, pers. obs.), and its function remains unclear.
Discussion and concluding remarks
Both species described here were first collected over 60 years ago, and the specimens remained undetermined until the preparation of this work. This is not an isolated event, since more than half of the new species described in recent years were published decades after being first collected and deposited in herbarium collections (Bebber et al. 2010). In the case of these new Utricularia species, a few factors can be listed to explain the lag between first collection and description, factors that are common to other plant groups. First, Amazonian herbaria are quite distant from most botanical research centers and receive fewer resources, both human and financial, therefore being less visited by specialists (Hopkins 2019; de Gasper et al. 2020). This is also reflected in the still low number of imaged specimens in the collections housed in these herbaria, hindering remote access by specialists and leaving a considerable number of these specimens 'undetermined'. This means that the average number of not yet described species deposited in these herbaria tends to be higher, accentuating the urgent debate on the role of taxonomy in describing diversity during the current biodiversity crisis (Dubois 2003; Giam et al. 2011; Tedesco et al. 2014). It is also worth highlighting that, in this case, the identification of the historical herbarium specimens as new species could only be confirmed after new collections were made, reinforcing both the importance of funding for fieldwork in remote areas and the relevance of such herbarium collections as sources for the identification of still undescribed diversity (for further examples of similar cases, see: Ferreira et al. 2016; Barbosa-Silva et al. 2018; Farroñay et al. 2018).
Similarly to several areas of open vegetation in the Amazon and the Amazon rainforest itself, the vegetation of the Campos do Ariramba is poorly understood, and, until recently, few botanical expeditions had been carried out in the area. The most significant botanical contributions were made by Adolpho Ducke in 1905 and 1906, resulting in the description of several new species for the region, such as Dyckia duckei L.B.Sm. (Smith 1958), Ouratea duckei Huber (Huber 1913), and Caraipa myrcioides Ducke (Ducke 1922). Later, another expedition, conducted by Walter A. Egler and George Alexander Black, resulted in the first and only preliminary floristic list published for the area (Egler 1960). Egler (1960) makes it clear that the list is unfinished: Black died tragically after the expedition, and his material was never found, disappearing together with his valuable observations and field records, which possibly accounts for the gaps in Egler's study, dedicated posthumously to Black. An example of these gaps is the Lentibulariaceae, which are cited in the text as important elements of the wetlands but are not listed in the work (Egler 1960). This further justifies the presentation of a full list for the family herewith.
Access to the Campos do Ariramba, currently possible only by dirt road, is the result of a failed attempt to connect the site to the savannas at the northern limit of Pará state (called by Ducke and Egler the Campos Gerais; currently the Tumucumaque Indigenous Park), intended to create areas for livestock (Ducke 1910; Egler 1960). The construction of this road has impacted the area through deforestation and conversion into cattle pastures adjacent to the road to the FLOTA (Fig. 1). This is not an isolated occurrence in the municipalities of Óbidos and Oriximiná, as roads intensify deforestation in the Amazon, while protected areas mitigate this impact (Pfaff et al. 2007; Barber et al. 2014). This scenario, coupled with a current environmental policy that is incapable or unwilling to preserve the Amazon Rainforest biome, is increasing deforestation and accelerating climate change (Rajão et al. 2020), representing a poor prospect especially for range-restricted, inconspicuous species that might go extinct even before being collected and identified. | 5,675.6 | 2020-12-04T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
BARD1 Pathogenic Variants Are Associated with Triple-Negative Breast Cancer in a Spanish Hereditary Breast and Ovarian Cancer Cohort
Only a small fraction of hereditary breast and/or ovarian cancer (HBOC) cases are caused by germline variants in the high-penetrance breast cancer 1 and 2 genes (BRCA1 and BRCA2). BRCA1-associated RING domain 1 (BARD1), the nuclear partner of BRCA1, has been suggested as a potential HBOC risk gene, although its prevalence and penetrance vary across populations and tumor types. We aimed to investigate the prevalence of BARD1 truncating variants in a cohort of patients with clinical suspicion of HBOC. A comprehensive BARD1 screening by multigene panel analysis was performed in 4015 unrelated patients according to our regional guidelines for genetic testing in hereditary cancer. In addition, 51,202 Genome Aggregation Database (gnomAD) non-Finnish, non-cancer European individuals were used as a control population. In our patient cohort, we identified 19 patients with heterozygous BARD1 truncating variants (0.47%), whereas the frequency observed in the gnomAD controls was 0.12%. We found a statistically significant association of truncating BARD1 variants with overall risk (odds ratio (OR) = 3.78; CI = 2.10–6.48; p = 1.16 × 10−5). This association remained significant in the hereditary breast cancer (HBC) group (OR = 4.18; CI = 2.10–7.70; p = 5.45 × 10−5). Furthermore, deleterious BARD1 variants were enriched among triple-negative BC patients (OR = 5.40; CI = 1.77–18.15; p = 0.001) compared to other BC subtypes. Our results support the role of BARD1 as a moderate-penetrance BC predisposing gene and highlight a stronger association with triple-negative tumors.
Introduction
Hereditary breast and ovarian cancer (HBOC) risk has traditionally been linked to germline pathogenic variants (PVs) in the breast cancer 1 and 2 genes (BRCA1 and BRCA2). However, only 20-30% of high-risk families carry PVs in these genes [1]. Gradually, PVs in various other genes with different degrees of penetrance have also been associated with breast cancer (BC) and/or ovarian cancer (OC) risk [2]. Several genes that either interact with BRCA1/2 or are involved in DNA damage response pathways have also emerged as potential candidates that may account for some of the missing heritability of these so-called BRCAX families, although their associated risks have not been fully established [2]. BRCA1-associated RING domain 1 (BARD1) was first discovered in 1996 as the nuclear partner of BRCA1 and became one of the earliest candidates investigated [3]. It is localized on chromosome 2 at position 2q35 and encodes a protein of 777 amino acids that contains one N-terminal Really Interesting New Gene (RING)-finger domain, four Ankyrin (Ank) repeats and two C-terminal tandem BRCA1 C Terminus (BRCT) domains [4,5]. BARD1 shows structural homology with BRCA1, and the two proteins directly interact through their RING domains. The BARD1-BRCA1 obligate heterodimer functions both as an E3 ubiquitin ligase and as a direct mediator of homologous recombination through the recruitment of RAD51 to the sites of DNA double-strand breaks (DSBs) [3,6,7]. Furthermore, BARD1 is also involved in other, BRCA1-independent functions, including p53-mediated apoptosis [8].
To date, the role of BARD1 in cancer predisposition remains inconclusive. Several case-control studies have reported a higher prevalence of deleterious BARD1 variants among BC patients, supporting its role as a moderate-risk predisposing gene [9][10][11]. An enrichment of BARD1 PVs among triple-negative breast cancer (TNBC) cases has also been evidenced [12][13][14]. In contrast, some studies have been unable to detect a significant association of BARD1 with breast cancer risk [15,16]. Likewise, studies of the association between BARD1 and overall OC risk have yielded controversial results [17][18][19]. Taken together, there is still insufficient evidence to elucidate the role of BARD1 in breast and/or ovarian cancer predisposition. In the present study, we investigated the prevalence of deleterious germline BARD1 variants in a cohort of 4015 patients with clinical suspicion of hereditary breast and/or ovarian cancer, with the aim of elucidating the role of BARD1 in cancer predisposition in the Spanish population.
Patients and Controls
A total of 4015 index patients with a personal or family history suggestive of hereditary BC and/or OC, referred to the genetic counseling units of the Catalan Institute of Oncology (ICO) and Vall d'Hebron (HVH) hospitals, were included in the present study. The inclusion criteria were the following: patients with BC before age 40; patients with TNBC before age 60; male BC patients; patients with non-mucinous OC; patients with a family history of two cases of BC before age 50; patients with three or more cases of BC in first-degree relatives; and patients with a case of bilateral BC associated with another case of BC in the family. Informed written consent for both diagnostic and research purposes was obtained from all patients, and the study protocol was approved by the ethics committees of the Bellvitge Biomedical Research Institute (IDIBELL; PR278/19) and Vall d'Hebron Hospital (PRAG102-2016). A set of 194 Spanish cancer-free individuals from the Genomes For Life-Cohort Study of the Genomes of Catalonia (GCAT) cohort [20] was screened with the same cancer panel as the ICO patients.
NGS Panel Testing
In the ICO cohort, genetic testing was performed on genomic DNA using the next-generation sequencing (NGS) custom panel I2HCP, which comprises 122-135 hereditary cancer (HC)-associated genes, depending on the version used [21]. Copy number analysis was performed from NGS data using DECoN [22] with parameter optimization [23]. Copy number variants (CNVs) in BARD1 were validated using custom multiplex ligation-dependent probe amplification (MLPA) probes designed according to the instructions provided by MRC-Holland. Likewise, 26 HC-associated genes were included in the HVH NGS panel (BRCA Hereditary Cancer MASTR Plus kit, Agilent Technologies, Santa Clara, CA, USA). Copy number analysis was performed from NGS data using MASTR Reporter (Agilent Technologies, Santa Clara, CA, USA), and putative CNVs were validated by RT-PCR analysis [24]. For this study, we considered any variant that originates a premature stop codon or affects the canonical splice site positions (+1, +2, −1, −2) to be a pathogenic or likely pathogenic variant (hereinafter, pathogenic variant); all of them were classified as (likely) pathogenic following the American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG/AMP) guidelines [25] and were confirmed by Sanger sequencing.
Variant Nomenclature
Human Genome Variation Society (HGVS)-approved guidelines [26] were used for BARD1 variant nomenclature using NM_000465.2 (LRG_297). For variant numbering, nucleotide 1 is the A of the ATG translation initiation codon.
Co-Segregation Analysis and Loss of Heterozygosity (LOH)
Both analyses were performed by Sanger sequencing when samples from relatives or tumor DNA were available.
gnomAD Analysis
The Genome Aggregation Database (gnomAD) non-Finnish European population, non-cancer dataset (v2.1.1) [27] was used as a control population. Variants were downloaded and filtered to identify predicted loss-of-function variants in BARD1. CNV screening was performed in the gnomAD SVs v2.1 dataset.
Statistical Analysis
Differences in allele frequency between cases and controls were determined by the Fisher exact test. Odds ratios (OR) and the corresponding 95% confidence intervals (CI) were determined for two-by-two comparisons. Statistical tests were carried out using R v.3.5.1.
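As an illustration of this analysis, the SciPy sketch below reproduces the overall case-control comparison using the carrier counts reported in the Results (18 BARD1 carriers among 4015 patients after excluding the BRCA2 double carrier, vs. 61 carriers among 51,202 gnomAD controls). The Wald interval shown here is an assumption, as the paper does not state its CI method, so the bounds differ slightly from the published 2.10-6.48:

```python
import numpy as np
from scipy.stats import fisher_exact

table = np.array([[18, 4015 - 18],      # carriers / non-carriers, patients
                  [61, 51202 - 61]])    # carriers / non-carriers, controls
odds_ratio, p_value = fisher_exact(table)  # two-sided Fisher exact test

# Wald 95% confidence interval on the log odds ratio (assumed method).
se = np.sqrt((1.0 / table).sum())
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)
print(f"OR = {odds_ratio:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), p = {p_value:.2e}")
```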
Results
In our study cohort of 4015 unrelated patients with hereditary breast and/or ovarian cancer, 476 PVs were identified by clinical gene panel analysis (Table 1), meaning that 11.86% of patients harbored PVs in high- to moderate-penetrance BC/OC-associated genes. In addition, with the aim of investigating the role of PVs in the BARD1 gene, we performed an exhaustive analysis of truncating variants, splicing variants and CNVs in this gene. Nineteen patients carried heterozygous germline PVs in BARD1, resulting in a carrier frequency of 0.47%. Among them, one patient additionally carried a PV in the HBOC-predisposing gene BRCA2 (patient 10; BRCA2 c.3264dupT; p.(Gln1089Serfs*10)) (Table 2). The remaining 18 BARD1-mutated index patients tested negative for PVs in other BC/OC genes (for more details of the genes analyzed according to phenotype, refer to Feliubadaló et al., 2019 [24]). Thus, after excluding carriers of other HBOC PVs, the global BARD1 carrier frequency throughout our cohort of patients was 0.45%. The percentage of deleterious BARD1 variants in the subset of patients with hereditary breast cancer (HBC) was 0.50%, 0.42% in hereditary ovarian cancer (HOC) cases and 0.33% in patients with HBOC (Table 1). No BARD1 PVs were identified in our set of 194 cancer-free individuals. In order to increase the control cohort, loss-of-function BARD1 variants were screened in the non-Finnish European gnomAD 2.1.1 (non-cancer) population, identifying a total of 61 heterozygous carriers out of 51,202 individuals (0.12%). The comparison of carrier frequencies between the patient and control cohorts revealed an overall significant association of BARD1 PVs (OR = 3.78; CI = 2.10-6.48; p = 1.16 × 10−5). This association was also significant in the HBC group (OR = 4.18; CI = 2.10-7.70; p = 5.45 × 10−5). Moreover, deleterious BARD1 variants showed an increased risk in the HOC and HBOC groups, although the differences did not reach statistical significance (OR = 3.53, CI = 0.71-10.86, p = 0.06 and OR = 2.77, CI = 0.33-10.47, p = 0.17, respectively) (Table 1).
The clinical phenotype of BARD1-mutated patients is depicted in Table 2. Sixteen developed BC at a median age of 41 years, younger than the general population (median age at diagnosis 62 years in females, according to NCI's SEER 21 2013-2017 Program). Of these, 10 were diagnosed with at least one TNBC. We compared the prevalence of deleterious BARD1 variants between women diagnosed with TNBC and other BC subtypes and found significant differences according to the triple-negative status of carriers: deleterious BARD1 variants were enriched in HBC families in which the index case developed TNBC (OR = 5.40; CI = 1.77-18.15; p = 0.001) (Table 3). Regarding OC cases, three patients were diagnosed at a median age of 62 years (range 59-62); two were diagnosed with high-grade ovarian serous carcinoma (HGOSC) and one with endometrioid carcinoma (EC). Two recurrent variants were identified in our set of samples. BARD1 c.1921C > T; p.(Arg641*) was found in eight unrelated patients, thus representing the most frequent variant in our cohort. In addition, two unrelated patients harbored the BARD1 c.157del; p.(Cys53Valfs*5) variant. The nine remaining variants were identified in one index case each (Figure 1). It is worth mentioning that we performed RT-PCR analysis of the splicing variant c.1314+1G > A, which causes skipping of exons 3 and 4 (r.216_1314del; p.(Ser72Argfs*37)) (data not shown). Interestingly, we identified two copy number variants (CNVs) (Table 2). One consisted of the deletion of exons 7 and 8, which was experimentally validated by RT-PCR analysis in the proband's cDNA (data not shown). This variant causes an out-of-frame deletion predicted to generate a truncated protein. The other CNV involved the loss of exons 7 to 11 and was validated using an MLPA custom probe. This deletion would presumably result in a BARD1 protein lacking both BRCT domains and the C-terminal region of the Ank domain. The screening of CNVs in the Genome Aggregation Database (gnomAD) structural variants (SVs) dataset did not identify any CNV in the control population. Regarding co-segregation and LOH studies, in a previous publication by our group we reported the results of the co-segregation analysis of family 16 [29]: the proband's mother, diagnosed with BC, as well as the sister and the maternal cousin, both affected by BC, carried the same BARD1 variant; the variant was also found in the proband's 39-year-old daughter, although she was asymptomatic. In the rest of the families, the co-segregation study was scarcely informative. In family 1, the proband's cousin was diagnosed with peritoneal carcinoma (PC) at age 73 and harbored the same BARD1 PV. In families 4 and 15, the probands inherited the BARD1 PV from their respective mothers, also affected by BC. However, in families 13 and 14, the probands inherited the BARD1 PV from asymptomatic mothers. LOH analysis could only be performed in a tumor sample from the proband in family 14, but there was no evidence of LOH.
Discussion
In the present study, we performed a comprehensive analysis of the BARD1 gene in a cohort of 4015 hereditary BC/OC patients. The screening for germline PVs showed that BARD1 heterozygous carriers have an overall increased risk (OR = 3.78; CI = 2.10-6.48; p = 1.16 × 10−5). When stratified by clinical suspicion, the estimated risk for HBC patients resulted in a significant OR = 4.18 (CI = 2.1-7.7; p = 5.45 × 10−5). These results are comparable to those previously reported by several case-control studies. Among the largest analyses to date is that of Couch et al. [11], performed in cohorts of 2134 and 4469 familial BC patients. Besides, a recent meta-analysis by Suszynska and Kozlowski collected data from a total of 123 published studies and consistently reported an OR = 2.90 (CI = 2.25-3.75; p < 0.0001) over a cumulative cohort of ~48,700 BC patients [30]. However, some studies failed to identify a significant association with BC risk, such as those published by Castéra et al. and Lu et al. [15,16].
An increase in the prevalence of PVs in BARD1 among TNBC patients has been repeatedly suggested [12,13,31,32]. In agreement with this hypothesis, we identified ten BARD1 PV carriers diagnosed with TNBC. A significant association with TNBC has also been reported elsewhere [14], whereas a surprisingly high OR = 11.27 (CI = 3.37-25.01) was reported by Castéra et al. [15]. Despite the reduced sample size of our subset of TNBC patients, our results support the enrichment of deleterious BARD1 variants in TNBC cases. Further studies in larger cohorts will be necessary to more precisely assess the BARD1-associated risk for this tumor phenotype.
Our results also showed a trend, although non-significant, for HOC patients (OR = 3.53). Previous studies focusing on BARD1 as an OC-predisposing gene have shown inconsistent results. Only Norquist et al. revealed a significant OR = 4.2 (CI = 1.4-12.5; p = 0.02) in 1915 OC cases [18], similar to that estimated in our set of samples. In contrast, the analysis of 3261 epithelial OC cases by Ramus et al. and 6294 OC cases by Lilyquist et al. resulted in non-significant associations of deleterious BARD1 variants with OC risk [17,19]. The meta-analysis by Suszynska and Kozlowski could not detect an association of BARD1 with OC risk in a cumulative set of ~20,800 OC cases either [30].
Unraveling the contribution of moderate-penetrance genes to HC predisposition is challenging, as the low incidence of PVs detected in these genes results in inaccurate estimates of their associated risks. Given the limited number of carriers identified, increasing the study size is mandatory to improve statistical power. Moreover, case-control studies usually rely on controls from publicly available databases to reach statistical power instead of using geographically matched controls (GMCs), potentially causing an overestimation of the calculated ORs [9]. Multi-centric international studies could reduce this heterogeneity by defining common inclusion criteria for patients and harmonizing the methodological features. It is also very likely that the true prevalence of BARD1 PVs has been underestimated: in the absence of functional assays, we have not considered missense, synonymous and intronic variants in the risk calculations, as we cannot be certain of their pathogenicity.
It is worth emphasizing that we performed a screening of CNVs in our cohort of HC patients, resulting in the identification of two large deletions (exons 7 to 8 and exons 7 to 11), which account for 10.5% of the BARD1 PVs. To our knowledge, only a small fraction of published studies have performed this analysis, and only seven CNVs have been identified so far: an exon 1 deletion [33], an exon 2 deletion [34], an exon 1 to 6 deletion [35], an exon 5 to 7 deletion [36], an exon 8 to 11 deletion [37] and two whole-gene deletions [37,38]. While no CNVs were identified in the gnomAD SV control population dataset, analysis of BARD1 CNVs in HC cohorts is strongly recommended, considering the significant contribution of this kind of variant in our series.
BARD1 has been included in multi-gene panels since it was first regarded as a potential cancer-predisposing gene [39], despite the lack of robust risk estimates. The identification of BARD1 PV carriers should be interpreted with caution, as inherited PVs in moderate- to low-penetrance genes may not necessarily be responsible for all the cancer diagnoses in a family. Nevertheless, although the clinical evidence available to date is still insufficient to impact risk management, continued testing of BARD1 will ensure that carrier status is known once recommendations for BARD1 PV carriers become available.
Taken together, our results confirm BARD1 as a BC susceptibility gene and highlight a stronger association with triple-negative tumors. Future studies aimed at screening larger cohorts and refining the classification of BARD1 variants will help to elucidate its role as a breast and/or ovarian cancer gene as well as define medical recommendations for BARD1 PV carriers.
"Medicine",
"Biology"
] |
Modelling of the cathodic and anodic photocurrents from Rhodobacter sphaeroides reaction centres immobilized on titanium dioxide
As one of a number of new technologies for the harnessing of solar energy, there is interest in the development of photoelectrochemical cells based on reaction centres (RCs) from photosynthetic organisms such as the bacterium Rhodobacter (Rba.) sphaeroides. The cell architecture explored in this report is similar to that of a dye-sensitized solar cell but with delivery of electrons to a mesoporous layer of TiO2 by natural pigment-protein complexes rather than an artificial dye. Rba. sphaeroides RCs were bound to the deposited TiO2 via an engineered extramembrane peptide tag. Using TMPD (N,N,N′,N′-tetramethyl-p-phenylenediamine) as an electrolyte, these biohybrid photoactive electrodes produced an output that was the net product of cathodic and anodic photocurrents. To explain the observed photocurrents, a kinetic model is proposed that includes (1) an anodic current attributed to injection of electrons from the triplet state of the RC primary electron donor (PT) to the TiO2 conduction band, (2) a cathodic current attributed to reduction of the photooxidized RC primary electron donor (P+) by surface states of the TiO2 and (3) transient cathodic and anodic current spikes due to oxidation/reduction of TMPD/TMPD+ at the conductive glass (FTO) substrate. This model explains the origin of the photocurrent spikes that appear in this system after turning illumination on or off, the reason for the appearance of net positive or negative stable photocurrents depending on experimental conditions, and the overall efficiency of the constructed cell. The model may be used as a guide for improvement of the photocurrent efficiency of the presented system as well as, after appropriate adjustments, other biohybrid photoelectrodes. Electronic supplementary material The online version of this article (10.1007/s11120-018-0550-8) contains supplementary material, which is available to authorized users.
SECTION 4 -ABSORBANCE PROPERTIES OF BOUND RCS
Binding of RCs to the surface of deposited TiO2 pastes caused some changes to the native absorbance spectrum of the bacteriochlorin cofactors (Figure S4A). The band around 865 nm attributable to the primary donor BChls was missing, an often-observed effect that can be attributed to oxidation of P in the air-dried sample (Moss et al. 1991). In addition, the 800-nm band attributable to the two accessory BChls was reduced in intensity relative to the 760-nm band attributable to the two RC BPhes. There are two possible reasons for such a change. The first is damage to the RC protein such that a BChl detaches from its native binding pocket; such a change would be expected to cause a reduction in the absorbance bands at 865 and/or 800 nm and an increase in absorbance around 760 nm due to the appearance of "free" BChl. The second is pheophytinization of a proportion of the RC BChls, such that their central Mg2+ metal is replaced by two protons while the pigment is retained in its binding pocket(s) in the protein scaffold. To investigate this change, bacteriochlorin pigments were extracted using methanol from RCs in solution and immobilized on TiO2. For extraction of pigments with methanol (Avantor), a 2 µL aliquot of RC stock solution or a RC-coated TiO2 slide was immersed in 500 µL methanol and vigorously mixed for 1 min. The resulting solution was centrifuged at 12,100 g and the absorbance spectrum of the supernatant recorded using a Hitachi U-2800A spectrophotometer. The absorbance spectrum of pigments extracted from RCs immobilized on TiO2 showed a blue-shift of the longest-wavelength absorbance band relative to that of pigments extracted from RCs in solution (Figure 2B). This spectral change is characteristic of a higher amount of BPhe relative to BChl, as the absorption maximum of BPhe in methanol is blue-shifted relative to that of BChl. This indicated that pheophytinization, and not dissociation of BChl, was responsible for the spectral changes shown in Figure 2A. In addition, a band appeared at 680 nm in the spectrum of the TiO2 RC extract that was probably attributable to a small amount of BChl decomposition products other than BPhe, such as 3-acetyl chlorophyll (Clayton 1966; Makhneva et al. 2016). With the assumption that a BPhe in a binding pocket normally occupied by a monomeric BChl has the same absorption spectrum as a BPhe in its native binding pocket, the spectrum of RCs deposited on TiO2 in Figure 2A was normalized to the same total number of four BPhe and monomeric BChl molecules as for native RCs in solution.
This approach allowed estimation of a ratio of BPhe:monomeric-BChl of 2.8:1.2 in the TiO2-bound RCs, which means that an average of 0.8 monomeric BChls per RC had undergone pheophytinization (see next section for a full account of the derivation of these values and the normalization method).
Further evidence that the spectral change undergone by RCs on binding to the TiO2 electrode was due to in situ pheophytinization came from the similarity of the ratio of the amplitudes of the 760 nm and 800 nm bands between the IPCE action spectrum for an I-50 electrode and the absorbance spectrum of the electrode. As free BChl pigments released from their binding pockets due to photodamage would not be expected to contribute to a cathodic photocurrent (Tsui et al. 2014), this matching supports the conclusion that RCs on the electrode surface had undergone some conversion of monomeric BChls to BPhe, producing the observed absorbance change. Furthermore, it suggested that the BChl being pheophytinized was BB (see inset in Figure 1), as it does not take part in the electron transfer process (Kamran et al. 2015); pheophytinization of BA, in contrast, would be expected to suppress electron transfer in the RC. The vulnerability of BB to pheophytinization could be related to the fact that it takes part in photoprotection against triplet states, putting it at greater risk of damage (Frank et al. 1996). A final point to note is that the IPCE action spectrum had a band at 865 nm attributable to the P BChls, supporting the conclusion that this band is bleached due to P oxidation when RCs adhere to the TiO2 electrode in air, but that this bleaching is reversible after immersion in a solution of suitable redox potential.
SECTION 5 -ANALYSIS OF PHEOPHYTINIZATION
For normalization of absorption spectra of RCs bound to the surface of TiO2, the following assumptions were made: (1) absorbance at 800 nm comes only from the accessory BChls, while that at 760 nm comes only from BPhe; (2) the only change in the spectrum comes from transformation of BChl into BPhe; (3) the extinction coefficient of BPhe is the same for both natural positions of BPhe in the RC (i.e. HA and HB) and for BPhe at a location normally occupied by a monomeric BChl (i.e. BPhe formed after pheophytinization of BA or BB). These assumptions lead to a set of equations (S1)-(S4) relating the measured absorbances, in which $A_{\mathrm{sol}}(xxx)$ is the absorbance at xxx nm for RCs in solution, $A_{\mathrm{norm}}(xxx)$ is the absorbance at xxx nm for RCs on TiO2 after normalization, $A_{\mathrm{meas}}(xxx)$ is the absorbance at xxx nm for RCs on TiO2 directly from measurements, $\varepsilon_{yyy}$ is a dimensionless quantity proportional to the extinction coefficient for species yyy, and $r$ is the experimental ratio of measured absorbances. Equations (S3) and (S4) take into account that there are normally two BPhes and two accessory BChls per RC molecule.
Solving equations S1 to S4 yields the normalization value at the maximum around 760 nm, to which the absorbance spectrum of RCs on TiO2 presented in Figure 2 was normalized. The normalized absorption spectrum was then used to calculate the average numbers (n) of BPhes and accessory BChls per RC molecule.
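The solved relations can be restated as follows; this is a reconstruction from assumptions (1)-(3) using the symbols defined above, not a verbatim copy of equations S1-S7. For native RCs in solution, with two BPhes and two accessory BChls per RC,

$$A_{\mathrm{sol}}(760) = 2\,\varepsilon_{\mathrm{BPhe}}, \qquad A_{\mathrm{sol}}(800) = 2\,\varepsilon_{\mathrm{BChl}},$$

while for RCs on TiO2, imposing $n_{\mathrm{BPhe}} + n_{\mathrm{BChl}} = 4$ after normalization,

$$\frac{A_{\mathrm{meas}}(800)}{A_{\mathrm{meas}}(760)} = \frac{n_{\mathrm{BChl}}\,\varepsilon_{\mathrm{BChl}}}{n_{\mathrm{BPhe}}\,\varepsilon_{\mathrm{BPhe}}} \quad\Longrightarrow\quad \frac{n_{\mathrm{BChl}}}{n_{\mathrm{BPhe}}} = \frac{A_{\mathrm{meas}}(800)}{A_{\mathrm{meas}}(760)} \cdot \frac{A_{\mathrm{sol}}(760)}{A_{\mathrm{sol}}(800)} =: r,$$

so that

$$n_{\mathrm{BPhe}} = \frac{4}{1+r}, \qquad n_{\mathrm{BChl}} = \frac{4r}{1+r}.$$

With the reported BPhe:monomeric-BChl ratio of 2.8:1.2, this corresponds to $r \approx 0.43$.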
SECTION 6 -ELECTROCHEMICAL PROPERTIES OF THE ELECTROLYTE
The supporting electrolyte was 20 mM Tris-HCl (pH 8.0). TMPD, which has previously been used as a component of solar cells based on Rba. sphaeroides RCs (Tan et al. 2012; Ravi et al. 2017), has two steps of oxidation (Figure S5). However, only the first, occurring at a formal potential of +260 mV vs SHE, is of use, because the doubly oxidized form, present at potentials over 700 mV vs SHE, undergoes decomposition with displacement of dimethylamine (Brownson and Banks 2014). The contribution of particular redox states to the electroactive species can be calculated by analysis of the values of the stable currents in cyclic voltammetry (CV) scans on the right and left sides of the formal redox potential (Figure S5), as these currents depend on the bulk concentrations of either the reduced or the oxidized species (assuming similar diffusion coefficients for the reduced and oxidized forms) (Zoski 2007).
In the 1.2 mM solution, the TMPD (neutral) and TMPD+ (monocationic) forms dominated in a ~1:1 ratio (see lengths of the A and B line segments in Figure S5 and the next section for the derivation of the method; the diffusion coefficients of TMPD and TMPD+ do not differ by more than 15%) (Wang et al. 1997). The OCP of a freshly prepared solution of TMPD oscillated around +225 mV vs SHE, and so this potential was applied in all subsequent photocurrent measurements in order to minimize the dark current. It is also visible as the potential near which the CV curve in Figure S5 crosses the zero-current line. Figure S5. Electrochemical properties of the electrolyte. Cyclic voltammogram of 1.2 mM TMPD in 20 mM Tris-HCl (pH 8.0) on a 25 µm platinum disc microelectrode at a scan rate of 100 mV s−1. The structures of the three redox forms of TMPD (neutral, mono- and bicationic) are presented. In this and similar solutions used for photocurrent measurements, the bulk concentrations of the neutral and monocationic forms are similar, as estimated from the similar positive and negative currents at ~+480 mV (B) and ~+110 mV (A) potentials (vs SHE), respectively (see text for details). The two long vertical lines indicate the formal redox potentials of the neutral/monocationic and monocationic/bicationic TMPD pairs at +260 mV and +700 mV, respectively.
SECTION 7 -DETERMINATION OF THE TMPD/TMPD + RATIO
The steady-state current in cyclic voltammetry measurements on a disc microelectrode is given by (Bard and Faulkner 2001):

$$i_{ss} = 4\,n\,F\,D\,C^{*}\,r_{e} \qquad \mathrm{(S8)}$$

where $n$ is the number of electrons transferred within the reaction, $F$ is Faraday's constant, $D$ is the diffusion coefficient of the reacting species (either the oxidized or reduced form), $C^{*}$ is the bulk concentration of the reacting species, and $r_{e}$ is the radius of the disc microelectrode.
This equation is valid only for a microelectrode, which probes the bulk concentration of species. This is because the low current flow does not appreciably perturb the local concentration of species, and diffusion around a microelectrode is closer to spherical than at a macroelectrode. Spherical diffusion ensures efficient mass transport of species from the bulk volume to the electrode surface (Bard and Faulkner 2001).
Using values for the cathodic and anodic steady-state currents (lines A and B in Figure S5), one can determine the concentration ratio of the oxidized and reduced forms of the redox mediator from:

$$\frac{C^{*}_{ox}}{C^{*}_{red}} = \frac{i_{cath}}{i_{anod}} \cdot \frac{D_{red}}{D_{ox}} \qquad \mathrm{(S9)}$$
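A small numerical sketch of this estimate follows, with hypothetical plateau currents standing in for the values read off Figure S5 and an assumed literature-style diffusion coefficient.

```python
# Estimate the TMPD+/TMPD bulk concentration ratio from the anodic and
# cathodic steady-state currents of a disc microelectrode (i_ss = 4nFDC*r).
# The current and diffusion-coefficient values are placeholders.
F = 96485.0          # C/mol, Faraday constant
n = 1                # electrons in the TMPD <-> TMPD+ couple
r_e = 12.5e-6        # m, disc radius (25 um diameter Pt microelectrode)
D_red = 6.3e-10      # m^2/s, TMPD diffusion coefficient (assumed)
D_ox = D_red / 1.15  # TMPD+ differs by <15% (Wang et al. 1997)

i_anodic = 1.0e-9    # A, plateau at ~+480 mV (TMPD oxidation), hypothetical
i_cathodic = 0.95e-9 # A, plateau at ~+110 mV (TMPD+ reduction), hypothetical

c_red = i_anodic / (4 * n * F * D_red * r_e)   # bulk [TMPD],  mol/m^3
c_ox = i_cathodic / (4 * n * F * D_ox * r_e)   # bulk [TMPD+], mol/m^3
print(f"[TMPD+]/[TMPD] ~ {c_ox / c_red:.2f}")  # ~1 for a fresh 1.2 mM solution
```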
SECTION 8 -DESCRIPTION OF KINETIC MODEL
Physical details

Figure S6 shows the processes and associated rate constants included in the model used to simulate experimental photocurrent data. For simplicity, only the RC states involving the primary electron donor P and terminal quinone acceptor Q are considered, owing to the much shorter lifetimes of other RC states (Blankenship et al. 1995). It is assumed that only a fraction of RCs are fully functional, meaning that they can efficiently conduct electron transfer between TiO2 and the mediator. The remaining fraction can absorb light but undergoes wasteful charge recombination and dissipates energy without contributing to the photocurrent, and therefore represents parasitic absorption. The number of photons absorbed per second per unit area is calculated from the absorbance of the whole system (see Equations S19 and S20). The triplet state PT can be formed only in RCs in a closed state (PQ−), as a result of P+HA− → PTHA charge recombination, with a quantum yield Φ that is used as a parameter.
Diffusion of TMPD is, for simplicity, simulated as the exchange of the reduced and oxidized forms of the mediator near the working electrode (TMPD, TMPD+) with the bulk volume (TMPDbulk, TMPD+bulk), characterized by a single exchange rate constant. Thus, the flux of diffusion is proportional to the concentration difference between the bulk and the region in immediate proximity to the working electrode (see Equation S17). In general this does not have to be strictly correct, especially while the concentrations of species are changing rapidly, and it is a possible target for future improvement of the model. Figure S6. Schematic of the processes included in the kinetic model. The colors of the arrows correspond to those indicating processes in Figure 5. Six RC states are considered: PQ, P+QA−, P+QA, PQA−, PTQA−, and PTQA. Four of these states may exchange electrons with TMPD/TMPD+: P+QA−, P+QA, PQA−, PTQA−. Two of the states may inject an electron into TiO2: PTQA− and PTQA. Two of the states may accept an electron from TiO2: P+QA− and P+QA. Two of the states may be excited by light: PQA and PQA−.
Mathematical model
The differential equations (S10-S20) are the mathematical expression of the model presented in Figure S6. Most of the symbols are self-explanatory or are presented in Figure S6 (those beginning with "n" denote RCs inactive in photocurrent generation); [x] denotes the concentration of state/species x. The resulting current was calculated using Equation S30.
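To make the structure of such a scheme concrete, the following is a deliberately reduced three-state sketch, not the six-state model of Figure S6. All rate constants are placeholders rather than the fitted values of Table 1, and the lumped return paths (e.g., P+ reduction regenerating the ground state directly) compress several steps of the real scheme; what it preserves is how light-driven excitation, triplet formation with yield Φ, anodic injection and cathodic reduction combine into a net current.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder rate constants -- illustrative only, not the fitted Table 1 values.
k_exc = 5.0    # 1/s  effective excitation rate while the light is on
k_rec = 50.0   # 1/s  decay of the charge-separated state
phi_T = 0.3    #      triplet yield of the recombination branch (parameter)
k_inj = 2.0    # 1/s  PT -> TiO2 conduction band (anodic branch)
k_red = 0.5    # 1/s  TiO2 surface states -> P+ (cathodic branch, lumped)
k_T   = 1.0    # 1/s  non-productive triplet decay

def rhs(t, y, light_off):
    """Three lumped RC populations: ground (G), charge-separated (CS), triplet (T)."""
    G, CS, T = y
    exc = k_exc * G if t < light_off else 0.0   # light-driven excitation
    dG  = -exc + (1.0 - phi_T) * k_rec * CS + k_red * CS + (k_inj + k_T) * T
    dCS = exc - (k_rec + k_red) * CS
    dT  = phi_T * k_rec * CS - (k_inj + k_T) * T
    return [dG, dCS, dT]

# Light on for the first 5 s of a 10 s trace; all RCs start in the ground state.
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], args=(5.0,),
                dense_output=True, max_step=0.01)
t = np.linspace(0.0, 10.0, 1000)
G, CS, T = sol.sol(t)

# Net current: anodic injection counts positive, cathodic reduction negative (cf. S30).
net_current = k_inj * T - k_red * CS
print(f"net current at t = 4 s: {net_current[np.argmin(np.abs(t - 4.0))]:+.4f} (arb. units)")
```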
Model Parameters
Simulations of photocurrent transients for electrodes both treated and not treated with TiCl4 were performed with the parameters presented in S31. The extinction coefficient is taken from the literature. The thickness of the TiO2 layer is the value typical for DSSCs (Ito et al. 2007). Recombination rate constants are taken from the literature (Blankenship et al. 1995; Frank et al. 1996). The values of the two rate constants for electron exchange between the RC and the mediator were taken from the literature for freely diffusing RCs with TMPD (Agalidis and Velthuys 1986). The concentration of RCs was calculated using the Beer-Lambert law and the value of light absorption at the Qy maximum. The concentration of TMPD and the light intensity were as used in experiments. Two sets of values were used for the remaining parameters, according to two models. The "inactive pool" (IP) model assumed that 90% of RCs dissipate energy quickly and do not contribute to the photocurrent. The "RC-mediator interface limited" (RMIL) model assumed that 100% of RCs undergo charge separation and do not dissipate energy, but that the two RC-mediator exchange rate constants differ from those available in the literature owing to immobilization of RCs on the TiO2 surface, which hinders access of the mediator to reduced and oxidized cofactors within the protein. In the case of this second model, these rate constants were optimized to obtain the best fit to the photocurrent traces. The values for these parameters are shown in Table 1 in the main text.
Time-dependence of species concentrations
The time-dependence of the concentrations of all species is shown in Figures S9 and S10. Interpretation of these plots is presented in the main text.
"Environmental Science",
"Chemistry",
"Engineering",
"Materials Science"
] |
Isolation and Structure Characterization of an Antioxidative Glycopeptide from Mycelial Culture Broth of a Medicinal Fungus
A novel glycopeptide (Cs-GP1) with an average molecular weight (Mw) of 6.0 kDa was isolated and purified by column chromatography from the lower Mw fraction of exopolysaccharide (EPS) produced by a medicinal fungus Cordyceps sinensis Cs-HK1. Its carbohydrate moiety was mainly composed of glucose and mannose at 3.2:1.0 mole ratio, indicating an O-linked glycopeptide. The peptide chain contained relatively high mole ratios of aspartic acid, glutamic acid and glycine (3.3–3.5 relative to arginine) but relatively low ratios of tyrosine and histidine. The peptide chain sequence analyzed after trypsin digestion by LC-MS was KNGIFQFGEDCAAGSISHELGGFREFREFLKQAGLE. Cs-GP1 exhibited remarkable antioxidant capacity with a Trolox equivalent antioxidant capacity of 1183.8 μmol/g and a ferric reducing ability of 611.1 μmol Fe(II)/g, and significant protective effect against H2O2-induced PC12 cell injury at a minimum dose of 10 μg/mL. This is the first report on the structure and bioactivity of an extracellular glycopeptide from the Cordyceps species.
Introduction
Polysaccharide and protein (PSP) complexes from edible and medicinal fungi have attracted increasing interest for their notable bioactivities such as immunomodulation, antitumor and antioxidant [1][2][3].
OPEN ACCESS
Cordyceps (Ophiocordyceps) sinensis, generally called the Chinese caterpillar fungus, is a well-known medicinal fungus in traditional Chinese medicine with a wide range of health promoting and therapeutic functions [4][5][6]. Because of the scarcity and high price of natural C. sinensis organisms, mycelial fermentation has become a favorable process for mass production of fungal biomass and polysaccharides. Although many previous studies on the bioactive molecules from C. sinensis fungi have focused on the polysaccharides either extracted from the fungal mycelia or isolated from the liquid medium, a few have attained proteins and peptides from C. sinensis or related species. Wong et al. [7] have recently isolated a peptide called Cordymin with a molecular weight of about 10 kDa from C. militaris fruit body with strong antifungal activity and antiproliferative activity. This peptide has also been isolated from the mycelia of a C. sinensis fungus and shown significant anti-inflammatory and antioxidant effects in animal models [8]. To the best of our knowledge, there is still no reported study on a bioactive peptide produced as an extracellular product by a Cordyceps fungus.
Cs-HK1 is a C. sinensis fungus isolated from natural fruiting body in our lab and has been applied to mycelial culture and liquid fermentation for production of exopolyssacharide (EPS) [9]. The crude EPS attained by ethanol precipitation from the Cs-HK1 fermentation medium had a protein content of up to 20%-25% (w/w), which was found to contribute more than the carbohydrate content did to the antioxidant effects of EPS [10]. In a recent study [11], the EPS from the Cs-HK1 fermentation medium was roughly fractionated into different ranges of molecular weight (Mw) by gradient ethanol precipitation. The lower Mw fraction attained at a higher ethanol volume ratio (2-5) had a higher protein content and stronger antioxidant activity. In a later study [12], the low Mw EPS was further fractionated into more pure fractions of PSPs, some of which showed notable antioxidant activities.
The present study was aimed at the purification and structural characterization of an antioxidative glycopeptide from the low Mw EPS fraction produced by the Cs-HK1 fungus in mycelial liquid culture, and evaluation of its antioxidant property through chemical and cell culture assays.
Isolation and Molecular Profiles of Cs-GP1 from EPS-2
EPS-2 was fractionated into five fractions (OF-I, II, III, IV and V) by gel filtration through the Superdex 75 column with RI detection. OF-IV and OF-V had much higher protein contents (30%-50%) than the other three fractions (6% or lower) [12]. Fraction OF-V, as well as fraction OF-IV (but not OF-I, II and III), also showed the protein absorption peak on UV at 280 nm. Because of its remarkable antioxidant activity, OF-V was further purified by ion exchange chromatography on the DEAE column (Figure 1a), yielding the glycopeptide fraction Cs-GP1. Cs-GP1 had a protein content of 52.3% (Table 1) and a carbohydrate content of 30% (determined by the phenol-sulfuric method, data not shown), and was recognized as a glycopeptide. The Cs-GP1 fraction showed a single peak on HPGPC (Figure 1b), which was calibrated to an average Mw of 6.0 kDa (Table 1). MALDI-TOF-MS analysis revealed a major peak at 6057 m/z and two small fragments at 4423 and 1634 m/z, which were probably derived from hydrolysis of the 6057 m/z species during ionization. The fragment with m/z at 1634 was most likely the glyco-chain, whereas that at 4423 was the peptide chain. These analytical results all indicated the molecular homogeneity of Cs-GP1, suitable for further structure analysis.
Sugar and Amino Acid Constituents of Cs-GP1
Monosaccharide analysis indicated that Cs-GP1 was composed mainly of glucose (Glc) and mannose (Man) at 3:1 mole ratio and a small proportion of GalN and Gal (Table 1) (Supplemental data: Figure 2, HPLC profile of Cs-GP1). Furthermore, the high contents of Glc and Man are indicative of an O-linked glyco-chain. As shown by the amino acid analysis (Table 2), Cs-GP1 had high mass contents of glutamic acid (Glu), aspartic acid (Asp), glycine (Gly) and cysteine (Cys) (76.6-40.6 μg/mg), but relatively low contents of threonine (Thr), tyrosine (Tyr) and histidine (His). Between 1800 and 400 cm−1 lie the characteristic bands of amino acids, i.e., the peak at 1650.9 cm−1 assigned to the amide I band from the peptide, the peak at 1560.7 cm−1 to the amide II vibration [13], and the peak at 1396.0 cm−1 to a high content of COO−, probably from Asp and Glu [14]. The peak at 1054 cm−1 is attributed to the C-O-C stretching vibration, and that at 846 cm−1 to the α conformation of the sugar units. The molecular structure and composition deduced from the IR spectrum are consistent with the results of the monosaccharide and amino acid analyses (Tables 1 and 2).
IR and NMR Spectra
As for the 1H NMR spectrum (Figure 2b), the peaks between 8.1 and 8.5 ppm are assigned, as reported previously [15,16], to the β-NH signals of amino acids, those between 6.5 and 7.5 ppm to α-NH signals, and those between 4.8 and 5.5 ppm to the anomeric signals of the sugar units. The peaks between 3.5 and 4.7 ppm are assigned to the C-H signals of both amino acids and sugar units, and those between 1.8-2.5 and 0.5-1.8 ppm to the γ and δ C-H signals of the amino acids. The relatively strong signals between 4.0-4.6 and 3.6-3.9 ppm may be attributed to the high contents of Gly, Ala and Asp; the peaks between 2.0 and 2.5 ppm may be attributed to the H signals of Glu and Asp. These results are consistent with the above amino acid composition (Table 2).
Amino Acid Sequence of Peptide Chain
After extensive in-gel trypsin digestion, Cs-GP1 was degraded into peptide fragments with m/z values ranging from 700-2800 (Figure 3a). Table 3 shows the de novo sequences of the peptide fragments detected by LC-ESI-MS-MS. The overall peptide sequence was derived from the overlapping sequences among the fragments as follows: GKNGIFQFGEDCAAGSLSEHLGGFREFREFLKAGNLE. The total mass (4102) was 321 m/z lower than the mass (4423) of the native Cs-GP1 derived from MALDI-TOF-MS, probably reflecting a small peptide residue retained on the glyco-chain after enzymatic digestion. Based on the results from the sequence analysis and the above monosaccharide analysis, we suggest that the glyco-chain was attached to serine (Ser) in the peptide chain by an O-linkage. The peptide chain sequence contained a relatively large number of Ala (3), Gly (6) and Glu (5) residues, which was consistent with the high contents found by the amino acid composition analysis. The peptide chain sequence was further confirmed by MALDI-TOF-MS-MS analysis of the main peptide fragment with m/z at 2470 (Figure 3b). The fragment sequence was identified as NGIFQFGEDCAAGSLSEHLGGFR, which matched closely with the peptide chain sequence derived from LC-ESI-MS-MS. Figure 4a shows the scavenging (or inhibiting) effect on ABTS•+ radicals and Figure 4b the ferric reducing power of Cs-GP1, both exhibiting a linear correlation with concentration. From these activity versus concentration curves, the following activity indexes were derived: IC50 of 35 μg/mL for inhibition of ABTS•+ radicals, a TEAC value of 1180 μmol Trolox/g, and a FRAP value of 610 μmol Fe(II)/g. The activity indexes for fraction OF-IV derived from these two assays were IC50 0.19 μg/mL on ABTS•+ radicals, 360 μmol Trolox/g and 43 μmol Fe(II)/g. In comparing these antioxidant activity indexes, Cs-GP1 had a much higher antioxidant capacity than OF-IV and the other three fractions (OF-I, II, III), which were composed mainly of carbohydrate, as reported previously [12]. The strong antioxidant capacity of Cs-GP1 was also confirmed by the cell culture test (Figure 4c), showing a dose-dependent protective effect against H2O2-induced cell viability loss of PC12 cells. The protective effect was statistically significant at p < 0.05 in the concentration range of 10-200 μg/mL, and at 200 μg/mL maintained a cell viability of 63% after exposure to H2O2.
Discussion
Glycosylation is one of the most common posttranslational modifications of proteins in eukaryotic organisms and has significant influence on protein folding and intracellular trafficking [17,18]. The oligosaccharide moieties of glycoproteins are covalently bonded to the proteins in N-or O-linked form. Glycoproteins are involved in many important cellular communication processes associated with cell adhesion, host-pathogen interaction, and immune responses [19][20][21][22][23]. Therefore, the isolation and characterization of homogeneous glycoproteins and glycopeptides are needed for investigation of the biological functions and the structure-activity relationships.
Cs-GP1 isolated and fractionated from the low Mw EPS of Cs-HK1 fungus has been identified as an O-linked glycopeptide with relatively high contents of Glc and Man in the oligosaccharide portion and its peptide portion contained high contents of Ala, Gly and Asp amino acids. Cs-GP1 showed strong antioxidant activity in both chemical and cell culture assays. There is ample literature on the strong antioxidant properties of naturally-occurring peptides or produced by hydrolysis of food proteins from plants and animals [3]. Most of the antioxidant food peptides are in the Mw range of 500-1800 Da. The bioactivities as well as the properties of peptides are dependent on the amino acid composition and sequence. As free amino acids are not active in general, the amino acid sequences are crucial for the antioxidative activity of peptides. Some previous studies have suggested that high contents of some amino acid species such as Asp, Gly and Ala were significant factors for several antioxidative peptides from soybeans [24] and jumbo squids [25]. However, no general rule of thumbs has been established for the active amino acid composition and sequences.
Materials
The Cs-HK1 fungus used in this study was previously isolated in our lab from the fruiting body of a wild C. sinensis organism [10]. Figure 5 illustrates the procedure for isolation and fractionation of EPS from the Cs-HK1 mycelial culture, and purification of the glycopeptide. The Cs-HK1 fungus was cultivated in 250 mL Erlenmeyer flasks, each containing 50 mL of a liquid medium, shaken constantly at 150 rpm and 20 °C for 7 days [10]. The mycelial broth was then centrifuged and the supernatant liquid medium was collected for isolation of EPS by ethanol precipitation. The ethanol precipitation was performed in two steps, using a 2-volume ratio of ethanol (96% grade) to the liquid medium in the first step to precipitate the high-Mw EPS, followed by another 3 volumes of ethanol in the second step to precipitate the remaining low-Mw EPS (EPS-2). The precipitate was washed with acetone, redissolved in water and lyophilized. EPS-2 (~0.3 g) was redissolved in 2 mL distilled water and loaded onto a Superdex 75 gel filtration column (2.6 × 60 cm), eluted with 0.3 M NH4HCO3 at a flow rate of 0.3 mL/min, and monitored by RI detection. The peak fractions collected were scanned by UV, and the fraction (OF-V) exhibiting an absorption peak was collected as the glycopeptide fraction for the following experiments.
Isolation and Purification of EPS from Cs-HK1 Mycelial Culture
The OF-V fraction was dialyzed against distilled water and lyophilized. It was (~100 mg) then fractionated by anion-exchange chromatography on a DEAE-cellulose column (2.6 × 40 cm), eluted with NaCl on a linear gradient from 0 to 1.0 M (in 0.1 M sodium acetate solution at pH 5.0) for 500 min at 1.0 mL/min, and monitored with UV at 280 nm. The peak fraction was collected and dialyzed against distilled water and lyophilized, yielding the final glycopeptide Cs-GP1.
Monosaccharide, Amino Acid and Protein Contents
Monosaccharide composition was analyzed by HPLC as described by Chen et al. [16]. In brief, Cs-GP1 (~2 mg) was hydrolyzed with 2 M TFA at 110 °C in nitrogen atmosphere for 8 h, with lactose added as an internal standard. The hydrolysate was dried under vacuum, and then derivatized with 450 μL 1-phenyl-3-methyl-5-pyrazolone (PMP) solution (0.5 M, in methanol) and 450 μL of 0.3 M NaOH at 70 °C for 30 min. The reaction was stopped by neutralization with 450 μL of 0.3 M HCl, followed with chloroform extraction (1 mL, three times). The extract solution was analyzed by HPLC on a Waters 2870 instrument with an Agilent ZORBAX Eclipse XDB-C18 column (5 μm, 4.6 × 150 mm) at 25 °C with UV detection at 250 nm. The mobile phase was composed of 0.05 M KH2PO4 (pH 6.9) with 15% acetonitrile (solvent A) and 40% acetonitrile (solvent B) in water on a gradient from 8%-19% B in 25 min.
The amino acid composition was analyzed after hydrolysis of Cs-GP1 (with 6 M HCl under reduced pressure at 110 °C for 24 h) using a Hitachi 835-50 Amino Acid Analyzer (Hitachi, Tokyo, Japan). The protein content was determined by Lowry method using bovine serum albumin as a standard [26].
Average Molecular Weight
The average Mw of Cs-GP1 was analyzed by high pressure gel permeation chromatography (HPGPC) on a Waters instrument (Waters 1515 isocratic pump + 2414 refractive index (RI) detector) and calibrated with dextran Mw standards, as reported previously [11,12]. The Mw distribution was also detected by SDS-PAGE (using 4% stacking gel and 12% separating gel) and staining with Coomassie Brilliant Blue, in comparison with protein Mw markers of 6-66 kDa. The Cs-GP1 sample was dissolved at 1 mg/mL in distilled water and added at 1:3 volume ratio into a buffer solution of 0.5% SDS with 1% β-mercaptoethanol, and then heated to boiling for 5 min. The gels were stained with Coomassie Brilliant Blue R-250 to visualize proteins.
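The calibration step can be illustrated with a short sketch: a linear fit of log10(Mw) of the dextran standards against elution time, followed by interpolation of the sample peak. The standard Mw values and elution times below are hypothetical placeholders, not the actual calibration data.

```python
# Semi-log HPGPC calibration: fit log10(Mw) vs elution time for dextran
# standards, then interpolate the sample peak. All values are hypothetical.
import numpy as np

std_mw = np.array([1e3, 5e3, 12e3, 50e3, 150e3])     # Da, dextran standards
std_time = np.array([34.1, 31.0, 29.2, 26.3, 24.0])  # min, peak elution times

slope, intercept = np.polyfit(std_time, np.log10(std_mw), 1)

sample_peak_time = 30.5                              # min, sample peak (hypothetical)
mw = 10 ** (slope * sample_peak_time + intercept)
print(f"estimated average Mw ~ {mw / 1e3:.1f} kDa")  # ~6 kDa with these numbers
```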
The molecular weight was analyzed more accurately by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) on an ultrafleXtreme (Bruker, Germany), using CHCA as the matrix. The Cs-GP1 sample solution was mixed with 1 volume of matrix solution (20 mg/mL of CHCA in acetonitrile/water, 50:50, v/v) to a final concentration of 50 μg/mL. Finally, 0.5 μL of the mixture was deposited onto the MALDI target plate. All spectra were the results of signal averaging of 200 shots. The instrument was operated at laser energy 20% (coarse) and 60% (fine), and resolution 1000.
NMR and IR Spectroscopy
NMR and IR spectroscopy were performed with the same procedures and instruments as described previously [12,15]. In brief, 1 H NMR was performed at 600 MHz and 13 C NMR at 150 MHz at room temperature on a Bruker AVANCE III 600 spectrometer with Topspin 3.0 software for data processing. The Cs-GP1 sample (~30 mg) was lyophilized with 500 μL D2O (99.8%) twice and then dissolved in 500 μL high quality D2O (99.96%) containing 0.1 μL acetone as an internal standard for the 1 H chemical shifts. Infrared (IR) spectrum was recorded on a Perkin-Elmer 1600 instrument at room temperature in wave number range of 4000-400 cm −1 .
In-Gel Digestion
SDS-PAGE was performed as described in Section 3.3.2. The gel was washed with Milli-Q water and then stained with 50 mL of Coomassie staining solution containing 45% (v/v) methanol, 10% (v/v) acetic acid, and 0.15% (w/v) Coomassie Brilliant Blue R350 for 1 h, then finally de-stained for 1-1.5 h using 100 mL of destaining solution containing 40% (v/v) methanol and 10% (v/v) acetic acid in Milli-Q water. The Coomassie-stained gel was sliced into four pieces of gel matrix and then cut into approximately 1 mm cubes using a razor blade on a clean glass surface. The cubes were transferred into Eppendorf tubes for in-gel trypsin digestion. The in-gel trypsin digestion and the extraction of the tryptic fragments were performed according to well-established protocols [27]. In brief, acetonitrile (ACN) (25-35 μL) was added to each tube to dehydrate and shrink the gel pieces. After drying with Speed-Vac, the gel pieces were incubated in 25-35 μL of digestion buffer (0.1 μg/μL sequencing grade modified trypsin in 50 mM NH4HCO3) for 45 min in an ice water bath. The mixture was centrifuged at 10,000× g and the supernatant extract was collected. The remaining gel pieces were incubated in 20 μL of 20 mM NH4HCO3 for 10 min and the supernatant was collected and combined with the previous extract. The remaining gel pieces were extracted with 20 μL extraction buffer (50% ACN, 5% formic acid) for 20 min and the extract was combined with the previous ones. Finally, the combined extract was dried by Speed-Vac and stored at −80 °C until MS analysis. The tryptic digestion product of Cs-GP1 was subjected to MALDI-TOF analysis as described in Section 2.3.2 to monitor the degree of degradation, and then applied to LC-MS/MS and MALDI-TOF-MS-MS for analysis of the peptide sequences.
Mass Spectrometry
For the LC-MS/MS analysis, the digested product samples were desalted using the ZipTip-C18 (Millipore) treatment. The samples were then loaded onto an analytical column (15 cm × 75 μm i.d.; Acclaim@PepMap100 C18, Dionex, Sunnyvale, CA, USA). The nano-flow was eluted at a flow rate of 300 nL/min with solvent A (2% ACN with 0.1% formic acid) and solvent B (95% ACN with 0.1% formic acid). LC analysis was performed on a 40 min staged gradient elution program: 0-4 min (5% B), 4-40 min (5%-35% B). The column outlet was coupled directly to a high voltage ESI source, which was interfaced to a Shimadzu UFLC-LTQ-Orbitrap HCD, operated at 1.7 kV spray voltage in the nES-LC-MS/MS mode over an m/z range of 200-2500.
The peptide de novo sequences were derived by matching the acquired data against the National Center for Biotechnology Information (NCBI) non-redundant protein database (fungi) using the MASCOT software package (Version 2.3, Matrix Science, London, UK). The peptide mass and MS/MS tolerances were both 0.2 Da. Peptides were allowed one missed tryptic cleavage, one fixed modification (carbamidomethylation of Cys) and one variable modification (oxidation).
MALDI-TOF-MS-MS analysis was performed as described in Section 2.3.2. The trypsin digested peptide was subjected to MS/MS analysis and the results were used to confirm the data from LC-ES-MS-MS.
Antioxidant Activity Assays
The antioxidant activities of Cs-GP1, as well as of the other EPS fractions attained from EPS-2, were evaluated using two chemical assays, the Trolox equivalent antioxidant capacity (TEAC) and the ferric reducing ability of plasma (FRAP) assay, and a cyto-protection test using H2O2-induced cell injury, as described previously [11]. In brief, the TEAC assay measures the ability of a compound to eliminate or scavenge ABTS•+ radicals using Trolox as a response reference [28]. The EPS sample solution in water was reacted with the ABTS•+ solution for 20 min at room temperature, followed by measurement of the absorbance at 734 nm. The scavenging activity of a sample was correlated with the absorbance decrease, and converted to a TEAC value in μmol Trolox/g sample by calibration with Trolox from 0-30 μM. The FRAP assay was performed according to Benzie and Strain [29]. The FRAP reagent was reacted with the EPS sample for 15 min at room temperature, followed by measurement of the absorbance at 593 nm. The reducing power of a sample was correlated with the absorbance increase, and converted to a FRAP activity (μmol Fe(II)/g sample) by calibration with ferrous sulfate from 0-30 μM.
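As a concrete illustration of the calibration arithmetic, the sketch below converts a hypothetical absorbance decrease into a TEAC value; all readings are invented for illustration, and the FRAP index is obtained analogously from a FeSO4 calibration at 593 nm.

```python
# Convert a raw ABTS assay reading into a TEAC index via a linear Trolox
# calibration. All absorbances and the sample concentration are hypothetical.
import numpy as np

trolox_uM = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
a734 = np.array([0.70, 0.62, 0.54, 0.38, 0.22])   # hypothetical readings
delta_a = a734[0] - a734                           # scavenging lowers A734

slope, intercept = np.polyfit(trolox_uM, delta_a, 1)

sample_delta_a = 0.19          # hypothetical absorbance decrease for the sample
sample_g_per_L = 0.010         # 10 ug/mL sample in the reaction mixture
trolox_equiv_uM = (sample_delta_a - intercept) / slope
teac = trolox_equiv_uM / sample_g_per_L            # umol Trolox per g sample
print(f"TEAC ~ {teac:.0f} umol Trolox/g")          # ~1.2e3 with these made-up numbers
```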
The cyto-protective activity of EPS fractions against oxidative cell damage was tested in rat pheochromocytoma PC12 cell culture subjected to H2O2 treatment [11,12]. The EPS samples were pre-dissolved in phosphate buffered saline (PBS) at 10 mg/L. The PC12 cell culture was maintained in RPMI 1640 medium supplemented with 10% fetal bovine serum in a CO2 incubator at 37 °C. The activity test was performed on a 96-well plate by treating the cells with 80 µM H2O2 and EPS sample solution at selected concentrations (0.001-200 μg/mL). The cell viability was measured by the MTT assay and expressed as a percentage relative to the native culture (i.e., culture without any treatment).
Conclusions
An antioxidative glycopeptide, Cs-GP1, has been isolated from the low Mw fraction of EPS produced and released by the Cs-HK1 fungus into the liquid medium. Its molecular composition and structure have been partially characterized, including the monosaccharide and amino acid compositions and the peptide chain sequence, through hydrolysis and analytical experiments, though its glyco-chain structure remains unknown. Another distinct feature of Cs-GP1 is its presence as an extracellular product, which is favorable for mass production by liquid fermentation and efficient recovery from the liquid medium. In other words, the present study has demonstrated the application of a medicinal fungus as the source or producer of novel bioactive glycopeptides.
"Biology",
"Chemistry",
"Environmental Science",
"Medicine"
] |
A SEMIMARTINGALE CHARACTERIZATION OF AVERAGE OPTIMAL STATIONARY POLICIES FOR MARKOV DECISION PROCESSES
This paper deals with discrete-time Markov decision processes with Borel state and action spaces. The criterion to be minimized is the average expected costs, and the costs may have neither upper nor lower bounds. In our former paper (to appear in Journal of Applied Probability), weaker conditions are proposed to ensure the existence of average optimal stationary policies. In this paper, we further study some properties of optimal policies. Under these weaker conditions, we not only obtain two necessary and sufficient conditions for optimal policies, but also give a “semimartingale characterization” of an average optimal stationary policy.
Introduction
The long-run average expected cost criterion in discrete-time Markov decision processes has been widely studied in the literature; for instance, see [3,12-14], the survey paper [1], and their extensive references. As is well known, when the state and action spaces are both finite, the existence of average optimal stationary policies is indeed guaranteed [2,3,11,12]. However, when a state space is countably infinite, an average optimal policy may not exist even though the action space is compact [3,12]. Thus, many authors are interested in finding optimality conditions when a state space is not finite. We now briefly describe some existing works. (I) When the costs/rewards are bounded, the minorant condition [3] or the ergodicity condition [5,6,8] ensures the existence of a bounded solution to the average optimality equation and of an average optimal stationary policy. The common approach is via Banach's fixed point theorem. (II) When the costs are nonnegative (or bounded below), the optimality inequality approach [1,9,10] is used to prove the existence of average optimal stationary policies. A key feature of this approach is the use of the Abelian theorem, which requires the costs to be nonnegative (or bounded below). In particular, Hernández-Lerma and Lasserre [9] also obtain the average optimality equation under an additional equi-continuity condition and give a "martingale characterization" of an average optimal stationary policy. (III) For the much more general case in which the costs have neither upper nor lower bounds, the equi-continuity condition [4,9] or the irreducibility condition (e.g., [10, Assumption 10.3.5]) is required in order to establish the average optimality equation and then prove the existence of an average optimal stationary policy. In [7], however, we propose weaker conditions under which we prove the existence of average optimal stationary policies via two optimality inequalities rather than the "optimality equality" in [4,9,10]. Moreover, we remove the equi-continuity condition used in [4,9,10] and the irreducibility condition in [10]. In this paper, we further study some properties of optimal policies. Under these weaker conditions, we not only obtain two necessary and sufficient conditions for optimal policies, but also give a semimartingale characterization of an average optimal stationary policy.
The rest of the paper is organized as follows. In Section 2, we introduce the control model and the optimality problem that we are concerned with. After the optimality conditions and a technical preliminary lemma are given in Section 3, we present a semimartingale characterization of an average optimal stationary policy in Section 4.
The optimal control problem
Notation 1. If X is a Borel space (i.e., a Borel subset of a complete and separable metric space), we denote by Ꮾ(X) its Borel σ-algebra.
In this section, we first introduce the control model

$$\{S,\ A,\ (A(x),\ x \in S),\ Q(\cdot \mid x,a),\ c(x,a)\},$$

where S and A are the state and the action spaces, respectively, which are assumed to be Borel spaces, and A(x) denotes the set of available actions at state x ∈ S. We suppose that the set

$$K := \{(x,a) : x \in S,\ a \in A(x)\}$$

is a Borel subset of S × A. Furthermore, Q(· | x,a), with (x,a) ∈ K, the transition law, is a stochastic kernel on S given K.
Finally, c(x,a), the cost function, is assumed to be a real-valued measurable function on K. (As c(x,a) is allowed to take positive and negative values, it can also be interpreted as a reward function rather than a "cost.") To introduce the optimal control problem that we are concerned with, we need to define the classes of admissible control policies.
For each t ≥ 0, let H_t be the family of admissible histories up to time t, that is, H_0 := S and H_t := K^t × S for t ≥ 1. The class of all randomized history-dependent policies is denoted by Π. A randomized history-dependent policy π := (π_t, t ≥ 0) ∈ Π is called (deterministic) stationary if there exists a measurable function f on S with f(x) ∈ A(x) for all x ∈ S, such that π_t(· | h_t) is concentrated at f(x_t) for every h_t ∈ H_t and t ≥ 0. For simplicity, we denote this policy by f. The class of all stationary policies is denoted by F, which means that F is the set of all measurable functions f on S with f(x) ∈ A(x) for all x ∈ S.
For each x ∈ S and π ∈ Π, by the well-known Tulcea theorem [3,8,10], there exist a unique probability measure space (Ω, Ᏺ, P^π_x) and a stochastic process {x_t, a_t, t ≥ 0} defined on Ω such that, for each D ∈ Ꮾ(S) and t ≥ 0,

$$P^{\pi}_{x}\bigl(x_{t+1} \in D \mid h_t, a_t\bigr) = Q\bigl(D \mid x_t, a_t\bigr),$$

with h_t = (x_0, a_0, ..., x_{t−1}, a_{t−1}, x_t) ∈ H_t, where x_t and a_t denote the state and action variables at time t ≥ 0, respectively. The expectation operator with respect to P^π_x is denoted by E^π_x.
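For reference, the average-cost criterion that the paper denotes by (2.5) presumably takes the standard form used throughout this literature (e.g., [9,10]):

$$V(x,\pi) := \limsup_{n \to \infty} \frac{1}{n}\, E^{\pi}_{x}\left[\sum_{t=0}^{n-1} c(x_t, a_t)\right], \qquad x \in S,\ \pi \in \Pi,$$

with the optimal value function $V^{*}(x) := \inf_{\pi \in \Pi} V(x,\pi)$; a policy $\pi^{*}$ is then average expected cost (AEC-) optimal if $V(x,\pi^{*}) = V^{*}(x)$ for all $x \in S$.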
The main goal of this paper is to give conditions for a semimartingale characterization of an average optimal stationary policy.
Optimality conditions
In this section, we state conditions for a semimartingale characterization of an average optimal stationary policy, and give a preliminary lemma that is needed to prove our main results.
We will first introduce two sets of hypotheses. The first one, Assumption 3.1, is a combination of a "Lyapunov-like inequality" condition together with a growth condition on the one-step cost c. The second set of hypotheses we need consists of the following standard continuity-compactness conditions (Assumption 3.3 below); see, for instance, [7,12,13,15,16] and their references. To ensure the existence of average optimal stationary policies, in addition to Assumptions 3.1 and 3.3, we impose a weaker condition (Assumption 3.5 below). To state this assumption, we introduce the following notation.
For the function w ≥ 1 in Assumption 3.1, we define the weighted supremum norm $\|u\|_{w}$ for real-valued functions u on S by

$$\|u\|_{w} := \sup_{x \in S} \frac{|u(x)|}{w(x)},$$

and the Banach space $B_{w}(S) := \{u : \|u\|_{w} < \infty\}$.
Lemma 3.7. Suppose that Assumptions 3.1, 3.3, and 3.5 hold. Then the following hold.
(a) There exist a unique constant g*, two functions h*_k ∈ B_w(S) (k = 1,2), and a stationary policy f* ∈ F satisfying the two optimality inequalities (3.5) and (3.6). (c) Any stationary policy f in F realizing the minimum in (3.5) is average optimal, and so f* in (3.6) is an average optimal stationary policy. (d) In addition, it follows from the proof of part (b) that, for each h ∈ B_w(S), x ∈ S, and
(d) In addition, from the proof of part (b), it yields that for each h ∈ B w (S), x ∈ S, and Proof.See [7, Theorem .
A semimartingale characterization of average optimal stationary policies
In this section, we present our main results. To do this, we use the following notation. Let h*_1, h*_2, and g* be as in Lemma 3.7, and define the processes $M^{(1)}_n$ and $M^{(2)}_n$ (n ≥ 1). Theorem 4.1. (a) A policy π* is AEC-optimal and V(x,π*) = V*(x) = g* for all x ∈ S if and only if (4.1) holds. (b) A policy π* is AEC-optimal and V(x,π*) = V*(x) = g* for all x ∈ S if and only if (4.2) holds. Proof. (a) For each π ∈ Π and x ∈ S, the bound obtained from (4.1), together with (2.5), yields the desired estimate; multiplying by 1/n and letting n → ∞, from (3.7), we see that part (a) is satisfied. Similarly, combining (4.2) and (3.7), we see that part (b) is also true.
Theorem 4.2. Suppose that Assumptions 3.1, 3.3, and 3.5 hold. Then the following hold: (a) $\{M^{(1)}_n\}$ is a P^π_x-submartingale for all π ∈ Π and x ∈ S; (b) let f* be the average optimal stationary policy obtained in Lemma 3.7; then $\{M^{(2)}_n\}$ is a P^{f*}_x-supermartingale for all x ∈ S; (c) if $\{M^{(2)}_n\}$ is a P^{π*}_x-supermartingale, then π* is AEC-optimal and V(x,π*) = g* for all x ∈ S.
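A natural reading of the processes $M^{(1)}_n$ and $M^{(2)}_n$, consistent with Theorem 4.2 and with the martingale characterization of [9], is

$$M^{(k)}_{n} := \sum_{t=0}^{n-1}\bigl[c(x_t, a_t) - g^{*}\bigr] + h^{*}_{k}(x_n), \qquad k = 1,2,\ n \geq 1.$$

Under this reading, part (a) would follow directly from the first optimality inequality: assuming (3.5) reads $g^{*} + h^{*}_{1}(x) \leq c(x,a) + \int_{S} h^{*}_{1}(y)\, Q(dy \mid x,a)$ for all $(x,a) \in K$, then

$$E^{\pi}_{x}\bigl[M^{(1)}_{n+1} \mid h_n, a_n\bigr] - M^{(1)}_{n} = c(x_n, a_n) - g^{*} + \int_{S} h^{*}_{1}(y)\, Q(dy \mid x_n, a_n) - h^{*}_{1}(x_n) \geq 0,$$

which is the submartingale property; part (b) would follow in the same way from the reverse inequality evaluated along the stationary policy $f^{*}$.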
Remark 4.3. Theorems 4.1 and 4.2 are our main results: Theorem 4.1 gives two necessary and sufficient conditions for AEC-optimal policies, whereas Theorem 4.2 further provides a semimartingale characterization of an average optimal stationary policy.
Assumption 3.3.
(1) For each x ∈ S, A(x) is compact.
(2) For each fixed x ∈ S, c(x,a) is lower semicontinuous in a ∈ A(x), and the function $\int_{S} u(y)\, Q(dy \mid x,a)$ is continuous in a ∈ A(x) for each bounded measurable function u on S, and also for u =: w as in Assumption 3.1.
Remark 3.4. Assumption 3.3 is the same as [10, Assumption 10.2.1]. Obviously, Assumption 3.3 holds when A(x) is finite for each x ∈ S.
"Mathematics"
] |
Paramagnetism in Bacillus spores: Opportunities for novel biotechnological applications
Abstract Spores of Bacillus megaterium, Bacillus cereus, and Bacillus subtilis were found to exhibit intrinsic paramagnetic properties as a result of the accumulation of manganese ions. All three Bacillus species displayed strong yet distinctive magnetic properties arising from differences in manganese quantity and valency. Manganese ions were found to accumulate both within the spore core as well as being associated with the surface of the spore. Bacillus megaterium spores accumulated up to 1 wt.% manganese (II) within, with a further 0.6 wt.% adsorbed onto the surface. At room temperature, Bacillus spores possess average magnetic susceptibilities in the range of 10−6 to 10−5. Three spore‐related biotechnological applications—magnetic sensing, magnetic separation and metal ion adsorption—were assessed subsequently, with the latter two considered as having the most potential for development.
| INTRODUCTION
Bacillus cells initiate sporulation as a protective mechanism in response to nutrient starvation. Successful formation of heat-resistant, metabolically dormant spores typically requires a mineral-rich growth medium, including a requirement for manganese ions (Charney, Fisher, & Hegarty, 1951; Greene & Slepecky, 1972). The importance of manganese for enzymatic activity in biological systems is well-established (Culotta, 2000); however, while most microorganisms only require trace amounts of manganese, spores are unusual in that they accumulate a much greater amount than vegetative cells (Curran, Brunstetter, & Myers, 1943). Warth, Ohye, and Murrell (1963) reported that Bacillus spores could contain as much as 1.5 wt.% manganese. Such high levels of uptake suggest that manganese has an important function within spores. Many authors have explored the role of manganese ions in several of the spore's resistance properties (Ghosh et al., 2011; Levinson & Hyatt, 1964), as well as in germination (Levinson & Sevag, 1954; Levinson, Sloan, & Hyatt, 1958). However, only one group has reported the intrinsic magnetic properties conferred by the high manganese content in spores (Melnik et al., 2007).
Manganese is a chemically versatile transition metal which can exist in several oxidation states, ranging from +1 to +7. This allows manganese to display a wide variety of magnetic behaviors, from ferromagnetic manganese oxides (Du, Zhang, Sun, & Yan, 2009) to diamagnetic permanganate compounds (Goldenberg, 1940). The exact chemical state of manganese in the spore is not known; however, evidence suggests that Mn2+ ions compete with Ca2+ ions to form a chelate with dipicolinic acid (DPA) inside the spore core (Bailey, Karp, & Sacks, 1965). In this case, bacterial spores are very likely to be paramagnetic at room temperature, exhibiting a small attractive force toward permanent magnets.
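A back-of-envelope Curie-law estimate shows that manganese contents of the order reported above are sufficient to produce room-temperature paramagnetism of the measured magnitude. The calculation below assumes high-spin, magnetically dilute Mn2+ (S = 5/2) at 1 wt.% loading and is an order-of-magnitude check rather than a fit to any measurement.

```python
# Curie-law estimate of the room-temperature mass susceptibility contributed
# by ~1 wt.% dilute high-spin Mn(II) in a spore. Illustrative only.
import numpy as np

mu0 = 4e-7 * np.pi        # T m / A, vacuum permeability
muB = 9.274e-24           # J / T, Bohr magneton
kB = 1.381e-23            # J / K, Boltzmann constant
NA = 6.022e23             # 1 / mol, Avogadro constant
M_Mn = 54.94e-3           # kg / mol, manganese molar mass

T = 300.0                 # K
g, S = 2.0, 2.5           # Mn2+ high-spin: mu_eff = g*sqrt(S(S+1)) ~ 5.92 muB
w_Mn = 0.01               # 1 wt.% manganese in the spore (assumed)

n_per_kg = w_Mn * NA / M_Mn                      # Mn ions per kg of spore
chi_mass = mu0 * n_per_kg * g**2 * muB**2 * S * (S + 1) / (3 * kB * T)
chi_cgs = chi_mass / (4 * np.pi * 1e-3)          # convert m^3/kg to emu/g
print(f"chi ~ {chi_mass:.1e} m^3/kg (SI) ~ {chi_cgs:.1e} emu/g (CGS)")
# ~3e-8 m^3/kg, i.e. ~3e-6 emu/g, consistent with the 1e-6 to 1e-5 range above
```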
The magnetic properties of spores may be of biotechnological value, particularly in sensing- and separation-related applications.
Current magnetic methods for the separation of spores require them to be tagged with antibody-functionalized magnetic beads (Shields et al., 2012). However, with the exception of the magnetotactic bacteria (Blakemore, 1975), most microorganisms tend to be diamagnetic; the intrinsic paramagnetism of spores may therefore be sufficient to achieve efficient separation.
In sensory applications, most methods rely on a similar antibody functionalisation and tagging procedure (Gómez de la Torre et al., 2012; Wang et al., 2015) without exploiting the spore's intrinsic paramagnetic properties. Conceivably, cross-species variance in these properties may be detectable by a sensitive magnetometer and used to identify dangerous species such as B. anthracis. However, an improved understanding of the magnetic properties of spores is required before any applications can be considered. Accordingly, the aim of this work was to fully characterize the magnetic properties of spores of three Bacillus species, determining the type of magnetism they exhibit and the role of manganese in conferring it. Possible alterations to the magnetic properties, as well as novel biotechnological applications, are also explored.
| Bacterial strains and sporulation conditions
Three lab strains of Bacillus were studied in the current work: B. megaterium QM B1551, provided by Prof. P. S. Vary (Illinois); B. cereus 569, provided by Prof. A. Moir (Sheffield, UK); and B. subtilis PS832, provided by Prof. P. Setlow (CT). The sporulation conditions and spore purification procedures followed were those described previously by Nicholson and Setlow (1990), Clements and Moir (1998), and Christie, Ustok, Lu, Packman, and Lowe (2010) for the sporulation of B. subtilis, B. cereus, and B. megaterium, respectively. The latter two species were additionally cultured in media containing manganese concentrations varying between 20 and 1,000 µM. In addition, the uptake of iron and cobalt ions by B. megaterium spores was investigated by culturing spores in media containing 100 µM of FeSO₄ or CoCl₂, respectively.
| Sample preparation for magnetometer measurements
Clean spore suspensions, containing >95% spores (checked by optical microscopy), were concentrated to a paste-like density by centrifugation.
Between 100 and 150 µl of this paste was pipetted into the cap and body of a size 0 gelatine capsule, pre-frozen to −80°C. This achieves a quick freeze and prevents the gelatine from extracting water from the spore paste. The frozen spore capsule was then lyophilized at −20°C and 250 mTorr pressure for 8 hr. Capsules were sealed when dry and weighed to determine the dry mass of spores within. To minimize any background signal, spore capsules were suspended within a straw supplied by Quantum Design (San Diego, CA) with the aid of two low paramagnetic glass rods. The straw was then sealed with cotton at both ends and Kapton™ tape at the lower end before being placed into the magnetometer for measurement.
| Magnetic property measurements
The magnetic response of spore samples was measured in a superconducting quantum interference device (SQUID) magnetometer (Quantum Design MPMS 2XL) for both field and temperature dependence. For the field dependence, the temperature was set to 5 K and a full hysteresis sweep between 5 T and −5 T was conducted. For the temperature dependence, zero-field-cooled measurements were conducted: the sample was first cooled to 2 K before a 0.1 T magnetic field was applied, and the temperature was then increased to 300 K in 2 K intervals.
| Imaging and spectroscopy
B. megaterium spore samples were prepared for scanning electron microscopy (SEM) imaging and energy dispersive X-ray spectroscopy (EDS). Around 1 ml of a high-density (OD₆₀₀ > 10) spore suspension was deposited on a Leit tab pre-coated with poly-L-lysine to increase spore adhesion. After 5 min, the tabs were gently rinsed with distilled water to remove any unwanted chemicals and subjected to a fast freeze in liquid ethane to preserve the spores' morphological features.

Frozen samples were lyophilized overnight at a fixed pressure of 10⁻³ mbar, with a temperature programme of −90°C for the initial 2 hr, subsequently increasing by 10°C per hour to a final temperature of 30°C. Lyophilized samples were then sputter-coated with a thin layer (25-50 nm) of conductive carbon and stored in a dry location at room temperature prior to imaging. SEM imaging and EDS were undertaken using an FEI XHR Verios 460 (Hillsboro, OR) mounted with an AMETEK EDAX detector (Mahwah, NJ).
| Inductively coupled plasma-optical emission spectroscopy (ICP-OES)
The lyophilized spore capsule samples prepared for magnetic measurements were submitted for ICP-OES analysis to determine the manganese content in spores (Medac Ltd., Chobham, UK).
| Spore demineralisation
Spores were demineralized by acid titration as previously described by Marquis and Bender (1985). A 5 wt.% spore suspension was titrated with hydrochloric acid over approximately 4 hr until the pH stabilized at 4.0. To ensure that all minerals were removed, the suspension was washed three times by centrifugation and resuspension in deionized water and the titration process repeated. All acid-treated spores were prepared from the same culture batch as their non-treated counterparts and were thoroughly washed in deionized water before conducting experiments.
| Surface adsorption experiments
Approximately 0.2 g of MnCl₂ or FeSO₄ salt crystals were added to a 25 ml suspension of 1 wt.% B. megaterium spores and the solution was mixed well. Once the crystals had dissolved, 1 ml of 1 M sodium hydroxide solution was pipetted into the suspension, forming a colored precipitate. The suspension was then vortexed until no further oxidation occurred and the color of the precipitate was stable. A total of 2 ml of 2 M HCl was subsequently added to the suspension, which was incubated on ice overnight. Spores were separated from inorganic debris by repeated rounds of centrifugation and resuspension in deionized water. Suspensions were cleaned to give over 95% phase-bright spores before experiments were conducted.
| Theory
Paramagnetic materials possess at least one unpaired electron that can align with an externally applied magnetic field; however, the extent of this alignment is low, as thermal excitation randomizes the spin orientations and only a small proportion of the electrons align with the applied field.
In most cases, the mass magnetisation, M, of a paramagnetic material depends linearly on the applied field, H, according to

$$M = \chi_m H, \qquad (1)$$

where $\chi_m$ is the mass magnetic susceptibility (m³ kg⁻¹). According to Shevkoplyas, Siegel, Westervelt, Prentiss, and Whitesides (2007), the magnetic susceptibility of the material, following conversion from centimetre-gram-second units to SI units, may be expressed as

$$\chi_m = \frac{\chi_{\mathrm{spore}}}{\rho_{\mathrm{spore}}}, \qquad (2)$$

where $\chi_{\mathrm{spore}}$ is the dimensionless magnetic susceptibility of the spore and $\rho_{\mathrm{spore}}$ is its density in kg m⁻³. At low enough temperatures or high enough fields, however, saturation occurs and the field dependence can be expressed using the Langevin function,

$$M = N m \mu_B \left[\coth(x) - \frac{1}{x}\right], \qquad x = \frac{m \mu_B \mu_0 H}{k T}, \qquad (3)$$

where M is the mass magnetisation of the sample (A m² kg⁻¹), N is the number of magnetic entities per kg of spores, m is the magnetic moment of each entity in Bohr magnetons, μ_B is the Bohr magneton (9.274 × 10⁻²⁴ J T⁻¹), k is the Boltzmann constant (1.381 × 10⁻²³ J K⁻¹), μ₀ is the magnetic permeability of free space (4π × 10⁻⁷ H m⁻¹), and H is the applied magnetic field (A m⁻¹).
The magnetic moment, m, should indicate the oxidation state of the manganese contained within the spores, and the number of magnetic entities, N, can be converted into a weight percentage of manganese, w_Mn, according to

$$w_{\mathrm{Mn}} = \frac{100\, N M_{\mathrm{Mn}}}{N_A}, \qquad (4)$$

where $M_{\mathrm{Mn}}$ is the molar mass of manganese and $N_A$ is Avogadro's number. The magnetic susceptibility of paramagnetic materials is found to obey Curie's law,

$$\chi_m = \frac{C}{T}, \qquad (5)$$

where C is the Curie constant, given by Cullity (1972) as

$$C = \frac{\mu_0 N m^2 \mu_B^2}{3 k}. \qquad (6)$$

Siegel et al. (2006) developed a microfluidic device (Supplementary Figure S1) that was used to separate superparamagnetic beads using a weak electromagnetic field. The theoretical equations later derived by Shevkoplyas et al. (2007) were employed to assess the feasibility of exploiting the magnetic properties of spores for separation. The setup consists of a 40 µm wide channel sandwiched between two electromagnets, with a distance of 70 µm between the surface of the channel and the center of each electromagnet. In their analysis, the electromagnet was modeled as a single electrical wire carrying current I. According to Shevkoplyas et al. (2007), the displacement time, t, required for a spore of radius R to move through a liquid of viscosity µ from its initial position, $x_i$, to the final position, $x_f$, is given by

$$t = \frac{9 \pi^2 \mu}{2 \mu_0 \chi_{\mathrm{spore}} R^2 I^2} \left(x_i^4 - x_f^4\right). \qquad (7)$$
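Where a numerical form helps, the quantities in Equations (3)-(6) can be written out directly. The following Python sketch is illustrative only: the function names are ours rather than anything from the paper, and the SI constants quoted above are assumed.

```python
# A minimal numerical sketch of Equations (3)-(6); function names are ours,
# not the paper's, and SI units are assumed throughout.
import numpy as np

MU_B = 9.274e-24       # Bohr magneton, J/T
K_B = 1.381e-23        # Boltzmann constant, J/K
MU_0 = 4e-7 * np.pi    # permeability of free space, H/m
N_A = 6.022e23         # Avogadro's number, 1/mol
M_MN = 54.94e-3        # molar mass of manganese, kg/mol

def langevin(x):
    """L(x) = coth(x) - 1/x, with the small-x limit x/3 to avoid 0/0."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    safe = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def mass_magnetisation(H, N, m, T):
    """Equation (3): M in A m^2/kg for N entities/kg with moment m (in mu_B)."""
    x = m * MU_B * MU_0 * H / (K_B * T)
    return N * m * MU_B * langevin(x)

def mn_weight_percent(N):
    """Equation (4): convert entities per kg of spores to wt.% manganese."""
    return 100.0 * N * M_MN / N_A

def curie_constant(N, m):
    """Equation (6): C such that chi_m = C/T (Equation (5))."""
    return MU_0 * N * (m * MU_B) ** 2 / (3.0 * K_B)
```

As a quick sanity check, at 300 K and B = μ₀H = 1 T the Langevin argument x is about 0.01 for m = 5.9, deep in the linear regime, which is consistent with the unsaturated room-temperature curves reported below.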
| Statistics
Statistical analysis was performed using Microsoft Excel (Redmond, WA). The method of least squares was used to fit experimental data to the corresponding equations. At each temperature or magnetic field strength, the magnetisation value was evaluated as the average of three scans; the errors associated with each measurement were evaluated by the magnetometer's inbuilt software. These errors were then used to estimate the uncertainties associated with the least squares fitting and hence the error in the fitting parameters.
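The fits themselves were performed in Excel; an equivalent weighted least-squares fit can be sketched with SciPy. The arrays below are synthetic placeholders rather than measured data, the error model stands in for the magnetometer-software errors described above, and the helper functions are the ones defined in the previous sketch.

```python
# Hedged re-implementation of the weighted least-squares step with SciPy;
# requires mass_magnetisation and mn_weight_percent from the sketch above.
import numpy as np
from scipy.optimize import curve_fit

T = 5.0  # K; the field sweeps were measured at 5 K

def model(H, N, m):
    return mass_magnetisation(H, N, m, T)

# 5 T corresponds to H = B/mu_0 of roughly 4.0e6 A/m
H_data = np.linspace(1e5, 4.0e6, 40)
M_data = model(H_data, 2.2e22, 5.9)                  # synthetic "measurements"
sigma = 0.02 * M_data.max() * np.ones_like(M_data)   # stand-in error estimate

(N_fit, m_fit), pcov = curve_fit(model, H_data, M_data, p0=[1e22, 5.0],
                                 sigma=sigma, absolute_sigma=True)
N_err, m_err = np.sqrt(np.diag(pcov))
print(f"m = {m_fit:.2f} +/- {m_err:.2f} mu_B")        # ~5.9 here
print(f"w_Mn = {mn_weight_percent(N_fit):.2f} wt.%")  # ~0.2 wt.% for N = 2.2e22
```

The placeholder value N = 2.2 × 10²² entities per kg was chosen because it corresponds, via Equation (4), to roughly the 0.2 wt.% manganese content reported in the Discussion.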
| Magnetic behavior of B. megaterium spores
It is well known that bacterial spores contain a wide range of inorganic elements when cultured in media with an adequate metal balance (Murrell, 1969). The EDX spectrum in Figure 1a shows that B. megaterium spores grown in standard supplemented nutrient broth (SNB) possess Ca, P, Mg, K, and Mn as their main inorganic constituents. This result is in agreement with those reported by Murrell (1967), with the exception that the spectrum contains no trace of Fe or Na; these elements are most likely present in quantities below the detection limit of the EDX detector (typically 0.1 wt.%). In B. subtilis, Granger, Gaidamakova, Matrosova, Daly, and Setlow (2011) used ICP-OES to find the spore's iron abundance to be around 0.005 wt.%, two orders of magnitude lower than the typical amount of manganese present. Additionally, good agreement is found between the relative sizes of the EDX spectral peaks and the relative proportions of each element determined by Curran, Brunstetter, and Myers (1943) and Murrell (1967) for a variety of Bacillus species and strains.

The larger Mn peaks in Figures 1b and 1c indicate that the developing spores had taken up additional manganese when produced in media of higher manganese concentration. Furthermore, as the Mn peak becomes larger, the Ca peak decreases while all other peaks remain similar in size. The former observation is in agreement with the hypothesis that calcium and manganese ions compete for sites in the spore, as suggested previously by Slepecky and Foster (1959) and Levinson and Hyatt (1964). When spores were cultured in the presence of other divalent transition metal salts, no uptake was detected by EDX (data not shown). This result suggests that the uptake mechanism is specific to manganese and that, if spores do accumulate other transition metal ions, the amounts are below the detection limit for EDX analyses. The measured magnetic moment of these spores suggests that the ionic state of the manganese is 2+ (Figgis & Lewis, 1964) and that it is very likely to be part of a tetrahedral or octahedral complex.
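The oxidation-state assignments made from magnetic moments here, and again for the 5.9 to 3.8 μB range reported below, can be cross-checked against the spin-only approximation μ_eff = √(n(n+2)) μ_B associated with Figgis and Lewis (1964). A minimal sketch, assuming high-spin complexes (our assumption, not a claim from the paper):

```python
# Spin-only effective moments, mu_eff = sqrt(n(n+2)) mu_B, for the manganese
# oxidation states discussed in this work; high-spin complexes are assumed.
import math

unpaired_electrons = {"Mn2+ (d5)": 5, "Mn3+ (d4)": 4, "Mn4+ (d3)": 3}
for ion, n in unpaired_electrons.items():
    print(f"{ion}: {math.sqrt(n * (n + 2)):.2f} mu_B")
# Mn2+ (d5): 5.92 mu_B ; Mn3+ (d4): 4.90 mu_B ; Mn4+ (d3): 3.87 mu_B
```

The computed values of 5.92, 4.90, and 3.87 μ_B bracket the measured moments, which supports the 2+ assignment here and the mixed 2+/3+ and 3+/4+ interpretations given later.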
| Magnetic properties of spores of other species and vegetative cells
An observable paramagnetic response in the presence of a magnetic field gradient is not a property exclusive to B. megaterium spores. Melnik et al. (2007) reported a comparable response for B. atrophaeus spores, and in this work spores of B. cereus and B. subtilis also displayed paramagnetic behavior.
| Effect of culture medium inorganic salt concentration on spore paramagnetic properties
The concentration of manganese in the culture medium had a strong impact on the strength of the measured signal, as it determines the amount of manganese that the mature spore contains (Supplementary Tables S1 and S2). As the manganese content increased, the corresponding magnetic moments decreased from 5.9 to 5.1 μB and from 5.1 to 3.8 μB for B. megaterium and B. cereus spores, respectively. According to Figgis and Lewis (1964), the lower magnetic moments indicate that the predominant manganese valency in spores with enhanced manganese content is increased by one. As the drop in magnetic moment occurs gradually with manganese concentration, it indicates that more than one oxidation state of manganese is present in the spore and that the relative proportion of each is determined by the concentration of manganese during spore formation. Given the range of magnetic moments observed, it is likely that B. megaterium spores contain a combination of the 2+ and 3+ oxidation states and B. cereus spores a combination of the 3+ and 4+ states; however, the presence of smaller quantities of other oxidation states cannot be ruled out.

Figure 3 illustrates the effect of culture medium manganese concentration on the manganese content of B. megaterium and B. cereus spores. The accumulation of manganese in both acid-treated and untreated spores can be adequately described by a saturating exponential function of the form

$$w_{\mathrm{Mn}} = a_1 \left(1 - e^{-[\mathrm{Mn}]/a_2}\right),$$

where [Mn] is the concentration of manganese in the culture medium, and a₁ and a₂ are constants representing the saturation manganese weight percentage and the culture concentration yielding 63% of the saturation value, respectively. At low manganese concentrations (<100 µM), manganese uptake is linear in concentration and almost identical for acid-treated and untreated spores. At higher concentrations, the manganese content tends asymptotically toward 1.0 and 1.6 wt.% for acid-treated and untreated spores, respectively.
The accuracy of these values is confirmed by their agreement with ICP-OES measurements for selected samples, as shown in Figure 3. Although the Langmuir isotherm models the data in Figure 3 equally well, that isotherm describes the surface adsorption of materials rather than the combined effects of adsorption and absorption. Further analysis of the data using the Langmuir isotherm (Supplementary Figure S3) can be found in the Discussion.
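A sketch of fitting the saturating-exponential uptake model above is given below; the concentration/uptake pairs are invented placeholders chosen only to resemble the reported asymptotes, not the data behind Figure 3.

```python
# Fitting w_Mn = a1 * (1 - exp(-[Mn]/a2)) to uptake data; the data points
# below are invented placeholders, not the measurements behind Figure 3.
import numpy as np
from scipy.optimize import curve_fit

def uptake(mn_uM, a1, a2):
    """a1: saturation wt.%; a2: concentration (uM) giving 63% of saturation."""
    return a1 * (1.0 - np.exp(-mn_uM / a2))

mn = np.array([20.0, 50.0, 100.0, 250.0, 500.0, 1000.0])  # uM (placeholder)
w = np.array([0.13, 0.30, 0.55, 1.02, 1.38, 1.57])        # wt.% (placeholder)

(a1, a2), _ = curve_fit(uptake, mn, w, p0=[1.6, 250.0])
print(f"a1 = {a1:.2f} wt.% (saturation), a2 = {a2:.0f} uM (63% point)")
```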
The consistently lower manganese content in acid-treated spores indicates that a fraction of the manganese is externally bound and removed by this treatment. Furthermore, it suggests that spores accumulate manganese through two mechanisms, giving rise to the two asymptotes observed in Figure 3a. The lower asymptote is likely to be determined by the manganese transport process that is active during sporulation, whereas the higher asymptote is a physical limit imposed by adsorption onto the surface of the spore.

No significant uptake of iron or cobalt was detected in B. megaterium spores cultured with these metals (Supplementary Figure S2). This suggests that developing B. megaterium spores have some form of specificity toward the uptake of manganese ions and neither adsorb nor absorb significant amounts of iron or cobalt. A similar observation for the uptake of iron ions was reported by Granger et al. (2011) in B. subtilis, but the result for cobalt contradicts the findings of Slepecky and Foster (1959), in which B. megaterium spores grown in media containing 680 µM Co²⁺ possessed 0.14 wt.% cobalt. This difference may be caused by the lower concentration of cobalt (150 µM) used in this work.
| Magnetic susceptibility at room temperature
The spore's magnetic susceptibility at 300 K can be determined from the slope of the magnetisation curve. Figure 4 shows the magnetisation behavior of B. megaterium spores cultured in media with three different concentrations of MnCl₂. All three data sets show linear behavior, and the dimensionless magnetic susceptibility can be calculated from Equation (2) (using SI units) as 1.0 × 10⁻⁶, 7.2 × 10⁻⁶, and 1.9 × 10⁻⁵ for 20, 50, and 100 µM MnCl₂, respectively. As the magnetic moments do not reach saturation at 300 K, these susceptibility values are indicative of spores of the Bacillus genus in general; at room temperature, therefore, the average magnetic susceptibility of Bacillus spores lies in the range 10⁻⁶ to 10⁻⁵.
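As a worked illustration of this slope-based estimate, the sketch below fits a placeholder linear sweep and converts the slope using Equation (2); the slope is chosen to reproduce the 1.9 × 10⁻⁵ value above, and the spore density is an assumed, roughly water-like figure rather than a measured one.

```python
# Estimating susceptibility from the slope of a linear 300 K M-H sweep and
# converting with Equation (2); data and spore density are assumed values.
import numpy as np

H = np.linspace(0.0, 4.0e6, 20)     # A/m (placeholder sweep up to ~5 T)
M = 1.9e-8 * H                      # A m^2/kg (placeholder linear response)

chi_m = np.polyfit(H, M, 1)[0]      # slope = mass susceptibility, m^3/kg
rho_spore = 1.0e3                   # kg/m^3, assumed roughly water-like
chi_spore = chi_m * rho_spore       # dimensionless, Equation (2) rearranged
print(f"chi_m = {chi_m:.1e} m^3/kg -> chi_spore = {chi_spore:.1e}")
```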
| Surface adsorption
The manganese-coated spore sample possesses a strong magnetic signal at 5 K, displaying hysteresis and a very large coercive field of approximately 500 Oersted. To illustrate their magnetic strength, even at room temperature, Figure 5c contains an image of these spores aligning with the magnetic field lines between two circular neodymium magnets, a behavior that resembles that of iron filings. The shape of the measured magnetisation curve is uncommon, as it derives from two contributions: a paramagnetic component from within the spore and a surface contribution that appears to be ferromagnetic. Figure 5d reveals that the surface manganese undergoes a transition at temperatures above 43 K, which corresponds to the Curie temperature of Mn₃O₄ (Boucher, Buhl, & Perrin, 1971). These results strongly indicate that the manganese shell surrounding the spore exhibits bulk ferromagnetic properties at 5 K; nevertheless, an antiferromagnetic contribution, due to the presence of uncompensated spins on the surface, cannot be entirely excluded (Cooper et al., 2013).

The iron-coated spore sample demonstrates that the adsorbate is not limited to manganese. These spores bear the distinctive bright yellow color of a hydrated iron oxide-hydroxide (FeO(OH) · nH₂O), and Figure 5b reveals that the crystals form thin rods and aggregate in a foliated manner, fitting the description of goethite (Cornell & Schwertmann, 2004a). In this case, the iron-coated spores would be antiferromagnetic, with a Néel temperature of 400 K (Cornell & Schwertmann, 2004b); however, further analyses are required to confirm the identity of the iron compound.
| DISCUSSION
To date, there have been only two accounts reporting the intrinsic magnetic properties of bacterial spores (Melnik et al., 2007; Sun, Zborowski, & Chalmers, 2011). This work provides the first comprehensive characterisation of these properties in Bacillus spores and explores the extent to which they can be altered. In agreement with previous results (Warth et al., 1963), manganese is the only non-diamagnetic element that accumulates to any significant amount in spores, and SQUID measurements have confirmed that this manganese is paramagnetic at all temperatures up to 300 K. Similar to Sun et al. (2011), we determined that several oxidation states of manganese (2+, 3+, and 4+) are present in spores; however, our results indicate a lower manganese content, in the range of 0.2 wt.% rather than 10 wt.%. Increasing the concentration of manganese in the culture medium raises the manganese content of spores approximately 10-fold. Nevertheless, the force experienced by these spores in a magnetic field would still be several orders of magnitude smaller than that experienced by ferromagnetic entities. With this in mind, potential applications associated with spore paramagnetism include:

(i) Sensing: Manganese was found to be the cause of paramagnetism in all three Bacillus species studied, yet their respective SQUID measurements presented some clear differences. The chemical state and environment in which the manganese ions exist in the spore differ across the species: under standard conditions, not only do the spores contain different amounts of manganese, but their magnetic moments also range between 5.1 and 5.9 μB. These differences may be useful for the detection of more harmful, pathogenic species such as B. anthracis or Clostridium difficile. There are, however, two major obstacles to overcome. The first concerns sensitivity: the device must be sensitive enough to detect the magnetic signal from, typically, a small quantity of spores.
This obstacle is perhaps the easiest to overcome, as there are commercial paramagnetic oxygen sensors that can measure the concentration of oxygen at room temperature. According to Wills and Hector (1924), the magnetic susceptibility of oxygen at room temperature is of the order of 10⁻⁷, suggesting that these devices would also be capable of detecting the magnetic signal emitted by spores. The second obstacle concerns differentiation: the ability to distinguish specific spore species or strains from a mixture of spores and impurities. Although most impurities will possess a very different magnetic susceptibility to spores, it will be challenging to differentiate between spore species. In particular, the spore's magnetisation does not saturate at room temperature, and therefore it will not be possible to obtain the distinguishing saturated magnetisation curves seen at low temperature. Given the difficulty of overcoming this challenge, it is unlikely that magnetic fields could be used to discern between different spore species.
(ii) Separation: A microfluidic device designed to expose spores in aqueous suspension to an open-gradient permanent magnet has been demonstrated to capture and deposit B. atrophaeus spores onto a glass slide (Melnik et al., 2007). To assess further the theoretical considerations associated with exploiting the spores' intrinsic paramagnetism for separation, a microfluidic device such as that presented by Shevkoplyas et al. (2007) may be considered. To estimate the shortest time required for separation, the highest values for magnetic susceptibility (10⁻⁵) and electric current (1 A, the maximum current used in their work) are assumed; furthermore, spores may be approximated as spheres of radius 0.5 µm, and the liquid viscosity is expected to be around 10⁻³ Pa s. Using these values, Equation (7) estimates that, under the influence of this weak magnetic field, it would take over 166 hr for the spores to reach the sidewall from the midpoint of the microfluidic channel. At this timescale separation would not be feasible; however, it should be noted that the magnetic field generated in the channel is weak, at approximately 2.5 mT for a 1 A current. For comparison, a typical permanent neodymium magnet has a remanence between 0.2 and 1.2 T. As the wire is only a proxy for the electromagnet in the experimental setup, the current passing through it may be increased indefinitely to achieve the desired magnetic field.
Therefore, by increasing the current through the wire to 100 A, a magnetic field of 0.25 T is produced and the separation time would then be reduced to approximately 60 s. This calculation illustrates that while it is possible to exploit a spore's intrinsic magnetic properties for separation, it does require moderately strong magnetic fields (see Supplementary Figure S4 and Melnik et al., 2007). This analysis also disregards the fact that the calculated magnetic susceptibility is an average property and therefore natural variation will mean that longer times will be required to achieve high separation yields.
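The two time estimates above can be reproduced directly from Equation (7). Note that the numerical prefactor in Equation (7) is our reconstruction from the wire-field and Stokes-drag model, so the sketch below carries that assumption; it does, however, recover both the roughly 166 hr and the roughly 60 s figures quoted above.

```python
# Reproducing the Equation (7) estimates for the wire-proxy geometry of
# Shevkoplyas et al. (2007); the prefactor follows our reconstruction above.
import numpy as np

MU_0 = 4e-7 * np.pi  # permeability of free space, H/m

def displacement_time(chi, R, mu, I, x_i, x_f):
    """Seconds for a paramagnetic sphere to drift from x_i to x_f (x_f < x_i)."""
    return 9 * np.pi**2 * mu * (x_i**4 - x_f**4) / (2 * MU_0 * chi * R**2 * I**2)

chi = 1e-5          # highest measured spore susceptibility (dimensionless)
R = 0.5e-6          # m, spore approximated as a sphere
mu = 1e-3           # Pa s, water-like viscosity
x_f = 70e-6         # m, channel surface to electromagnet centre
x_i = x_f + 20e-6   # m, midpoint of the 40 um wide channel

print(displacement_time(chi, R, mu, 1.0, x_i, x_f) / 3600.0)  # ~1.6e2 hr at 1 A
print(displacement_time(chi, R, mu, 100.0, x_i, x_f))         # ~59 s at 100 A
```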
(iii) Adsorption: Manganese is accumulated both inside the spore and on its surface. While the presence of manganese in the core is well known (Thomas, 1964), adsorption to the spore integuments is less well characterized. Alderton and Snell (1963) demonstrated that bacterial spores behave like ion-exchange gels: following an acid regeneration treatment, the surface becomes free to take up ions, showing base-exchange behavior. The results in Figure 3a may be interpreted further using an adsorption model.
Subtracting the manganese content of acid-treated spores in Figure 3a from that of untreated spores yields an adsorption curve. Note that the concentration of manganese has been adjusted by approximately 43 µM in order to account for the onset of adsorption. By assuming that manganese adsorbs onto spores following the Langmuir isotherm, an inverse plot may be used to obtain the adsorption parameters q_m = 14.4 mg/g and K_D = 0.88 mM⁻¹ (Supplementary Figure S3). Based on Figure 3a, the adsorption capacity was higher than expected; however, in the absence of competing ions it does appear to be a reasonable value.
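A sketch of the inverse-plot (double-reciprocal) Langmuir fit follows; the concentration grid is a placeholder and the uptake values are synthesized from the reported parameters, purely to demonstrate the linearization.

```python
# Inverse (double-reciprocal) Langmuir fit: 1/q = 1/q_m + (1/(q_m*K)) * (1/C).
# The "data" are synthesized from the reported q_m and K_D, as placeholders.
import numpy as np

q_m_ref, K_ref = 14.4, 0.88                   # mg/g and 1/mM (reported values)
C = np.array([0.05, 0.1, 0.25, 0.5, 1.0])     # mM, adsorption-adjusted conc.
q = q_m_ref * K_ref * C / (1.0 + K_ref * C)   # mg/g, synthetic Langmuir data

slope, intercept = np.polyfit(1.0 / C, 1.0 / q, 1)
q_m_fit = 1.0 / intercept                     # recovers 14.4 mg/g
K_fit = intercept / slope                     # recovers 0.88 1/mM
print(f"q_m = {q_m_fit:.1f} mg/g, K_D = {K_fit:.2f} mM^-1")
```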
Compared with alternative natural manganese adsorbents (Omri & Benzina, 2012), a maximum adsorption capacity of 14.4 mg Mn²⁺ per gram of spores may not be particularly attractive. Equally, spores are not limited to the adsorption of manganese ions; the activated surface of acid-treated spores, for example, may find an application in removing heavy metal ions from contaminated waters. Additionally, this work has shown that spores can be easily functionalized with ferromagnetic nanoparticles, greatly increasing their magnetic susceptibility and enabling their use in a wide array of biotechnological applications, for example drug delivery.
| CONCLUSION
Bacillus spores are paramagnetic due to the high manganese content accumulated within the spore core and that associated with the spore coat (which may be surface bound or distributed throughout the coat).