A comparison of different bone graft materials in peri-implant guided bone regeneration
The aim of this study was to compare the effects of hydroxyapatite (HA), deproteinized bovine bone (DPB), human-derived allogenic bone (HALG), and calcium sulfate (CAP) graft biomaterials used with titanium barriers for bone augmentation in the treatment of peri-implant defects in rat calvaria by guided bone regeneration (GBR). Thirty-two female Sprague-Dawley rats were divided into four groups: DPB, HALG, HA, and CAP. One titanium barrier was fixed to each rat's calvarium after the titanium implant had been fixed; in total, 32 titanium implants and barriers were used. Ninety days after the surgical procedure, all the barriers were removed. After decalcification of the bone tissue, the titanium implants were removed gently, and new bone regeneration in the peri-implant area was analyzed histologically. Immunohistochemical staining of vascular endothelial growth factor (VEGF) was also performed. There were no statistically significant between-group differences in new bone regeneration or VEGF expression after 3 months. According to the results of the histological and immunohistochemical analyses, none of the grafts used in this study showed superiority with respect to new bone formation.
Introduction
The guided bone regeneration (GBR) method is used for the treatment of peri-implant bone tissue defects in oral-dental implantology.1 In GBR, a barrier membrane is used to preserve blood formation and create a closed area around the bone tissue defect.2,3,4 The GBR method encourages the proliferation of bone-forming cells called osteoblasts.2,3,4 In GBR, the barrier membrane must be permeable to enable the diffusion of nutrients required for bone regeneration.5 Previous experimental studies reported more successful bone tissue regeneration using the GBR procedure and a hermetically closed, stiff occlusive titanium barrier as compared with permeable membranes in a rabbit calvarium model.6,7,8 Autogenous bone grafts contain growth factors and promote the recruitment of stem cells.9,10 Due to their osteoinductive and osteoconductive properties, autogenous bone grafts are the current gold standard for bone augmentation procedures. However, autogenous bone grafts of extraoral origin also have a number of disadvantages. These include the need for a second surgical procedure; a limited supply of bone grafts; and postsurgery morbidity, pain, and neural damage in the donor bone area, as well as patient discomfort.11,12 Thus, various alternative bone graft materials, such as deproteinized bovine bone (DPB), human-derived allogenic bone (HALG), hydroxyapatite (HA), and calcium sulfate (CAP) bioceramic biomaterials, have been developed as alternatives to autologous grafts.9,10 Allografts, such as deproteinized human bone grafts, are among the most commonly used alternatives to autografts in the treatment of bone tissue defects. However, allografts have various disadvantages, including an increased risk of infections (hepatitis and HIV). Controversy also surrounds their osteoinductive potential. In experimental animal models, researchers reported increased bone regeneration using calcium phosphate ceramic-derived bone graft biomaterials (HA, tricalcium phosphate, and calcium sulfate), in addition to superior stability and osteogenic properties, compared to autologous bone grafts.12 In experimental and clinical research, another study demonstrated the osteoconductive capacity of this type of ceramic-derived bone graft biomaterial in a GBR procedure for the treatment of bone tissue defects.13 The aim of the present study was to compare the effects of HA-, DPB-, HALG-, and CAP-derived bone graft biomaterials used with titanium barriers on bone augmentation in a peri-implant GBR procedure in rat calvarium.
Methodology
The study consisted of 32 female Sprague-Dawley rats, which were provided by the Experimental Research Center of Firat University. The experimental protocol and procedures in this study were approved by the Animal Experimental Ethics Committee of Firat University (Elazig, Turkey). The welfare and care of the experimental animals complied with the guidelines of the Helsinki Declaration. Throughout the experiment, the rats were kept in standard cages and fed a standard diet, with access to drinking water ad libitum. The 32 rats were divided into four graft groups: HA, DPB, HALG, and CAP.
Rigid dome-shaped titanium barriers with a hole in the top were constructed, and the top was covered with a Teflon™ cap to produce a hermetic seal. Prior to the surgical procedures, all the titanium barriers were cleaned and sterilized. All the surgical procedures were performed under sterile conditions. General anesthesia was induced with 10 mg/kg of xylazine and 40 mg/kg of ketamine. After the induction of general anesthesia and before surgery, the skull skin was shaved and washed with povidone iodine. A skin incision was made over the linea media of the skull. To reach the skull bones, the flap and periosteum were lifted using a periosteal elevator. Nine holes were then created using a standard 1-mm-diameter steel burr, with water irrigation to prevent overheating of the bone. After this procedure, an implant cavity 2 mm long × 2 mm wide was created using a steel burr. Then, titanium implants 4 mm long × 2 mm wide with a machined surface were placed in the center of the grafted area. The titanium barriers were placed around the implants and holes. The edges of the titanium barriers were fixed to the skull bone tissue using the adhesive N-butyl-2-cyanoacrylate.
After these procedures, the grafts were embedded in the holes in the titanium barriers, and the holes were then covered with Teflon™ caps. The skull skin and soft tissue were sutured using resorbable sutures. An antibiotic and an analgesic were injected intramuscularly in all animals once a day for the first three postoperative days. After 3 months, all the rats were sacrificed by carbon dioxide inhalation. Following sacrifice, the titanium barriers were removed, and the calvarial bones containing the implants were harvested for histomorphometric and immunohistochemical analyses.
The original grafted bone tissue was used for histomorphometric and immunohistochemical analyses. The bone tissue samples were fixed in 10% formaldehyde solution for 72 h and demineralized in 10% formic acid solution. The implants were then gently removed from the samples. After decalcification, the bone tissue samples were dehydrated, embedded in paraffin wax blocks, and sectioned for hematoxylin and eosin (HE) and Masson's trichrome (MT) staining and microscopic analyses.
Sections 6 μm thick corresponding to the bone augmentation area were evaluated via light microscopy. New bone formation was determined by calculating the regenerated new bone area as a percentage of the total grafted area in the peri-implant bone tissues using an image analysis program. All images of the histological samples were taken with a digital camera attached to a light microscope, and the images were transferred to a computer at the original magnification.14 An Olympus DP71 (Tokyo, Japan) imaging software system was used for the histomorphometric analysis.
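As a concrete illustration of this kind of histomorphometric calculation, the following sketch computes a new-bone percentage from two hypothetical binary pixel masks; the function name and inputs are ours, and the original analysis was performed in the Olympus imaging software rather than Python.

```python
import numpy as np

def new_bone_percentage(new_bone_mask: np.ndarray, graft_area_mask: np.ndarray) -> float:
    """Regenerated bone area as a percentage of the total grafted area.

    Both arguments are boolean pixel masks of the same shape, e.g. obtained by
    thresholding a calibrated histological image.
    """
    graft_pixels = graft_area_mask.sum()
    if graft_pixels == 0:
        raise ValueError("The grafted-area mask is empty.")
    new_bone_pixels = np.logical_and(new_bone_mask, graft_area_mask).sum()
    return 100.0 * new_bone_pixels / graft_pixels
```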
The bone specimens were fixed by perfusion, decalcified, and embedded in paraffin as previously described. The paraffin blocks were incubated for 10 min in an oven at 60°C and then cut into 4-μm longitudinal sections. The sections were then transferred to an automatic staining machine for VEGF immunohistochemical staining. After the primary antibody step, the sections were washed with water and mounted in Ultramount.
In the immunohistochemical analysis, the staining ratio (%) of VEGF in the regenerated new bone areas was calculated using an image analysis program.14 All images of the histological samples were taken with a digital camera attached to a light microscope and transferred to a computer at the original magnification. An Olympus DP71 imaging software system was used for the immunohistochemical analysis.
Statistical analysis
SPSS 22 software was used for statistical analysis. The data were analyzed using one-way ANOVA and Tukey's HSD tests. A value of p < 0.05 was accepted as denoting a statistically significant difference.
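A minimal sketch of the same analysis in Python, using SciPy and statsmodels in place of SPSS; the per-rat values shown are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-rat new-bone-formation percentages for the four graft groups.
nbf = {
    "HA":   [12.1, 10.8, 13.4, 11.9, 12.6, 10.2, 13.0, 11.5],
    "DPB":  [11.7, 12.9, 10.5, 13.1, 12.0, 11.2, 12.4, 10.9],
    "HALG": [12.3, 11.1, 12.8, 10.7, 13.2, 11.8, 12.1, 11.4],
    "CAP":  [11.0, 12.5, 11.6, 12.9, 10.8, 12.2, 11.9, 12.7],
}

# One-way ANOVA across the four groups.
f_stat, p_value = stats.f_oneway(*nbf.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

if p_value < 0.05:  # run the post hoc Tukey HSD test only if the omnibus test is significant
    values = np.concatenate(list(nbf.values()))
    labels = np.repeat(list(nbf.keys()), [len(v) for v in nbf.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```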
Results
No fatal or nonfatal complications (such as wound infection) were encountered during the experiment.
No evidence of inflammatory activity was detected microscopically.
Discussion
The bone formation capacity of bone graft materials differs widely, and bone regeneration capacity influences the integration of implanted bone grafts.8,9,10 Although much progress has been made in recent years in oral implantology, autogenous bone grafts remain the gold standard in GBR procedures.8,10,11 They have a major advantage in that they supply not only bone volume but also osteogenic cells, which are capable of quickly laying down new bone. However, they also have various drawbacks, including increased patient morbidity, limited bone graft availability, and additional surgical time and costs.8,9,10,11 Thus, studies aimed at identifying substitutes have been conducted.
Previous studies of experimental animal bone defect grafting models reported that 3 months was sufficient time to induce healing and the emergence of angiogenesis and new bone formation.9,10,15 In the present study, we observed marked histological changes in the grafted bone defects 3 months after the grafting procedure in all four groups. The findings of the present study showed that bone formation beyond the normal skeletal limits occurred in a way similar to that observed in previous studies. In experimental animal studies, Manfro et al.16 and Maréchal et al.17 reported that new bone regeneration beyond the normal anatomic limits of the rabbit skull occurred with autogenous blood application. Min et al.7 demonstrated that new bone regeneration occurred after decortication of calvarial bone. In another experimental study, Ezirganli et al.9 reported that new bone formation took place with different bone grafts and decortication of the calvarium.
The most common grafts used today are autografts, allografts, demineralized bone matrix, xenografts (bovine), and bone graft substitutes (calcium sulfate, calcium phosphate, and HA).17 To determine which graft is most appropriate for a given condition, an understanding of the biological function (osteogenesis, osteoinduction, and osteoconduction) of each graft is necessary. Furthermore, stable conditions in the host are essential for the incorporation of any graft material. Despite their drawbacks, autogenous bone grafts remain the gold standard against which every substitute must be compared.
The results of the present study were in accordance with those of previous studies of experimental applications of xenografts, human allografts, HA, and calcium sulfate grafts. There are a few reports in the literature on xenograft bone substitutes. Some studies showed good results in animal models and clinical research, whereas others demonstrated slower integration of xenograft bone substitutes compared with human allografts or lower bone union rates, with persistent radiolucent lines and local complications.17,18,19 Calcium sulfate has been used many times as a bone void filler.18,19 Recently, surgical-grade calcium sulfate has been employed as a bone graft substitute.17,18,19,20 Multicenter clinical studies demonstrated that trabecular bone filling in autografts was qualitatively similar to that seen in calcium sulfate grafts.17,18,19,20,21 They also showed that surgical-grade calcium sulfate was a host-friendly and environmentally friendly biomaterial, which induced satisfactory bone production.17,18,19,20 Researchers also demonstrated that the histological grade score for calcium sulfate was similar to that of other graft substitutes.17,18,19,20,21 Alloplastic bone graft materials should be biocompatible and should not be antigenic or trigger the inflammatory process.20,22,23,24 A previous study revealed that HA-derived synthetic bone grafts stimulated new bone tissue formation and had high osteogenic potential.21 HA-derived synthetic bone grafts, when compared with autogenous bone, were shown to encourage new bone formation in experimental animal studies, with excellent stability and new bone regenerative properties. Due to their content and structure, HA bone grafts dissolved slowly and were displaced gradually by bone tissue.18 Demineralized human bone allografts are thought to have osteoinductive capabilities and fast resorption, with bone ingrowth.17 Demineralized freeze-dried bone allografts are extensively utilized in regenerative oral implantology, as they possess excellent osteoconductive potential.25,26,27

Angiogenesis is the most important pathophysiological process in bone repair and formation. VEGF has an important role in angiogenesis and induces angiopoiesis via vascular endothelial cells. VEGF, which is expressed by endothelial cells, is one of the most important cytokines in angiogenesis and is associated with bone formation, mesenchymal condensation, cartilage formation, cartilage resorption, and blood vessel invasion.28 In the immunohistochemical examination, VEGF expression was detected in all groups at similar levels, with no significant difference between the groups.
In the present study, new bone tissue regeneration was evident in all the groups three months after implantation, with no statistically significant between-group differences. The histological findings indicated that all four graft materials (HA, DPB, HALG, and CAP) exhibited osteoconductive properties.
Conclusion
The present study compared the histological properties of several bone graft substitutes that are widely utilized today. According to the results, none of the grafts showed superiority with respect to new bone formation. Additional studies are needed to define the indications, specifications, limitations, and contraindications of GBR in the treatment of peri-implant bone defects.
Figure 3. Immunohistochemical staining of VEGF in the a) calcium sulfate, b) deproteinized bovine, c) human-derived allogenic, and d) hydroxyapatite bone graft groups.
Table. New bone formation (NBF) and vascular endothelial growth factor (VEGF) percentages of the groups (one-way ANOVA, p > 0.05).
"Medicine",
"Materials Science",
"Biology"
] |
Deficiency in the phosphatase PHLPP1 suppresses osteoclast-mediated bone resorption and enhances bone formation in mice
Enhanced osteoclast-mediated bone resorption and diminished formation may promote bone loss. Pleckstrin homology (PH) domain and leucine-rich repeat protein phosphatase 1 (Phlpp1) regulates protein kinase C (PKC) and other proteins in the control of bone mass. Germline Phlpp1 deficiency reduces bone volume, but the mechanisms remain unknown. Here, we found that conditional Phlpp1 deletion in murine osteoclasts increases their numbers, but also enhances bone mass. Despite elevating osteoclasts, Phlpp1 deficiency did not increase serum markers of bone resorption, but elevated serum markers of bone formation. These results suggest that Phlpp1 suppresses osteoclast formation and production of paracrine factors controlling osteoblast activity. Phlpp1 deficiency elevated osteoclast numbers and size in ex vivo osteoclastogenesis assays, accompanied by enhanced expression of proto-oncogene C-Fms (C-Fms) and hyper-responsiveness to macrophage colony-stimulating factor (M-CSF) in bone marrow macrophages. Although Phlpp1 deficiency increased TRAP+ cell numbers, it suppressed actin-ring formation and bone resorption in these assays. We observed that Phlpp1 deficiency increases activity of PKCζ, a PKC isoform controlling cell polarity, and that addition of a PKCζ pseudosubstrate restores osteoclastogenesis and bone resorption of Phlpp1-deficient osteoclasts. Moreover, Phlpp1 deficiency increased expression of the bone-coupling factor collagen triple helix repeat-containing 1 (Cthrc1). Conditioned growth medium derived from Phlpp1-deficient osteoclasts enhanced mineralization of ex vivo osteoblast cultures, an effect that was abrogated by Cthrc1 knockdown. In summary, Phlpp1 critically regulates osteoclast numbers, and Phlpp1 deficiency enhances bone mass despite higher osteoclast numbers because it apparently disrupts PKCζ activity, cell polarity, and bone resorption and increases secretion of bone-forming Cthrc1.
Bone modeling, remodeling, and repair in response to injury all occur through coordinated bone resorption and bone deposition; thus, cellular activities of osteoclasts and osteoblasts must be tightly coupled to maintain optimal bone health. Osteoclasts function in these processes through direct bone resorbing activity, but also via secretion of paracrine factors that stimulate osteoblast-mediated bone production. Because disruptions of osteoclast differentiation and/or activity impact bone resorption and coupling to bone deposition, a better understanding of osteoclast biology will help design strategies to limit bone loss.
Osteoclasts arise from the fusion of monocyte/macrophage progenitors. This process is facilitated by the actions of two cytokines, macrophage colony-stimulating factor (M-CSF; Csf1) and receptor activator of nuclear factor κB ligand (RANKL), which are necessary and sufficient for osteoclastogenesis. Whereas RANKL is required for definitive osteoclast differentiation, M-CSF promotes the proliferation, survival, and differentiation of osteoclast precursors (1). M-CSF exerts its actions through the C-fms receptor encoded by the Csf1r gene. Binding of M-CSF to the C-fms receptor induces receptor autophosphorylation and activation of downstream signaling kinases, including Akt and Mek/Erk.
Once formed, osteoclasts become highly polarized cells. They resorb bone through the secretion of H+ ions and matrix-degrading enzymes through a specialized portion of the basal plasma membrane known as the ruffled border (2). To limit bone resorption to a defined area, osteoclasts must tightly adhere to the bone surface via formation of actin ring structures, creating resorption lacunae. Disruptions in osteoclast polarity suppress bone resorption (3).
Phlpp1 (Phlpp, Scop, Plekhe1, Ppm3a) was identified in 2005 by searching the NCBI database for pleckstrin homology (PH) domain-containing proteins. It is a member of the underexplored type 2C protein phosphatase family that is insensitive to traditional phosphatase inhibitors (4). Although Phlpp1 is broadly expressed, its levels are controlled by numerous cellular mechanisms, including transcription and proteolysis (5). Phlpp1 is poised to be a critical regulator of bone mass because it dephosphorylates and inactivates numerous pathways that promote bone cell function. These substrates include Akt, Raf1, Histone 3, Mst1, p70 S6K, and both the typical (e.g., α/β) and atypical (e.g., ζ/ι) PKC isoforms (4-9). Germline knockout mice of this gene are viable, but show enhanced growth factor responsiveness, cellular proliferation, and survival (4). Our group has shown that Phlpp1 deficiency decreases body size, long bone length, and bone volume (10); however, the cell type-specific functions of Phlpp1 could not be elucidated with this model. Here we define the osteoclast-specific functions of Phlpp1 and demonstrate that deletion of Phlpp1 in mature bone-resorbing osteoclasts enhances bone volume and mineral density.
Conditional deletion of Phlpp1 in Ctsk-expressing cells increases bone volume and bone mineral density
Germline deletion of Phlpp1 causes reductions in body size, limb length, and bone mass (10, 11). Although changes in chondrocyte proliferation were noted, the effects of Phlpp1 deletion in other skeletal cell types on the gross phenotype were not determined. To examine the cell type-specific effects of Phlpp1, we generated a novel mouse model that allows for its conditional deletion. LoxP sequences were inserted in introns flanking exon 4 with TALEN technology (Fig. 1A). Sequencing demonstrated successful insertion of the loxP sites. To determine the function of Phlpp1 in mature, bone-resorbing osteoclasts, we crossed Phlpp1 fl/fl mice with mice expressing Cre recombinase under the control of the Ctsk promoter (Ctsk-Cre) (12). To visualize recombination at the Phlpp1 locus, we isolated DNA from bone marrow macrophage-derived osteoclasts of Cre+ control, Phlpp1 fl/+:Cre− and Phlpp1 cHet Ctsk mice. Successful recombination of the allele was observed in mature Phlpp1 cHet Ctsk osteoclasts (Fig. 1B). Reductions in Phlpp1 levels were also noted by qPCR and Western blotting (Fig. 9, E and G).
We next assessed the bone phenotype of 12-week-old Phlpp1 cKO Ctsk mice. Radiographic analyses showed no overall reductions in hind limb length or gross abnormalities in male or female mice (Fig. 2A). No cancellous bone changes were noted in male mice; however, micro-CT analyses of the distal femur trabeculae revealed a 55% increase in the BV/TV of female Phlpp1 cKO Ctsk mice as compared with their Cre+ control littermates (Fig. 2D). Likewise, bone mineral density was elevated by 29% in female Phlpp1 cKO Ctsk mice compared with their Cre+ control littermates (Fig. 2E). This was accompanied by an increase in trabecular number and a corresponding decrease in trabecular spacing (Fig. 2, F-H). Trabecular thickness was modestly increased (Fig. 2I). Reconstructions of female Phlpp1 cKO Ctsk and Cre+ control littermate femurs are shown in Fig. 2, B and C.
Phlpp1 controls osteoclast activities
Other groups have documented activity of Ctsk-Cre drivers within mesenchymal lineage cells, including perichondrial cells, hypertrophic chondrocytes, cells within the groove of Ranvier, and potentially cells within the ovaries and testes (12)(13)(14)(15)(16)(17). Phlpp1 was expressed within the growth plate of Phlpp1 cKO Ctsk mice and was present within osteoblasts and osteocytes of trabecular and cortical bone (Fig. S1). Recombination of the Phlpp1 allele was not observed within the ovaries, testes, or liver. No changes in Phlpp1 expression within the ovaries, testes, and bone shaft, which is primarily populated by osteocytes, were observed (Fig. S1). No changes in cortical bone were noted in male or female mice. Osteoblasts obtained from Phlpp1 cKO Ctsk mice mineralized normally (Fig. S2).
Bone histomorphometric analyses were conducted and confirmed that female Phlpp1 cKO Ctsk mice have a 2-fold increase in BV/TV as compared with their Cre+ littermate controls (Fig. 3, A-C). TRAP staining demonstrated increased osteoclast number and osteoclasts per bone surface (Fig. 3, A, B, D, and E). No alteration in osteoclast perimeter per osteoclast was observed, indicating that osteoclast size was not altered in vivo (Fig. 3F).
We also evaluated the effect of osteoclast-specific Phlpp1 deletion on vertebral bone. Connectivity density (19%) and trabecular number (14%) were increased within the cranial region of the L5 vertebral bodies of female mice (Fig. 4, A and B). Trabecular spacing was reduced by 17% (Fig. 4C). Modest to negligible changes were observed in trabecular thickness, structure model index, and BV/TV (Fig. 4, D-F).
Deletion of Phlpp1 enhances osteoclast numbers and M-CSF responsiveness
To evaluate possible mechanisms for the enhanced osteoclast number in Phlpp1 cKO Ctsk mice, we collected bone marrow macrophages from Phlpp1 fl/fl mice. Cells were transduced with AdGFP or AdCre on day 0, and osteoclastogenesis assays were performed. Genomic DNA was collected from multinucleated cells on day 4, and recombination of the Phlpp1 allele in the presence of AdCre transduction was validated (Fig. 5A). AdCre-infected cultures produced greater numbers of larger TRAP+ osteoclasts than AdGFP-transduced controls.
Phlpp1 deficiency increases receptor tyrosine kinase expression, such as that of the epidermal growth factor receptor, in mouse embryonic fibroblasts (18). To test the hypothesis that Phlpp1 suppression in osteoclasts enhances expression of the receptor tyrosine kinase c-fms, we performed M-CSF dose-response assays on Phlpp1 fl/fl BMMs transduced with AdGFP or AdCre. M-CSF dose-dependently increased TRAP+ osteoclast numbers and area in AdGFP-transduced cells on day 4 of culture. We next sought to determine whether Phlpp1 deficiency enhanced M-CSF signaling during osteoclastogenesis. BMMs from female WT or Phlpp1−/− mice were cultured for 3 days with RANKL and M-CSF to select for pre-osteoclast cultures. These cells were then cultured in serum-free medium for 1 h and exposed to M-CSF for the indicated times (Fig. 6A). Phosphorylation of Akt2, Mek1/2, and Erk1/2 was higher following exposure to M-CSF in Phlpp1−/− pre-osteoclasts (Fig. 6A). c-fms expression was higher in Phlpp1−/− BMMs (Fig. 6B), as well as in mature Phlpp1−/− osteoclasts (Fig. 6C).
Phlpp1 promotes osteoclast polarization and bone resorption and represses PKCζ activity
Enhanced responsiveness to M-CSF could in part account for the increase in osteoclast number exhibited by Phlpp1 cKO Ctsk mice, but it does not explain the increased BV/TV and bone mineral density also observed. Phlpp1 dephosphorylates and destabilizes multiple PKC isoforms. Among those, Phlpp1 inactivates the atypical isoform PKCζ to control cell polarity (19). Establishment of cell polarity is crucial for actin ring formation, development of resorption lacunae, and secretion of the bone-resorbing enzymes and H+ ions required for osteoclast-mediated bone resorption (20, 21). Osteoclasts derived from Phlpp1 cKO Ctsk bone marrow macrophages exhibited disrupted actin rings of lesser thickness, as visualized by phalloidin binding (Fig. 7A). This was accompanied by a 37% decrease in the formation of resorption pits when Phlpp1 cKO Ctsk osteoclasts were cultured on bovine bone slices (Fig. 7C). Elevated osteoclast numbers in vivo were confirmed by increased circulating serum TRAP levels (Fig. 7E), but serum CTX-1 levels were not changed in 12-week-old Phlpp1 cKO Ctsk female mice (Fig. 7F). In contrast, serum P1NP was elevated by 36%, indicating increased bone formation in Phlpp1 cKO Ctsk female mice (Fig. 7G). Osteoclasts generated from 12-week-old female Phlpp1 cKO Ctsk mice also had increased phosphorylation of PKCζ at the activation loop (Ser-410) and at Ser-560 within the turn motif as compared with Cre+ control osteoclasts (Fig. 7D).
We next determined whether the effects of Phlpp1 deficiency on osteoclastogenesis and bone resorption were due to increased PKCζ activity. Bone marrow macrophages were collected from WT or Phlpp1 cKO Ctsk mice and cultured with M-CSF and RANKL. On day 3 of differentiation, cells were exposed to a PKCζ pseudosubstrate (1 μM) for 24 h. On day 4, actin ring formation was visualized by phalloidin Alexa 488 binding. The PKCζ pseudosubstrate partially restored actin ring formation in Phlpp1 cKO Ctsk osteoclasts (Fig. 8A). The increase in osteoclast number induced by Phlpp1 deficiency was attenuated by the PKCζ pseudosubstrate (Fig. 8, B and D). We then determined whether blocking PKCζ activity likewise restored bone resorption by Phlpp1 cKO Ctsk osteoclasts. Blocking PKCζ activity also restored the number and size of pits formed by Phlpp1-deficient osteoclasts (Fig. 8, C, E, and F).
Phlpp1 suppresses the coupling of bone resorption to bone formation
Decreased osteoclast activity along with increases in bone suggested that Phlpp1 deficiency also regulated coupling to ossification. We therefore assessed osteoblast numbers within Cre+ control and Phlpp1 cKO Ctsk 12-week-old mice and found that osteoblast number per total area was increased by 3.2-fold and osteoblast number per bone surface was elevated by 2.4-fold in Phlpp1-deficient females (Fig. 9, A-C). Although there was a trend toward an increased mineralizing surface per bone surface (Fig. 9D), measures of dynamic histomorphometry (e.g., MAR, BFR/BS) were unchanged (Fig. S2). We surveyed a panel of known osteoclast coupling factors (Bmp6, Cardiotrophin 1, Cthrc1, Ephrin B2, Semaphorin D, Sphk1/2, and Wnt10b) and found that expression of Cthrc1 was elevated by 3.5-fold in Phlpp1 cKO Ctsk osteoclasts (Fig. 9, E-G). Expression of all other coupling factors was unchanged by Phlpp1 deficiency. To determine whether increased Cthrc1 expression by Phlpp1-deficient osteoclasts was responsible for enhanced bone formation, we performed an ex vivo coupling experiment (Fig. 9H). Phlpp1 cKO Ctsk osteoclast-conditioned medium enhanced Alizarin Red staining of female calvarial osteoblast cultures. Knockdown of Cthrc1 in Phlpp1 cKO Ctsk osteoclast cultures attenuated the effect of the conditioned medium on Alizarin Red staining (Fig. 9, I and J). No increase in Alizarin Red staining was observed in male calvarial osteoblasts cultured in Phlpp1 cKO Ctsk osteoclast-derived conditioned medium. These data demonstrate that the enhanced osteoclast Cthrc1 secretion induced by Phlpp1 deficiency promotes ex vivo osteoblast mineralization.
Discussion
We previously demonstrated that germline deletion of the serine/threonine phosphatase Phlpp1 diminished bone volume, but the mechanism for this phenotype was unclear. To define the cell-specific functions of Phlpp1, we generated a Phlpp1-floxed allele. In this report, we show that deletion of Phlpp1 in Ctsk-Cre-expressing cells increases bone volume and mineral density. Despite elevated osteoclast numbers, bone formation in Phlpp1 cKO Ctsk mice is also enhanced. We attribute these findings to decreased osteoclast-mediated bone resorption due to disruption of osteoclast polarity, with a concomitant increase in coupling to ossification (see Fig. 10 for a model). The enhanced bone mass and coupling-to-ossification phenotype is specific to female mice. Phlpp1 controls the activity of nongenomic estrogen signaling mediators (e.g., Akt, Erk1/2); thus, future work will be aimed at determining whether the sex-dependent phenotype of Phlpp1 cKO Ctsk mice occurs via this mechanism.
The Ctsk-Cre transgene is active in osteoclasts and we have observed recombination of the Phlpp1 allele in osteoclasts using this Cre driver (12). Prior reports demonstrate that Ctsk-Cre drivers can be active within mesenchymal lineage cells including perichondrial cells, hypertrophic chondrocytes, and within the groove of Ranvier (13)(14)(15); however, the specific mesenchymal cell populations expressing the Ctsk-Cre driver were not consistent between these studies. We did not see a cortical bone phenotype or change in limb length that was observed with germline deletion of Phlpp1 (10). We therefore attribute our findings to the effects of Phlpp1 deficiency within osteoclasts.
Germline deletion of Phlpp1 results in decreased bone volume. Phlpp1 is widely expressed throughout the body and could impact a number of cell types controlling bone volume (e.g., osteoclasts, osteoblasts, brain, and endocrine tissues). To delineate the effects of Phlpp1 within these different cell types, we generated the floxed allele described here. The effects of Phlpp1 deletion within other cell types will be explored in future studies.
In this report we show that Phlpp1 represses the receptor tyrosine kinase c-fms. Phlpp1 deficiency was previously shown to decrease histone phosphorylation and acetylation, leading to changes in growth factor receptor and RTK expression (18). We attribute the increased osteoclastogenesis observed in our ex vivo Phlpp1-deficient cultures to enhanced c-fms expression. Likewise, hyper-responsiveness to M-CSF and enhanced signaling following exposure to M-CSF could be due to increased receptor density. We did observe two Phlpp1 bands in Western blots derived from D3 cells (Fig. 6A), whereas osteoclast precursors or mature D4 cells exhibited a single band (Figs. 5C, 6B, and 7G). There are two Phlpp1 isoforms, so this observation could be explained by a change in Phlpp1 isoform expression. Future work will be aimed at understanding the specific role of each Phlpp1 isoform and how each controls c-fms expression and commitment of progenitor cells to the osteoclast lineage. Increased osteoclast number with a concomitant increase in bone is a phenotype that has been observed previously, including in mice with c-Src deficiency (22). Src is a nonreceptor tyrosine kinase; as such, it would be interesting to evaluate the effects of Phlpp1 on c-Src expression and/or activity.
Here we show that Phlpp1 deficiency leads to enhanced PKCζ activity within osteoclasts, concomitant with disruptions in actin ring formation and bone resorption that are restored by the addition of a PKCζ pseudosubstrate. Osteoclast-mediated bone resorption depends on cellular polarization, with the basal cell membrane forming a tight seal against the bone via actin ring formation. Osteoclasts also establish the ruffled border along this membrane, which promotes the secretion of matrix-degrading enzymes and H+ ions. In other cell types, establishment of cell polarity is controlled by PKCζ activity (19). Full PKCζ kinase activity requires phosphorylation of the activation loop and the turn motif at Ser-410 and Ser-560, respectively (23-25). This is antagonized by Phlpp1-mediated dephosphorylation of PKCζ, leading to disruption of the Par complex that facilitates cell polarity (19, 26, 27). Likewise, in macrophages PKCζ regulates actin polymerization during phagocytosis (28). The PKCζ-Par complex has been observed in osteoclasts (20), but the requirement for PKCζ activity and Par-complex function in osteoclasts has not been evaluated and is a subject for future study.
Increased bone mass and mineral density occurring along with increased osteoclast numbers also suggested that Phlpp1 deficiency within Ctsk-expressing cells enhanced coupling to bone formation. We surveyed a panel of known ossification coupling factors produced by osteoclasts, including Bmp6, Cardiotrophin 1, Cthrc1, Ephrin B2, Semaphorin D, Sphk1/2, and Wnt10b. Of these, only Cthrc1 expression was elevated in Phlpp1-deficient osteoclasts. Cthrc1 is a hormone produced by bone cells and functions within the Wnt/PCP pathway (29-31). Germline deletion of Cthrc1 diminishes bone mass and bone formation; in contrast, forced Cthrc1 expression by osteoblasts increases bone mass (30). Cthrc1 is also produced by osteoclasts, and its expression markedly increases during differentiation. Osteoclast-specific deletion of Cthrc1 decreases bone mass, whereas osteoblast-specific deletion does not (31). Our results demonstrate that Phlpp1 suppresses expression of the bone-coupling factor Cthrc1. The mechanism by which Phlpp1 controls Cthrc1 transcript levels will be explored in future studies. Together, our data demonstrate that Phlpp1 deficiency enhances bone mass despite increased osteoclast numbers, owing to loss of cell polarity and bone resorption concomitant with increased bone formation coupling facilitated by Cthrc1.
Generation of Phlpp1 conditional knockout mice
Phlpp1-floxed/floxed (Phlpp1 fl/fl) mice were generated by insertion of loxP sites surrounding exon 4 using a TALEN-mediated approach (Fig. 1). Phlpp1 fl/fl mice were crossed with mice expressing Cre recombinase under the control of the Ctsk promoter to delete Phlpp1 within Ctsk-expressing cells (12). Mice were genotyped for Cre as previously described (32) or for the Phlpp1-floxed allele using the following primers: forward: 5′-CAGTGGATATCTGGATAATC-3′, reverse: 5′-GATGAGTGTTTTCATGAGGA-3′. Conditional knockout animals from these crossings are referred to as Phlpp1 cKO Ctsk mice and are on the C57Bl/6 background. Cre+ littermates from these crossings were used as controls as appropriate. Phlpp1−/− animals were genotyped as previously described (11). Animals were housed in an accredited facility under a 12-h light/dark cycle and provided water and food ad libitum. All animal research was conducted according to guidelines provided by the National Institutes of Health and the Institute of Laboratory Animal Resources, National Research Council. The Mayo Clinic Institutional Animal Care and Use Committee approved all animal studies.
Radiographs and micro-computed tomography
Radiographs of the right hind limb of 12-week-old mice were collected using a Faxitron X-ray imaging cabinet (Faxitron Bioptics, Tucson, AZ). Femurs from 12-week-old male and female Phlpp1 cKO Ctsk mice (n = 3) and their Cre+ control littermates (n = 3) were isolated and fixed in 10% neutral buffered formalin for 48 h. Femurs were then stored in 70% ethanol prior to scanning at 70 kV, 221 ms, with a 10.5-μm voxel size using a Scanco Viva40 micro-CT. For cortical bone analyses, a region of interest was defined at 10% of total femur length beginning at the femoral midpoint; the outer cortical shell was defined and samples were analyzed with a midshaft analysis using a 260-threshold air-filling correction. For trabecular measurements, a region of interest was defined at 10% of total femur length starting immediately proximal to the growth plate; samples were analyzed using a 220-threshold air-filling correction.

Figure 8. PKCζ is a downstream target of Phlpp1 that controls osteoclastogenesis and bone resorption. Bone marrow macrophages were collected from female Phlpp1 cKO Ctsk and Cre+ control littermates. Cells were cultured in the presence of 60 ng/ml of RANKL and 25 ng/ml of M-CSF. On day 3, cells were exposed to a PKCζ pseudosubstrate for 24 h, and (A) phalloidin Alexa 488, DAPI, and phase-contrast images were collected. TRAP staining was performed (B), and the number of osteoclasts was determined (D). *, p < 0.05. C, cells were seeded onto bovine bone slices and cultured in the presence of 60 ng/ml of RANKL and 25 ng/ml of M-CSF for 14 days in the presence of the PKCζ pseudosubstrate or vehicle control (PBS). E and F, resorption pits were visualized by toluidine blue staining, and the number of pits (E) and the percentage of each bone slice resorbed (F) were evaluated using ImageJ software. *, p < 0.05.
Histology and static and dynamic bone histomorphometry
Following micro-CT analyses, femurs from 12-week-old mice were decalcified in 15% EDTA for 14 days. Tissues were paraffin embedded, and 5-μm sections were collected and TRAP/Fast Green stained (Sigma, number 387A-1KT). Standardized histomorphometry was performed using Osteomeasure software (33). Calcein injections (10 mg/kg, i.p.) were also administered to mice 4 days and 1 day prior to euthanasia at 12 weeks of age, as previously described (32). Femurs from calcein-labeled mice were fixed in 10% neutral buffered formalin for 48 h, transferred to 70% ethanol, and embedded in plastic. Calcein labeling was evaluated using standard histomorphometric techniques as previously described (32).
ELISAs for serum markers of bone formation and resorption
Serum was collected from 12-week-old Phlpp1 cKO Ctsk female mice (n = 3) and their Cre+ control littermates (n = 3) and stored at −80°C. An enzyme-linked immunosorbent assay (ELISA) for bone resorption (CTX-1) was performed in duplicate using 20 μl of serum from each mouse according to the manufacturer's specifications (RatLaps (CTX-1), number AC-06F1, Immunodiagnostic Systems). Bone formation was assessed using an ELISA for serum P1NP performed in duplicate using 5 μl of serum from each mouse as described by the manufacturer (RatLaps (P1NP), number AC-33F1, Immunodiagnostic Systems).
Ex vivo osteoclastogenesis, transfection, and bovine bone resorption assays
Bone marrow macrophages were collected from 4- to 6-week-old WT, Phlpp1 cKO Ctsk, or Phlpp1 fl/fl male or female mice as previously described (34). Cells were cultured overnight in phenol red-free α-MEM in the presence of 35 ng/ml of rM-CSF (number 410-ML, R&D Systems, Minneapolis, MN). Nonadherent cells were collected and cultured with 60 ng/ml of rRANKL (number 315-11, PeproTech, Rocky Hill, NJ) and 35 ng/ml of M-CSF. For dose-response assays using Phlpp1 fl/fl cells, cells were exposed to increasing M-CSF concentrations and infected with adenoviral (Ad) GFP or AdCre at an m.o.i. of 300 as previously described (35)(36)(37). The PKCζ pseudosubstrate (number 1791, Tocris, Minneapolis, MN) was used at 1 μM and added at day 3 for counting assays. For bone resorption assays, nonadherent cells were seeded onto bovine bone slices (number NC1309388, Fisher Scientific) and cultured with 60 ng/ml of RANKL and 35 ng/ml of M-CSF in 96-well plates. Cells were fed every 3 to 4 days with phenol red-free α-MEM supplemented with 35 ng/ml of M-CSF and 60 ng/ml of RANKL. For PKCζ pseudosubstrate experiments, bone marrow macrophages from WT and Phlpp1 cKO Ctsk mice were cultured on bovine bone slices in the presence of M-CSF and RANKL. On day 3, the PKCζ pseudosubstrate was added to the cultures. On day 14, cells were lysed with 10% domestic bleach, and bone slices were stained with 1% toluidine blue. For knockdown experiments, ON-TARGETplus siRNA SMARTpools targeting Cthrc1 or nontargeting siRNAs were purchased from Dharmacon (Cthrc1 siRNA ON-TARGETplus SMARTpool, number 68588; ON-TARGETplus Non-targeting Pool, number D-001810-10; Lafayette, CO). Osteoclast precursor cells were transfected on day 1 with each siRNA using Lipofectamine RNAiMAX at a 1:1 ratio.
Osteoblast cell mineralization assays
Calvarial osteoblasts were collected from P7 male or female WT mice as previously described (38). Osteoblasts were cultured (0.65 × 10^6 cells/cm²) in conditioned medium derived from Cthrc1 or control siRNA-transfected female Phlpp1 cKO Ctsk or WT littermate osteoclasts, diluted 1:1 in fresh α-MEM and supplemented with 20% FBS, 50 μg/ml of ascorbate, 10 mM β-glycerophosphate, and 1 × 10^−7 M dexamethasone. Cultures were fed every 3-4 days with the respective conditioned medium plus osteogenic supplements. Cells were fixed and stained with Alizarin Red on day 14.

Cre+ control littermates (n = 3) were aged to 12 weeks. Femora were decalcified, embedded, and sectioned. Masson's trichrome staining was performed (A), and the number of osteoblasts per total area (B) and the number of osteoblasts per bone perimeter (C) were evaluated by histomorphometry. *, p < 0.05. D, female Phlpp1 cKO Ctsk mice (n = 3) and Cre+ control littermates (n = 3) were aged to 12 weeks, and calcein was injected 4 days and 24 h prior to euthanasia. Femora were plastic embedded and sectioned, and dynamic histomorphometry was performed to determine the mineralizing surface per bone surface (MS/BS). Bone marrow macrophages were collected from 12-week-old female Phlpp1 cKO Ctsk and Cre+ control littermates. Cells were cultured in the presence of 60 ng/ml of RANKL and 25 ng/ml of M-CSF for 4 days. Expression of (E) Phlpp1 and (F) Cthrc1 was determined by qPCR. **, p < 0.01. G, Western blotting was performed as indicated. H, schematic depiction of the experiment performed in I and J. I, bone marrow macrophages derived from WT or Phlpp1 cKO Ctsk female mice were transfected with siRNAs targeting Cthrc1 or control siRNAs. Western blotting was performed to confirm Cthrc1 knockdown. J, WT calvarial osteoblasts were cultured in conditioned medium derived from female Phlpp1 cKO Ctsk or WT littermate osteoclasts transfected with siRNAs targeting Cthrc1 or control siRNAs (PBS). Alizarin Red staining of male and female WT calvarial osteoblasts was performed.
RNA extraction and semi-quantitative PCR
Total RNA was extracted from primary osteoclasts using TRIzol (Invitrogen) and chloroform, and 2 μg was reverse transcribed using the SuperScript III first-strand synthesis system (Invitrogen). The resulting cDNAs were used to assay gene expression via real-time PCR using the following gene-specific primers: Phlpp1 (5′-CTCCAAGGTTGCATCACAGC-3′, 5′-CGCAGGGCATTGCAAGATAC-3′); Cthrc1 (5′-ATCCCAGGTCGGGATGGATT-3′, 5′-CGTGAATGTACACTCCGCAA-3′); and Tubulin (5′-CTGCTCATCAGCAAGATCAGAG-3′, 5′-GCATTATAGGGCTCCACCACAG-3′) (32). Fold changes in gene expression for each sample were calculated using the 2^−ΔΔCq method relative to control after normalization of gene-specific Cq values to tubulin Cq values (32). Each experiment was performed in triplicate and repeated at least three times. Results from a representative experiment are shown.
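For illustration, a small helper implementing the 2^−ΔΔCq calculation described above; the function name is ours and the Cq values are hypothetical placeholders, not data from this study.

```python
import numpy as np

def fold_change(cq_target_sample, cq_ref_sample, cq_target_control, cq_ref_control):
    """Relative expression by the 2^-ddCq method.

    Cq values of the gene of interest are first normalized to the reference
    gene (tubulin here), then the treated sample is expressed relative to the
    control sample.
    """
    d_cq_sample = cq_target_sample - cq_ref_sample    # normalize to tubulin
    d_cq_control = cq_target_control - cq_ref_control
    dd_cq = d_cq_sample - d_cq_control                # express relative to control
    return 2.0 ** (-dd_cq)

# Hypothetical triplicate Cq values for Cthrc1 in knockout vs. control osteoclasts.
fc = fold_change(np.array([24.1, 24.3, 24.0]), np.array([18.2, 18.1, 18.3]),
                 np.array([26.0, 25.8, 26.1]), np.array([18.0, 18.2, 18.1]))
print("mean fold-change:", fc.mean())
```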
Western blotting
Cell lysates were collected on ice in a buffered SDS solution (0.1% glycerol, 0.01% SDS, 0.1 M Tris, pH 6.8). Total protein concentrations were obtained with the Bio-Rad DC assay (Bio-Rad). Proteins (20 μg) were probed with antibodies against the indicated proteins, actin (Sigma, A2228), and tubulin (Developmental Studies Hybridoma Bank, E7), followed by corresponding secondary antibodies conjugated to horseradish peroxidase (Cell Signaling Technology). Antibody binding was detected with the SuperSignal West Femto Chemiluminescent Substrate (Pierce Biotechnology, Rockford, IL). Each experiment was repeated at least three times, and data from a representative experiment are shown.
TRAP and immunofluorescence staining
Cells were fixed in 10% neutral buffered formalin for 10 min and then washed three times with phosphate-buffered saline (PBS). Fixed cells were TRAP stained using the Acid Phosphatase, Leukocyte (TRAP) Kit (number 387A-1KT, Sigma). For c-fms immunofluorescence, cells were permeabilized with ice-cold methanol for 10 min at −20°C, blocked in PBS, 5% normal goat serum, 0.3% Triton X-100, and then washed three times with PBS. Cells were then incubated with an anti-c-fms antibody (diluted 1:50 in PBS, 1% BSA, 0.3% Triton X-100) overnight at 4°C. Cells were then washed three times with PBS and incubated with an Alexa 488-coupled goat anti-rabbit antibody (number 150077, Abcam, Cambridge, MA) diluted 1:200 in PBS, 1% BSA, 0.3% Triton X-100. For phalloidin staining, cells were incubated in 0.33 μM phalloidin Alexa 488 (number 8878, Cell Signaling Technology) for 15 min. Cells were washed once with PBS, and staining was visualized using wide-field fluorescence. Each experiment was repeated at least three times, and data from a representative experiment are shown.
Imaging and quantification
For osteoclastogenesis experiments, three images were collected per cover glass using a ×10 objective. Three cover glasses were used per experiment. The number and area of osteoclasts in each image were quantified using ImageJ software. The average osteoclast area and the average number of osteoclasts per field were determined. A logarithmic curve fit was applied to describe the osteoclast number and area data resulting from increasing M-CSF concentrations. Each experiment was repeated independently three times.
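A sketch of the logarithmic curve fit described above, using SciPy rather than the original analysis software; the dose and count values are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_model(dose, a, b):
    """Logarithmic dose-response curve: counts = a * ln(dose) + b."""
    return a * np.log(dose) + b

# Hypothetical mean osteoclast counts per field at increasing M-CSF doses (ng/ml).
dose = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
counts = np.array([4.0, 9.0, 16.0, 21.0, 24.0])

params, _ = curve_fit(log_model, dose, counts)
print("fitted parameters a, b:", params)
```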
Statistics
Data are presented as the mean ± S.D. p values were determined with Student's t test when only one experimental comparison was made. For assessment of significance with more than two conditions, a one-way analysis of variance was performed. p < 0.05 was considered statistically significant. Statistical analyses were performed using GraphPad Prism 7 software.
"Biology",
"Medicine"
] |
V-Spline: An Adaptive Smoothing Spline for Trajectory Reconstruction
Trajectory reconstruction is the process of inferring the path of a moving object between successive observations. In this paper, we propose a smoothing spline—which we name the V-spline—that incorporates position and velocity information and a penalty term that controls acceleration. We introduce an adaptive V-spline designed to control the impact of irregularly sampled observations and noisy velocity measurements. A cross-validation scheme for estimating the V-spline parameters is proposed, and, in simulation studies, the V-spline shows superior performance to existing methods. Finally, an application of the V-spline to vehicle trajectory reconstruction in two dimensions is given, in which the penalty term is allowed to further depend on known operational characteristics of the vehicle.
Introduction
Global Positioning System (GPS) technology has become an essential tool in a wide range of applications involving moving vehicles, from transport management [1] and traffic safety studies [2] to modern precision farming [3]. Nevertheless, the accuracy of GPS tracking seems to be neglected in many applications [4,5]. Even if an accurate GPS device is utilized, GPS remains subject to various systematic errors due to the number of satellites in view, uncertainty in satellite orbits, clock and receiver issues, etc. [6,7]. These measurements are usually irregularly recorded, leading to what is known as irregularly spaced or intermittent data. Reconstruction or forecasting based on irregularly spaced data is usually more complicated and less accurate than that based on regularly spaced data [8].
To be fit for purpose, trajectory reconstruction must be accurate and robust. Two key issues for reconstruction are (i) how to handle observations that are inherently noisy measurements of the truth, and (ii) how to interpolate appropriately between observations, also known as path interpolation. In this context, statistical smoothing techniques can be useful processing tools because they are designed to minimize the impact of random error, and still typically require less time to detect random errors than visual inspection [9].
It has been shown previously that, if kinematic information such as velocity and acceleration can be included, interpolation and hence trajectory reconstruction can be greatly improved. The authors in [10,11] used B-splines to give a closed-form expression for a trajectory with continuous second derivatives that passes through the position points smoothly while ignoring outliers. The authors in [12] presented a quintic spline trajectory reconstruction algorithm connecting a series of reference knots that produces continuous position, velocity, and acceleration profiles in the context of computer (or computerized) numerical control (CNC). The authors in [13] gave a piecewise cubic reconstruction found by matching the observed position and velocity at the endpoints of each interval; this is essentially a Hermite spline. The authors in [14] also used Hermite interpolation to fit position, velocity and acceleration with given kinematic constraints. The authors in [15] implemented spline-based trajectories in order to overcome parametric singularities that occur in some reconstruction methods. The author in [16] proposed the kinematic interpolation approach that uses a set of kinematic equations to describe the motion of an object in terms of polynomial splines. Based on an adaptive cubic spline interpolation, the authors in [17] proposed an approach that, in the context of the Aircraft Communication Addressing and Reporting System, improves the smoothness and precision of trajectory reconstruction.
These approaches focus on optimal paths that are typically the shortest in either distance or time between starting and end points. Additionally, in some approaches, the moving object is assumed to be a point-like object. In this case, the object can rotate about itself to orient along the path in the direction of the goal point [15]. This assumption is unlikely to be appropriate for a real vehicle or vessel, particularly a tractor, which is the motivating example in our study.
Modern farming relies on the precise application of fertilizers, pesticides and irrigation. Large commercial farms typically operate a fleet of farm vehicles for these tasks, and it is of crucial importance for economic, environmental and regulatory reasons that the location and operational characteristics of these vehicles are recorded systematically and accurately. In order to do this, it is becoming standard to equip farm vehicles with GPS units to record the location of the vehicle on the farm. It is the goal of this study to develop an appropriate tool to reconstruct vehicle trajectories from such data, particularly when it is intermittent and noisy.
In this study, we assume that we have independent records of the position and velocity of a moving object at a sequence of observation times. Traditional methods often assume motion with constant speed between two observations times, but this will not work well in our case. Additionally, motivated by the fact that tractors often work in open fields, we assume no further information is available to constrain the position of the object. Initially, we constructed trajectories in terms of a Hermite cubic spline basis [18,19]. In each interval, the reconstruction is clearly continuous, as are its first and second derivatives. The goal is then to connect the piecewise splines keeping the trajectory and its first derivative continuous at the interior knots. In this approach, the trajectory is not required to pass through each knot and the main objective is the smoothness of the path, not a shortest or minimum-time path. To formalize this procedure, we propose a new objective function that incorporates velocity information and includes an adaptive penalty term. The penalty term utilises information about the distance and travel time on each interval. We dub the proposed smoothing spline the V-spline because it incorporates velocity information and can be applied to vehicle and vessel tracking. We show that the V-spline works better than other methods in simulation studies and that it produces satisfactory outcomes in a real-world application.
The structure of this paper is as follows: in Section 2, we introduce the basis functions and the V-spline objective function, which depends both on position residuals y_i − f(t_i) and velocity residuals v_i − f'(t_i). A new parameter γ in the objective function controls the degree to which the velocity information is used in the reconstruction. We show that the V-spline can be written in terms of modified Hermite spline basis functions. We also introduce a particular adaptive V-spline that seeks to control the impact of irregularly sampled observations and noisy velocity measurements. In Section 3, a cross-validation scheme for estimating the V-spline parameters is given. Section 4 details the performance of the V-spline on simulated data based on the Blocks, Bumps, HeaviSine and Doppler test signals [20]. Finally, an application of the V-spline to a two-dimensional data set is presented in Section 5. R code for implementing the V-spline and reproducing our outcomes is provided as Appendices A-C at the end of the manuscript.
Objective Function
Conventional smoothing spline estimates of f(t) arise as the solution to the following minimization problem: find f̂ ∈ C^(2)[a, b] that minimizes the penalized residual sum of squares

    Σ_{i=1}^n (y_i − f(t_i))^2 + λ ∫_a^b [f''(t)]^2 dt,

for a pre-specified value λ > 0 [21][22][23]. The objective function combines goodness-of-fit to the data with a measure of roughness [24]. For V-splines, we consider the situation of paired position data y = {y_1, ..., y_n} and velocity data v = {v_1, ..., v_n} observed at a sequence of times satisfying a < t_1 < t_2 < ... < t_n < b. For f whose second derivative is piecewise continuous, we define the objective function

    J[f] = Σ_{i=1}^n (y_i − f(t_i))^2 + γ Σ_{i=1}^n (v_i − f'(t_i))^2 + ∫_a^b λ(t) [f''(t)]^2 dt,     (2)

where γ > 0, and we have chosen the penalty function λ(t) to be a piecewise constant function on the interior intervals, i.e.,

    λ(t) = λ_i  for t ∈ [t_i, t_{i+1}), i = 1, ..., n − 1.     (3)

In fact, each f_i ∈ C^(2)[t_i, t_{i+1}] is a Hermite spline that satisfies the properties of a cubic spline. The complete spline function f, which connects all the f_i, has a piecewise continuous second derivative and is continuous provided a particular condition is met. The second derivative f'' is zero on the exterior intervals [a, t_1] and [t_n, b]. From now on, we will understand λ(t) to be the piecewise constant function (3), and we will often use λ to refer to the set of λ_i. The proof of Theorem 1 is in Appendix B. Remark: in the language of splines, the points t_1, ..., t_n are the interior knots of the V-spline, and a = t_0, b = t_{n+1} are the exterior or boundary knots.
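To make the objective concrete, the following sketch evaluates it numerically for a candidate curve, approximating the roughness integral on a fine grid. The function and argument names are ours, and the candidate curve and penalty function are assumed to be supplied as vectorized callables.

```python
import numpy as np

def v_spline_objective(f, f_prime, f_double_prime, t, y, v, lam, gamma, grid):
    """Numerically evaluate the V-spline objective for a candidate curve.

    f, f_prime and f_double_prime are vectorized callables for the curve and
    its first two derivatives; t, y, v are the observation times, positions
    and velocities; lam(t) is the (piecewise-constant) penalty function; and
    grid is a fine time grid on [a, b] used to approximate the integral.
    """
    fit_pos = np.sum((y - f(t)) ** 2)                       # position residuals
    fit_vel = gamma * np.sum((v - f_prime(t)) ** 2)         # velocity residuals
    roughness = np.trapz(lam(grid) * f_double_prime(grid) ** 2, grid)
    return fit_pos + fit_vel + roughness
```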
Basis Functions
On each interval [t_i, t_{i+1}], writing s = (t − t_i)/ΔT_i with ΔT_i = t_{i+1} − t_i, the cubic Hermite spline is

$$f_i(t) = h_{00}(s)\,f(t_i) + h_{10}(s)\,\Delta T_i\, f'(t_i) + h_{01}(s)\,f(t_{i+1}) + h_{11}(s)\,\Delta T_i\, f'(t_{i+1}),$$

where the basis functions are

$$h_{00}(s) = 2s^3 - 3s^2 + 1,\quad h_{10}(s) = s^3 - 2s^2 + s,\quad h_{01}(s) = -2s^3 + 3s^2,\quad h_{11}(s) = s^3 - s^2.$$

For V-splines, a slightly more convenient basis {N_k(t)}, k = 1, . . . , 2n, is used, in which each basis function is associated with the position or the velocity at a single knot. Any f ∈ C^(2)_p.w.[a, b] can then be represented in the form f(t) = ∑_{k=1}^{2n} N_k(t) θ_k, where {θ_k}_{k=1}^{2n} are parameters corresponding to the "true" position f(t_i) and velocity f'(t_i) at the observation points.
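As a concrete illustration, the following minimal R sketch evaluates the standard cubic Hermite basis above and reconstructs a single cubic piece from endpoint positions and velocities; the function names are ours and do not correspond to the code in Appendices A-C.

```r
# Standard cubic Hermite basis on the unit interval, s in [0, 1]
h00 <- function(s) 2*s^3 - 3*s^2 + 1
h10 <- function(s) s^3 - 2*s^2 + s
h01 <- function(s) -2*s^3 + 3*s^2
h11 <- function(s) s^3 - s^2

# Evaluate the Hermite cubic on [t0, t1] given endpoint positions (p0, p1)
# and velocities (v0, v1)
hermite_piece <- function(t, t0, t1, p0, p1, v0, v1) {
  dT <- t1 - t0
  s  <- (t - t0) / dT
  h00(s)*p0 + h10(s)*dT*v0 + h01(s)*p1 + h11(s)*dT*v1
}

# Example: reconstruct one interval on a fine grid
tt <- seq(0, 2, length.out = 101)
ff <- hermite_piece(tt, t0 = 0, t1 = 2, p0 = 0, p1 = 3, v0 = 1, v1 = 0)
```

By construction the piece matches the prescribed position and velocity at both endpoints, which is exactly the matching used to join consecutive intervals of the V-spline.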
Computing the V-Spline
In terms of the basis functions of the previous section, the objective function (2) can be written as

$$J(\theta) = (y - B\theta)^\top (y - B\theta) + \gamma\,(v - C\theta)^\top (v - C\theta) + \theta^\top \Omega_\lambda\, \theta, \qquad (10)$$

where B and C are n × 2n matrices with components B_{ik} = N_k(t_i) and C_{ik} = N_k'(t_i), and Ω_λ is a 2n × 2n matrix with components [Ω_λ]_{jk} = ∫ λ(t) N_j''(t) N_k''(t) dt. In the following, we reserve the use of boldface for n × 1 vectors and n × n matrices.
The detailed structure of Ω_λ is presented in Appendix A. Writing it out interval by interval, it is then evident that Ω_λ is a band matrix of bandwidth four.
Since Equation (10) is a quadratic form in θ, it is straightforward to establish that the objective function is minimized at

$$\hat{\theta} = \bigl(B^\top B + \gamma\, C^\top C + \Omega_\lambda\bigr)^{-1}\bigl(B^\top y + \gamma\, C^\top v\bigr), \qquad (13)$$

which can be identified as a generalized ridge regression. The fitted V-spline is then given by f̂(t) = ∑_{k=1}^{2n} N_k(t) θ̂_k. The V-spline is an example of a linear smoother [25], because the estimated parameters in Equation (13) are a linear combination of y and v. Denoting by f̂ and f̂' the vectors of fitted values f̂(t_i) and f̂'(t_i) at the training points t_i, we have

$$\hat{f} = S_{\lambda,\gamma}\, y + T_{\lambda,\gamma}\, v, \qquad \hat{f}' = U_{\lambda,\gamma}\, y + V_{\lambda,\gamma}\, v,$$

where S_{λ,γ}, T_{λ,γ}, U_{λ,γ} and V_{λ,γ} are smoother matrices that depend only on t_i, λ(t) and γ.
It is not hard to show that S λ,γ and V λ,γ are symmetric, positive semi-definite matrices. Note that T λ,γ = U λ,γ .
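The following is a minimal R sketch of the generalized ridge regression solve in Equation (13). It assumes the design matrices B and C (basis and basis-derivative values at the knots) and the penalty matrix Ω_λ have already been assembled; those constructions are not reproduced here.

```r
# Generalized ridge regression solve for the V-spline coefficients,
# following Equation (13): theta_hat minimizes the quadratic objective (10).
fit_vspline_coef <- function(B, C, Omega, y, v, gamma) {
  M   <- crossprod(B) + gamma * crossprod(C) + Omega   # B'B + gamma C'C + Omega
  rhs <- crossprod(B, y) + gamma * crossprod(C, v)     # B'y + gamma C'v
  solve(M, rhs)                                        # 2n x 1 coefficient vector
}

# Fitted positions and velocities at the knots are then linear in (y, v):
#   f_hat  <- B %*% theta_hat
#   fp_hat <- C %*% theta_hat
```

Because the solution is linear in the data, the smoother matrices S, T, U and V of the previous paragraph can be read off from this solve.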
Adaptive V-Spline
Until now, we have not explicitly considered the impact of irregularly sampled observations or noisy measurements of velocity on trajectory reconstruction. In order to do this, it is instructive to evaluate the contribution to the penalty term from the interval [t_i, t_{i+1}). Using (4), this contribution can be written in terms of the quantities ε_i^± and the average velocity v̄_i = (y_{i+1} − y_i)/ΔT_i over the interval. The ε_i^± can be interpreted as the differences, at times t_i and t_{i+1}^− respectively, between the velocity implied by an interpolating Hermite spline and the velocity implied by a straight-line reconstruction.
The contribution to the penalty term from the interval, Equation (17), then depends on these quantities. As a consequence of (17), larger time intervals will tend to contribute less to the penalty term (other things being equal). However, this is exactly when we would expect the velocity at the endpoints of the interval to provide less useful information about the trajectory over the interval. In the case when the observed change in position is small, i.e., when y_{i+1} − y_i = v̄_i ΔT_i ≈ 0, over-reliance on noisy measurements of velocity will result in "wiggly" reconstructions. In these two instances, graphically depicted in Figure 1a, we would like the V-spline to adapt and to favor straighter reconstructions; this is a deliberate design choice. We can achieve this by choosing λ_i as in (18), where η is a parameter to be estimated. The penalty term then takes a particularly compelling form: the contribution from the interval [t_i, t_{i+1}) in (17) is proportional to

$$\left(\frac{\text{discrepancy in velocity}}{\text{average velocity}}\right)^2 \qquad (19)$$

for all i. We call the resulting spline the adaptive V-spline. When λ_i = λ_0 or, more accurately, when λ_i is independent of ΔT_i and v̄_i, we call the resulting spline the non-adaptive V-spline.

Figure 1. Comparing cubic Hermite spline reconstruction and straight line reconstruction. When ΔT_i = t_{i+1} − t_i is large or v̄_i ΔT_i = y_{i+1} − y_i is small, the adaptive V-spline favours straighter reconstructions.
Parameter Selection and Cross-Validation
The issue of choosing the smoothing parameter is ubiquitous in curve estimation, and there are two different philosophical approaches to the problem. The first is to regard the free choice of smoothing parameter as an advantageous feature of the procedure. The second is to let the data determine the parameter [22,26], using a procedure such as cross-validation (CV) or generalized cross-validation (GCV) [21]. We prefer the latter and use the data with GCV to train our model and find the best parameters.
In standard regression, which assumes the mean of the observation errors is zero, the true regression curve f (t) has the property that, if an observation y i is omitted at time point t i , the value f (t i ) is the best predictor of y i in terms of mean squared error [22]. We use this observation to motivate a leave-one-out cross-validation scheme to estimate λ and γ for both the non-adaptive and the adaptive V-splines.
The following theorem establishes that we can compute the cross-validation score without explicitly refitting the leave-one-out smoothers f̂^(−i)(t; λ, γ): Theorem 2. The cross-validation score of a V-spline can be computed entirely from the full-data fit, in terms of its residuals and the diagonal entries of the smoother matrices, where f̂ is the V-spline smoother calculated from the full data set with smoothing parameters λ and γ, and S_ii = [S_{λ,γ}]_ii, etc. The optimal parameters are obtained as the arg min over λ, γ > 0 of this score, Equation (22).
The proof of Theorem 2 is in Appendix C.
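As a sketch of how the parameters might be selected in practice, the following R fragment minimizes a cross-validation score with stats::optim on the log scale. The function cv_score is a placeholder standing in for the closed-form expression of Theorem 2; it is not part of base R or of this paper's appendices.

```r
# Hedged sketch: select (lambda, gamma) by minimizing a leave-one-out
# cross-validation score.  cv_score(t, y, v, lambda, gamma) is assumed to
# implement the closed form of Theorem 2 and is a placeholder here.
select_parameters <- function(t, y, v, cv_score) {
  obj <- function(par) {
    lambda <- exp(par[1])        # optimize on the log scale to keep lambda > 0
    gamma  <- exp(par[2])        # likewise for gamma
    cv_score(t, y, v, lambda, gamma)
  }
  fit <- stats::optim(c(0, 0), obj, method = "Nelder-Mead")
  list(lambda = exp(fit$par[1]), gamma = exp(fit$par[2]), score = fit$value)
}
```

Working on the log scale is a convenient way of enforcing the positivity constraints without a constrained optimizer.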
Simulation Study
In this section, we give an extensive comparison of methods for equally spaced data. The comparison is based on the ability to reconstruct trajectories derived from Blocks, Bumps, HeaviSine and Doppler, which were used in [20,27,28] to mimic problematic features in imaging, spectroscopy and other types of signal processing.
Letting g(t) denote any one of Blocks, Bumps, HeaviSine or Doppler, we treat g(t) as the instantaneous velocity of the trajectory f(t) at time t, i.e., f'(t) = g(t). Setting f(t_1) = 0, the position is then updated in terms of the average velocity over each interval:

$$f(t_{i+1}) = f(t_i) + \tfrac{1}{2}\bigl(g(t_i) + g(t_{i+1})\bigr)(t_{i+1} - t_i),$$

which is accurate to second order in t_{i+1} − t_i. Finally, the observed position and velocity are found by adding i.i.d. zero-mean Gaussian noise:

$$y_i = f(t_i) + \varepsilon_i, \qquad v_i = g(t_i) + \varepsilon'_i,$$

where ε_i ~ N(0, (σ_f/SNR)²) and ε'_i ~ N(0, (σ_g/SNR)²), σ_f is the standard deviation of the positions f(t_i), σ_g is the standard deviation of the velocities g(t_i), and SNR is the signal-to-noise ratio, which we take to be 3 or 7.
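A minimal R sketch of this data-generating scheme is given below, using one common form of the Doppler test function as the velocity signal; the trapezoidal update and the σ/SNR noise scaling follow the description above, and any remaining details (for example, the exact test-signal normalization) are assumptions.

```r
# Sketch: simulate noisy position/velocity observations from a velocity
# signal g(t), following the trapezoidal position update described above.
set.seed(1)
n <- 1024
t <- seq(0, 1, length.out = n)

doppler <- function(t) sqrt(t * (1 - t)) * sin(2 * pi * 1.05 / (t + 0.05))
g <- doppler(t)                                      # instantaneous velocity
f <- c(0, cumsum(0.5 * (g[-n] + g[-1]) * diff(t)))   # trapezoidal integration, f(t_1) = 0

SNR <- 7
y <- f + rnorm(n, sd = sd(f) / SNR)                  # noisy positions
v <- g + rnorm(n, sd = sd(g) / SNR)                  # noisy velocities
```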
We compare the performance of the adaptive V-spline with a spatially adaptive penalized spline known as the P-spline, using the function asp2 from the package AdaptFitOS [29][30][31], a generalized additive model gam from the package mgcv [32,33], the kinematic interpolation approach (KI) of [16], the adaptive V-spline with γ = 0, which becomes a conventional spline with Hermite basis functions, and the non-adaptive V-spline, where λ_0 is a constant. It is important to note that only the KI approach and the non-adaptive and adaptive V-splines incorporate velocity information. The V-spline parameters are obtained by minimizing the cross-validation score (22). In the gam model, we use tp basis functions with 1024 knots. For the KI approach, the position at time t_i is interpolated from the two neighbouring points at t_{i−1} and t_{i+1}. (The positions at t_1 and t_n are interpolated from points at (t_1, t_2) and (t_{n−1}, t_n), respectively.) Following [34], we fix n = 1024 in the simulations.
To examine the performance of the adaptive V-spline, we compute the true mean squared error for each of the reconstructions via

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\bigl(\hat{f}(t_i) - f(t_i)\bigr)^2,$$

and the modified Nash-Sutcliffe efficiency (mNSE) [35] via

$$\mathrm{mNSE} = 1 - \frac{\sum_{i=1}^{n}\bigl|\hat{f}(t_i) - f(t_i)\bigr|}{\sum_{i=1}^{n}\bigl|f(t_i) - \bar{f}\bigr|},$$

where f̄ is the mean of the true positions. The results are shown in Tables 1 and 2. The V-spline, either adaptive or non-adaptive, returns the best solution in all cases. The reason for the poor performance of kinematic interpolation is two-fold: first, KI assumes v_i is a good approximation to the velocity over the entire interval [t_{i−1}, t_{i+1}). Second, KI is not a true smoother, so it is prone to errors in the observations. In contrast, the V-spline successfully smooths and interpolates in the presence of noise. Table 3 shows the ability of the adaptive V-spline to retrieve the true SNR: for a reconstruction f̂, it is estimated by σ_f̂ / σ_(f̂ − y), and the estimates from the V-spline are very close to the true values. In summary, the simulation study has shown the ability of V-splines to accurately reconstruct trajectories from noisy and potentially problematic velocity profiles. The V-spline outperforms methods that do not use velocity information, and its smoothing strategy appears to be vastly superior to that of kinematic interpolation.
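For reference, these error metrics can be computed directly; the short R sketch below assumes the modified NSE uses absolute deviations, as reconstructed above.

```r
# Error metrics used in the simulation study (as reconstructed above):
# true mean squared error and modified Nash-Sutcliffe efficiency.
mse_true <- function(f_hat, f_true) mean((f_hat - f_true)^2)

mnse <- function(f_hat, f_true) {
  1 - sum(abs(f_hat - f_true)) / sum(abs(f_true - mean(f_true)))
}

# Estimated signal-to-noise ratio of a reconstruction: sigma_fhat / sigma_(fhat - y)
snr_hat <- function(f_hat, y) sd(f_hat) / sd(f_hat - y)
```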
Inference of Tractor Trajectory
In this section, we apply the V-spline to a data set obtained from a GPS unit mounted on a tractor working in a horticultural setting. The motivating problem in this context is to accurately record where pesticide has been applied, to ensure that neither over-spraying nor under-spraying has occurred.
GPS units in vehicles provide y t , noisy measurements of the actual position x t , and v t , noisy measurements of the actual velocity u t , for a sequence of times t ∈ T, which is irregularly recorded with highly variable time differences ∆T i . These data may also be augmented with information on operating characteristics of the vehicle, b t , in this case data on whether the tractor boom was in a raised or lowered position. The trajectory reconstruction problem is the problem of estimating x s , for an arbitrary time s, given a subset of the observations {y t , v t , b t | t ∈ T}. Note that, in this definition of trajectory reconstruction, we are not explicitly interested in estimating u s .
The original data set consists of n = 928 records of longitude, latitude, speed, bearing and the status of the tractor's boom sprayer. The boom status, "up" and "down", denotes the operational state of the tractor, and indicates different types of trajectories. For example, if boom status is "down", the tractor is probably sowing, watering or harvesting on the farm. In this scenario, the speed is stable and its variance is low. On the contrary, when it is "up", the speed could be high because the driver is travelling between jobs, it could be zero because the driver is having a break, or it might indicate the tractor is turning. In this last situation, however, the acceleration could be high. For this reason, we add further complexity to the model by allowing the penalty parameter to depend on boom status.
For trajectory reconstruction, this data set was converted from longitude and latitude in degrees ( • ) into easting and northing in meters (m) by the Universal Transverse Mercator (UTM) coordinate system. The speed and bearing were converted into velocities (m/s) in those directions as well. See Figure 2.
The V-Spline in d-Dimensions
To generalize the V-spline to d dimensions, we consider the situation preceding Equation (2) but where now y_i, v_i ∈ R^d. The function f : [a, b] → R^d is a d-dimensional V-spline if it minimizes

$$\sum_{i=1}^{n}\bigl\|y_i - f(t_i)\bigr\|_2^2 + \gamma \sum_{i=1}^{n}\bigl\|v_i - f'(t_i)\bigr\|_2^2 + \int_a^b \lambda(t)\,\bigl\|f''(t)\bigr\|_2^2\,dt,$$

where ‖·‖_2 is the Euclidean norm in d dimensions. For each direction α = 1, . . . , d, the fitted V-spline has the form f̂_α(t) = ∑_{k=1}^{2n} N_k(t) θ̂_k^α, where θ̂^α is obtained from Equation (13) with y and v replaced by their α-th components. The parameters λ and γ are estimated by minimizing the corresponding cross-validation score (29) over λ, γ > 0. In what follows, we allow the non-adaptive and adaptive V-splines to depend on the boom status. This is to demonstrate that our method can simply and usefully also incorporate known covariates. In this application, letting b_i = 0 denote boom "up", b_i = 1 denote boom "down", and v̄_i = ‖y_{i+1} − y_i‖_2 / ΔT_i be the average velocity on the interval [t_i, t_{i+1}), the penalty parameter on that interval takes the value λ_d on boom-down intervals and λ_u on boom-up intervals for the non-adaptive V-spline; for the adaptive V-spline, the boom-dependent constant plays the role of η in the adaptive choice (18). Optimization in (29) is now simply with respect to positive λ_d, λ_u and γ.
Two-Dimensional Trajectory Reconstruction
The V-spline reconstruction from the tractor data is shown in Figure 3. The parameters λ_d, λ_u and γ are found by our proposed cross-validation scheme using the stats::optim function in R [36]. It is immediately evident from the trajectory that the tractor has been moving up and down rows of an orchard or travelling between parts of the orchard. It is instructive to compare the performance of the adaptive V-spline to a line-based approach that simply and unrealistically connects observations by straight lines, to kinematic interpolation, which also utilizes velocity information, and to the non-adaptive V-spline. Figure 4 shows finer detail of the tractor trajectory given by these reconstructions. A feature of the KI method is the hugely unrealistic excursions near the turn-around points at the end of each row, as shown in Figure 4b; KI relies on misleading velocity information there, which generates unrealistic trajectories. Without the adaptive term (18), the non-adaptive V-spline performs in a similar way to KI, generating unrealistic trajectories at sharp-turning and braking points, as can be seen from Figure 4c. In contrast, the adaptive V-spline (Figure 4d) adapts to the information based on the observed velocity discrepancy, avoids such excursions, and generates plausible trajectories; this demonstrates the value of the adaptive penalty.
Discussion
In this paper, a smoothing spline called the V-spline is proposed that minimizes an objective function which incorporates both position and velocity information. Given n knots, the V-spline has 2n effective degrees of freedom corresponding to n − 1 cubic polynomials with their value and first derivative matched at the n − 2 interior knots. The effective degrees of freedom are then fixed by n position observations and n velocity observations. Note that, in the limit γ → 0, the V-spline reduces to having n effective degrees of freedom. An adaptive version of the V-spline is also introduced that seeks to control the impact of irregularly sampled observations and noisy velocity measurements.
The computational complexity of the V-spline method is equivalent to that of any smoothing spline that uses a cross-validation procedure to estimate the tuning parameters. The essential difference is that the V-spline incorporates 2n data points (in each dimension), as opposed to n. The impact of this shows up in the time to solve for θ̂ in (13). Thus, the computation time of the V-spline is the same as that of a standard smoothing spline with 2n observations. Modest computational gains can possibly be made by improving the CV parameter estimation step, but Theorem 2 already assures us that this step is highly efficient. Future research directions for the V-spline include application to ship tracking [18] and the development of a fast filtering algorithm.
Conflicts of Interest:
The authors declare no conflict of interest.
"Computer Science"
] |
Chronic constriction injury-induced microRNA-146a-5p alleviates neuropathic pain through suppression of IRAK1/TRAF6 signaling pathway
Background microRNA-146a-5p (miRNA-146a-5p) is a key molecule in the negative regulation of TLR and IL-1 receptor (TIR) signaling. Our recent study demonstrated that the MyD88-dependent TIR signaling pathway in the dorsal root ganglion (DRG) and spinal dorsal horn (SDH) plays a role in peripheral nerve injury-induced neuropathic pain. However, it was not clear whether and how miRNA-146a-5p regulates the TIR pathway of the DRG and SDH in the development of neuropathic pain. Methods The sciatic nerve chronic constriction injury (CCI) model of the rat was used to induce chronic neuropathic pain. The levels and cellular distribution of miRNA-146a-5p were detected with quantitative real-time PCR (qPCR) and fluorescent in situ hybridization (FISH). The RNA levels, protein levels, and cellular distribution of IRAK1 and TRAF6, which are targeted by miRNA-146a-5p, were detected with qPCR, western blot, and immunofluorescence. The pain-related behavioral effect of miRNA-146a-5p was assessed after intrathecal administration. Mechanical stimuli and radiant heat were used to evaluate mechanical allodynia and thermal hyperalgesia. Results We found that the level of miRNA-146a-5p significantly increased in L4-L6 DRGs and the SDH after CCI surgery; meanwhile, the protein levels of IRAK1 and TRAF6 in DRGs were significantly increased after CCI. Intrathecal injection of miRNA-146a-5p agomir or miRNA-146a-5p antagomir regulated the miRNA-146a-5p level in L4-L6 DRGs and the SDH. We found that intrathecal injection of miRNA-146a-5p agomir alleviated mechanical and thermal hyperalgesia in CCI rats and reversed the CCI-induced upregulation of IRAK1 and TRAF6 in L4-L6 DRGs and the SDH. We furthermore found that intrathecal injection of miRNA-146a-5p antagomir exacerbated the mechanical and thermal pain-related behavior of CCI rats and further increased IRAK1 and TRAF6 expression in L4-L6 DRGs and the SDH. Conclusions miRNA-146a-5p in the DRG and SDH can modulate the development of CCI-induced neuropathic pain through inhibition of IRAK1 and TRAF6 in the TIR signaling pathway. Hence, miRNA-146a-5p may serve as a potential therapeutic target for neuropathic pain. Electronic supplementary material The online version of this article (10.1186/s12974-018-1215-4) contains supplementary material, which is available to authorized users.
Background
Neuropathic pain is a rather stubborn pain induced by nerve injury. It can persist for months to years, even after the primary injury has healed [1]. Many studies have focused on the molecular mechanisms related to neuropathic pain. However, there is currently no medication available that treats neuropathic pain in a complete and definitive way. Accumulating evidence demonstrates that neuroinflammation in the peripheral and central nervous system (e.g., the dorsal root ganglion (DRG) and spinal dorsal horn (SDH)) is involved in peripheral nerve injury-induced neuropathic pain [2][3][4]. DRG neurons are responsible for the complexity of neuropathic pain as they include mechanoceptors, thermoceptors, and pruritic sensors [2]. Peripheral nerve injury activates nociceptive pathways and alters gene expression in DRG neurons, which may contribute to the development and maintenance of neuropathic pain.
Recent studies describe immune-related proteins of the DRG and SDH as key players in the peripheral and central sensitization of neuropathic pain [5][6][7][8]. Toll/interleukin-1 receptors (TIRs), such as TLR4 and IL-1R, are expressed not only on immune cells but also on sensory neurons in DRGs and glial cells (microglia and astrocytes) in the SDH [9][10][11][12][13][14]. Targeting toll-like receptors (TLRs) such as TLR4 expressed on spinal glial cells has been reported to relieve neuropathic pain in mice [5]. Our recent studies show that suppression of myeloid differentiation factor-88 adaptor protein (MyD88)-dependent signaling alleviates neuropathic pain induced by peripheral nerve injury in the rat [15]. MyD88 is an adaptor in TIR signaling; it mediates the activation of TIRs, leads to NF-κB activation, and induces proinflammatory mediators [9,16]. TIRs and their signaling pathways play important roles in the pathogenesis of neuropathic pain. The activation of TIRs also requires the recruitment of interleukin-1 receptor-associated kinase 1 (IRAK1) and tumor necrosis factor receptor-associated factor 6 (TRAF6) to activate the NF-κB signaling pathway [16].
Recent studies found that the activation of NF-κB, and its binding to the promoters of NF-κB-sensitive genes, induces the transcription of hundreds of genes, including NF-κB-dependent miRNAs such as miRNA-146a-5p [17,18]. miRNAs are a family of small endogenous non-coding RNA molecules that silence target mRNAs by binding to their 3′UTRs. The miRNAs of the DRG participate in nociceptive modulation in somatosensory pain [19]. miRNAs affect neuropathic pain by regulating key proteins in the pain process, resulting in hyperalgesia and allodynia [20]. Mounting evidence suggests that miRNA-146a-5p is involved in the innate immune response and can reduce inflammation by targeting both TRAF6 and IRAK1 in monocytes, macrophages, and astrocytes [21][22][23][24]. Previous research demonstrated that spinal miRNA-146a could contribute to osteoarthritic pain of the knee joints [25]. Also, Lu et al. found that miRNA-146a in astrocytes could attenuate SNL-induced neuropathic pain by suppressing TRAF6 signaling in the spinal cord [26]. However, the role of miRNA-146a-5p in the DRG and SDH in nerve injury-induced neuropathic pain has not been fully investigated. How miRNA-146a-5p modulates its downstream target genes in DRG neurons after chronic constriction injury (CCI) is still unknown. TRAF6 and IRAK1 of TIR signaling may play an important role in neuroinflammation in DRG neurons in the CCI model.
In the current study, we evaluated the expression of miRNA-146a-5p and its target genes, namely IRAK1 and TRAF6, in the DRG of rats with CCI. We also intrathecally administered a miRNA-146a-5p agonist (miRNA-146a-5p agomir) or antagonist (miRNA-146a-5p antagomir) to investigate the function of miRNA-146a-5p in modulating neuropathic pain. Our data demonstrated that miRNA-146a-5p can alleviate CCI-induced mechanical and thermal hyperalgesia through inhibition of IRAK1 and TRAF6 and may be a target for protection against chronic pain.
Animals
Male Sprague-Dawley (SD) rats weighing 200-250 g were acquired from the Laboratory Animal Center of Peking Union Medical College Hospital, Chinese Academy of Medical Sciences. Animals were randomly assigned to treatment or control groups. The rats were bred in a specific pathogen-free environment under a 12-h light-dark cycle and were fed a rodent diet and water. These experiments were approved by the Institutional Animal Care and Use Committee of the Chinese Academy of Medical Sciences.
Rat model of neuropathic pain
In accordance with the study of Bennett and Xie [27], we performed CCI on rats anesthetized through intraperitoneal injection of sodium pentobarbital (40 mg/kg) under aseptic conditions. After the sciatic nerve at the mid-thigh level on each side was exposed, four snug ligatures of chromic gut suture were loosely tied around the nerve with about 1-mm spacing between the knots. The sciatic nerves of sham animals were exposed without ligation.
Behavioral test
Eight rats were included in each group. The paw withdrawal threshold (PWT) in response to mechanical stimuli was used to assess mechanical allodynia using von Frey filaments 1 day before the operation and 1, 3, 5, 7, 14, and 21 days after the operation. The paw withdrawal latency (PWL) in response to radiant heat was used to evaluate thermal hyperalgesia. Three repeated measurements were performed in each rat with a 5-min interval. This test was performed at 10 a.m. on day 1 preoperation and on days 1, 3, 5, 7, 14, and 21 postoperation. At the end of behavioral testing, the L4-L6 DRGs and SDH were harvested at the corresponding time points and rapidly frozen at −80 °C.
Intrathecal catheter implantation and intrathecal injection
Eight rats were included in each group. A PE10 catheter (length, 15 cm) was intrathecally implanted using a previously described technique [28,29]. Briefly, rats were intraperitoneally anesthetized with 10% chloral hydrate (300 mg/kg). A partial laminectomy at L5/L6 was performed to position the intrathecal catheter, and the dural membrane was exposed. The catheter was inserted through a dural incision and advanced 2 cm into the intrathecal space. The catheter was secured with 4/0 silk threads to the bones and muscles. After implantation, all rats were allowed to recover for a minimum of 2 days prior to the experiments. Rats presenting motor weakness or signs of paresis upon recovery from anesthesia were killed. Proper location of the catheter was confirmed through hind limb paralysis after injection of 10 μL of 2% lidocaine.
Quantitative real-time PCR
Total RNA was isolated with TRIzol reagent (Invitrogen Life Technologies) and reverse-transcribed using a reaction mixture in accordance with the manufacturer's instructions. RNA quality and quantity were determined with a NanoDrop spectrophotometer (ND-1000; NanoDrop Technologies), and RNA integrity was assessed through gel electrophoresis. Quantitative real-time PCR (qPCR) was performed on a StepOnePlus real-time PCR system (Applied Biosystems, ABI, CA, USA) using the SYBR Green qPCR Master Mix (ABI, CA, USA). Expression data were normalized to the expression of β-actin. Total RNA was reverse-transcribed to determine miRNA expression, and the resulting cDNA was mixed with miRNA-specific Taqman primers (ABI, CA, USA) and Taqman Universal PCR Master Mix (ABI, CA, USA). U6 RNA was used as an endogenous control for normalization of the miRNA level. The primers used for SYBR Green qPCR are shown in Table 1. Relative changes in expression were measured using the comparative threshold cycle (Ct) method and 2^−ΔΔCt as previously described; the results indicate the fold change in expression.
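As an illustration of the comparative Ct calculation named above, the following R sketch computes a 2^−ΔΔCt fold change; the example Ct values and argument names are hypothetical.

```r
# Sketch of the comparative Ct (2^-ddCt) fold-change calculation.
fold_change_ddct <- function(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl) {
  # normalize target to reference gene (e.g., U6 or beta-actin) in each group,
  # then compare the treated/injured group to the control (sham) group
  ddct <- (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
  2^(-ddct)
}

# Example with hypothetical Ct values: miRNA-146a-5p vs U6 in a CCI sample,
# relative to the sham group
fold_change_ddct(ct_target = 24.1, ct_ref = 18.0,
                 ct_target_ctrl = 26.0, ct_ref_ctrl = 18.2)   # about 3.2-fold
```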
Fluorescent in situ hybridization
To examine the expression of miR-146a in DRG neurons, in situ hybridization was performed with locked nucleic acid probes specific for miR-146a. Rats were sacrificed under anesthesia. L4-L6 DRGs were fixed in 4% paraformaldehyde. After incubation in hybridization solution at room temperature for 2 h, the sections were incubated overnight at 37°C in hybridization solution containing 8 ng/μL of FAM (488)-labeled probes for miR-146a-5p (5′-FAM-AACCC ATGGA ATTCA GTTCT CT-FAM-3′, Wuhan Servicebio Technology). The sections were washed in 2 × SSC at 37°C for 10 min and in 0.5 × SSC at room temperature for 10 min. Slides were then coverslipped with VECTASHIELD Mounting Medium with DAPI.
Immunohistochemistry
After the rats were anesthetized with sodium pentobarbital, they were perfused transcardially with fresh 4% paraformaldehyde. L4-L6 DRGs were harvested, postfixed in 4% paraformaldehyde for 2 h, and then dehydrated in 30% sucrose overnight at 4°C. The tissues were embedded in optimal cutting temperature compound according to our previous studies. Frozen sections (15 μm thick) were used for immunohistochemical analysis. The tissue sections were incubated with the primary antibodies listed in Table 2. The sections were then incubated with the appropriate secondary antibodies or Alexa Fluor 594-conjugated isolectin B4 (IB4) (1:100, Invitrogen/Thermo Fisher Scientific, USA) for 1 h. Slides were then washed in PBS and coverslipped with VECTASHIELD Mounting Medium with DAPI. Table 2 lists the primary and secondary antibodies used for the immunofluorescence staining analysis.
Western blot
Total proteins from rat L4-L6 DRGs or SDH were extracted with lysis buffer (CWBio, Beijing, China). Briefly, 30 μg of each sample was resolved through sodium dodecyl sulfate polyacrylamide gel electrophoresis and then transferred onto Immobilon-P polyvinylidene difluoride membranes (GE). After blocking with 5% BSA for 1 h at room temperature, the membranes were incubated with an anti-IRAK1 antibody, anti-TRAF6 antibody, anti-pNF-κB (p65) antibody, and anti-β-actin antibody.
After washing, the membranes were probed with the corresponding secondary antibodies. Final results were acquired using a western blot detection system (GE) with the enhanced chemiluminescence eECL Kit (CWBio, Beijing, China). Table 3 lists the primary and secondary antibodies used for the western blot analysis.
Statistical analysis
Data are expressed as mean ± standard error of the mean (SEM). Statistical analyses were performed using SPSS software (version 17.0). Differences between two groups were analyzed using Student's t test. One-way ANOVA followed by Bonferroni's post hoc tests was used to determine statistical differences among groups for the western blot and qPCR data. Two-way ANOVA followed by Bonferroni's post hoc tests was used to analyze the behavioral data. P < 0.05 was considered statistically significant.
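For illustration only, the following base R sketch mirrors the kinds of comparisons described (the original analysis was performed in SPSS); the simulated data frame and its column names are hypothetical.

```r
# Hypothetical data: one row per rat, with group, time point and paw
# withdrawal threshold (pwt)
set.seed(1)
df <- data.frame(
  group = factor(rep(c("sham", "CCI", "CCI_agomir"), each = 16)),
  day   = factor(rep(rep(c("POD7", "POD14"), each = 8), times = 3)),
  pwt   = c(rnorm(16, 15, 2), rnorm(16, 8, 2), rnorm(16, 12, 2))
)

# Two-group comparison at one time point: Student's t test
t.test(pwt ~ group,
       data = droplevels(subset(df, day == "POD14" & group != "CCI_agomir")),
       var.equal = TRUE)

# One-way ANOVA followed by Bonferroni-corrected pairwise comparisons
d14 <- subset(df, day == "POD14")
summary(aov(pwt ~ group, data = d14))
with(d14, pairwise.t.test(pwt, group, p.adjust.method = "bonferroni"))

# Two-way ANOVA (group x day) for the behavioral time course
summary(aov(pwt ~ group * day, data = df))
```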
Results
Expression level of miRNA-146a-5p is elevated in DRG and SDH neurons of rat after CCI

TIR signaling is critical for nerve injury-induced neuropathic pain generation and maintenance. Our recent results revealed that CCI increases the level of phospho-NF-kappaB in the DRG. miRNA-146a-5p is an NF-kappaB-dependent microRNA. To investigate the role of miRNA-146a-5p in nerve injury-induced neuropathic pain, we used the rat CCI model to induce neuropathic pain. Compared with sham-operated rats, CCI rats showed rapid and persistent mechanical allodynia and thermal hyperalgesia, with a significant decrease in paw mechanical threshold and thermal withdrawal latency from postoperative day (POD) 3 to POD21 (Fig. 1a-b). We first examined the expression level of miRNA-146a-5p in the rat DRG after CCI. The qPCR results showed that miRNA-146a-5p expression was gradually and slightly increased from POD3 to POD14 and recovered to the sham level at POD21 in the DRG of the rat (Fig. 1c). We further determined the cellular distribution of miRNA-146a-5p in the DRG after CCI with fluorescent in situ hybridization (FISH). We found that miRNA-146a-5p was distributed in large-sized, medium-sized, and small neurons, with increased miRNA-146a-5p staining in the DRG after CCI (Fig. 1d-e). In the SDH, our qPCR analysis showed that the CCI operation produced an increase in the miRNA-146a-5p level from POD7 to POD21 (Fig. 1f). FISH results showed that the increased miRNA-146a-5p was distributed in the SDH of the spinal cord (Fig. 1g-h).
CCI increases the levels of IRAK1 and TRAF6 in rat DRG neurons
IRAK1 is recruited by MyD88 and initiates TIR signaling. Our published results revealed that MyD88 protein is upregulated in the DRG after CCI. qPCR analysis showed that the mRNA level of IRAK1 in the DRG was significantly increased after CCI (Fig. 2a). Similar to the mRNA, western blot analysis showed that the protein expression of IRAK1 was significantly increased at POD7 and peaked at POD14 in the DRG after CCI (Fig. 2b). To check the cellular distribution of IRAK1 in the DRG, we performed immunofluorescence staining for IRAK1. Our results showed that IRAK1-immunoreactive (IRAK1-IR) cells were distributed across the three size categories of DRG neurons in both the sham and CCI groups. We found that the percentage of IRAK1-IR neurons in the DRG was significantly increased after the CCI operation (Fig. 2c-d and Additional file 1: Figure S1).
We also found that IRAK1 was expressed in calcitonin gene-related peptide (CGRP)-positive neurons and IB4-positive neurons in the DRG (Fig. 2e-f). The mRNA level of TRAF6 was also detected through qPCR. We found that the mRNA level of TRAF6 was increased at POD3, peaked at POD14, and remained elevated through POD21 (Fig. 3a). We then examined the protein level of TRAF6 in the rat DRG. Western blot results showed that the expression of TRAF6 started to increase at POD3 and remained elevated at POD21 after the CCI operation compared with the sham operation (Fig. 3b).
We also examined the distribution of TRAF6 in the DRG. Immunofluorescence results showed that TRAF6-immunoreactive (TRAF6-IR) cells were distributed across the three size categories of DRG neurons in the CCI group. CCI induced a marked increase of TRAF6-IR in the ipsilateral side of the DRG at POD7, POD14, and POD21 (Fig. 3c, d and Additional file 1: Figure S2). To check whether TRAF6 is expressed in nociceptive neurons, we performed double immunofluorescence of TRAF6 with the nociceptive neuronal markers CGRP and IB4. We found that TRAF6-IR was colocalized with CGRP and IB4 (Fig. 3e, f).

Fig. 1 Expression of miRNA-146a-5p in rat DRG and SDH after CCI. a, b CCI-induced mechanical allodynia and thermal hyperalgesia manifested as a lowered threshold of mechanical withdrawal (a) and thermal withdrawal (b). Eight rats were included in each group. Two-way ANOVA, *P < 0.05, versus sham. c qPCR showing the time course for miRNA-146a-5p level in DRG (n = 4 in each group). One-way ANOVA, *P < 0.05, versus sham. d, e FISH showing expression and distribution of miRNA-146a-5p in rat DRGs of sham (d) and CCI 14 days (e). Scale bar 50 μm. f qPCR showing the time course for miRNA-146a-5p level in SDH (n = 4 in each group). One-way ANOVA, *P < 0.05, versus sham. g, h FISH showing expression and distribution of miRNA-146a-5p in rat SDH of CCI 7 days (g) and CCI 14 days (h)
Intrathecal injection of miRNA-146a-5p agomir or miRNA-146a-5p antagomir regulates miRNA-146a-5p expression levels in DRG and SDH

Agomir is a double-stranded miRNA that is specially marked and chemically modified to regulate the biology of the target gene. To evaluate the effect of miRNA-146a-5p agomir on increasing the miR146a-5p level, we intrathecally injected miRNA-146a-5p agomir into naive rats. The expression level of miRNA-146a-5p in the SDH and DRG was assessed by qPCR after intrathecal injection of miRNA-146a-5p agomir. qPCR results indicated that the expression level of miRNA-146a-5p was increased in L4-L6 DRGs and the SDH of rats after intrathecal injection of miRNA-146a-5p agomir compared with agomir-control rats (Fig. 4a, b). We also evaluated the effect of miRNA-146a-5p antagomir on the expression of miR-146a-5p and found that, compared with the antagomir-control group, miRNA-146a-5p antagomir decreased the expression level of miR-146a-5p (Fig. 4c, d).
miRNA-146a-5p agomir relieves CCI-induced neuropathic pain
Compared with CCI rats intrathecally injected with the agomir control, intrathecal injection of miRNA-146a-5p agomir significantly attenuated CCI-induced neuropathic pain from POD7 until POD21 (Fig. 5a, b). Subsequently, our qPCR results showed that intrathecal injection of miRNA-146a-5p agomir significantly decreased the IRAK1 mRNA level in the SDH and DRG of CCI rats at POD14 (Fig. 5c, d). In addition, we also found that the TRAF6 mRNA levels in the SDH and DRG were decreased in CCI rats treated with intrathecal miRNA-146a-5p agomir (Fig. 5e, f). We further investigated the potential effects of miRNA-146a-5p agomir on IRAK1 and TRAF6 protein levels in the DRG and SDH at 14 days after CCI. In comparison with CCI rats injected with the negative control, western blot showed that miRNA-146a-5p agomir significantly decreased the protein levels of IRAK1 and TRAF6 in the DRG and SDH (Fig. 5g, h). Meanwhile, our western blot results also showed that intrathecal miRNA-146a-5p agomir decreased the phosphorylation level of pNF-κB (p65) protein in CCI rats (Fig. 5g, h). Our double immunofluorescence results showed the CCI-induced nuclear translocation of pNF-κB (p65) in DRG neurons of rats intrathecally injected with miRNA-146a-5p agomir or control agomir (Fig. 5i, j).
miRNA-146a-5p antagomir aggravates neuropathic pain of rats after CCI
We further determined the effect of miR-146a-5p antagomir on CCI rats. Compared with CCI rats intrathecally injected with the antagomir control, intrathecal injection of miR-146a-5p antagomir significantly aggravated CCI-induced neuropathic pain from POD7 to POD21 (Fig. 6a, b). Our qPCR results suggested that miR-146a-5p antagomir increased the mRNA levels of IRAK1 and TRAF6 in the DRG and SDH at 14 days after CCI compared with CCI rats intrathecally injected with the antagomir control (Fig. 6c-f). Western blot results showed that miR-146a-5p antagomir increased the protein levels of IRAK1 and TRAF6 in the DRG and SDH at CCI POD14 (Fig. 6g-j). We also found that intrathecal miR-146a-5p antagomir increased the phosphorylation level of pNF-κB (p65) protein in CCI rats (Fig. 6g-j).
Discussion
Neuropathic pain is a common type of chronic pain that affects patients' quality of life. The exact molecular mechanism of neuropathic pain has not been fully elucidated. Several rat models with partial injury to peripheral nerves have been used to investigate the possible mechanisms. CCI is a commonly used model to mimic the pathophysiological progression of chronic neuropathic pain. In this study, the role of miRNA-146a-5p in the pathophysiological mechanism of neuropathic pain was investigated. Our research successfully established a CCI rat model and found that the mechanical PWT and thermal PWL in the CCI group were significantly lower than those in the sham group. The CCI rats showed allodynia and hyperalgesia, which are characteristic clinical features of neuropathic pain. Our results demonstrated a significant increase in the miRNA-146a-5p level in the DRG and SDH of rats suffering from neuropathic pain and a considerable increase in the expression of IRAK1 and TRAF6. Our findings are consistent with other studies that used other pain models, in which miRNA-146a-5p, TRAF6, or IRAK1 are strongly upregulated in the SDH [26,30,31].
Several reports have shown that some miRNAs participate in the development of neuropathic pain and affect neuropathic pain by regulating protein levels during pain progression [19,[32][33][34]. The proposed possible mechanism indicates that peripheral stimuli from inflammation or nerve injury can induce the secretion of inflammatory mediators and thus change miRNA expression in the DRG or SDH. miRNA-146a-5p, a member of the miRNA family, is involved in immune responses, cell proliferation, and inflammation [18,35]. miRNA-146a-5p is related to the pain-related pathophysiology of osteoarthritis. The variable expression of miRNA-146a-5p in the spinal cord and DRG contributes to osteoarthritic pain in the knee joint [25,31].
As critical innate immune receptors, TLRs are activated in neuropathic pain, and their deficiency protects against neuropathic pain. The activation of TLR signaling on cells in the peripheral or central nervous system, particularly glial cells and DRG neurons, contributes to neuropathic pain [5-8, 15, 36, 37]. Activated TLR4 initiates transmembrane signaling cascades that trigger intracellular mediators [13][14][15][16]. In this pathway, the activation of IRAK1 and TRAF6 leads to the nuclear translocation of the transcription factor NF-κB, resulting in the production of proinflammatory cytokines, such as IL-6 and TNF-α. Meanwhile, the activation of NF-κB can induce miRNA-146a-5p [17]. miRNA-146a-5p, an NF-κB-dependent microRNA, plays a key role in the regulation of TIR signaling through its target molecules, namely TRAF6 and IRAK1, two important signaling proteins in the TIR signaling pathway [17,22,24]. We demonstrated that over-expression of miRNA-146a-5p protects rats against neuropathic pain after CCI operation by negatively regulating the expression levels of IRAK1 and TRAF6.
To further determine the role of miRNA-146a-5p in CCI-induced neuropathic pain, we found that CCI rats intrathecally injected with the miRNA-146a-5p antagonist, miRNA-146a-5p antagomir, suffered from aggravated neuropathic pain. Intrathecal injection of miRNA-146a-5p antagomir elevated the levels of IRAK1 and TRAF6 in CCI rats. Our finding is consistent with recent studies, in which miRNA-146a-5p negatively regulates the TIR signaling pathway by targeting IRAK1 and TRAF6. Several studies suggested that miRNA-146a-5p may negatively regulate LPS-induced TLR signaling through downregulation of IRAK1 and TRAF6 by binding to the 3′UTR of their mRNAs [17,38]. Previous studies also confirmed that miRNA-146a-5p-deficient mice exhibit a considerable increase in IRAK1 and TRAF6 protein levels and are hypersensitive to LPS [23,39]. However, whether miR-146a-null mice are sensitive to neuropathic pain must be further confirmed. In our research, we demonstrated that miRNA-146a-5p antagomir increased the phosphorylation level of NF-κB (p65). This indicates that downregulation of miRNA-146a-5p may result in over-responsiveness of the TIR signaling pathway. By contrast, the over-expression of miRNA-146a-5p contributed to a lower level of NF-κB (p65) phosphorylation. In this study, we did not examine the expression of miRNA-146a-5p and its targets in the brain after neuropathic pain. The expression of miRNA-146a-5p in the brain may possibly regulate neuropathic pain, yet this hypothesis must be further confirmed.

Fig. 4 Expression of miRNA-146a-5p in SDH and DRG of rat with intrathecal administration of miRNA-146a-5p agomir or miRNA-146a-5p antagomir. a, b qPCR showed intrathecal administration of miRNA-146a-5p agomir upregulated the RNA level of miRNA-146a-5p in SDH (a) and DRG (b). c, d qPCR showed intrathecal administration of miRNA-146a-5p antagomir reduced the RNA level of miRNA-146a-5p in SDH (c) and DRG (d)
Conclusions
In this study, we demonstrated that neuropathic pain was associated with miRNA-146a-5p. Therapeutic approaches using miRNA-146a-5p agomir could relieve neuropathic pain in rat models of CCI. The mechanism may involve regulation of the TIR signaling pathway by directly suppressing its targets, IRAK1 and TRAF6. The administration of miRNA-146a-5p or its inducers may be a promising therapy to relieve neuropathic pain.

Fig. 5 miRNA-146a-5p attenuated neuropathic pain and decreased IRAK1 and TRAF6 expression in rat DRG and SDH after CCI. a, b Intrathecal injection of miRNA-146a-5p agomir attenuated CCI-induced mechanical allodynia (a) and thermal hyperalgesia (b). Each administration is indicated by an arrow on 0, 4, 8, and 12 days after CCI operation. Two-way ANOVA, *P < 0.05 versus CCI + Ctrl. miRNA-146a-5p agomir was administrated i.t. in a volume of 20 μL. The negative miRNA agomir was used in the control group. Eight rats were included in each group. c, d qPCR showing mRNA level of IRAK1 in SDH (c) and DRG (d) after miRNA-146a-5p agomir administration. e, f qPCR showing mRNA level of TRAF6 in SDH (e) and DRG (f) after miRNA-146a-5p agomir administration. n = 4 in each group of c-f, Student's t test, *P < 0.05, versus control. g, h Western blot showing protein level of IRAK1, TRAF6, and pNF-κB (p65) in SDH (g) and DRG (h) after miRNA-146a-5p agomir administration. Data summary is shown on the right. n = 4 in each group, Student's t test, *P < 0.05 versus sham. i, j Immunofluorescence showing nuclear translocation of pNF-κB (p65) in CCI control group (i) and CCI agomir group (j). Scale bar 50 μm (i, j)
Additional file
Additional file 1: Figure S1. Cellular distribution of IRAK1 in DRGs. Figure S2. Cellular distribution of TRAF6 in DRGs.

Fig. 6 Inhibition of miRNA-146a-5p leads to aggravated neuropathic pain and increased IRAK1 and TRAF6 expression in rat DRG and SDH after CCI. a, b Intrathecal injection of miRNA-146a-5p antagomir aggravated CCI-induced mechanical allodynia (a) and thermal hyperalgesia (b). Each administration is indicated by an arrow on 0, 4, 8, and 12 days after CCI operation. Two-way ANOVA, *P < 0.05, versus CCI + Ctrl. miRNA-146a-5p antagomir was administrated i.t. in a volume of 20 μL. The negative miRNA antagomir was used in the control group. Eight rats were included in each group. c, d qPCR showing mRNA level of IRAK1 in SDH (c) and DRG (d) after miRNA-146a-5p antagomir administration. e, f qPCR showing mRNA level of TRAF6 in SDH (e) and DRG (f) after miRNA-146a-5p antagomir administration. n = 4 in each group of c-f, Student's t test, *P < 0.05, versus control. g-j Western blot showing protein level of IRAK1, TRAF6, and pNF-κB (p65) in SDH (g, h) and DRG (i, j) after miRNA-146a-5p antagomir administration. Data summary of western blot results in SDH (h) and DRG (j) is shown on the right. n = 4 in each group, Student's t test, *P < 0.05 versus sham

... of Medical Sciences, Chinese Academy of Medical Sciences, for their technical assistance in immunohistochemistry.
Availability of data and materials
No additional data, software, databases, or applications/tools are available apart from those reported in the present study. All data are provided in the manuscript and supplementary data.
"Biology",
"Medicine"
] |
Impact of Humidity and Temperature on the Stability of the Optical Properties and Structure of MAPbI3, MA0.7FA0.3PbI3 and (FAPbI3)0.95(MAPbBr3)0.05 Perovskite Thin Films
In situ real-time spectroscopic ellipsometry (RTSE) measurements have been conducted on MAPbI3, MA0.7FA0.3PbI3, and (FAPbI3)0.95(MAPbBr3)0.05 perovskite thin films exposed to different levels of relative humidity at given temperatures over time. Analysis of the RTSE measurements tracks changes in the complex dielectric function spectra and structure, which indicate variations in stability influenced by the underlying material, preparation method, and perovskite composition. MAPbI3 and MA0.7FA0.3PbI3 films deposited on commercial fluorine-doped tin oxide coated glass are more stable than corresponding films deposited directly on soda lime glass. (FAPbI3)0.95(MAPbBr3)0.05 films on soda lime glass showed improved stability over the other compositions regardless of the substrate, and this is attributed to the preparation method as well as the final composition.
Introduction
Organic-inorganic metal halide-based ABX3 perovskites (A cation: methylammonium-MA, formamidinium-FA, cesium-Cs, rubidium-Rb; B cation: lead-Pb, tin-Sn; X anion: iodine-I, bromine-Br, chlorine-Cl) have gained tremendous attention in photovoltaic (PV) applications due to their high power conversion efficiency, which has increased from 3.8% in 2009 to 25.5% recently [1][2][3]. Application of perovskite films to PV benefits from simple solution deposition processing, a tunable range of bandgap energies, and desirable optoelectronic properties, which include a high absorption coefficient above the bandgap energy, long carrier diffusion lengths, and long carrier lifetimes [4][5][6]. Solar cells made with these absorber layers are promising candidates for next-generation high-efficiency PV technology [7]. The MAPbI3 perovskite is the pioneer among those used in solar cells and has been the most studied to date. Despite showing desirable optoelectronic properties, MAPbI3 degrades with exposure to humidity, oxygen, heat, and ultraviolet light [8][9][10][11]. Previous in situ real time spectroscopic ellipsometry (RTSE) studies conducted by Ghimire et al. on MAPbI3 upon atmospheric exposure have shown phase segregation into PbI2 and MAI starting at the interfaces of the film with the ambient and substrate [12]. It has been reported that MAPbI3 becomes hydrated under a humid environment in the dark and forms PbI2 with humidity exposure under illumination [13]. Another perovskite composition, FAPbI3, shows improved thermal stability compared to MAPbI3; however, it may transform from the desired cubic, photo-conductive "black" perovskite α-phase into the "yellow" trigonal δ-phase in the presence of solvents and humidity [4]. Stability can be improved by tuning the cationic and anionic perovskite composition [14,15]. In particular, the A cation size is critical for the formation of a cubic perovskite structure [16]. When a mixture of MA and FA is used, the Goldschmidt tolerance factor falls in the 0.8-1.0 range, which is favorable for the cubic black-phase perovskite structure to form, and the corresponding material stability is improved [17,18]. Recent studies indicate that partial Br substitution for I in MAPbI3 prevents ion migration in the perovskite and maintains favorable light absorption [19]. Substituting I with Br has also been reported to shrink the lattice parameter and increase the photogenerated-carrier lifetime and charge-carrier mobility [20]. All these studies provide general knowledge on improving the stability of organic-inorganic perovskites; however, the level of stability of different perovskite film compositions in terms of their complex optical response has rarely been explored.
Spectroscopic ellipsometry is a non-destructive, non-invasive measurement often used to explore the structural and optical properties of thin film materials. It measures changes in the polarization state and amplitude of an incident light beam upon interaction with a sample, either by reflection or transmission [21,22]. The complex optical properties and thicknesses of each component layer impact this polarization state change via coherent multiple reflections. Data analysis of ellipsometric spectra is conducted by constructing a parametric structural and optical model based upon a transfer matrix method, from which physical properties, including the complex optical response and layer thicknesses, are extracted in a least squares regression fit to the experimental ellipsometric spectra. In previous studies, thickness information gained from spectroscopic ellipsometry has been found to be in good agreement with that obtained from other techniques such as X-ray reflectivity, atomic force microscopy, scanning electron microscopy, and transmission electron microscopy [23][24][25][26]. When studying the dynamic evolution of properties of interest, in situ RTSE is used [12,[27][28][29][30]. In RTSE, the sample is continuously measured so that the change in the polarization state of the incident light upon interaction with the sample is tracked as a function of time without physically moving the sample, although the sample characteristics may change.
Here, in situ RTSE is used to track the variation of the optical and structural properties of MAPbI3, MA0.7FA0.3PbI3, and (FAPbI3)0.95(MAPbBr3)0.05 perovskite films deposited on soda lime glass and commercial fluorine-doped tin oxide (FTO) coated glass substrates under controlled relative humidity (RH) and temperature variations. These films are prepared using single-step spin coating and two-step solution processing [31][32][33]. The non-destructive and non-invasive nature of RTSE makes it the most suitable measurement method to track these changes. In addition, it enables the simultaneous determination of both optical and structural properties. The strong variations in optical and structural properties for MAPbI3 over time and the minimal variations for MA0.7FA0.3PbI3 and (FAPbI3)0.95(MAPbBr3)0.05 allow us to identify the factors influencing the stability of perovskite films. These factors include substrate type, mixing of organic cations, incorporation of Br as a halide anion, and two-step versus single-step preparation methods. The RTSE data analysis methodology can be adopted in studying degradation of the perovskite absorber and other layers inside PV devices to assist in identifying stable materials for large-scale industrial applications.
Perovskite Film Preparation
All perovskite films are prepared in a nitrogen (N2)-filled glove box to prevent exposure to ambient air. MAPbI3 and MA0.7FA0.3PbI3 films are prepared using single-step spin coating [31,32]. These films are deposited on soda lime glass and FTO coated glass (TEC-15, NSG Pilkington, Rossford, OH, USA) substrates. The perovskite precursor solutions are prepared at a concentration of 1.5 M. The perovskite precursor solution is then spin-coated onto the substrate at 500 rpm for 3 s, and then at 4000 rpm for 60 s, with diethyl ether dropped onto the film at 10 s into the second step. The as-prepared perovskite films are annealed on a hotplate at 65 °C for 2 min and then at 100 °C for 5 min.
To prepare (FAPbI3)0.95(MAPbBr3)0.05, a two-step solution-processed preparation method is used, as described in [33]. Initially, 599.3 mg PbI2 is dissolved in a mixed solvent of 950 µL DMF and 50 µL DMSO. Then 70 mg FAI, 6 mg MABr, and 7 mg MACl are dissolved in 1 mL isopropanol (IPA). To deposit the PbI2 film, 70 µL of the PbI2 solution is dropped onto the substrate and spin-coated at 2000 rpm for 30 s, followed by annealing at 70 °C for 2 min. Then, the IPA solution is spin-coated on the as-prepared PbI2 film at 2000 rpm for 30 s to form the perovskite phase. Next, the samples are annealed at 150 °C for 15 min at 30-40% relative humidity in ambient air.
Real Time Spectroscopic Ellipsometry Data Collection and Analysis
To transfer the prepared perovskite films for RTSE measurements, the films are loaded into a sealed measurement chamber ( Figure 1) filled with nitrogen inside the glove box to prevent exposure to laboratory ambient air. The chamber containing the sample is placed on a temperature-controlled stage with a range from 7 to 70 • C (SCI-SCC6-L-F, Sciencetech Inc., London, ON, Canada). An infrared thermometer is used to confirm the temperature inside the chamber before and at the end of the measurements. Humidity is introduced by water vapor and nitrogen gas flows at a total rate of 5 standard cubic feet per hour (SCFH) into the chamber. 26% RH at temperatures of 7 and 70 • C, and 85% RH at 25 • C are measured within the chamber (Sensing solutions EVM GUI, Texas Instruments Inc., Dallas, TX, USA). In situ RTSE measurements are performed to collect ellipsometric spectra in the form of N = cos (2ψ), C = sin (2ψ)cos (∆), and S = sin (2ψ)sin (∆) where the ellipsometric angles ψ and ∆ describe the relative amplitude and phase shift between the electric field components perpendicular and parallel to the plane of incidence [22]. Ellipsometric spectra (N, C, and S) are collected at an angle of incidence of 70 • for 500 spectral points over a photon energy range from 1.25 to 6.00 eV using a single rotating compensator multichannel ellipsometer (M-2000FI, J.A. Woollam Co., Inc., Lincoln, NE, USA) [34]. Data is collected at 150 s intervals for each film to track changes in the complex optical response over time when the film is exposed to humidity and temperature. Selected time points are analyzed to track time dependent variations in the optical properties and film structure. Window effects of the chamber introducing an additional phase shift in the incident polarization state of the ellipsometer beam are accounted for through the use of ∆-offset parameters [35].
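For reference, the conversion from the ellipsometric angles ψ and Δ to the (N, C, S) representation defined above can be written as a small R helper; the function below is a sketch and assumes ψ and Δ are given in degrees.

```r
# Convert ellipsometric angles psi and delta (in degrees) to the
# (N, C, S) representation used for the RTSE spectra above.
deg2rad <- function(x) x * pi / 180
ncs_from_psi_delta <- function(psi, delta) {
  p <- deg2rad(psi)
  d <- deg2rad(delta)
  list(N = cos(2 * p),
       C = sin(2 * p) * cos(d),
       S = sin(2 * p) * sin(d))
}

# Example: psi = 30 deg, delta = 100 deg
ncs_from_psi_delta(30, 100)
```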
The experimental ellipsometric spectra are analyzed using least-squares regression analysis, in which the quality of fit between the model (mod) and experimental (exp) ellipsometric spectra is defined in terms of the unweighted mean square error (MSE) [22]:

$$\mathrm{MSE} = \sqrt{\frac{1}{3n - m}\sum_{i=1}^{n}\left[\left(N^{\mathrm{mod}}_i - N^{\mathrm{exp}}_i\right)^2 + \left(C^{\mathrm{mod}}_i - C^{\mathrm{exp}}_i\right)^2 + \left(S^{\mathrm{mod}}_i - S^{\mathrm{exp}}_i\right)^2\right]} \times 1000, \qquad (1)$$

where n is the number of measured values and m is the number of fit parameters.
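A short R sketch of this figure of merit is given below; the ×1000 scaling follows a common convention for the unweighted ellipsometric MSE and is an assumption here rather than a statement of the exact software implementation.

```r
# Unweighted ellipsometric MSE between modeled and experimental (N, C, S)
# spectra, as reconstructed above; the x1000 scaling is assumed.
ellipsometric_mse <- function(N_mod, C_mod, S_mod, N_exp, C_exp, S_exp, m) {
  n  <- length(N_exp)                       # number of spectral points
  ss <- sum((N_mod - N_exp)^2 + (C_mod - C_exp)^2 + (S_mod - S_exp)^2)
  sqrt(ss / (3 * n - m)) * 1000
}
```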
To fit experimental ellipsometric spectra, a parameterized optical and structural model is used (CompleteEASE software, J.A. Woollam Co., Inc., Lincoln, NE, USA) to extract the optical response of the perovskite films in terms of complex dielectric function (ε = ε1 + iε2) spectra and structural parameters, which include the perovskite film thickness, the thickness of an interfacial layer forming between the substrate and perovskite film, and the surface roughness layer thickness [36]. The full layer sequence consists of a substrate/perovskite + void interfacial layer/perovskite film/surface roughness/air ambient. The optical properties and the optical model used to describe soda lime glass and TEC-15 substrates have been reported in [37,38] respectively. Spectra in ε for the perovskite film are described using a parametric optical property model, which applies physically realistic parametric dispersion relations spanning from photon energies above to below the bandgap energy [39]. The mathematical equations for the full parametric description of ε for these perovskites have been provided in [39,40]. The imaginary part of the dielectric function (ε2) is described by the sum of critical-point oscillators assuming parabolic bands (CPPB) above the direct bandgap energy and an Urbach tail below the bandgap [41,42]. The above-gap critical points (CPs) in these perovskite films are assumed to be excitonic [43]. The lowest energy CP in ε2 is considered to be the direct bandgap of the perovskite film. Spectra in ε1 are calculated as the sum of a Sellmeier expression, a constant additive term to the real part of ε1 denoted by ε∞, and Kramers-Kronig integration [39] of ε2 over this spectral range. Spectra in ε for the surface roughness layer and interfacial layer are represented by Bruggeman effective medium approximations [29] consisting of 0.5 void (ε = 1) and 0.5 bulk perovskite volume fractions.

Results and Discussion

Figure 2a shows the variation in spectra in ε as a function of time for a MAPbI3 thin film deposited on a soda lime glass substrate when exposed to 85% RH at 25 °C. A noticeable decrease in the magnitude of ε2 is observed after 200 min of exposure of MAPbI3 to humidity. In Niu et al., MAPbI3 is exposed to 60% RH air at 35 °C, and the absorption feature in ε2 located at photon energies between the bandgap energy of 1.55 eV and 2.34 eV decreases sharply due to film degradation [44]. This interpretation provides context for understanding the decrease in the magnitude of ε after 200 min of humidity exposure. Degradation throughout the MAPbI3 film is reflected in the change in optical properties shown in Figure 2a. It has been reported that the interaction between MA and water molecules leads to the desorption of MA, eventually degrading the surface of MAPbI3 by forming a hydrate phase MAPbI3·H2O [13,45]. Due to the lack of stability of the hydrate phase, further decomposition into MAI and PbI2 could occur [45,46]. One possibility to explain the observed variation in optical properties is that the film composition is no longer pure MAPbI3 when the film is exposed to humidity for prolonged times. Previous studies indicated hydrogen bonding between the oxygen in water and the hydrogen in the NH3 groups of MA cations [44,47]. Under exposure to humidity, the formation of that hydrogen bond will lead to an intermediate non-perovskite phase, and this can cause a reduction in density of the perovskite film throughout the bulk [48].
It can be inferred that variation in optical properties can also result from the reduction in film density due to humidity induced film degradation. Figure 3a shows the time dependence of the structural parameters in terms of the surface roughness, bulk layer, and interfacial layer thicknesses as well as the quality of fit between the model and the ellipsometric spectra represented by the MSE. Effective material thickness is calculated by combining the perovskite bulk film thicknesses with the interfacial layer and surface roughness layer thickness weighted with the perovskite material fraction in each layer respectively [12]:
d_effective,material = Σ_layers (f_material × d_layer)    (2)

where d_layer is the thickness of each layer and f_material is the fraction of perovskite material in that layer. This quantity is used to identify changes in the total amount of perovskite per unit area on the substrate. Two models are applied to analyze the RTSE data. One model applies the spectra in ε obtained prior to humidity exposure to fit the experimental data throughout the measurement time. The other fits spectra in ε independently at each time point. As shown in Figure 2a, spectra in ε for MAPbI3 change as a function of exposure time; therefore, the MSE obtained assuming the static ε spectra determined prior to humidity exposure is higher than that obtained when spectra in ε are fit at each time point. This difference in quality of fit indicates that both the structural parameters of the film and the intrinsic characteristics of the film reflected in ε vary with exposure to 85% RH. The surface roughness thickness increases gradually as soon as the humidity flow is initiated (Figure 3a). In the beginning, there is no optically detectable interfacial layer between the film and the glass substrate; however, this lower density layer forms after 300 min of exposure. The evolution of this layer increases the effective film thickness, although changes in the surface roughness and bulk layer thicknesses also occur. Within 550 min of humidity exposure, the effective thickness has increased by 218 nm (Table 1). After 300 min, water molecules may have penetrated the film, potentially from the top surface along grain boundaries or laterally from the sample edges, to the interface between the film and the glass substrate. Because the spectra in ε representing the entire perovskite film change, it is likely that water transport from both directions occurs simultaneously. The increase in effective thickness may be due to an increase in unit cell volume caused by the hydration of MAPbI3, which reduces the film density [49]. Considering the lattice volumes of MAPbI3 and MAPbI3·H2O to be 247 Å3 and 263 Å3, respectively, conversion of MAPbI3 to MAPbI3·H2O causes a volumetric lattice expansion of 6% based on the relative lattice parameters of the two phases [49,50]. This result indicates that water molecules intercalated into the perovskite may have increased the volume of the unit cell, leading to an increase in the thickness of the film and a reduction in density. The increase in unit cell volume due to hydration alone is not sufficient to explain the observed increase in effective film thickness. Other factors, such as decomposition and phase segregation, indicated by the presence of the interfacial layer at the substrate, may have contributed to the increase in effective thickness. It should be noted that the MSE increases steadily with time whether the initial static spectra in ε are used or ε is allowed to vary at each time point. As even the model that allows ε to change over time yields a substantially larger MSE relative to the other samples (Table 1), it is likely that water infiltration initiates breakdown, particularly along the interface. In addition, PbI2 and MAI phase segregation might have occurred well before the final time point analyzed, which is consistent with the results reported by Ghimire et al. [12]. In short, neither the static nor the dynamic model accurately captures how the film changes and breaks down at long times, since the MSE is quite high in both cases.
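As a concrete illustration of Equation (2), the short sketch below computes the effective perovskite thickness from the bulk, roughness, and interfacial layer thicknesses, using the 0.5 perovskite volume fraction assumed for the roughness and interfacial layers in the optical model; the specific thickness values are hypothetical and only for demonstration.

```python
# Minimal sketch of Equation (2): effective perovskite thickness.
# Layer thicknesses (nm) are hypothetical example values, not fitted results.
layers = [
    {"name": "interfacial", "d_nm": 20.0,  "f_perovskite": 0.5},  # 0.5 perovskite / 0.5 void
    {"name": "bulk",        "d_nm": 350.0, "f_perovskite": 1.0},
    {"name": "roughness",   "d_nm": 15.0,  "f_perovskite": 0.5},  # 0.5 perovskite / 0.5 void
]

# d_eff = sum over layers of (perovskite fraction x layer thickness)
d_effective = sum(layer["f_perovskite"] * layer["d_nm"] for layer in layers)
print(f"Effective perovskite thickness: {d_effective:.1f} nm")  # 367.5 nm for these values
```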
Figure 2b shows the variation of spectra in ε for a MAPbI3 film deposited on an FTO coated glass substrate exposed to the same conditions. A decrease in the magnitude of spectra in ε occurs steadily with time. The smaller MSE values for MAPbI3 deposited on FTO coated glass compared to the film directly on soda lime glass imply that the film deposited on FTO does not degrade as much as that on bare glass. Even though the optical properties of the perovskite film on FTO coated glass visually change more during the exposure time, the higher MSE for the film deposited on bare glass indicates that the structural and optical property model for the starting material is inadequate to describe MAPbI3 during the degradation process. We can assume that the intrinsic properties of the film on FTO change more, but these changes may be a sign of large-scale stability since the film does not break down to the extent of the film on bare glass. A comparative study conducted on MAPbI3:Cl perovskite films grown on glass, glass/FTO, and glass/FTO/TiO2 using photoluminescence indicated that films on FTO and on TiO2 show more stable behavior than those on glass, as oxygen can diffuse through the surface from the layer below. The underlying surface roughness was found to be a key factor in the morphology of the perovskite grains and in the resulting concentration of defects by providing a different nucleation site density and impacting the dimensionality of the grain growth dynamics [51]. A minimal increase in effective thickness is observed in Figure 3b compared to that for MAPbI3 deposited on soda lime glass, as listed in Table 1. The small variation in effective thickness for MAPbI3 deposited on FTO emphasizes that this film does not decompose or delaminate to the same extent as the film of the same composition deposited on glass. This behavior is also reflected in ε, as the film on FTO appears more uniform after 1000 min of exposure compared to the film on glass, which shows more substantial changes within 500 min. In the case of MAPbI3 deposited on glass, the large increase in MSE suggests that the structural and optical property model applied no longer adequately describes this sample. The MAPbI3 film on glass is likely to have completely or partially degraded or slightly delaminated, as the large change in effective thickness is driven by the increase in the interfacial layer (Figure 3a). The interfacial layer is nearly as thick as the film itself in that case. Figure 4a shows the variation of spectra in ε when a MA0.7FA0.3PbI3 film deposited on a soda lime glass substrate is exposed to 85% RH at 25 °C. There is a minimal decrease in the magnitude of spectra in ε compared to MAPbI3 deposited on either soda lime glass or FTO coated glass. Mixed MA + FA A-cation perovskites have been reported to be more stable against humidity exposure than perovskites with only MA as the A-cation [52]. By mixing MA and FA A-cations in solid solutions, the cubic perovskite phase is better stabilized. Minimal variation of the effective thickness with time is observed for MA0.7FA0.3PbI3 deposited on glass compared to that observed for MAPbI3 on glass (Table 1) within a similar time interval. The improved stability of the mixed MA + FA perovskite can be attributed to the larger size of the FA cation compared to MA, leading to a more favorable tolerance factor [14,53].
The ionic radii of MA and FA are 2.16 and 2.53 Å, respectively [54], and a larger FA cation fraction in the perovskite results in a Goldschmidt tolerance factor closer to the ideal value of 1 [14,54]. The corresponding tolerance factor values are 0.964 for MA0.7FA0.3PbI3 and 0.911 for MAPbI3. The calculated tolerance factor values are consistent with the observed stability of the mixed MA + FA perovskite relative to MAPbI3. The MA0.7FA0.3PbI3 film deposited on an FTO coated glass substrate exposed to 85% RH at 25 °C shows a smaller decrease in the magnitude of spectra in ε (Figure 4b) compared to MA0.7FA0.3PbI3 deposited on soda lime glass. The improved stability over the MA0.7FA0.3PbI3 film deposited on soda lime glass might be due to the influence of the chemical and structural nature of the underlying FTO [51], similar to that observed for MAPbI3. The variation in effective thickness is less than that of the film on soda lime glass, as shown in Table 1, although both films are relatively stable overall, as shown in Figure 5.
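A minimal sketch of the Goldschmidt tolerance factor calculation referenced above is given below. The MA and FA radii are those quoted in the text [54]; the Pb2+ and I− radii (1.19 Å and 2.20 Å) are common literature values assumed here, since they are not listed in the text, so the computed numbers only approximately reproduce the quoted tolerance factors and mainly illustrate the trend.

```python
import math

# Ionic radii in Angstroms. MA and FA values are from the text [54];
# the Pb2+ and I- values are assumed common literature values (not given in the text).
R_MA, R_FA = 2.16, 2.53
R_PB, R_I = 1.19, 2.20

def tolerance_factor(r_a: float, r_b: float = R_PB, r_x: float = R_I) -> float:
    """Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2) * (r_B + r_X))."""
    return (r_a + r_x) / (math.sqrt(2) * (r_b + r_x))

# Effective A-site radius of the mixed cation taken as the composition-weighted mean.
r_a_mixed = 0.7 * R_MA + 0.3 * R_FA

print(f"t(MAPbI3)         = {tolerance_factor(R_MA):.3f}")      # ~0.91, close to the quoted 0.911
print(f"t(MA0.7FA0.3PbI3) = {tolerance_factor(r_a_mixed):.3f}")  # larger than for MAPbI3, same trend as in the text
```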
The MSE values for the MA0.7FA0.3PbI3 perovskites on either substrate are low, indicating that there are likely no other phases present or changes in phase structure during the observation time. Additionally, the MSE values in each case, assuming either spectra in ε fixed to the pre-humidity-exposure values or allowed to vary with time, are very similar, in contrast to the substantial differences observed for MAPbI3. To further understand the factors that influence the stability of different perovskite film compositions, (FAPbI3)0.95(MAPbBr3)0.05 thin films deposited on soda lime glass substrates have been exposed to 26% RH at 7 °C, 26% RH at 70 °C, and 85% RH at 25 °C. Figure 6 shows spectra in ε for (FAPbI3)0.95(MAPbBr3)0.05 deposited on soda lime glass exposed to 26% RH at 7 °C. The decrease in the magnitude of spectra in ε is minimal compared to both MA0.7FA0.3PbI3 and MAPbI3.

Figure 6. Spectra in ε at selected time points for the thin film (FAPbI3)0.95(MAPbBr3)0.05 deposited on soda lime glass as a function of photon energy when exposed to 26% RH at 7 °C. RH is introduced at 5 min.
There is a small variation in effective thickness (Figure 7) when the film is exposed for a prolonged time. The quality of fit is good, as reflected in the low MSE values obtained both assuming the static ε determined prior to humidity exposure and allowing a dynamic ε, and the gap between the two MSE values increases only moderately during exposure to humidity. This indicates that low temperature humidity exposure does not affect the stability of this perovskite film substantially, and there are likely no other phases present.

Figure 7. Surface roughness thickness, bulk layer thickness, interfacial layer thickness, effective perovskite thickness, and the quality of fit between the model and experimental ellipsometric spectra in terms of the mean square error as functions of time for the (FAPbI3)0.95(MAPbBr3)0.05 thin film deposited on soda lime glass when exposed to 26% RH at 7 °C. Values obtained assuming the initial spectra in ε (time = 0 min; static) prior to exposure are represented by a solid line, and spectra in ε fit at each time point (dynamic) are represented by solid circles. RH is introduced at 5 min.

Figure 8 shows the variation of spectra in ε when a (FAPbI3)0.95(MAPbBr3)0.05 film deposited on a soda lime glass substrate is exposed to the sequence of 26% RH at 70 °C initially and then 85% RH at 25 °C. The goal of this combined measurement is to test how this film composition withstands both moderate humidity at high temperature and high humidity at moderate temperature, after its stability at fixed 7 °C and 26% RH conditions has been established. There is essentially no change in the magnitude of spectra in ε as a function of time, in contrast to MA0.7FA0.3PbI3 and MAPbI3, and the changes are even smaller than those observed for this composition at low temperature. The (FAPbI3)0.95(MAPbBr3)0.05 film at low temperature may exhibit a small change over time as small amounts of water condense onto the film. However, even those changes are minor compared to the other perovskite compositions studied. Figure 9 illustrates the structural parameters in the form of the surface roughness, bulk layer, and interfacial layer thicknesses as functions of time for the (FAPbI3)0.95(MAPbBr3)0.05 film deposited on a soda lime glass substrate exposed to 26% RH at 70 °C and then 85% RH at 25 °C. No variation in effective thickness is observed for the film exposed to 26% RH at 70 °C, but there is a minimal increase in effective thickness during the 85% RH at 25 °C exposure in the time interval reported in Table 1. This variation is minimal compared to the other film compositions and measurement conditions. The small changes that are present (<10 nm) are attributed to a small precession of the measurement beam spot on the sample surface over the duration of the measurement. The quality of fit assuming a static ε obtained prior to humidity exposure is almost identical to that when spectra in ε are obtained at each time point (dynamic ε) for the time exposed to 26% RH at 70 °C. Note that the same static ε obtained prior to humidity exposure at 70 °C is used as the temperature is decreased, leading to a small increase in the MSE. This difference is expected, as spectra in ε for semiconductors vary with measurement temperature [55][56][57]. However, even in this case, the spectra in ε measured at 70 °C provide a low MSE.
This similarity in quality of fit indicates that the structural parameters of the film and the intrinsic characteristics of this perovskite reflected in ε are not affected by the humidity exposure and temperature variations. Overall, temperature variations over the 7 to 70 °C range do not appear to have an impact on the stability of the (FAPbI3)0.95(MAPbBr3)0.05 composition perovskite films.

Figure 9. Surface roughness thickness, bulk layer thickness, interfacial layer thickness, effective perovskite thickness, and the quality of fit between the model and experimental ellipsometric spectra in terms of the mean square error as functions of time for the thin film (FAPbI3)0.95(MAPbBr3)0.05 deposited on soda lime glass when exposed to 26% RH at 70 °C and then 85% RH at 25 °C. Values obtained assuming the initial spectra in ε (time = 0 min; static) prior to exposure are represented by a solid line, and spectra in ε fit at each time point (dynamic) are represented by solid circles. The vertical dotted line marks when the temperature and humidity are changed. RH is introduced at 5 min.
The improved stability of (FAPbI3)0.95(MAPbBr3)0.05 over MAPbI3 and MA0.7FA0.3PbI3 under temperature variations and humidity exposure is attributed to the incorporation of Br into the mixed MA + FA film. Previous research has shown that including a small amount of Br enhances the stability, suppresses ion migration, and reduces the trap-state density [53]. According to Noh et al., upon incorporation of Br into MAPbI3 for solar cell absorbers, there is an improvement in open circuit voltage (V_OC) from 0.87 to 1.13 V and an increase in fill factor (FF) from 0.66 to 0.74 [58]. Increases in these parameters are an indicator of good electronic film quality [36]. It has also been reported that replacing larger I ions with smaller Br ions results in a greater resistance to high humidity at room temperature [53]. Furthermore, perovskite films with mixed cations and mixed anions crystallize at much higher temperatures and exhibit greater degrees of crystalline order. A material with enhanced crystallinity can both promote a high V_OC in PV devices and be more stable against degradation [4]. The two-step solution-processing preparation method for (FAPbI3)0.95(MAPbBr3)0.05 also leads to a larger grain size than the single-step approach, which further enhances the film stability [33].
The bandgap energies of the studied perovskite films (Table 1) are not substantially affected by prolonged exposure to temperature and humidity, although there is an observed variation in the magnitude of spectra in ε above the bandgap. There is a slight decrease in bandgap when the (FAPbI3)0.95(MAPbBr3)0.05 film exposed to 26% RH at 70 °C is cooled to 25 °C at 85% RH. Previous research has shown that the bandgap decreases with decreasing temperature in these hybrid halide perovskites, unlike in other semiconductors where the bandgap decreases with increasing temperature as a result of lattice dilatation [55,57]. Here, a minimal decrease in bandgap is observed with decreased temperature. This change is small on its own since the temperature variation is not extreme. Film composition also controls the variation in bandgap [59]. The lack of variation in bandgap for the perovskite films continuously exposed to a fixed temperature and humidity can be attributed to the fact that the film composition remains largely the same during the time of exposure over this range of temperatures. Even MAPbI3, which exhibits substantial degradation by water molecules, retains a similar bandgap energy throughout humidity exposure, although the higher energy CPs and the magnitude of ε change more substantially.
Conclusions
The stability of the optical and structural properties of organic-inorganic metal halide-based perovskite thin films exposed to various humidity and temperature conditions has been explored using RTSE. MAPbI3 degrades upon exposure to 85% RH at room temperature, whereas mixed MA + FA perovskites show improved stability when exposed to RH at various temperatures. Temperature alone in the range of 7 to 70 °C does not affect the stability of the studied (FAPbI3)0.95(MAPbBr3)0.05 perovskite films. There is an influence of the substrate on the stability of these perovskites, as the films deposited on FTO coated glass show enhanced stability compared to the films deposited directly onto soda lime glass. Additionally, the incorporation of bromine and the two-step preparation method are further factors contributing to the improved stability of the mixed MA + FA cation perovskites. Among the studied perovskites, (FAPbI3)0.95(MAPbBr3)0.05 demonstrated robust stability under the tested humidity and temperature conditions.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
First performance results of the ALICE TPC Readout Control Unit 2
This paper presents the first performance results of the ALICE TPC Readout Control Unit 2 (RCU2). With the upgraded hardware topology and the new readout scheme in the FPGA design, the RCU2 is designed to achieve twice the readout speed of the present Readout Control Unit. Design choices such as using the flash-based Microsemi SmartFusion2 FPGA and applying mitigation techniques in the interfaces and the FPGA design ensure a high degree of radiation tolerance. This paper presents the system-level irradiation test results as well as the first commissioning results of the RCU2, and concludes with a discussion of the planned firmware updates.
Introduction
ALICE (A Large Ion Collider Experiment) is a general-purpose heavy-ion detector at the CERN LHC focusing on the quark-gluon plasma (QGP), which is believed to exist at extremely high temperature, density, or both [1]. Because of its short lifetime, the QGP cannot be observed directly. Therefore, a set of detectors aimed at observing events and signatures that indicate the existence of the QGP was designed and installed [2]. The Time-Projection Chamber (TPC) is the main tracking detector of the central barrel in ALICE. Through the study of hadronic observables, it is optimized to provide, together with the other central barrel detectors, charged-particle momentum measurements with good two-track separation, particle identification, and vertex determination [3]. The TPC data are collected by 557568 readout pads on its two end plates, behind which the readout electronics are connected [4]. The readout electronics consist of 4356 Front-End Cards (FECs) in 216 readout partitions, distributed over 36 sectors. Each readout partition includes one RCU connected to 18 to 25 FECs via a multi-drop Gunning Transceiver Logic (GTL) bus. More information about the present TPC readout electronics can be found in [3][4][5].
In LHC Run 1, the RCU operated stably [5]. However, with the upgrades during Long Shutdown 1 (LS1), the energy of the colliding beams will be increased to 13-14 TeV, compared to 7-8 TeV during Run 1. As a result, the event size is expected to increase by 20%, and the radiation load on the TPC electronics located in the innermost partitions is estimated to increase from 0.8 kHz/cm2 to 3.0 kHz/cm2 [4]. This leads to requirements of higher readout speed and improved radiation tolerance that cannot be fulfilled by the current TPC readout electronics. In order to provide the needed performance, the present Readout Control Unit (RCU1) is upgraded to the Readout Control Unit 2 (RCU2). Further information on the motivation for the RCU upgrade can be found in [4].
As presented in figure 1, the upgrades from RCU1 to RCU2 comprise five aspects: (1) the GTL bus is divided into four branches instead of the current two-branch structure, (2) the speed of the Detector Data Link (DDL) [6] is increased from 1.28 Gbps to 4.25 Gbps, (3) the functionalities of three PCBs in the RCU1 are integrated into a single PCB in the RCU2, (4) the flash-based Microsemi SmartFusion2 (SF2) FPGA [7] replaces the two SRAM-based FPGAs and one flash-based FPGA of the RCU1, and (5) a detector-pad-based readout scheme, aimed at exploiting the parallelism of the improved hardware, is designed. More details regarding the upgrades from RCU1 to RCU2 have been discussed in [4,6,8].
The ALICE TPC Readout Control Unit (RCU2)
As shown in figure 2, the RCU2 consists of two major systems: the Readout System, which is implemented in the SF2 FPGA fabric [7], and the Control and Monitor System, which runs on the SF2 Microcontroller Subsystem (MSS) [9]. In the Readout System, the Trigger Receiver accepts, decodes, and processes the trigger sequence that comes from the ALICE Central Trigger Processor (CTP) [10] before it passes the generated local triggers [11] to the Readout Module. Based on the local trigger information, the Readout Module reads data from the four branches of FECs in parallel, checks their quality, and merges and packages them into the ALICE data file format [12]. At the final stage, the packaged data are pushed into the DDL2 Module [6], through which they are shipped to the ALICE data acquisition system [13]. In addition, an Internal Logic Analyzer, which provides the capability of debugging the internal logic of the Readout System, has been implemented.
The Control and Monitor System includes the Monitoring and Safety Module (MSM) [14], the Ethernet Module, and the SF2 MSS with its peripherals. The Monitoring and Safety Module is responsible for monitoring the status of the FECs and reporting it to the ALICE Detector Control System (DCS) [15] in case of abnormal situations. As shown in figure 3, a tailored 32-bit Linux system operates on the ARM Cortex-M3 [9] of the SF2 MSS and three 16-bit DDR3 SDRAMs [16]. Two of the SDRAMs together store the 32-bit words of the Linux system, and the third one stores the parity bits used in the SECDED mechanism. When SECDED is enabled, the MSS DDR controller [9] computes and adds parity bits to the data while writing to the DDR3 SDRAMs. In a read operation, the data and the parity bits are then checked, supporting 1-bit error correction and 2-bit error detection.
System level irradiation test
As mentioned above, the increased luminosity in LHC Run 2 with respect to LHC Run 1 will lead to a higher radiation load on the TPC electronics; thus, improved radiation tolerance of the RCU2 is required. The FPGA on the RCU2 is the Microsemi SmartFusion2 (SF2) SoC FPGA, in which the configuration is stored in single event upset immune flash cells [7]. In addition, several of the interfaces on the SF2, such as the Ethernet and the DDR interface, are protected by native mitigation techniques in the hardware. The RCU2 has been through several irradiation campaigns, and the results on the final version of the RCU2 hardware have so far been promising. More details on the previous irradiation test results can be found in [8].
In April 2015, a system level irradiation campaign was performed at The Svedberg Laboratory (TSL) in Uppsala using a 170 MeV proton beam. During this campaign the RCU2 was operated in a close to normal running situation while exposed to a wide proton beam at a moderate flux. As shown in figure 4, the test setup consists of three parts.
In the radiation area, the RCU2 is connected to four FECs, and the supply voltage and current consumption of the SF2 FPGA are monitored by an SF2 starter kit [17]. The trigger crate, the data computer with the CRORC [6], and the PC that provides serial communication to the RCU2 are located some meters away in a shielded area. Via the LAN, all the above-mentioned devices are controlled and monitored by three PCs located in the control room. In this test, the RCU2 was receiving and processing triggers upon which it performed the basic data-taking operation. At the same time, all available registers in the RCU2 were monitored. This section presents the observations on the RCU2 stability, especially regarding the readout and the Linux system, and discusses the corresponding mitigation actions. To evaluate the radiation tolerance of the RCU2, the Mean Time Between Failures (MTBF) for Run 2 for different kinds of failures was calculated based on the cross-sections extracted from the test. When calculating the MTBF for Run 2, the radiation load on the TPC electronics in the innermost partitions (3.0 kHz/cm2 [4]) is used, and all 216 RCUs plus 4356 FECs are included. Since the flux of high-energy hadrons in the outermost partitions is expected to be one third of that in the innermost partitions, the numbers listed in this paper are worst-case estimates.
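The MTBF figures quoted in the following tables can be obtained from a measured per-device cross-section; a minimal sketch of that conversion is given below, assuming the cross-section is quoted per device in cm2 and that every counted device sees the worst-case innermost-partition flux (these are our assumptions about the procedure, not a statement of the exact calculation used).

```python
# Minimal sketch: converting a measured error cross-section into a Run 2 MTBF estimate.
FLUX = 3.0e3  # worst-case high-energy hadron flux, particles / (cm^2 s) [4]

def mtbf_hours(sigma_cm2: float, n_devices: int, flux: float = FLUX) -> float:
    """MTBF assuming all counted devices share the same per-device cross-section."""
    rate_per_s = sigma_cm2 * flux * n_devices  # expected errors per second, summed over devices
    return 1.0 / rate_per_s / 3600.0

# Consistency check: the Ethernet errors discussed later involve only the 216 RCU2s.
print(f"Ethernet MTBF ~ {mtbf_hours(2.5e-11, 216):.1f} h")  # ~17 h, close to the value quoted below
```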
Readout stability
To evaluate the readout stability, data taking with the RCU2 was monitored with the trigger rate set to 10 Hz, and two test cases were performed: irradiating the whole RCU2 and irradiating only the SF2, the latter realized by shielding the other parts of the RCU2 with a collimator. The FECs were always irradiated; however, in the second case, they were partially shielded.
During the test, the readout was observed to stop several times due to three categories of errors: resets due to the PLL losing lock, SEU-induced errors on the FECs, and data transmission errors. The cross-sections and MTBFs in Run 2 for these errors are presented in table 1.
At the time of testing, the PLL lock signal was used directly as a reset signal in the RCU2 FPGA design; thus, any loss of lock leads to a stop of data taking. In the SF2, the PLL has three configuration options [9]: (1) holding reset until lock is achieved, (2) outputting the clock before lock and not re-synchronizing after lock is achieved, and (3) outputting the clock before lock and re-synchronizing after lock is achieved. According to figure 5, it is concluded that the output clock of the PLL is not reliable if it loses lock; thus, its usage should be minimized. There will not be any output clock if the PLL is configured with option (1), and the output clock will be unstable for several clock cycles with option (2) or (3) selected. Following the irradiation campaign, the reset strategy of the RCU2 has been redesigned so that the PLL lock signal is used as a reset signal only when the RCU2 is powered up, after which it no longer contributes to the reset scheme.
To deal with SEU-induced errors on the FECs, which may cause data taking to get stuck, the following mitigation actions have been implemented. Firstly, the front-end control bus on the FECs is continuously monitored. Secondly, the communication protocols between the RCU2 and the FECs are monitored. Thirdly, the trailer word of each data package coming from the FECs to the RCU2, which contains signature information such as the channel address and the length of the data, is verified. With all these actions, it is expected that error situations will be detected and corrected at an early stage. In case of any data transmission error, the ALICE DAQ will enter a Pause and Recover (PAR) state so that the physics run does not need to stop. This PAR scheme benefits all the detectors and is to be supported by the RCU2. In addition, although no scenario that can be interpreted as an FPGA fabric error has been seen in this irradiation test, critical registers and state machines are considered for protection with Triple Module Redundancy (TMR) or Hamming encoding, as suggested in [18].
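As a conceptual illustration of the TMR protection mentioned above (the actual RCU2 firmware is written in an HDL, so this is only a sketch of the voting principle), the snippet below shows the bitwise two-out-of-three majority vote that TMR performs on three redundant copies of a register.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 majority vote over three redundant register copies."""
    return (a & b) | (a & c) | (b & c)

# Example: the second copy suffers a single-bit upset; the vote still returns the correct value.
original = 0b1011_0101
upset = original ^ 0b0000_1000  # flip one bit in the second copy
assert tmr_vote(original, upset, original) == original
print(bin(tmr_vote(original, upset, original)))
```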
Linux stability
As mentioned in section 2, the Linux system of the RCU2 runs on the ARM processor in the SF2 MSS together with three DDR3 SDRAMs, on which SECDED [9] protection can be enabled. While testing its stability, two kinds of errors were observed: sometimes the Linux system reboots, and in some cases it freezes. The possible reasons for these errors are single-event upsets (SEUs) and multi-bit upsets (MBUs) in the DDR SDRAMs and in the ARM processor, which lead to kernel panics. The cross-sections and MTBFs of the Linux reboot and freeze errors for the different test cases are presented in table 2. Due to the limited statistics, it is hard to conclude whether SECDED protection on the DDR memories helps or not. To reduce the impact of instabilities caused by Linux errors, several mitigation actions have been taken or explored. First of all, a stand-alone module for DDL2 SERDES [6] initialization has been designed to replace the default initialization scheme, in which the SERDES is initialized by the SF2 MSS on system boot-up. Furthermore, configuring the FECs via DDL2 has been realized. With these two measures, the readout can be separated from the Linux system, so that the RCU2 can continue taking data in case any error occurs in Linux. In addition, an exploration of replacing the Linux system with a real-time operating system (RTOS) that resides only in the internal eSRAM of the SF2 is ongoing. As a part of this activity, the cross-section of SEUs in the SF2 eSRAM has been characterized and the mean time between SEUs in Run 2 has been calculated. As shown in figure 6, provided a single eSRAM is used on each RCU2, an SEU is expected around every 220 s.

Figure 6. SEUs in the SF2 eSRAM.
Trigger interface, Ethernet and MSM stability
In accordance with the previous tests [8], the trigger reception (TTCrx) is stable: no error was seen in this irradiation test. The Monitoring and Safety Module (MSM) is also stable, meaning that no error has been seen on the RCU2 side. Additionally, the stability of the Ethernet is acceptable: two errors were observed in the tests, corresponding to a cross-section of 2.5 × 10-11 ± 71% and an MTBF in Run 2 of 17.0 ± 12.1 hours.
Readout performance of the RCU2
The readout time of single events has been measured in the setup shown in subplot (c) of figure 7, where one full readout partition, consisting of one RCU2 and 25 FECs (the maximum number), is used. The benchmarking has been performed over the full range of readout parameters: the number of data samples in each ALTRO channel [19] was varied from 0 to 1000, with the DDL2 working at 2.125 Gbps and at 4.25 Gbps separately. As presented in subplot (b) of figure 7, the size of a single event is exactly linearly proportional to the number of samples, and it is also consistent with that of the events recorded by the RCU1 [5].
At the speed of 2.125 Gbps (∼200 MB/s), the DDL2 link starts to become saturated if the number of samples exceeds ∼50. In this condition, the readout speed is improved by a factor of ∼1.3 with respect to the RCU1 [5]. With the DDL2 working at 4.25 Gbps (∼400 MB/s), the readout speed of the RCU2 can be increased by a factor of ∼2 compared to the RCU1. In this case, it is the Readout System operating at 80 MHz that limits the performance, because it can provide a maximum bandwidth of only ∼305 MB/s. A further performance improvement is expected by changing the internal clock frequency from 80 MHz to 100 MHz. In this case the readout speed is estimated to be ∼2.6 times that of the RCU1. The 100 MHz clock will be provided by an on-board oscillator, so that the usage of PLLs in the SF2 can be fully avoided.
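A quick back-of-the-envelope check of the quoted internal bandwidth limit is sketched below, assuming one 32-bit word transferred per clock cycle and interpreting MB as 2^20 bytes (both are our assumptions; the data-path width is not stated explicitly above).

```python
# Rough check of the internal Readout System bandwidth limit,
# assuming one 32-bit word is transferred per clock cycle (assumption).
WORD_BYTES = 4
MIB = 1024 ** 2

def bandwidth_mib_per_s(clock_hz: float) -> float:
    return clock_hz * WORD_BYTES / MIB

print(f"80 MHz  -> {bandwidth_mib_per_s(80e6):.0f} MB/s")   # ~305 MB/s, the quoted limit
print(f"100 MHz -> {bandwidth_mib_per_s(100e6):.0f} MB/s")  # ~381 MB/s after the planned clock increase
```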
Commissioning results for the RCU2
In total, 255 RCU2 boards have been produced, which includes more than 10% spare cards. Since January 2015, 6 RCU2s have been installed and commissioned on one of the 36 TPC sectors. Their geometric locations and appearance can be seen in subplots (a) and (b) of figure 8, respectively.
During this commissioning period, the readout of the RCU2 has been working stably with the DDL2 at a speed of 2.125 Gbps. This has been verified with the following method: a fixed pattern is written into the pedestal memories [19] of the FECs, read back by the RCU2, and checked by the ALICE DAQ. During commissioning, several TB of data have been looped, and no data corruption or readout stops have been observed.
In addition, no Linux reboots or freezes have been seen on the RCU2 boards. The statistics are, however, too limited to draw any conclusion on the Linux stability of the RCU2. For comparison, only about 10 Linux reboots have been experienced on a total of 210 RCU1s. The trigger reception, the Monitoring and Safety Module (MSM), and the Ethernet are working stably. The In-System Programming (ISP) of the RCU2 SF2 FPGA is in general operational; however, in 10-15 out of 100 attempts it exits prematurely. The reason could not be clearly identified, as the ISP programming is handled internally by the SF2 MSS. In these cases, a retry of the ISP leads to the desired result.
Conclusion and outlook
In April 2015, the RCU2 system-level irradiation campaign was performed. It revealed some stability issues, especially regarding the Linux system and the readout. All the radiation-related problems have so far been solved, or mitigation actions for them have been planned. Since January 2015, 6 RCU2s have been commissioned on the ALICE TPC. They have been verified with all the surrounding systems (trigger, DCS, and DAQ) and found to be working stably with the DDL2 at a speed of 2.125 Gbps. The RCU2 FPGA design is entering the finalization phase, while some development is still ongoing: integration and verification of the DDL2 working at 4.25 Gbps, increasing the system clock frequency from 80 MHz to 100 MHz, implementing a novel data sorting algorithm, and implementing multi-event buffering for triggers. With the DDL2 working at 4.25 Gbps and the system clock at 100 MHz, the readout speed will be improved by a factor of at least 2 compared to the current system, which will fulfill the requirements for Run 2 operation. With all the major building blocks in place, the RCU2 is planned to be installed in the ALICE TPC during the LHC winter break (December 2015 to March 2016).
A Metamaterial-Inspired Approach to Mitigating Radio Frequency Blackout When a Plasma Forms Around a Reentry Vehicle
Radio frequency (RF) blackout and attenuation have been observed during atmospheric reentry since the advent of space exploration. The effects range from severe attenuation to complete loss of communications and can last from 90 s to 10 min depending on the vehicle's trajectory. This paper examines a way of using a metasurface to improve the performance of communications during reentry. The technique is viable at low plasma densities and matches a split-ring resonator (SRR)-based mu-negative (MNG) sheet to the epsilon-negative (ENG) plasma region. Considering the MNG metasurface as a window to the exterior of a reentry vehicle, its matched design yields high transmission of an electromagnetic plane wave through the resulting MNG-ENG metastructure into the region beyond it. A varactor-based SRR design facilitates tuning the MNG layer to ENG layers with different plasma densities. Both simple and Huygens dipole antennas beneath a matched metastructure are then employed to demonstrate the consequent realization of significant signal transmission through it into free space beyond the exterior ENG plasma layer.
Introduction
When humans began traveling into space in the 1960s, one major, immediately recognized concern was the severe attenuation of radio communications caused by the plasma formed around the reentry vehicle. In particular, it was determined that when a vehicle moves at very high velocities within the atmosphere, the air in front of the vehicle becomes highly compressed and a shock wave is formed. This gas compression creates a significant amount of heat [1]. With enough heat, the gases in the air within the shock wave will ionize and, thus, an electron plasma is created.
The presence of a plasma layer causes attenuation of the electromagnetic signals propagating through it. These signals are associated with a variety of radio frequency (RF) systems such as voice communications, telemetry, and global positioning system (GPS) data. The attenuation increases as the thickness and density of the plasma surrounding the reentry vehicle increase. An illustration of the plasma sheath surrounding a reentry vehicle is given in Figure 1. Regions of lower and higher plasma density near the vehicle are indicated. The associated effects can become so severe that complete loss of voice communications and even GPS acquisition can occur. This RF blackout can last anywhere from 90 s to 10 min depending on the trajectory and velocity of the reentry vehicle [3]. It has had severe consequences, for example, in the unmanned Genesis mission to collect samples of solar winds. During reentry of the capsule, a sensor failure caused a parachute to not deploy. If mission control had been in contact with it, a backup parachute could have been deployed to prevent the capsule from crashing into the Utah desert [4]. Similarly, hypersonic flight has been pursued with both manned and unmanned vehicles since 1961, e.g., with the X-15 hypersonic airplane, which flew at Mach 6.7 using only rockets [5]. As with the space vehicle reentry scenario, sustained hypersonic flight also faces the potential loss of communications, telemetry, and GPS signals. Renewed interest in the RF blackout issue has arisen from the resurgence of interest in hypersonic delivery systems. Consequently, a reasonable solution to the RF blackout problem would greatly increase the chance of mission success for both space missions and hypersonic vehicular technologies.
Many approaches to ameliorate the RF blackout problem have been tried with varying degrees of success [3]. These include remote antenna assemblies [6], which use antennas placed on sharp, slender probes ahead of the plasma sheath of a blunt-nosed vehicle; magnetic windows [7], which require the generation of powerful magnetic fields; and the addition of quenchants to reduce the electron density [8] or change chemical reaction rates [9]. Implementing one solution or a combination of solutions may be the most appropriate path forward, depending on the application. System designers will need to evaluate all approaches as research continues to develop and explore new options. The focus of this paper is on a purely electromagnetics-based approach.
An artificial medium, the bed of nails or wire medium [10], was created to study this plasma-related communications problem during the early stages of the Mercury-Gemini-Apollo space program in the 1960s. As will be described in Section 2, the electron plasma acts as an epsilon-negative (ENG) medium for frequencies below its plasma frequency. It will be modeled as a Drude medium, and its electromagnetic attenuation and reflection mechanisms will be explained with it. This wire medium was popularized in the early stages of the birth of metamaterials [11], and is now often recognized from the first reports of negative index [12] and negative refraction [13]. A matching mu-negative metasurface based on split-ring resonator (SRR) elements will be introduced in Section 3.
It is related to the SRR-based bulk metamaterial designed for artificial magnetic conductor (AMC) applications [14]. As discussed in [15,16], an appropriate pairing of a MNG metamaterial layer with an ENG one leads to an effective medium through which electromagnetic waves will propagate. The developed MNG metasurfaces are loaded with varactors to enable an adjustable resonance frequency that can be matched to an ENG (plasma) medium with a specific permittivity value. It will be demonstrated that these SRR-based metasurfaces can be tuned to maximize the field transmission through the resulting MNG-ENG plasma layered medium. Finally, the propagation of signals through the MNG-ENG plasma effective medium will be modeled with the fields radiated by both dipole and Huygens dipole antennas in Section 4. It will be demonstrated that the transmitted signal strength and cardioid-shaped pattern associated with the Huygens dipole antenna provide an efficacious solution to the RF blackout problem. This metamaterial-inspired answer to a long-standing practical issue associated with reentry and hypersonic vehicles will be summarized in Section 5.
Electron Plasmas as ENG Media
To properly introduce the Drude model of the permittivity of a lossy electron plasma, several parameters need to be defined. With the Drude model in hand, it is straightforward to understand the impact of an ENG medium on wave propagation in it.
Plasma Parameters
The plasma frequency is a principal characteristic of a plasma; it depends on the plasma density. It is defined as

ω_p = √(n_e q^2 / (m_e ε_0))    (1)

where ω_p is the angular plasma frequency in rad/s, n_e is the electron density per unit volume, q is the charge of an electron, m_e is the mass of an electron, and ε_0 is the permittivity of free space. Clearly, the plasma frequency increases as the density does. Another principal characteristic of a plasma is its collision frequency, Equation (2), which depends on the electron density, the temperature T in Kelvin through Boltzmann's constant k_b, and the Coulomb logarithm ln(Λ), which is typically 10 < ln(Λ) < 30. The value of ln(Λ) generally varies only by a factor of 2 over many orders of magnitude of plasma densities. Losses in a plasma result primarily from particle collisions, and therefore the collision frequency is used to characterize them. The Coulomb logarithm is the factor that indicates when small-angle collisions are more effective than large-angle ones. It is reasonable to assume ln(Λ) = 10 for a laboratory plasma and ln(Λ) = 20 for a reentry plasma [17]. A plot of both the plasma and collision frequencies as functions of the plasma density is given in Figure 2, assuming the Coulomb logarithm is equal to 20 for a fully ionized gas. This plot is an example of the behavior of the plasma and collision frequencies versus the electron density of the plasma. It shows that there is an initial peak in the collision frequency. However, it also indicates that above a density of about 9 × 10^21 m-3, the collision frequency begins to decrease. This feature is associated with the fact that as the temperature of the plasma continues to increase, the majority of the atoms creating it become ionized, and then the temperature dominates the value of the collision frequency, causing it to decrease. The actual location of the collision frequency peak as well as the values of the plasma and collision frequencies will vary depending on the parameters of the reentry environment, e.g., the pressure, air density, composition, etc., but the relative behaviors of the plasma and collision frequencies always follow the pattern illustrated in Figure 2. The plasma and collision frequencies are both used to define the Drude model of the plasma's permittivity.
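To make Equation (1) concrete, the short sketch below evaluates the plasma frequency for the electron density of 10^18 m-3 considered later in the paper; the physical constants are standard values, and the result is only an approximate check of the quoted figure.

```python
import math

# Physical constants (SI)
Q_E = 1.602176634e-19    # electron charge, C
M_E = 9.1093837015e-31   # electron mass, kg
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def plasma_frequency_hz(n_e: float) -> float:
    """Equation (1): f_p = (1/2pi) * sqrt(n_e q^2 / (m_e eps0)), with n_e in m^-3."""
    omega_p = math.sqrt(n_e * Q_E**2 / (M_E * EPS0))
    return omega_p / (2.0 * math.pi)

# Electron density used as a representative reentry value later in the paper
print(f"f_p = {plasma_frequency_hz(1e18) / 1e9:.2f} GHz")  # ~8.98 GHz, near the quoted 8.927 GHz
```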
Drude Model
The Lorentz model of the polarization field, P, in a dielectric represents its response to the presence of an electromagnetic field in it. It is a second order differential equation that is synonymous with the damped simple harmonic oscillator model of a spring or the model of a RLC circuit. It is given by Equation (3) [16,18]:

d^2P/dt^2 + Γ dP/dt + ω_0^2 P = ε_0 ω_p^2 E    (3)

where Γ is the damping rate, ω_0 is the resonance frequency, and E is the exciting electric field. Because of the difference in mass between an electron and a nucleus, this model of the wave-matter interaction principally describes the motion of the electrons. The resonance frequency is associated primarily with the restoring (attractive) force of the nucleus; the damping rate is associated with all of the interactions among an atom's constituents. The Lorentz model reduces to the Drude model when the restoring force is negligible in comparison to the driving force of the electromagnetic field. Therefore, the Drude model is applicable in cases where the material has a large number of free electrons, such as a metal or a plasma. Assuming the engineering time convention exp(jωt), the time harmonic form of the Drude model describes the relative permittivity of a plasma as [16]

ε_r(ω) = 1 − ω_p^2 / (ω(ω − jΓ_e))    (4)

where the damping rate Γ_e takes the role of the plasma collision frequency and represents the losses. It is clear that in the case of a lossless Drude medium, Equation (4) tells us that ε_r(ω) < 0 when ω < ω_p. Thus, a low loss plasma essentially acts as an ENG medium when the frequency ω of the electromagnetic field propagating in it is smaller than the plasma frequency ω_p. Consequently, the plasma acts as an ENG medium depending on its density through ω_p and Γ_e, and on the field interacting with it. Assuming the electron density is known, the plasma and collision frequencies can be calculated. The real part of the relative permittivity versus frequency can then be computed from Equation (4). The result is the plot given in Figure 3 with the electron density and the source frequency as the two independent axes. The real part of the permittivity is negative at low frequencies when the electron density is small. As the electron density increases, the negative region of the real permittivity extends to higher frequencies.
Wave Propagation in Complex Media
Wave propagation within a material can be understood through the solutions of the equation [19]

∇^2 Ψ − γ^2 Ψ = 0    (5)

where Ψ can be either the electric or the magnetic field, and γ is the complex constant

γ = α + jβ    (6)

where α is the attenuation constant and β is the propagation constant. Solutions of the one-dimensional version of Equation (5) taken, for example, along the positive z-axis that are not growing are simply Ψ = A exp(−γz) = A exp(−αz) exp(−jβz), where A is a complex constant. Traveling (evanescent) wave solutions of Equation (5) are defined as solutions whose propagation constant β is real (imaginary). Evanescent waves simply decay within the material. From Maxwell's equations it is well known that β = ω√µ√ε. Consequently, an ENG medium, i.e., a medium in which µ > 0 and ε < 0, has β = −jξ, where ξ is a real number, and thus supports only decaying fields. Similarly, a MNG medium, i.e., a medium in which µ < 0 and ε > 0, also has β = −jξ and likewise supports only decaying fields. On the other hand, both a double positive (DPS) medium, i.e., a medium in which µ > 0 and ε > 0 so that β = +ω√|µ|√|ε| is real, and a double negative (DNG) medium, i.e., a medium in which µ < 0 and ε < 0 so that β = −ω√|µ|√|ε| is real, support propagating fields [16]. Therefore, as the permittivity of a plasma becomes negative for a wave whose frequency is below the plasma frequency, β is imaginary in it and the corresponding waves are evanescent. Consequently, a plasma only allows evanescent waves at those frequencies, and an electromagnetic signal in it will be attenuated. These features have been verified experimentally by NASA [20].
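The classification described above can be summarized with the short sketch below, which simply checks the sign of the product µ_r·ε_r to decide whether a lossless medium supports propagating or evanescent waves; it ignores loss and the sign-of-refraction subtleties of DNG media.

```python
# Propagating vs. evanescent behavior in lossless media, based on the sign of mu_r * eps_r.
media = {
    "DPS (mu>0, eps>0)": (1.0, 2.0),
    "ENG (mu>0, eps<0)": (1.0, -2.0),   # e.g., a plasma below its plasma frequency
    "MNG (mu<0, eps>0)": (-2.0, 1.0),   # e.g., an SRR metasurface above resonance
    "DNG (mu<0, eps<0)": (-2.0, -2.0),  # a matched MNG-ENG pair acts effectively like this
}

for name, (mu_r, eps_r) in media.items():
    kind = "propagating" if mu_r * eps_r > 0 else "evanescent"
    print(f"{name}: {kind}")
```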
MNG-ENG Layered Structure
To explore overcoming the attenuation associated with the ENG nature of the reentry plasma, a MNG layer is introduced between the electromagnetic radiator and the plasma region. The combined MNG-ENG layers act as a DNG medium, allowing wave propagation through it. Tuning the MNG layer to match the ENG one enhances the overall transmission level.
Tunable MNG Medium
In 1999, Pendry et al. reported a way to create, by design, an artificial medium that exhibits a negative permeability [21]. The MNG effect was achieved using highly sub-wavelength resonant structures, i.e., split-ring resonators (SRRs), as inclusions in a dielectric background. The SRRs work by creating small loop currents when an incident magnetic field is correctly oriented to create flux through them or an incident electric field is parallel to their faces and drives the current directly. The resulting loop current acts as a magnetic dipole, affecting the material's permeability in a manner similar to an electric dipole affecting the material's permittivity. It was recognized that a non-resonant closed loop produces a much smaller response than the resonant open one. The SRR can be designed for a desired operating frequency. The capacitances between its two split rings and in the gaps of each ring, along with the inductances of each split ring, act as a resonant L-C element whose resonance frequency can be tuned by design. When many split rings are combined in a planar or volumetric array, they can be arranged to act coherently to yield a negative permeability above the resonance frequency, as shown in Figure 4. The layout of a C-band SRR is shown in Figure 5. The metallized rings combine to give the effective inductance L_s. All of the gaps combine to yield the effective capacitance C_s. The resulting LC tank circuit has the resonance frequency f_0 = 1/(2π√(L_s C_s)) [22]. Introducing a varactor across both of the open gaps of the split rings facilitates the realization of a total capacitance that is tunable. Consequently, the resonant response of the SRR-based metamaterial permeability is tunable.
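The sketch below illustrates how a varactor capacitance shifts the LC resonance f_0 = 1/(2π√(L_s C_s)). The inductance and capacitance values are purely illustrative placeholders, not extracted parameters of the SRR in Figure 5 (which are not given here), so the resulting frequencies only show the qualitative trend of the tuning; the varactor is assumed to add in parallel with the intrinsic gap capacitance.

```python
import math

def srr_resonance_ghz(l_henry: float, c_farad: float) -> float:
    """LC tank resonance f0 = 1 / (2*pi*sqrt(L*C)), returned in GHz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad)) / 1e9

L_S = 1.0e-9      # illustrative effective ring inductance (placeholder)
C_RING = 0.8e-12  # illustrative intrinsic gap capacitance (placeholder)

for c_varactor in (0.2e-12, 0.5e-12, 1.0e-12):
    c_total = C_RING + c_varactor  # varactor assumed in parallel with the gap capacitance
    print(f"C_var = {c_varactor * 1e12:.1f} pF -> f0 = {srr_resonance_ghz(L_S, c_total):.2f} GHz")
# Increasing the varactor capacitance lowers f0, i.e., the passband moves down in frequency,
# which is the same trend as reported for the simulated metasurface.
```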
Wave Propagation through the MNG-ENG 2-Layered Structure
As simulated in [15], it is possible to build a DNG material by combining alternating layers of MNG and ENG materials. Care must be taken to establish a match between the ENG and MNG regions in order to maximize the transmission through their combination. The ENG medium is assumed to be lossless to simplify the discussion. It became evident while studying the reentry plasma scenarios that the plasma density, and therefore the relative permittivity of the ENG region, changes over the reentry trajectory as the temperature near the exterior of the vehicle increases. Therefore, to maintain matching with a plasma whose permittivity is changing, it becomes desirable for the resonance of the MNG layer's permeability to be tunable so that its negative region can be moved into the frequency range of the negative permittivity. Hence, a tunable SRR-based metasurface is needed.
The ability of the tunable SRR-based MNG layer to be matched to the plasma's ENG response is demonstrated by studying two cases in which the same SRR design is used to match the SRR-based negative permeability to the plasma's negative permittivity simply by changing the voltages applied to the varactors. The response of an infinite layer of the SRR unit cells in Figure 5 on a 1.0 mm thick Rogers TM 5880 material whose relative permittivity is ε r = 2.2 and whose loss tangent tan δ = 0.009 combined with a 1.0 mm thick bulk ENG material representing the plasma layer was simulated for a normally incident electromagnetic wave. The simulations were performed in CST Microwave Studio (MWS) using a unit cell configuration, i.e., perfect electric and magnetic boundaries were introduced in the x-and y-directions to yield a normally incident, x-polarized plane wave. Ports were implemented at both of the surfaces with z-directed normals. The phase reference plane for each port was adjusted to the corresponding nearest surface of the two-layered material to attain the correct reflection and transmission coefficients. The geometry of the CST model is shown in Figure 6. The adjustable capacitance in the CST model was added as a discrete component in the SRR gaps as shown in Figure 6. Both cases of the electric field being oriented across the SRR gaps (parallel to the varactor orientation) and orthogonal to it were simulated. Only the former (parallel) case induces the desired resonant currents in the SRRs whose resonance frequency can be tuned by adjusting the varactors' effective capacitance.
The simulated S-parameter results for the parallel case are given in Figures 7 and 8. They show a narrow nearly-complete passband response that depends on both the ENG permittivity and the SRR's capacitance. The variation in the S-parameters as the ENG material's permittivity becomes more negative while the resonance of the effective permeability is fixed at 4.4 GHz is displayed in Figure 7. The maximum transmission loss for each case is −0.32 dB at 4.47 GHz when ε = −2.0, −0.36 dB at 5.02 GHz when ε = −4.0, and −0.38 dB at 5.20 GHz when ε = −6.0. These results show that the passband moves up in frequency as the plasma frequency increases, i.e., the permittivity becomes more negative in the targeted C-band. On the other hand, Figure 8 illustrates that as the SRR capacitance is increased with the plasma frequency fixed at 8.838 GHz and the collision frequency fixed at 300 MHz, the resonance frequency of the MNG layer decreases, i.e., the passband moves down in frequency. The maximum transmission loss for each case is −0.40 dB at 5.22 GHz for 1.0 pF, −0.38 dB at 4.79 GHz for 2.0 pF, and −0.37 dB at 4.54 GHz at 4.0 pF. The −3 dB bandwidths of the passbands are all approximately 360 MHz, a 7.2% fractional bandwidth at 5.0 GHz. Both results illustrate that the MNG-ENG two layered medium supports a nearly complete, finite passband that can be tuned to a desired frequency.
It is also noted that as considered here, the MNG layer would have to be integrated into the surface of the reentry vehicle or airframe as it is the interface with the plasma. Consequently, its SRRs and embedded varactors would necessarily have to be on the interior side of it. Moreover, a metal-clad ceramic material, which can readily survive the indicated temperatures, would have to replace the Duroid substrate from a practical point of view. The SRR-based design is very amenable to the modifications that would be necessary to accommodate these practical considerations.
Propagation Losses through a Plasma
The loss experienced by a wave propagating in the plasma region can be calculated with the imaginary part of the permittivity in Equation (4), i.e.,

Im[ε_r(ω)] = ω_p^2 Γ_e / (ω(ω^2 + Γ_e^2))    (7)

As expected, the plasma frequency (Equation (1)) figures prominently in the calculation. The collision frequency Γ_e is generally at least an order of magnitude smaller than the plasma frequency, as seen in Figure 2. As the plasma frequency is determined from the electron density, the plasma losses are directly proportional to the plasma density. However, the plasma density varies greatly depending on the airframe and the speed and angle of reentry [23].
The predicted electron density can vary dramatically from near the capsule's or airframe's surface to the surrounding plasma sheath to the region beyond, as has been reported in a variety of works [24][25][26][27]. For our studies, an electron density of 10^18 m-3 was chosen as a reasonable intermediate value. This choice results in a plasma frequency of 8.927 GHz. A plasma temperature of 2000 K is chosen for Equation (2), which results in a 42.5 MHz collision frequency. This temperature choice is not necessarily indicative of the maximum temperature of a reentry plasma, but rather an average temperature seen along the path of the electromagnetic wave traveling through the plasma. The basis for this choice can be understood from Figure 1. The intense plasma formed by the shock wave associated with the "heat shield" portion of the reentry capsule or the nosecone at the front of an airframe can reach temperatures exceeding 10,000 K. However, knowing the expected characteristics of the vehicle's atmospheric trajectory, any communications or GPS antenna would be placed away from those hot-spot regions, i.e., these antennas would be placed in the indicated "lower density" region near them. Thus, the indicated plasma temperature is a highly reasonable value. The resulting collision frequency and attenuation values are also consistent with previous studies of the reentry plasma [28]. The values associated with these choices give ε_r = −2.06 + j0.026 at the 5.1 GHz operating frequency. The real part of ε_r is rounded to −2.0 in order to simplify the ensuing discussion.
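The quoted permittivity value can be checked with a few lines of code using the Drude form of Equation (4) and the plasma and collision frequencies given above; note that the sign of the imaginary part depends on the assumed time convention, so only its magnitude is compared here.

```python
# Check of the Drude relative permittivity at 5.1 GHz for the reentry plasma values above.
import math

f_p = 8.927e9  # plasma frequency, Hz (n_e = 1e18 m^-3)
nu = 42.5e6    # collision frequency, Hz (T = 2000 K)
f = 5.1e9      # operating frequency, Hz

w_p, gamma, w = (2 * math.pi * x for x in (f_p, nu, f))

# Drude model, exp(+j*omega*t) convention: eps_r = 1 - w_p^2 / (w*(w - j*gamma))
eps_r = 1 - w_p**2 / (w * (w - 1j * gamma))
print(f"Re(eps_r) = {eps_r.real:.2f}, |Im(eps_r)| = {abs(eps_r.imag):.3f}")
# Prints approximately Re = -2.06 and |Im| = 0.026, matching the quoted value.
```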
To explore the magnitude of the propagation losses in such a plasma as a function of its thickness, an analytical one-dimensional multi-layer model was developed and modeled with the transfer matrix method (TMM) [29] implemented in MATLAB. The multilayer model is shown in Figure 9a. Regions 1 and 4 are free space. Region 2 represents the MNG layer and Region 3 represents the ENG layer, i.e., the plasma. As a representative example, consider the relative permeability value of the MNG layer to be chosen as a conjugate match to the ENG layer, as described in [15]. In particular, setting µ_1r = −µ_2r, ε_1r = −ε_2r, and d_1 = d_2 = 1.0 mm, the TMM-calculated S-parameter results as functions of frequency are shown in Figure 9b. They clearly indicate that little to no reflection and nearly complete transmission occur for this matched case near the operating frequency, 5.1 GHz. Thus, the attenuation of the fields radiated by an antenna in Region 1 that would be accumulated after propagating through this structure, i.e., as they reach the output face of Region 3 and enter the free-space Region 4, would be due primarily to the attenuation they experience after passing the 1.0 mm depth as they propagate through the plasma region. Using the TMM model further, the thickness of the ENG (plasma) layer was varied from 1.0 mm to 100 mm (10.0 cm), while leaving the MNG layer at the designed 1.0 mm thickness. To verify these TMM results and to confirm the efficacy of the CST software for this class of problems, simulations of the same problems were compared. The CST S-parameter data were imported into MATLAB and the differences with respect to the TMM data were calculated. Both lossless and lossy plasma cases were studied.
Both cases were simulated with the real part of the permittivity being Re[ε_r] = −2.0. Both lossy cases were simulated with the corresponding conductivity [18], σ = 0.073 S/m. The TMM-calculated transmission coefficient results as a function of the thickness of the plasma are shown in Figure 10a. Figure 10b shows the difference in dB between the TMM and CST results. These differences are quite small. The maximum difference occurs in the lossy plasma case and is 0.15 dB, which is well within acceptable limits for the discretization level used in the CST simulations. The exponential loss shown in Figure 10a varies from about −0.004 dB to −124 dB. However, in a very dynamic environment such as atmospheric reentry, the losses typically reach a maximum of about −40 dB because of the variations in the plasma density in the sheath [23,28]. These further simulations show that the attenuation arising simply from the evanescent nature of the fields as they propagate beyond the matched distance dominates the losses. The additional loss associated with taking into account the loss term Im[ε_r] in Region 3 is shown in Figure 11. It is insignificant in comparison to the evanescent decay associated with the imaginary part of the wave number in the plasma and thus will be ignored for the remaining aspects of this study.
Figure 11. Attenuation difference between assuming the plasma is lossless and including the loss term Im[ε_r].
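A minimal Python sketch of the transfer-matrix calculation is given below; it is an independent re-implementation, not the MATLAB/CST models used above. Normal incidence and free space on both sides are assumed, and the MNG layer is taken as the idealized homogeneous slab implied by the conjugate-match choice (ε_1r = +2, µ_1r = −1 against the ε_2r = −2, µ_2r = +1 plasma). Writing the slab matrix in terms of the admittance derived from the wavenumber keeps the result independent of the square-root branch.

```python
import numpy as np

def layer_matrix(eps_r, mu_r, d, f):
    """Characteristic (ABCD) matrix of one homogeneous slab at normal incidence."""
    k0 = 2 * np.pi * f / 2.99792458e8
    k = k0 * np.sqrt(complex(eps_r) * complex(mu_r))  # wavenumber in the slab
    y = k / (k0 * mu_r)                               # normalized admittance (branch-safe)
    c, s = np.cos(k * d), np.sin(k * d)
    return np.array([[c, 1j * s / y],
                     [1j * y * s, c]])

def s_parameters(layers, f):
    """Reflection/transmission of a stack of (eps_r, mu_r, d) slabs between two
    free-space half spaces (normalized load admittance = 1)."""
    M = np.eye(2, dtype=complex)
    for eps_r, mu_r, d in layers:
        M = M @ layer_matrix(eps_r, mu_r, d, f)
    B, C = M @ np.array([1.0, 1.0])
    return (B - C) / (B + C), 2.0 / (B + C)           # (S11, S21)

f = 5.1e9
mng = (+2.0, -1.0, 1e-3)   # assumed idealized conjugate-matched MNG slab, 1.0 mm thick
for d_plasma in (1e-3, 10e-3, 100e-3):
    eng = (-2.0, +1.0, d_plasma)                      # lossless ENG (plasma) layer
    s11, s21 = s_parameters([mng, eng], f)
    # Transmission is essentially total for the matched 1 mm case; additional plasma
    # thickness adds roughly exponential (evanescent) decay, as discussed above.
    print(f"plasma {1e3 * d_plasma:5.1f} mm: |S21| = {20 * np.log10(abs(s21)):8.2f} dB")
```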
Antenna Performance in the Presence of the 2-Layer MNG-ENG Structure
The unit cell (i.e., the planar, periodic, transversely infinite) simulations demonstrated the proof of concept that a MNG-based metasurface can be designed to enable a passband in the presence of an electron plasma (i.e., an ENG medium). As an antenna on a reentry or hypersonic vehicle is ideally positioned within rather than on the structure for aerodynamic and structural integrity purposes, the MNG metasurface would be integrated as a window in the surface of the vehicle with the application antenna being located behind it relative to the exterior. Thus, simulations of the response of an antenna radiating in free space from the MNG side of the MNG-ENG structure were performed. The results demonstrate that the antenna-MNG metastructure can be designed to achieve application-significant transmission through the ENG-based plasma, e.g., for communications purposes.
Dipole Performance
Simulations with an idealized electrically small dipole antenna first illustrate this concept. The standard dipole antenna pattern is well known [30]. The peak gain is 1.76 dBi. Figure 12 shows the simulation model in the presence of the two-layer metastructure. A finite-sized MNG-based metasurface that consists of nine SRR unit cells is introduced on the antenna side of the ENG (plasma) layer. The thicknesses of the Duroid substrate and the plasma region were again fixed at 1.0 mm. The dipole antenna is centered along the y-axis at the origin and has the total length 11.8 mm. The dipole was implemented using 0.6 mm diameter cylinders with a 2.0 mm gap between them driven by a 50 Ω discrete port. The finite-sized SRR material is located 10.0 mm away (∼ λ/5.88 at 5.1 GHz, slightly beyond the reactive near field of the dipole, λ/2π). This offset distance from the metasurface is sufficient for this short dipole to effectively illuminate a significant portion of it directly, exciting its steady state MNG response. Note that the ENG material is located on the free space side opposite to that of the antenna as it would be for the reentry scenario. Also note that the linearly polarized dipole antenna is oriented along the x-axis. Consequently, its broadside electric field is also oriented along the x-axis, which is the optimal configuration, i.e., the electric field is oriented across the gaps of the two split rings in each SRR element to maximize the response of the metasurface. The ENG region's relative permittivity was set at ε r = −2.0 and its relative permeability was set at µ r = 1.0. The unit cell design of the MNG metasurface was the same as the infinite periodic case; it was matched to the ENG region to obtain the passband at 5.1 GHz shown in Figure 7 by specifying the capacitance value to be 1.0 pF. Because the metasurface is now finite in size, the actual transmission values are not identical to the infinite periodic case. Any reflection or absorption effects associated with this fact will now be exposed with comparisons of the far-field performance characteristics of the antenna in free space and in the presence of the layered MNG-ENG metastructure.
The far-field directivities of the dipole in its E- and H-planes are presented in Figure 13. Its frequency, 5.1 GHz, is matched to the passband of the MNG-ENG structure. Moreover, the size of the dipole is small relative to the overall size of the metasurface, as is its distance from it. Consequently, there is very little leakage of the radiated field diffracted around it. The dipole was artificially matched to its idealized source to yield its best performance. The maximum directivity in the "exterior" ("interior") free space region shown in the E-plane in Figure 13a is 4.07 dBi (4.45 dBi) in the broadside +z (-z) direction. The higher than free-space broadside directivity in the φ = 0° plane is an immediate consequence of the significant narrowing of the field pattern brought about by the inherent enhanced response of the metastructure to the x-directed electric field. On the other hand, the expected omnidirectional directivity pattern in the H-plane in Figure 13b only shows a bit of distortion along the y-directions. This distortion is the direct result of the magnetic fields of the finite-sized dipole, which are no longer parallel to the metasurface but rather are obliquely incident on it. Thus, they impact the performance of the SRR elements, which in fact have a bi-anisotropic response. Nonetheless, the results demonstrate that the effective DNG metastructure does provide a passband window for signals radiated by the dipole antenna. Nevertheless, the dipole itself, being a bidirectional radiator, directs slightly more than half of its radiated power into the source region. In particular, the realized gain in the broadside, +z-direction is 2.48 dBi and is 2.85 dBi in the -z-direction.
Huygens Design Performance
Ultimately, a unidirectional antenna is highly desired for any "sub-exterior surface" antenna system. The electrically small Huygens dipole antennas developed in [31][32][33][34] are advantageous in this regard. The linearly-polarized Huygens antenna is made up of three major components: a capacitively loaded loop (CLL), an Egyptian axe dipole (EAD), and a smaller dipole that excites these near-field resonant parasitic (NFRP) elements by coupling energy into them. In free space they are electrically small systems that radiate a broad beamwidth, unidirectional cardioid pattern with basically a 3 dB peak directivity enhancement over that of an equivalent sized dipole antenna. The Huygens design in [31] was modified to have the same operating frequency as that of the dipole case, 5.1 GHz. The modified design is shown in Figure 14, and the design parameters are shown in Table 1. Its performance was verified first in free space. The CST simulation results are shown in Figure 15. The cardioid nature of the patterns is immediately recognized. The peak directivity, 4.15 dBi, at 5.1 GHz is in the broadside +z direction; the corresponding front-to-back ratio (FTBR) is 10.9 dB. The corresponding peak realized gain value is 3.58 dBi. The associated minimum FTBR is 10.9 dB. The radiation efficiency is 88.6%.
The same MNG-ENG metastructure used in the dipole case was then introduced. As the Huygens antenna is also linearly polarized, it was oriented with its electric NFRP element along the x-axis, as shown in Figure 14a, to maximize the response of the MNG metasurface. Moreover, as with the dipole antenna, it was located 10 mm from the front surface of the MNG layer. The corresponding realized gain patterns in the two principal planes, φ = 0° and φ = 90°, at 5.1 GHz of the Huygens dipole antenna in the presence of the MNG-ENG layered metastructure, with ε_r = −2.0 in the plasma layer and the capacitance set to 1.0 pF to attain the MNG layer with µ_r = −2.0, are shown in Figure 16a. The peak directivity, 5.87 dBi, at 5.1 GHz remains in the broadside +z direction; the corresponding FTBR is 4.7 dB. The associated peak realized gain in the +z-direction is 3.40 dBi, and the radiation efficiency is 92.9%.
Note that the cardioid nature of the patterns is distorted from their free space shape but its resemblance is still recognizable. The MNG-ENG structure has unbalanced the relative electric and magnetic dipole responses of the free-space Huygens design. Nevertheless, the results clearly demonstrate that significant field levels are radiated into the exterior, free space region. In fact, these results compare favorably with the free space ones. They clearly demonstrate that the combination of the metastructure and a Huygens dipole antenna could potentially be successful in mitigating the plasma blackout problem.
Moreover, note that there was an unexpected directivity improvement in the +z-direction along with some narrowing in the patterns in both the φ = 0 • and φ = 90 • planes. It is associated with the fact that the MNG-ENG pair in this case is a conjugate match. In particular, the MNG-ENG metastructure is nearly transparent at every incident angle. However, due to the anisotropic nature of the SRRs in the realistic MNG layer, its response does vary away from the broadside direction. It was found that this effect actually causes the MNG-ENG metastructure to act as a lens, directing more power into the broadside direction with a corresponding narrowing of the realized gain patterns.
Another consequence of the lensing effect is that the beamwidth in the φ = 90° plane can be adjusted at the cost of realized gain by varying the distance between the metastructure and the Huygens dipole antenna. The results of a parameter study of the pattern quality as the distance between the antenna and the metastructure was varied are summarized in Table 2. The general effects are clearly seen by comparing the 5.5 mm spacing results shown in Figure 16b with the 10.0 mm spacing results in Figure 16a. Moving the antenna away from the metastructure narrows the −3 dB beamwidth while increasing the realized gain in the +z direction and generally reducing the associated FTBR value. On the other hand, moving the antenna closer to the metastructure generally broadens the beamwidth while decreasing the realized gain and increasing the FTBR. While the lensing effect is more severe and particularly noticeable in the φ = 0° plane, where the SRRs have their strongest response, the beamwidth there remains relatively constant between 50° and 60° despite the dramatic visual narrowing of the pattern. It is found that if the antenna is moved too close to the metastructure, the beamwidth in the φ = 90° plane becomes much larger than 180° and the FTBR decreases as the overall pattern tends to become essentially fan-like. The 5.5 mm and 7.0 mm spacing choices yield both acceptable realized gain and FTBR values. Comparing the realized gain patterns in Figure 16, the differences in the peak directivity and FTBR features are now understood.
Parameter Studies
With these very positive directivity results in hand, the thickness of the plasma and its impact on the magnitude of the field radiated past it into free space became yet another interesting issue. Returning to the TMM code, variations of the plasma thickness and incident angle were investigated to understand how the attenuation changes as the incident angle does. The results are shown in Figure 17. As the plasma thickness increases, the attenuation increases quickly for large off-axis angles. The effect causes the beam to narrow as the plasma thickness increases. This narrowing outcome supports the desire to have a more directive antenna for the application. The previous results for the antenna-MNG layer distance of separation then suggest that this off-axis angle effect might be somewhat counteracted by moving the antenna farther away to increase the resulting directivity and narrowing the beam further. On the other hand, depending on where the receiving antenna is relative to the location of the antenna system in the reentry vehicle, one might want to move the antenna closer to the MNG interface to enhance the probability of receiving the transmitted signal knowing that this separation distance will widen the beamwidth even though an increased attenuation penalty would occur. A system engineering design analysis as to whether or not the position of the antenna should be movable to take advantage of this physics may be advantageous.
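A rough estimate of the angle dependence just described can be obtained from the dominant evanescent-decay term alone, as in the Python sketch below. It assumes a lossless uniform plasma with ε_r = −2.0 and simply evaluates the decay of the normal wavenumber component for a few illustrative angles and thicknesses; it is not the full TMM result of Figure 17.

```python
import numpy as np

c0 = 2.99792458e8
f = 5.1e9
k0 = 2 * np.pi * f / c0
eps_r = -2.0            # lossless plasma permittivity assumed in the text

for theta_deg in (0, 30, 60):
    theta = np.radians(theta_deg)
    # The transverse wavenumber is conserved across the interface, so in the ENG
    # region kz^2 = k0^2 (eps_r - sin^2 theta) < 0 and the field decays as exp(-|Im kz| z).
    kappa = k0 * np.sqrt(np.sin(theta)**2 - eps_r)     # |Im(kz)|
    for d in (1e-3, 10e-3, 50e-3):
        att_dB = 20 * np.log10(np.e) * kappa * d       # 8.686 * kappa * d
        print(f"theta = {theta_deg:2d} deg, d = {1e3 * d:4.0f} mm: "
              f"~{att_dB:6.1f} dB of evanescent decay")
# Larger off-axis angles decay faster, which is the mechanism behind the beam
# narrowing with increasing plasma thickness noted above.
```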
As demonstrated in [15], a conjugate match requires that k_1 d_1 = k_2 d_2 or, equivalently, µ_1r = ε_2r d_2/d_1 for the layer material properties that we have assumed. Consequently, with the Region 3 permittivity set to ε_2r = −2.0, this means that if µ_1r is varied linearly with the plasma thickness as µ_1r = −2 d_2/d_1, the conjugate match should be maintained. As the SRR design is frequency tunable, the actual value of |µ_1r| could be increased simply by moving its resonance point closer to the operating frequency.
To investigate if there would be any advantage to adjusting its value in anticipation of a particular thickness, the TMM code was used to calculate the attenuation experienced by the fields with the MNG-layer permeability adjusted for the thickness of the plasma. The results obtained by setting d_1 = 1.0 mm and ε_2r = −2.0 are shown in Figure 18 for d_2 again varying from 1.0 to 20.0 mm for three different values of µ_1r. While they appear to show some slight advantage at further distances, the results actually indicate that the initial level of the field in the plasma region is negatively affected by not matching the plasma next to the vehicle. There is no means to recoup the signal level once it has entered into the plasma region. This result confirms that the best strategy is to match the MNG layer to the permittivity of the ENG layer where it begins. As discussed in many recent reentry plasma studies [23,24], the plasma density is not constant as the distance from the reentry vehicle increases, i.e., the permittivity of the plasma has a profile. As noted above, it depends on many factors such as the actual vehicle shape, its angle of attack entering the atmosphere, and its speed. To understand the impact of the profile on the transmitted signal, the four-region problem was considered with the simple profile for Region 3 shown in Figure 19a. The permittivity has ε_2r = −2.0 in the 1.0 mm thick layer at the reentry vehicle's surface; the plasma density in the sheath then reaches its maximum at 30.0 mm away from the surface and returns to its free-space value at a distance of 50.0 mm. The calculated transmission coefficient with µ_1r = −2.0 is shown in Figure 19b. The main takeaway from this result is that the attenuation level reaches a maximum value that is just slightly less than −40 dB for this profile and then begins to decrease to zero at the outer boundary of the plasma. As noted above, the signal would thus experience a decrease of around that maximum value. While it seems to be a large decrease, even larger attenuation levels are encountered in existing practical systems. For example, the up and down links for satellite communications to and from the earth experience path losses of 100-200 dB each way depending on the signal frequencies, weather, and satellite altitudes [35,36]. These large propagation losses explain why large reflector antennas with extremely high directivities are required to counteract them in such communication scenarios.
Again, the analysis indicates that a proper matching of the permeability of a single MNG layer to the permittivity of the plasma next to the reentry vehicle leads to a potential, practical engineering solution.
Nevertheless, instead of restricting oneself to a single MNG layer that may require a rather large negative permeability value to counteract a high plasma density region next to the vehicle, one could consider replacing the single layer with a multilayer structure that alternates artificially realized ENG and MNG layers [15] until the initial layer of the plasma region is reached, so as to obtain an effective k_1 d_1 value that matches the anticipated plasma thickness or negative permittivity value, and therefore its associated larger k_2 d_2 value. In this sense, the plasma layer is always instantiated as the last layer, with the rest of the layers being optimized to provide the conjugate match and the consequent DNG metastructure. This multilayer approach requires the thickness of each layer, d_m, for m = 1, ..., M, to be less than about 1/8th of the wavelength in each layer, i.e., d_m ≤ λ_source/[8 Re(n_m)], where n_m is the index of refraction in the m-th layer. By changing d_m, the multilayer structure can be designed to act as a bandpass filter with the thickness d_m determining the center frequency of the filter. This response is well-known; it is what is encountered in one-dimensional periodic structures [37,38] such as Bragg mirrors [39,40].
The S-parameters of a representative 10-layer structure of alternating ε_r = −2.0 and µ_r = −2.0 layers are shown in Figure 20a,b for two different layer and plasma thicknesses, d_m = 3.0 mm ≈ 0.072 λ_eff and d_m = 5.23 mm ≈ 0.126 λ_eff, where λ_eff = 41.57 mm is the wavelength in each layer at 5.1 GHz. The bandpass response at the desired operating frequency is clearly obtained in the 5.23 mm case. Note that if the dielectric thickness, d_m, were increased much beyond a fifth of the source wavelength, then the dramatic evanescent wave decay shown with the previous two-layer cases takes over. Even though its response may become quite narrow in bandwidth because the plasma layer may be thick, by designing the resulting Bragg filter to have its maximum transmission at the desired operating frequency, i.e., to have it tuned to the resonance frequency of the Huygens dipole antenna and completely encompass its narrow bandwidth, a relatively strong signal (even a Morse code signal at a specific frequency) could be inserted into the plasma region with a significant chance of reaching the free space region with strong enough signal strength to establish even rudimentary communications during reentry. Similarly, the narrow bandwidth signal emitted by a GPS satellite could reach a well-placed receiving antenna in the vehicle.
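The quoted effective wavelength and thickness fractions can be checked with a one-line calculation, sketched below in Python. It assumes that the unspecified parameter of each single-negative layer is +1 (i.e., the layers are ε_r = −2, µ_r = +1 and ε_r = +1, µ_r = −2), so that |n| = √2 in every layer.

```python
import numpy as np

c0 = 2.99792458e8
f = 5.1e9
# Each alternating layer is single-negative with |eps_r * mu_r| = 2 under the
# assumption stated above, so the magnitude of its refractive index is sqrt(2).
n_mag = np.sqrt(2.0)
lam_eff = c0 / (f * n_mag)
print(f"lambda_eff = {1e3 * lam_eff:.2f} mm")          # ~41.6 mm, matching the quoted value
for d in (3.0e-3, 5.23e-3):
    print(f"d = {1e3 * d:.2f} mm -> d/lambda_eff = {d / lam_eff:.3f}")  # ~0.072 and ~0.126
```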
Conclusions
The design of a metasurface and a Huygens dipole antenna to facilitate the transmission of signals through a reentry plasma was examined. A 1.0 mm thick SRR-based MNG metasurface was developed whose properties are tunable. It was combined with the plasma layer to form a two-layer MNG-ENG metastructure that produced a passband with very high transmission levels. The DNG properties of this metastructure overcame the deleterious evanescent field behavior that occurs when only the ENG plasma layer is present. It was demonstrated with both electrically small dipole and Huygens dipole antennas that significant power could be radiated into the plasma region with an appropriately configured MNG-ENG metastructure. The MNG-ENG structure was specifically designed to be a conjugate match for the base case with a 1.0 mm thick plasma with ε_r = −2.0 next to the reentry vehicle, i.e., the MNG layer was designed to have µ_1r = −2.0 at the C-band operating frequency, 5.1 GHz. The peak realized gain with the Huygens dipole antenna was up to 2.01 dBi higher than the simple dipole reference case.
Further adaptability of the MNG-ENG metastructure was demonstrated on the basis of the anisotropic nature of the SRRs. It facilitates a narrowing of the beamwidth of the radiated fields along the direction orthogonal to the linear polarization direction of the antennas, while an essentially constant beamwidth, but narrower overall pattern, is maintained in the parallel direction. Moving the Huygens dipole antenna closer to the metastructure creates a broader beam at the expense of some broadside peak realized gain, and vice versa. This ability to narrow or widen the beamwidth could enhance the applicability of the reported approach. It could also allow the pattern of a single system to transition dynamically between a wide and a narrow beamwidth if the position of the antenna were allowed to change physically relative to the metastructure.
Several different properties of the complex media system were explored to understand the level of attenuation a radiated signal would face under different plasma conditions. Both constant and profiled plasma permittivity issues were investigated. The results indicated that the best approach was to conjugately match the MNG layer to the properties of the plasma layer next to the reentry vehicle. While the resulting attenuation corresponding to a plausible plasma profile associated with a reentry scenario was considerable, the maximum level nevertheless was not perceived to be insurmountable in practice.
This work has identified a path to potentially overcome the blackout problem faced by reentry spacecraft and hypersonic aerial vehicles. While the tunable two layer MNG-ENG metastructure was proven to be effective for the assumed plasma thickness, extending the range and properties of the metastructure, such as with the multilayer Bragg filter design to conjugately match the plasma permittivity and thickness, is currently under investigation. The simple varactor-based tunability of the MNG metasurface illustrates that it could be readily matched to the various plasma regions that might be faced during reentry. Moreover, it was demonstrated that the Huygens dipole antenna design could be modified to adapt to the projected plasma conditions and resulting MNG-ENG structure. Thus, the developed approach could significantly extend the communication window of the aerial vehicle with its mission control center by crucial seconds or minutes depending on the trajectory and speed of the aerial vehicle as it propagates through the atmosphere. | 10,550.4 | 2020-10-06T00:00:00.000 | [
"Engineering",
"Physics"
] |
Modulated magnetic structure in 57Fe doped orthorhombic YbMnO3: a Mössbauer study
In the orthorhombic manganites o-RMnO3, where R is a heavy rare earth (R = Gd-Yb), the Mn3+ sublattice is known to undergo two magnetic transitions. The low temperature phase has an antiferromagnetic structure (collinear or elliptical), which has been well characterized by neutron diffraction in most of these compounds. The intermediate phase, occurring in a narrow temperature range (a few K), is documented for R = Gd-Ho as a collinear modulated structure, incommensurate with the lattice spacings. We report here on a 57Fe Mössbauer study of 2% 57Fe doped o-YbMnO3, where the spin-only Fe3+ ion plays the role of a magnetic probe. From the analysis of the shape of the magnetic hyperfine Mössbauer spectra, we show that the magnetic structure of the intermediate phase in o-YbMnO3 (38.0 K < T < 41.5 K) is compatible with an incommensurate collinear modulated structure.
Introduction
The physics of the orthorhombic (or perovskite) rare earth manganites o-RMnO 3 , where R is a rare earth ion, is governed by the interplay of various interactions: the largest is the Jahn-Teller effect on the 3d 4 Mn 3+ ion, which can lead to orbital ordering; the interionic exchange interaction, which leads to magnetic ordering; and the magneto-electric coupling which couples ferroelectric order and magnetic order in certain given circumstances (for a review, see Ref. [1]). Due to the present interest in multiferroic phenomena, i.e. acting on magnetic moments via an electric field (or vice-versa) [2], the precise determination of the lattice and magnetic properties of these interesting materials, and of their interplay, is of prime importance.
The magnetic phase diagram of the orthorhombic rare-earth manganates has been thoroughly established in Ref. [3] by means of specific heat measurements. For the naturally occurring perovskite phases (R = La-Gd), a single magnetic transition, due to ordering of the Mn moments, is observed. Its critical temperature T N1 decreases with the rare earth radius. For Eu and Gd however, a second transition occurs at T N2 > T N1 . The same happens for all the heavier rare earth manganates, for which the orthorhombic phase is metastable. It is obtained through high pressure annealing of the naturally occurring hexagonal phase. Regarding the ground magnetic structure of o-RMnO 3 , neutron diffraction studies determined it to be of antiferromagnetic (AF) collinear A-type for R=La-Gd [4,5], of transverse spiral type for R = Tb and Dy [6,7] and of AF collinear E-type for R = Ho [8]. In TbMnO 3 and DyMnO 3 , the spiral magnetic order is accompanied by a ferroelectric order and a large magneto-electric effect [2,4]. In HoMnO 3 , the E-type order also bears ferroelectricity [9]. The structure of the intermediate phase (T N1 < T < T N2 ) is documented only for R = Gd-Ho, where it was shown to be incommensurate sine-wave modulated [8]. The Néel temperature T N1 characterizes therefore a lock-in transition below which the magnetic structure becomes commensurate with the lattice spacing.
For the last rare earths of the series, the ground magnetic structure has been determined only for R = Yb [10]: it is of AF E-type, but the structure of the intermediate phase has not yet been elucidated. By analogy with what occurs for R = Gd-Ho, it is expected to be incommensurate modulated. A Mössbauer spectroscopy study of 57 Fe doped o-YbMnO 3 , with the isotopes 57 Fe and 170 Yb, was performed in Ref. [11], but could not reach a definitive conclusion about the structure of the intermediate phase.
We have performed a 57 Fe Mössbauer investigation of this material, doped at a 2% level with 57 Fe, and by analyzing the shapes of the spectra in the intermediate phase, we show they are compatible with a collinear incommensurate modulated type. Recently, we have shown the feasibility of this method by analyzing the incommensurate magnetic phases in FeVO 4 by Mössbauer spectroscopy [12] (see also Ref. [13]).
Sample preparation
Stoichiometric quantities of the starting materials were thoroughly mixed and ground together, then pressed into pellets and heated up to 950 °C, in air, for 12 h. After an intermediate grinding, the mixture was pressed again into pellets and further heated at 1100 °C for 48 h in air, which completed the synthesis. According to the powder x-ray diffraction pattern (see Fig.1 a), the h-YbMnO 3 sample was single phase, with a = b = 6.068(1) Å and c = 11.364(1) Å.
The orthorhombic phase of YbMnO 3 was prepared by heating the hexagonal phase h-YbMnO 3 under high pressure. The powder was packed into a platinum capsule surrounded by pyrophyllite (a good pressure transmitter and thermal insulator) and heated at 1100 °C (ramp rate 40 °C/min) during 40 min under an applied pressure of 5 GPa. The XRD pattern is entirely compatible with the orthorhombic Pnma space group (see Fig.1 b), with lattice parameters (in the Pnma setting): a = 0.57988 nm, b = 0.73094 nm and c = 0.52197 nm, in good agreement with published values [10,11]. The Bragg peaks are narrow, showing negligible nonstoichiometry.
Magnetic susceptibility
The polycrystal magnetic susceptibility was measured with a field of 20 G in the temperature range 5 - 70 K. It shows a monotonic decrease as temperature rises, with a tiny anomaly around 40 K (see Fig.2). The structure of the anomaly is better revealed by the derivative dχ/dT shown in the inset of Fig.2. As temperature is decreased, one observes a jump at 41.5 K, which is therefore identified as T N2 , the transition temperature to the intermediate phase, and a maximum at 38 K, which is identified as T N1 , the transition temperature to the collinear E-type AF phase. Thus, in our o-YbMnO 3 sample, the intermediate magnetic phase occurs in the range 38.0 K < T < 41.5 K. These boundaries match rather well the T N value of 43 K reported in Ref. [10] and the two values T N1 ≈ 36 K and T N2 ≈ 40 K inferred from Ref. [3], showing that the intermediate phase transition temperatures vary slightly from sample to sample.
Mössbauer measurements
The Mössbauer spectra were recorded with a linear velocity electromagnetic drive on which is mounted a commercial 57 Co * :Rh γ-ray source. A standard liquid He cryostat with temperature regulation was used.
The Electric Field Gradient tensor
o-YbMnO 3 crystallizes into the orthorhombic space group Pnma, where the Mn 4b-site has triclinic C i point symmetry. Fe substitutes for Mn in the doped material, and since Fe 3+ and Mn 3+ have approximately the same radius (0.645 Å), one can reasonably assume that the Fe site is not appreciably distorted with respect to the Mn site. On this basis, the authors of Ref. [11] have performed a Point Charge Model calculation using the o-YbMnO 3 crystal parameters in order to obtain the Electric Field Gradient (EFG) tensor V ij (principal axes and diagonal values) at the Fe/Mn site. This tensor is needed in order to evaluate the quadrupole hyperfine interaction which characterizes the Mössbauer spectrum in the paramagnetic phase. It has zero trace and it is usually determined by two quantities: V ZZ , where OZ is the principal axis, and the asymmetry parameter η = (V XX − V YY)/V ZZ. The splitting ∆E Q of the spectral doublet observed in the paramagnetic phase, due to the electric quadrupolar hyperfine interaction with the I = 3/2 excited nuclear state of 57 Fe, with quadrupole moment Q = 0.21 barn, is given by ∆E Q = (e Q V ZZ /2) √(1 + η²/3). The experimental value ∆E Q ≈ 1.54 mm/s, obtained above 43 K in Ref. [11] as well as in the present work (not shown), is in rather good agreement with the Point Charge calculation of Ref. [11] (∆E Q ≈ 1.59 mm/s), which also yields η = 0.175. Furthermore, the ordered Mn 3+ magnetic moment lies along the orthorhombic a axis in the low temperature E-type collinear AF phase [10]. The impurity Fe moment and the hyperfine field, proportional to the moment for Fe 3+ , are expected to lie along a as well. The Point Charge calculation of Ref. [11] has determined the values of the polar and azimuthal angles of the hyperfine field (the a axis) in the Electric Field Gradient frame: θ ≈ 37.8° and ϕ ≈ 270°.
Mössbauer spectra in the magnetic phases
Selected spectra are represented in Fig.3, in the ground collinear E-type phase at 4.2 K, and for three temperatures inside the intermediate phase: 38.1, 39.1 and 40.3 K. They are in good agreement with those in Ref. [11]. The spectra show no significant variation in the ground E-type phase, between 4.2 K and 36 K, revealing the presence of a single magnetic hyperfine field, in agreement with the magnetic structure determination of a collinear single moment AF structure [10], with moments directed along the a axis. A good fit is obtained using the quadrupolar parameters and the polar and azimuthal angles of the hyperfine field determined in Ref. [11]. The hyperfine field at 4.2 K is 44.3 T, a value lying at the lower end of the typical range for Fe 3+ in insulators (50 ± 5 T). Above 38 K, in the intermediate phase, the spectra show a drastic change: the lines broaden and the resonant absorption strongly increases in the center of the spectrum, i.e. near zero velocity. At 40.3 K, the spectrum is an almost featureless asymmetric doublet. These characteristics point to the presence of a distribution of hyperfine fields with a rather strong weight near zero and low values. Generally speaking, in a magnetically ordered phase, this very specific feature is generated in a Mössbauer spectrum solely by an incommensurate collinear magnetic structure, like that observed in FeVO 4 in the intermediate magnetic phase, for 15.7 K < T < 23 K [12]. Other types of incommensurate orderings (for instance elliptical) do not yield an enhanced weight near zero hyperfine field.
The distribution corresponding to a collinear incommensurate sine-wave modulation of hyperfine fields is shown in Fig.4 a, for a maximum hyperfine field of 38 T, and the associated simulated spectrum, with the same quadrupolar parameters and orientation of the hyperfine field as determined for o-YbMnO 3 :Fe, in Fig.4 b. Comparison of the simulated spectrum in Fig.4 b and of the experimental spectra in the intermediate phase of o-YbMnO 3 :Fe at 38.1 K and 39.1 K shows a clear similarity: not only, as mentioned above, the presence of a large spectral weight at the center of the spectrum, but also the left-hand central line being broader than the right-hand one. There is however an important difference: the outer and intermediate lines in the simulation of Fig.4 b have an asymmetric shape, whereas those in the spectra are rather symmetrically broadened. Such a broadening could be caused by hyperfine field fluctuations, but the two other spectral features mentioned above clearly cannot, thus excluding relaxation effects as a cause for the observed peculiar spectral shapes.
The simulated spectrum in Fig.4 b actually corresponds to the case of a modulated magnetic structure in a pure material, where there is only one sort of magnetic ion, like in FeVO 4 . In o-YbMnO 3 :Fe, we observe the spectrum of Fe impurities whose presence in the matrix must entail a small perturbation of the magnetic structure of the matrix Mn ions. Furthermore, we assume that the Mn magnetic structure is locally reflected in the magnitude of the Fe impurity magnetic moment, hence of the hyperfine field at the Fe site. This assumption means that a low level substitution of the 3d 4 Mn 3+ ion by a 3d 5 Fe 3+ ion does not essentially perturb the superexchange interaction, but it is reasonable to consider that it can somehow blur the modulation of the Fe moments all over the sample.
For these reasons, we have fitted the spectra in the intermediate phase using the four following assumptions: i) the quadrupole interaction tensor is fixed to its value at 4.2 K. ii) the magnetic structure is collinear incommensurate, yielding a modulated variation for the Fe moment, and the Mn/Fe moment direction, hence that of the hyperfine field, is along the a axis, i.e. it is the same as in the collinear E-type ground structure.
iii) the modulation is described by a Fourier expansion up to the first 3 odd harmonics, as a function of the abscissa x along the propagation vector k: H hf (x) = h 1 sin(kx) + h 3 sin(3kx) + h 5 sin(5kx), in order to account for possible deviations from the pure sine-wave. iv) a small random deviation from the modulated value exists at each site, which accounts for potential defects of the incommensurate structure reflected at the impurity site. This deviation has the form: δH hf = x ∆H hf , where x is chosen at random in the interval [0;1] and ∆H hf is a parameter which must be fitted to the lineshape. The mean value of the deviation is therefore σ = ∆H hf /2. The spectra in the intermediate phase have been successfully fitted this way, as witnessed by the red solid lines in Fig.3. The corresponding hyperfine field (or moment) modulations are shown in Fig.5. At 38.1 K, just above T N1 = 38.0 K, the modulation is somewhat squared, and 3 harmonics are needed to reproduce the spectral shape: h 1 = 38.0 T, h 3 = 4.32 T and h 5 = 0.80 T. At 39.1 K, in the middle of the intermediate phase, the modulation is close to a pure sine wave with h 1 = 31.2 T. At these two temperatures, the mean value of the deviation from the modulation amounts to 6% of the maximum hyperfine field. At 40.3 K, just below T N2 = 41.5 K, the first harmonic has decreased to 11 T and the mean value of the deviation is rather large: σ ≈ 6.2 T, so that the magnetic hyperfine structure has almost disappeared, leaving an asymmetric doublet with a broad base.
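The character of the hyperfine-field distribution produced by such a modulation can be illustrated with the short Python sketch below. It is only a rough numerical illustration, not the fitting code: the harmonics are the fitted 38.1 K values quoted above, the phase kx is sampled uniformly (as appropriate for an incommensurate modulation), and the random site deviation of item iv) is added with a mean of about 6% of the maximum field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Harmonics of the fitted hyperfine-field modulation at 38.1 K (values from the text)
h1, h3, h5 = 38.0, 4.32, 0.80        # Tesla
dH = 2 * 0.06 * h1                    # deviation parameter chosen so its mean is ~6% of h1

# An incommensurate modulation populates the phase kx uniformly over one period,
# so the hyperfine-field distribution follows from |H(kx)| with kx ~ Uniform(0, 2*pi).
kx = rng.uniform(0.0, 2.0 * np.pi, size=200_000)
H = np.abs(h1 * np.sin(kx) + h3 * np.sin(3 * kx) + h5 * np.sin(5 * kx))
H = H + dH * rng.uniform(0.0, 1.0, size=H.size)   # random site deviation, as in item iv)

density, edges = np.histogram(H, bins=40, range=(0.0, 1.2 * h1), density=True)
# The histogram is roughly flat at low fields and peaks near the maximum field,
# which is what produces the extra resonant absorption near zero velocity.
print(f"P(H) near zero: {density[0]:.4f} per T, near the maximum: {density.max():.4f} per T")
```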
Conclusion
Using 57 Fe Mössbauer spectroscopy, we have shown that the lineshapes in the intermediate magnetic phase (38.0 K < T < 41.5 K) of orthorhombic YbMnO 3 (doped with 2% 57 Fe) are compatible with a collinear incommensurate magnetic structure. The Fe hyperfine field, and hence the Mn spontaneous moment, have the same direction as in the ground E-type AF phase, i.e. the crystal a axis. The modulation is mainly sine-wave, but we could detect some "squaring" just above the lock-in transition. Since this type of magnetic structure has been found for the intermediate phase in orthorhombic RMnO 3 with R = Gd-Ho [4,5,6,7,8], we think our Mössbauer spectra demonstrate its presence also in orthorhombic YbMnO 3 . Since Mössbauer spectroscopy is a local microscopic technique, it cannot determine the wave vector of the modulation, and this should be done by neutron diffraction.
"Materials Science"
] |
Bagging survival tree procedure for variable selection and prediction in the presence of nonsusceptible patients
Background For clinical genomic studies with high-dimensional datasets, tree-based ensemble methods offer a powerful solution for variable selection and prediction taking into account the complex interrelationships between explanatory variables. One of the key components of the tree-building process is the splitting criterion. For survival data, the classical splitting criterion is the Logrank statistic. However, the presence of a fraction of nonsusceptible patients in the studied population advocates for considering a criterion tailored to this peculiar situation. Results We propose a bagging survival tree procedure for variable selection and prediction where the survival tree-building process relies on a splitting criterion that explicitly focuses on the time-to-event survival distribution among susceptible patients. A simulation study shows that our method achieves good performance for the variable selection and prediction. Different criteria for evaluating the importance of the explanatory variables and the prediction performance are reported. Our procedure is illustrated on a genomic dataset with gene expression measurements from early breast cancer patients. Conclusions In the presence of nonsusceptible patients among the studied population, our procedure represents an efficient way to select event-related explanatory covariates with potential higher-order interactions and to identify homogeneous groups of susceptible patients.
Background
Since the inception of large-scale genomic technologies, there has been a growing interest in analyzing the prognostic and predictive impact of high-dimensional genomic markers. However, the extremely large number of potential interaction terms prevents them from being specified in advance and incorporated in classical survival models. In this context, tree-based recursive partitioning methods such as CART (Classification And Regression Tree [1]) provide well-suited and powerful alternatives. This nonparametric methodology recursively partitions the predictor space into disjoint sub-regions (so-called terminal nodes or leaves) that are near homogeneous according to the outcome of interest. This framework is particularly well-suited to detect relevant interactions and produce predictions in high-dimensional settings.
Since the first extension of CART to censored data (termed as survival trees) proposed by Gordon and Olshen [2], many new methods have been proposed so far (for a review see [3]). Broadly speaking, the key components for a survival tree are: the splitting criterion, the prediction measure, the pruning and tree selection rules. The splitting survival-tree criteria rely either on minimizing the within-node homogeneity or maximizing the between-node heterogeneity. They are based on various quantities such as the distance between Kaplan-Meier survival curves [2], likelihood-related functions (e.g. [4]) or score statistics (e.g. [5]) such as weighted or unweighted Logrank test statistics. The final prediction measure, within each terminal node, is typically based on non-parametric estimations of either the cumulative hazard function or the survival function. The pruning and selection rules are applied to find the appropriate subtree and avoid overfitting.
However, the well-known instability of tree-based structures has led to the development of so-called survival ensemble methods such as bagging survival trees and random survival forests [6,7]. The main idea is that the combination of several survival tree predictors has better predictive power than each individual tree predictor. The general strategy is to draw bootstrap samples from the original observations and to grow the maximal tree for each of these samples. This strategy also circumvents the problem of pruning and selection since each tree is grown full size. The final prediction is obtained by averaging the predictions from the individual trees. In practice, the bagging can be viewed as a special case of random survival forests where all the covariates are considered as relevant candidates at each node. These methods also provide a way to define various variable importance measures that can be used for variable selection.
Even though survival trees are non-parametric methods, their construction relies heavily on the chosen model-related splitting criteria that are based on either parametric or semi-parametric modeling assumptions (e.g. [4,8]). Thus, for a particular problem, the choice of the splitting criterion is crucial to the performance of the tree regarding variable selection and prediction [9]. This issue is particularly acute in the context of survival data with nonsusceptible individuals, where the investigator is interested in identifying homogeneous subgroups according to the time-to-event outcome among the individuals who are susceptible to experience the event of interest. In clinical oncology, these nonsusceptible individuals (sometimes referred to as long-term survivors or cured patients) are those who have been successfully cured from the disease by the primary treatment. For infectious and immune diseases, these individuals are those who are resistant to certain pathogens or tolerant to specific antigens. In such a mixed population, none of the classically used splitting criteria explicitly focuses on the time-to-event survival distribution among susceptible individuals, which raises some open questions about their performance.
In the literature, various survival models accounting for a fraction of nonsusceptible patients (also called "improper survival distribution" models) have been proposed. The oldest framework relies on two-component mixture models which explicitly assume that the population under study is a mixture of two subpopulations of patients (susceptible/nonsusceptible), in a parametric or semi-parametric modeling approach (for a review, see [10]). A different framework proposed more recently defines the cumulative hazard risk as a bounded increasing positive function that can be interpreted from either a mechanistic model (as first introduced by [11] in oncology) or a latent activation scheme [12].
In this work, our aim is to unravel complex interactions between genomic factors that act on the time-to-event distribution among susceptible patients while adjusting for the confounding effect associated to the existence of a fraction of susceptible patients in the population under study.
Thus, we propose a bagging survival tree procedure for variable selection and prediction which is tailored to this situation. The strategy relies on an improper survival model which includes a linear part to account for known confounders associated with the nonsusceptible fraction and a tree structure for the event-related explanatory variables. The building of the survival trees relies on a model-based splitting criterion that explicitly focuses on susceptible patients. The considered splitting criterion is linked to a recently proposed model-based discrimination index that quantifies the ability of a variable to separate susceptible patients according to their time-to-event outcome [13].
Next, the splitting criterion and the general procedure are presented. We then compare the results obtained with this procedure to those obtained with the classical Logrank statistic as the splitting criterion. We illustrate the clinical interest of this procedure for selection and prediction among patients with early-stage breast carcinoma for whom gene expression measurements have been collected. We conclude with a discussion on the practical use of the procedure, its limitations and the potential extensions.
Notations and improper survival model
Let the continuous random variables T and C be the true event and censoring times. Let X = min(T, C) be the observed time of follow-up, δ = 1(X=T) the indicator of event and Y(t) = 1(X≥t) the at-risk indicator at time t. Here, we consider that for nonsusceptible individuals T = ∞+. Thus, the survival function S(t) of T is said to be improper, with S(∞+) > 0. The hazard function (or instantaneous event rate) of T is noted λ(t) = f(t)/S(t), where f(t) is the density function of T. The corresponding cumulative hazard function is noted Λ(t) = ∫ 0 t λ(s) ds, with a finite positive limit θ such that Λ(t = ∞+) = θ < ∞+. Let Z = (Z 1 , Z 2 ) be the (m 1 + m 2 )-dimensional vector of covariates, where Z 1 is the m 1 -dimensional sub-vector of known confounding covariates linked to the nonsusceptible state and Z 2 is the m 2 -dimensional sub-vector of explanatory covariates of interest (associated with the time-to-event outcome).
For each patient i (i = 1, . . . , n), the observed data consist of (X i , δ i , Z i ). We assume noninformative censoring for T and C [14]. For modeling the time-to-event survival distribution, we propose to consider a tree-structured improper survival model in which the bounded cumulative hazard function Λ(t|Z 1i , W(Z 2 ) il ) depends on Z 1 and Z 2 through a linear and a tree component, respectively. In this latter case, the dummy covariate W = 1 if the i-th observation belongs to the l-th leaf (or terminal node) of the tree built on Z 2 , and zero otherwise.
Here, the cumulative hazard function Λ(t|Z 1i , W) is modeled through an unspecified continuous positive function H(t), increasing from zero to infinity, which formulates the shape of the time-to-event survival distribution for each terminal node. Thus, the cumulative hazard function Λ(t|Z 1i , W) is bounded, increases with t and reaches its maximum value θ e^(α T Z 1i ), where α is an unknown vector of parameters associated with Z 1 and θ is a positive parameter.
At any split, if we assume proportionality between the two child nodes, with Z* a binary variable for node membership, the previous model can be written in terms of the hazard function, where h(t) = ∂H(t)/∂t and γ is an unknown parameter associated with the variable Z*.
Splitting criterion
The classical use of Logrank related statistics in survival trees relies on the fact that these statistics are considered as between-node heterogeneity criteria.
In the context of a mixed population (nonsusceptible/susceptible), we have proposed [13] a pseudo-R2 criterion that can be interpreted in terms of the percentage of separability achieved by a variable according to the time-to-event outcomes of susceptible patients. This criterion represents a good candidate for the splitting process.
In the following, we give the formula of the splitting criterion through its relationship with the partial loglikelihood score.
Let (X i , δ i , Z i ; i = 1, . . . , m; m ≤ n) be the set of observed data within node τ. We consider splitting the parent node τ of size m into two child nodes τ L and τ R . Let Z* i be a binary variable such that Z* i = 1 if the i-th observation belongs to node τ L and zero otherwise, and γ the unknown parameter associated with Z*. The partial likelihood is based on model (1). The score U deduced from the partial log-likelihood for the improper survival model (1) under the hypothesis γ = 0 involves weights ω(X i ) built from Λ 0 , a baseline cumulative hazard function bounded by θ under the hypothesis γ = 0. It is worth noting that when θ tends to infinity (i.e., the nonsusceptible fraction tends to zero), ω(X i ) tends to one. In this latter case, the proposed score corresponds to the classical adjusted Logrank statistic, which is appropriate for a proper survival model.
The corresponding robust variance estimator [15], denoted V, is obtained accordingly. The practical expressions of U and V are obtained by replacing Λ 0 , θ, and α by their respective estimators Λ̂ 0 , θ̂ and α̂. Here, Λ̂ 0 is the left-continuous version of Breslow's estimator [16,17]. The estimated quantity θ̂ is equal to Λ̂ 0 (t max ), where t max is the last observed failure time, and α̂ is the maximum partial likelihood estimator of α under the null hypothesis (γ = 0).
The quantity S = U²/(V K), where K is the total number of distinct event times, is a pseudo-R2 measure [13]. This criterion is unit-less, ranges from zero to one and increases with the effect of the splitting variable. It is also not affected by the censoring, the sample size or the nonsusceptible fraction. Up to the factor K, this criterion can also be interpreted as the robust score statistic obtained from the partial log-likelihood under the improper survival model [15].
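The Python sketch below illustrates the criterion in the θ → ∞ limit mentioned above, where the weights ω(X i ) equal one and the score reduces to the classical two-sample Logrank score. It uses the standard hypergeometric variance rather than the robust sandwich variance of the full improper-model criterion, and it ignores the confounder adjustment, so it is only an illustrative simplification.

```python
import numpy as np

def logrank_pseudo_r2(time, event, group):
    """Simplified splitting criterion: S = U^2 / (V * K), with U and V the classical
    two-sample log-rank score and its hypergeometric variance, and K the number of
    distinct event times (theta -> infinity limit; no confounder adjustment)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    group = np.asarray(group, int)           # 1 = left child node, 0 = right child node
    event_times = np.unique(time[event == 1])
    U, V = 0.0, 0.0
    for t in event_times:
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        U += d1 - d * n1 / n                              # observed minus expected
        if n > 1:
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    K = len(event_times)
    return U**2 / (V * K) if V > 0 and K > 0 else 0.0

# toy illustration with made-up follow-up data
t = [2, 3, 4, 5, 6, 7, 8, 10, 12, 15]
e = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
g = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(round(logrank_pseudo_r2(t, e, g), 3))
```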
Bagging procedure and prediction estimate
We consider a learning set L, consisting of n independent observations: . . , B) denotes the b th bootstrap sample of the training set L obtained by drawing with replacement n elements of L. According to random sampling of observations with replacement, an average of 36.8 % are not part of L * b . Let OOB b = L\L * b be the set of these elements. The observations in OOB b are not used to construct the predictor P b ; they constitute for this predictor the so-called Out Of Bag (OOB) sample.
The bagging procedure is as follows:
• Take a bootstrap replicate L* b of the training set L.
• Build a survival tree such that:
* For each split candidate variable Z* (based on the information from Z 2 ), compute the corresponding splitting criterion S(Z*) presented above.
* Do the same for all the split candidate variables.
* Find the best split S*, which is the one having the maximum value over all the candidates. Then, a new node is built and the observations are split accordingly.
* Iterate the process until each node reaches a pre-defined minimum node size or becomes homogeneous.
• Calculate the cumulative hazard function (CHF) estimator for each terminal leaf of each bootstrap tree T b .
* The Breslow-type estimator of the baseline cumulative hazard [16,17] in a terminal node l of the tree T b is computed using α̂, the partial log-likelihood estimator obtained using all the learning data from the tree T b .
* The Nelson-Aalen estimator of the baseline cumulative hazard [18,19] in a terminal node l of the tree T b is computed from the event times and the numbers at risk within that node.
• Compute the CHF prediction estimator. The CHF prediction estimator for a new patient j with covariates Z j is computed as follows. The patient's covariates Z 2 j are dropped down each tree. Then, the prediction is obtained as the weighted average of the estimated CHF over the learning datasets with the same terminal node membership assignment as the new case, where L(b) is the number of leaf nodes of the tree T b .
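The prediction step can be sketched in Python as below. This is only an outline under simplifying assumptions: each fitted bootstrap tree is represented by a leaf-assignment function (the tree-building step itself is not reproduced), and the unadjusted Nelson-Aalen estimator is used within each leaf in place of the confounder-adjusted Breslow-type estimator.

```python
import numpy as np

def nelson_aalen(time, event):
    """Unadjusted Nelson-Aalen cumulative hazard for one terminal node.
    Returns (event_times, cumulative_hazard)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    ts = np.unique(time[event == 1])
    chf = np.cumsum([((time == t) & (event == 1)).sum() / (time >= t).sum() for t in ts])
    return ts, chf

def chf_at(ts, chf, t):
    """Step-function evaluation of the cumulative hazard at time t."""
    idx = np.searchsorted(ts, t, side="right") - 1
    return 0.0 if idx < 0 else chf[idx]

def bagged_chf(trees, leaf_data, z_new, t_grid):
    """Average the leaf CHFs over the bootstrap trees for a new covariate vector.
    `trees` is a list of leaf-assignment functions z -> leaf label, and
    `leaf_data[b][leaf]` holds the (time, event) arrays of that leaf; both are
    assumed to come from a tree-building step not shown here."""
    preds = []
    for assign, leaves in zip(trees, leaf_data):
        t_leaf, e_leaf = leaves[assign(z_new)]
        ts, chf = nelson_aalen(t_leaf, e_leaf)
        preds.append([chf_at(ts, chf, t) for t in t_grid])
    return np.mean(preds, axis=0)
```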
Measures of prediction accuracy
Various measures have been proposed so far for assessing the estimated survival predictions (e.g. [20,21]). One of the most popular in censored data analysis is the integrated Brier score [22], which is now widely used in survival tree-based methods. The Brier score is interpreted as the mean square error between the estimated survival function and the data, weighted by the inverse probability of censoring. Its square root can be interpreted as the expected distance between the predicted risk and the true event status. The Brier score is a pointwise measure which is given at time t by BS(t) = (1/n) Σ i [ Ŝ(t|Z i )² 1(X i ≤ t, δ i = 1)/Ĝ(X i ) + (1 − Ŝ(t|Z i ))² 1(X i > t)/Ĝ(t) ], where Ĝ(t) is the nonparametric Kaplan-Meier estimate of the censoring distribution, which provides the weights in the expected Brier score. The integrated Brier score over time is given by IBS = (1/t max ) ∫ 0 t max BS(s) ds. Here, we take advantage of the bagging strategy that provides the OOB CHF estimator (2) for computing the Out Of Bag IBS, denoted by IBS*. This latter quantity is obtained by restricting the computation to the Out Of Bag predictions.
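A compact Python sketch of these quantities is given below, assuming a Kaplan-Meier estimate of the censoring distribution for the inverse-probability-of-censoring weights and glossing over left-continuity details; `surv_pred_fn` is a hypothetical callable returning the vector of predicted survival probabilities at a given time.

```python
import numpy as np

def km_censoring(time, event):
    """Kaplan-Meier estimate of the censoring survival G(t) (event indicator flipped)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    ts = np.unique(time[event == 0])
    surv = np.cumprod([1 - ((time == t) & (event == 0)).sum() / (time >= t).sum() for t in ts])
    return lambda t: surv[np.searchsorted(ts, t, side="right") - 1] if np.any(ts <= t) else 1.0

def brier_score(time, event, surv_pred, t):
    """IPCW Brier score at time t; `surv_pred` is the array of predicted S_i(t)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    G = km_censoring(time, event)
    bs = 0.0
    for x_i, d_i, s_i in zip(time, event, surv_pred):
        if x_i <= t and d_i == 1:
            bs += s_i**2 / max(G(x_i), 1e-12)        # event observed before t
        elif x_i > t:
            bs += (1 - s_i)**2 / max(G(t), 1e-12)    # still at risk at t
        # observations censored before t get zero weight
    return bs / len(time)

def integrated_brier(time, event, surv_pred_fn, t_grid):
    """Trapezoidal integration of the Brier score over a time grid."""
    bs = [brier_score(time, event, surv_pred_fn(t), t) for t in t_grid]
    return np.trapz(bs, t_grid) / (t_grid[-1] - t_grid[0])
```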
Importance score
The choice of a measure of importance for a variable can rely on either the prediction capacity or the discriminative ability of the variable through the tree structure. Here, we consider the following importance scores.
Index importance score (IIS)
For each bootstrap tree T b indexed by b = 1, . . . , B, let ν b be a given node of the tree T b . For each component j of the vector Z 2 and for each tree T b , the Importance Score of Z 2 j is computed as the sum, over the splits relying on this variable, of the value of the splitting criterion at the split (S ν b ) times the number of events in the split. This latter quantity corresponds to the value of the robust Logrank score under the improper survival model.
These scores are summed across the set of trees, and normalized to take values between 0 and 100, with sum of all scores equal to 100:
Depth and index importance score (DIIS)
The second criterion is inspired by the Depth Importance measure introduced by Chen et al. [23]. This measure is similar to the Index Importance Score but also considers the location of the splitting.
If d t denotes the depth of the split of node ν b in the tree T b , we define a depth-weighted version of the previous score. These scores are summed across the set of trees and normalized to sum to 100:
Permutation prediction importance score (PPIS)
The permutation importance is conceptually the most popular measure of importance for ensemble methods which relies on prediction accuracy. It is assessed by comparing the prediction accuracy of a tree before and after random permutation of the predictor variable of interest.
For each tree T b , b = 1, . . . , B of the forest, consider the associated Out Of Bag sample OOB b . Let IBS* b denote the OOB Integrated Brier Score based on the sample OOB b and using the single tree T b as predictor. The IBS* b corresponds to a restriction of IBS* to the sample OOB b (of cardinality |OOB b |) using the predictor T b . Then, for each component j = 1, . . . , m 2 of the vector Z 2 = (Z 2 1 , . . . , Z 2 m2 ) of predictors, the values z 2 ij are randomly permuted within the OOB b samples, and the prediction accuracy IBS* j b is computed once again. The Permutation Importance is the average increase in prediction error over the B bootstrap samples. Large values of PPIS indicate a strong predictive ability whereas values close to zero indicate a poor predictive ability. In the following, we will denote PPIS-NA and PPIS-BRE the scores obtained using the Nelson-Aalen and the Breslow estimators, respectively.
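The permutation importance loop can be sketched as follows in Python. The per-tree OOB error computation is passed in as a hypothetical callable `oob_ibs(tree, X, y)` (for instance built from the Brier-score functions above), since the full tree machinery is not reproduced here.

```python
import numpy as np

def permutation_importance(trees, oob_sets, oob_ibs, rng=None):
    """OOB permutation importance sketch (PPIS).
    `trees`    : list of fitted bootstrap trees.
    `oob_sets` : list of (X_oob, y_oob) pairs, one per tree.
    `oob_ibs`  : callable (tree, X, y) -> integrated Brier score on that sample.
    Returns the average increase in OOB error when each covariate is permuted."""
    rng = np.random.default_rng(rng)
    n_var = oob_sets[0][0].shape[1]
    importance = np.zeros(n_var)
    for tree, (X, y) in zip(trees, oob_sets):
        base = oob_ibs(tree, X, y)                      # error before permutation
        for j in range(n_var):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the link with the outcome
            importance[j] += oob_ibs(tree, X_perm, y) - base
    return importance / len(trees)
```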
Basket of important variables
For selecting a subset (hereinafter referred as a basket) of the most important variables, the main problem is to choose a threshold value for the previous scores. Several performance-based approaches have been proposed in the literature to deal with the variable selection in Random Forests comparing either OOB or cross-validated errors of a set of nested models. Most of these procedures share the same methodological scheme and differ only in minor aspects (for a few see [24][25][26]). However, for survival data there is no consensus about which measure of prediction error is the most appropriate. Thus, each measure leads to a particular estimation of the prediction error that ultimately leads to select different subset of variables. Rather than using performance-based approaches, we propose hereafter to consider a strategy based on a testing procedure using a topological index which allows to select a basket of important variables.
In the following and without loss of generality, we suppose that the index score of interest is the IIS.
We then consider a permutation test at a global level α for testing, for each candidate variable, the null hypothesis of no association with the time-to-event outcome. The procedure consists in iterating between the following steps: • Step 1: Use the learning set L to build the bagging predictor as described in the "Bagging procedure and prediction estimate" Section. Compute for each competing variable Z 2 j the index score of importance IIS j as described in the "Importance score" Section.
• Step 2: Let L σ , obtained by permuting the event-related covariates Z 2 across the individuals i = 1, . . . , n while keeping (X i , δ i , Z 1i ) unchanged, be a partial permutation of L. Use the learning set L σ to build another bagging predictor using the same procedure as in the first step and compute again for each competing variable Z 2 j the index score of importance IIS 0 j . • Step 3: Repeat Step 2 a number Q of times.
• Step 4: Compute the P-value for each competing variable Z 2 j as the proportion of the Q permuted-data scores IIS 0 j that are at least as large as the observed score IIS j . • Step 5: Using a Bonferroni procedure for multiple comparisons, the selected variables are those whose P-value falls below the corrected threshold α/m 2 . This procedure is conceptually similar to the one proposed by [27] to correct the bias of the so-called Gini importance in a Random-Forest framework. Nevertheless, in our framework, we have to take into account the covariables Z 1 associated with the nonsusceptible individuals. For this purpose, the permutation scheme used in Step 2 ensures that the existing relationship between the time-to-event observations and the covariates Z 1 is not distorted under the null hypothesis.
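Steps 4 and 5 amount to a simple vectorized computation once the observed and permuted importance scores are available, as in the Python sketch below; the plain-proportion p-value convention mirrors the description above and is an assumption where the exact formula is not reproduced.

```python
import numpy as np

def select_important(iis_obs, iis_null, alpha=0.05):
    """Permutation-test selection sketch (Steps 4-5).
    `iis_obs`  : array of observed importance scores, one per candidate variable.
    `iis_null` : (Q, n_var) array of scores from the Q permuted learning sets.
    P-values are the proportion of null scores at least as large as the observed
    ones; Bonferroni selection keeps variables with p <= alpha / n_var."""
    iis_obs = np.asarray(iis_obs, float)
    iis_null = np.asarray(iis_null, float)
    n_var = iis_null.shape[1]
    pvals = (iis_null >= iis_obs).mean(axis=0)
    selected = np.flatnonzero(pvals <= alpha / n_var)
    return pvals, selected

# toy illustration with made-up scores
rng = np.random.default_rng(1)
null_scores = rng.uniform(0, 5, size=(200, 4))
observed = np.array([9.0, 1.2, 6.5, 0.7])
print(select_important(observed, null_scores))
```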
Simulation scheme
In order to evaluate the performance of the bagging survival strategy relying either on the classical adjusted Logrank splitting criterion (denoted LR) or the proposed pseudo-R2 criterion (denoted R2), we performed a simulation study as follows.
The data were generated from an improper survival tree model (3), in which the Bernoulli variables G_1, . . . , G_5 are related to the time-to-event variable T as follows: predictor G_1 is associated with the nonsusceptible fraction, while predictors G_2, . . . , G_5 are associated with the survival distribution of the susceptible fraction through a five-risk-group survival tree. The underlying improper survival tree is displayed in Fig. 1. The censoring distribution was exponential, with parameter chosen to give 10 and 25 % of censoring within the susceptible population. The parameter θ is such that exp(−θ) corresponds to the proportion of nonsusceptible individuals for the reference group (G_1 = 0).
We considered eight different scenarios with, for each, three different values for the number of noise or noninformative covariables (10, 100 and 500), which are independent Bernoulli variables with π = 0.5. Thus, a total of 24 different simulation sets were generated. The first four scenarios are based on model (3), with N = 250 individuals, a proportion of nonsusceptible patients of 25 and 50 %, and a rate of censoring within the susceptible population of 10 and 25 %. The last four scenarios are also based on model (3), but with N = 500 individuals and the same settings as the previous ones.
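For illustration only, the sketch below generates one data set in the spirit of this scheme. The exact specification of model (3), the five-risk-group tree of Fig. 1 and the true parameter values are not reproduced here, so the cure probabilities and group hazards below are placeholder assumptions.

```python
# Illustrative simulation of one data set: Bernoulli predictors, a nonsusceptible
# ("plateau") fraction driven by G1, risk groups for susceptibles driven by G2-G5,
# and exponential censoring. All numeric effects are placeholder assumptions.
import numpy as np

def simulate(n=250, plateau=0.25, n_noise=10, censor_scale=5.0, seed=0):
    rng = np.random.default_rng(seed)
    G = rng.binomial(1, 0.5, size=(n, 5))             # informative predictors G1..G5
    noise = rng.binomial(1, 0.5, size=(n, n_noise))   # noninformative covariables
    p_cure = np.where(G[:, 0] == 0, plateau, plateau / 2)   # G1 shifts the plateau (assumed)
    cured = rng.random(n) < p_cure                    # nonsusceptible individuals
    hazard = 0.2 * (1 + G[:, 1] + 2 * G[:, 2] + G[:, 3] * G[:, 4])  # assumed risk groups
    event_time = rng.exponential(1.0 / hazard)
    event_time[cured] = np.inf                        # nonsusceptibles never fail
    censor_time = rng.exponential(censor_scale, size=n)
    time = np.minimum(event_time, censor_time)
    status = (event_time <= censor_time).astype(int)  # 1 = event observed, 0 = censored
    return np.hstack([G, noise]), time, status
```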
The simulation scheme is summarized in Table 1 (simulation scenarios for the evaluation of the importance scores and the prediction accuracy), where "censoring" represents the proportion of censoring among susceptible individuals and "plateau" the proportion of nonsusceptible individuals in the population. For all scenarios, LR and R2 are adjusted criteria for the known confounding factor G1 linked to the nonsusceptible state. We also evaluate the prediction accuracy using either the Nelson-Aalen (denoted NA) or the Breslow (denoted BRE) estimators. For each scenario, we generated 50 data sets. The bagging procedure with 400 trees was then applied to each data set with the two proposed splitting criteria. We then obtained 50 estimates of the Out Of Bag Integrated Brier Score for each method and each scenario.
We considered an additional scenario designed to mimic a data set reflecting a situation, such as the one presented in our example, where variables are functionally related through groups (e.g., a biological pathway). In practice, we generated correlated variables divided into five blocks of various sizes (ranging from 10 to 30 %) with correlations ranging from −0.2 to 0.3. We considered a situation with 500 individuals, a proportion of nonsusceptible patients of 25 %, a rate of censoring within the susceptible population of 10 and 25 %, and two different values for the number of non-informative covariables (100 and 500). Figure 2 shows, for one scenario and the 50 generated datasets, the Kaplan-Meier curves obtained for the different leaves.
Prediction results
The Box-plots of the 50 values of OOB-IBS are presented in Figs. 3-4 corresponding to scenarios 1-4 and 5-8 respectively.
In the first scenario (first column from left to right of Fig. 3), the OOB-IBS are consistently and slightly lower than their counterparts of scenario 2 (second column from left to right of Fig. 3). This was expected because of the increase in censoring proportion among the susceptible population from 10 % in scenario 1 to 25 % in scenario 2. The OOB-IBS obtained using our proposed "pseudo-R2" splitting criterion are better (lower median value with a smaller variability) than those obtained with the stratified Logrank criterion. The "pseudo-R2" consistently outperforms the Logrank in terms of prediction accuracy for the first two scenarios. For these scenarios, the results obtained with the BRE and NA estimators are comparable. The impact of the additional noise variables on the prediction accuracy seems insignificant.
The same remarks can be made for scenarios 3 and 4 using the last two columns from left to right of Fig. 3. The only additional information here is an increase in the overall magnitude of the OOB-IBS from the first two columns of Fig. 3 to the last two columns. This is mainly due to the decrease in the proportion of susceptible individuals from 75 % in scenarios 1-2 to 50 % in scenarios 3-4, leading to a decrease in the number of observed events. These scenarios are more challenging than the previous ones.
The results of scenarios 5-8 (Fig. 4) are slightly better than those of scenarios 1-4. This is mainly due to the increase in the number of individuals from 250 to 500.
In the additional scenario with correlated variables (Fig. 5), the results are comparable to those of scenarios 5 and 6. The "pseudo-R2" criterion still has an edge over the Logrank criterion in terms of prediction accuracy.
Importance scores results
For each scenario and each proposed splitting criterion, we computed four importance score indexes: IIS, DIIS, PPIS-NA, PPIS-BRE. The behaviors of the four indexes are displayed in Figs. 6, 7, 8 and 9 using the mean over 50 replicates. Each figure displays the results obtained with the different numbers of additional noise variables: the blue color with the mark "•" represents the case of 10 additional noise variables; the red color is used for 100 additional noise variables; the green color with the mark "+" is used for the case of 500 additional noise variables. For the sake of readability of the figures, the first 4 dots of each color represent the scores associated with the explanatory variables G_2, G_3, G_4, G_5, respectively, whereas the remaining dots are for noise variables ranked in decreasing order (for clarity, we only plot the first 20 ordered variables). Figure 6 shows that in the simple scenarios 1a and 2a with only 10 noise variables (blue color within Fig. 6), the pseudo-R2 splitting criterion achieves a clear discrimination between explanatory variables and noise variables, regardless of the considered importance score. The same remark cannot be made for the Logrank splitting criterion, for which the PPIS index discriminates only one variable while the IIS and DIIS indexes manage to discriminate three explanatory variables from the noise ones.
In the more challenging scenarios 1b and 2b with 100 noise variables (red color within Fig. 6), the PPIS behaves poorly with the Logrank splitting criterion, while the pseudo-R2 splitting criterion behaves well in discriminating the explanatory variables from the 100 noise variables. Nevertheless, the performances are quite similar between the two splitting criteria with regard to IIS and DIIS.
In the most challenging scenarios 1c and 2c with 500 noise variables (green color within Fig. 6), we observe a slight deterioration in performance, mainly for the PPIS index. The Logrank splitting criterion behaves poorly for all the indexes, while the IIS and DIIS for the pseudo-R2 splitting criterion still achieve some discrimination, albeit at a lower level compared to the previous scenarios.
The results of scenarios 3-4 are displayed in Fig. 7, where almost half of the population is nonsusceptible. Combining this amount of plateau with censored observations results in very few observed events in the scrutinized population. Compared to the previous scenarios, the results are quite similar for the pseudo-R2 splitting criterion with the IIS and DIIS indexes. Nevertheless, the figure suggests a decrease in performance for the PPIS-NA and PPIS-BRE indexes. Overall, the Logrank splitting criterion performs very poorly regardless of the index.
The results of scenarios 5-6 are displayed in Fig. 8. These scenarios give more power to identify explanatory variables than the previous scenarios 1-4, since the population size is increased by a factor of 2. As expected, the results are slightly better than all the results of scenarios 1-4. The pseudo-R2 splitting criterion allows a clear discrimination between the noise variables and the explanatory ones. The results of scenarios 7-8 are displayed in Fig. 9. These results are quite similar to those of scenarios 5-6 for the pseudo-R2 criterion, mainly for the IIS and DIIS indexes, despite the increase in the fraction of nonsusceptible individuals. Also, the PPIS performs poorly with a high number of noise variables.
The results of the additional scenario, mimicking a data set reflecting a situation such as the one presented in our example, are displayed in Fig. 10. The pseudo-R2 splitting criterion achieves a clear discrimination between associated variables and noise variables for all the proposed importance scores. The Logrank splitting criterion still performs poorly for the PPIS indexes.
We investigated other scenarios with different values for the parameters related to the explanatory variables, which led to the same trends (results not shown). We also analyzed a scenario (results not shown) with a very small plateau value (5 %). As expected, our procedure outperforms the adjusted Logrank splitting method in terms of prediction accuracy, but these gains are smaller than those obtained for higher "plateau" values. This is not surprising since the adjusted Logrank criterion can be seen as the limiting case of our criterion in which all the patients are susceptible. Thus, large power gains are anticipated in a situation where a non-negligible fraction of nonsusceptible patients is expected. However, if the plateau value is very small but identical for all individuals, then the classical unadjusted Logrank criterion should be more efficient.
Fig. 4 Box-plot of the Out Of Bag Integrated Brier Score on simulated data sets for scenarios 5-8: first column represents scenarios 5a-5c; second column represents scenarios 6a-6c; third column represents scenarios 7a-7c and fourth column represents scenarios 8a-8c
Analysis of breast cancer data
Description of the data
We used bio-clinical data extracted from two genomic datasets (GSE2034, GSE2990) publicly available on the GEO (Gene Expression Omnibus) website (http://www.ncbi.nlm.nih.gov/geo/). The GSE2034 dataset corresponds to the expression microarray study conducted by Wang et al. [28] and the GSE2990 dataset to the one conducted by Sotiriou et al. [29]. Both studies investigate the prognostic effect of gene expression changes on the outcome of patients with primary breast cancer. For gene expression analyses, Affymetrix Human Genome U133A Arrays were used in both studies, and estrogen-receptor (ER) status (positive/negative) was available. The clinical outcome considered was distant metastasis-free survival, defined as the interval from the date of inclusion to the first occurrence of metastasis or last follow-up.
For these two early breast cancer series, surgical resection can be considered as effective at eliminating the tumor burden for a non-negligible proportion of patients whereas, for the others, it leads to a lower tumor burden and thereby prolonged survival without distant relapse. Thus, a nonsusceptible fraction exists, and having a large number of patients followed up more than a decade after the primary treatment allows for an interpretable time sequence for tumor relapse.
For this work, we decided to investigate the impact of estrogen-related genes in predicting metastasis among patients with ER-positive tumors.
The gene expression datasets of the two series were analyzed after a joint quantile normalization. Here, we focused on estrogen-related genes, defined as those demonstrating, on the whole dataset, significant gene expression changes between ER-positive and ER-negative samples for a familywise error rate of 1 % (Bonferroni correction). In order to take into account the difference in the proportion of nonsusceptible patients between the two series, we included this variable as a confounding variable. We applied our proposed bagging survival procedure (with the LR and R2 criteria) with 400 trees on the joint dataset presented just above. As can be seen from Fig. 12, the two splitting criteria lead to two different sets of variables with very little overlap. As expected from the simulation results, for each splitting criterion, IIS, DIIS and PPIS give quite similar results.
Results
The basket of important variables (based on the IIS importance score) obtained using the selection procedure presented previously leads to the selection of 16 variables for both the pseudo-R2 and the adjusted Logrank criteria (see Fig. 13).
When looking at the first ten genes, no gene was selected in common between the adjusted Logrank and the pseudo-R2 criteria. The first five top genes selected with the pseudo-R2 criterion are: CBX7, NUTF2, AGO2, RPS4X and TTK.
The CBX7 (Polycomb protein chromobox homolog 7) gene is involved in several biological processes, and recent works indicate a critical role in cancer progression. A relationship between the down-regulation of CBX7 expression and tumor aggressiveness and poor prognosis has been reported in different cancers. Preliminary studies also indicate a potential role in the modulation of response to therapy [30].
The NUTF2/NTF2 (nuclear transport factor 2) gene encodes a small binding protein. The main function of NTF2 is to facilitate transport of certain proteins into the nucleus. It is also involved in regulating multiple processes, including cell cycle and apoptosis.
The AGO2 (Argonaute 2) gene is a central component of RNA-induced silencing complex which plays critical roles in cancer process through proliferation, metastasis and angiogenesis. AGO2 has been found over-expressed in various carcinomas and associated with tumor cell growth and poor prognosis [31].
The RPS4X (X-linked ribosomal protein S4) gene is involved in cellular translation and proliferation. Low RPS4X expression has been shown to be associated with poor prognosis in bladder, ovarian and colon cancer. The level of RPS4X is also a good indicator of resistance to platinum-based therapy and a prognostic marker for ovarian cancer. More recently, RPS4X has been identified as a partner of the overexpressed multifunctional protein YB-1 in several breast cancer cells. Depletion of RPS4X results in consistent resistance to cisplatin in such cell lines [32]. The TTK (threonine tyrosine kinase, also known as Mps1) gene is essential for the alignment of chromosomes to the metaphase plate and for genomic integrity during cell division. The TTK gene has been identified as one of the top 25 genes overexpressed in tumors with chromosomal instability and aneuploidy [33]. TTK is overexpressed in various solid cancers, and elevated levels of TTK correlate with high histological grade in tumors and poor patient outcome.
Fig. 6 Variable importance results for scenarios 1-2: the first two rows from the top to the bottom represent scenario 1 while the last two represent scenario 2; • represents 10 noise variables, 100 noise variables and + 500 noise variables; "LR" represents the adjusted Logrank splitting criterion and "R2" the Pseudo-R2 splitting criterion
Fig. 7 Variable importance results for scenarios 3-4: the first two rows from the top to the bottom represent scenario 3 while the last two represent scenario 4; • represents 10 noise variables, 100 noise variables and + 500 noise variables; "LR" represents the adjusted Logrank splitting criterion and "R2" the Pseudo-R2 splitting criterion
Fig. 8 Variable importance results for scenarios 5-6: the first two rows from the top to the bottom represent scenario 5 while the last two represent scenario 6; • represents 10 noise variables, 100 noise variables and + 500 noise variables; "LR" represents the adjusted Logrank splitting criterion and "R2" the Pseudo-R2 splitting criterion
In our analysis, we observed marginal deleterious effects on distant relapse-free survival of high expression of TTK, AGO2 and NUTF2 and of low expression of CBX7 and RPS4X. Figure 14 shows a clear negative prognostic effect of low levels of gene expression of the CBX7 and RPS4X genes among patients with ER-positive breast tumors. This finding is in accordance with published results that have exclusively focused either on the CBX7 or the RPS4X gene. The fact that these two markers are not selected when using the Logrank as splitting criterion is not surprising, since we can observe a marginal nonproportional time-varying effect of RPS4X. This trend is probably linked to the time-dependent changes in the composition of the population, since the fraction of susceptible patients is progressively exhausted as time goes on.
In order to evaluate the variability of the results, we performed the same bagging procedure 50 times with 400 trees for each run. We then obtained 50 estimates of the Out Of Bag IBS for each method. Figure 15 shows the evolution of the OOB-IBS with the number of trees used in one randomly selected run of the bagging procedure for the four different procedures. It shows that 150 trees is clearly enough to stabilize the bagging predictor for all the criteria. As shown in this figure, the procedure relying on the pseudo-R2 splitting criterion consistently outperforms the adjusted Logrank splitting method in terms of prediction accuracy. This result is further confirmed in Fig. 16, where the Box-plots of the 50 OOB-IBS are presented for all the procedures.
We also examined the importance scores: 50 estimates of the importance scores were computed for each procedure. The mean of the 50 values is presented in Fig. 12 for the top 30 variables.
Discussion
The discovery and predictive use of event-related markers face two main challenges: the search over markers acting in complex networks of interactions, and the potential presence of nonsusceptible patients in the studied population. In this work, we proposed a new bagging survival procedure devoted to this task. The strategy relies on an improper survival model, which considers a linear part to take into account known confounders associated with the nonsusceptible fraction and a tree structure for the event-related explanatory variables. The proposed tree-structured modeling differs from the tree-augmented Cox proportional hazards model proposed by Sun et al. [34] in that it is explicitly tailored for mixture populations. Moreover, our procedure relies on the use of a splitting criterion that can be interpreted as a time-to-event discrimination index suited to mixed populations.
The results of our simulation study show the good behavior of our bagging procedure based on the pseudo-R2 criterion as compared to the one relying on the classical Logrank statistic. For prediction, even though differences between the procedures are small, better predictions were obtained with the proposed procedure. If a difference between the fractions of nonsusceptible individuals is expected then the estimators that use the Breslow estimate should be preferred over those using the Nelson-Aalen estimate.
For variable selection, even in the presence of a high number of nuisance variables, our procedure is able to select the explanatory variables. The performance is obviously better when the number of events that can occur among susceptible patients increases. Based on our simulation study, we recommend the IIS or the DIIS criteria. These criteria rely on the discriminative performance of each splitting variable, with or without the information related to the depth of the split. By contrast, the PPIS criterion, which relies on prediction error, is highly dependent on the censoring rate and the number of noise variables. Moreover, it is well-known that there is no consensus on which prediction error criterion should be used for survival data.
The search for markers that predict distant relapse in hormone receptor-positive treated patients is still an intensive area of study. In the analysis of the two series of early-stage breast cancer presented in this article, the proposed procedure is particularly appealing, since the majority of the patients are amenable to cure and will therefore never relapse from the disease. The fraction of nonsusceptible patients being clearly different between these two studies, we considered the study of origin as a confounding variable. We obtain a selection of top genes that is different from the one obtained with the classical Logrank statistic. The five top genes selected with our procedure are related to cancer, and most of them have only recently been reported to be associated with prognosis. In breast cancer, we know that various pathways related to the tumor process are activated and that there is no unique selection of prognostic factors. However, since our main aim is to select the most powerful set of predictors and obtain the highest prediction accuracy, our procedure should be preferred. This model-based selection, which takes into account high-order interactions and focuses on susceptible patients, sheds light on new markers that could serve as potential drug targets for new therapies.
In this work, we assumed that the hazard functions for the susceptible individuals between two child nodes are conditionally proportional given the node, but proportionality for any two nodes from different parents is not required. Postulating a proportional hazards structure within the whole tree could be an option, which requires further development and evaluation. Here, we also considered the case with known confounding variables, which is frequently encountered in biomedical research. For a different purpose, we could however consider extending the procedure to unknown confounding variables. Further work is however needed to cope with the potential degree of non-identifiability between the failure time distribution of susceptible individuals and the proportion of nonsusceptible individuals.
Conclusion
In the presence of a mixed population with nonsusceptible patients, our results show that our bagging survival procedure with the proposed splitting criterion has good performance for prediction and variable selection. For measuring variable importance, we recommend the use of either the proposed Index Importance Score or the Depth and Index Importance Score. The proposed tree-building process, which relies on a model-based splitting criterion, can be considered as a convenient hybrid solution that combines a multiplicative intensity model and tree-structured modeling. We believe that the proposed survival bagging procedure is very appealing for many clinical genomic studies in which a fraction of nonsusceptible individuals is commonly encountered. This procedure has been implemented in an R package called iBST (improper Bagging Survival Tree) and will be available soon on the CRAN repository.
Endnotes
Not applicable.
Abbreviations
CART, classification and regression tree; CHF, cumulative hazard function; DIIS, depth and index importance score; ER, estrogen receptor; GEO, gene expression omnibus; IBS, integrated Brier score; iBST, improper Bagging Survival Tree; IIS, index importance score; LR, log rank; OOB, out of bag; OOB-IBS, out of bag integrated Brier score; PPIS, permutation prediction importance score; PPIS-NA, permutation prediction importance score with Nelson-Aalen; PPIS-BRE, permutation prediction importance score with Breslow.
"Computer Science",
"Medicine"
] |
Quantum Chaos in Time Series of Single Photons as a Superposition of Wave and Particle States
We build a time series of single photons with quantum chaos statistics, using a version of the Grangier anti-correlation experiment. The criteria utilized to determine the presence of quantum chaos are framed in terms of the Fano factor and the power spectrum. We also show that photons with chaotic statistics are in a balanced superposition of photons with both wave-like and particle-like behaviors. To support the presence of quantum chaos, we study both Shannon's entropy and the complexity of the single-photon time series.
Introduction
The dichotomy between the wave and corpuscular nature of light has come a long way in the history of physics, but we do not yet really understand the meaning of waves and particles at the quantum level. We only have a classical representation of these concepts in mind and some intuitive definitions about them: a quantum wave can produce interference, while a quantum particle can track a path. J. Wheeler proposed the now-famous delayed-choice gedankenexperiment [1] to show that the nature of light, wave or particle, depends on how it is measured. This experiment was carried out in 2006 by Jacques et al. [2,3], which confirmed Bohr's complementarity principle. Other similar experiments have been carried out with some variants and the same results [4][5][6], showing that the states of light can be considered a superposition of wave and particle where the measuring device collapses the state into one of the two behaviors. Radu et al. investigated what happens in the thought experiment if the delayed choice is made through quantum-controlled experiments [7]. This proposal was developed by Jian-Shun et al. [8], who derived a quantum superposition of single-photon wave and particle properties by setting the quantum detecting device in a superposition state rather than in the eigenstates of the delayed-choice experiment. This superposition can be measured indirectly through interference visibility. Then, it is possible to measure simultaneously wave and particle behaviors in single photons, with potential applications in coding quantum information [8,9]. In this article, we show that the superposition of wave and particle behavior of quantum systems is linked with quantum chaos statistics.
Among the various definitions that exist about quantum chaos, one, in particular, is well known: A quantum system behaves chaotically if there is a classical analog system that exhibits chaos [10].
However, this is not the only definition, because quantum chaos appears to have an elusive behavior in comparison with classical chaos. In order to explain the essence of quantum chaos from the superposition of wave and particle behaviors, we considered the following: there is a relationship between the interference visibility of quantum particles and their transition from regular to chaotic behavior [11,12]. On the other hand, there is a change in the interference visibility depending on the degree of superposition between wave and particle behaviors, as mentioned above [8]. This reasoning is closed if quantum chaos is related to the superposition of both wave and particle behaviors in some systems. We will explore the latter relationship experimentally.
The methodology that we use to identify quantum chaos relies on the framework of the Fano factor [13][14][15][16] and the power spectrum [16][17][18]. This article is organized as follows. In Section 2, we explain the construction of wave and particle states on single photons. In Section 3, we discuss the relationship between the second-order correlation function and the Fano factor in order to show that this experiment always preserves the single-photon properties. Next, in Section 4, we present the experimental details. Our results are presented in Section 5, where we analyze the different statistical limits obtained. In Section 6, we describe the results in terms of regularity-complexity parameters, and Shannon's entropy. Finally, in the last section, we present our conclusions.
Wave and Particle Superposition with the Statistical Criterion
Gedanken-experiments help us understand the dual nature of quantum particles. Moreover, thanks to this, we are able to understand the role of varying the degree of superposition of these behaviors. The state of a photon can be defined as a quantum superposition of a wave (|w⟩) and a particle (|p⟩) state [7][8][9][19], |ψ⟩ = C_w|w⟩ + C_p|p⟩, where C_w and C_p are probability amplitudes, with P_w = |C_w|², P_p = |C_p|² the probabilities for each photon to be detected in one or the other behavior. In order to analyze the particle-wave superposition, we reproduce a version of the anti-correlation experiment of Grangier [20], where single photons cross a polarizing beam splitter (PBS). Selecting the beam splitter proportions (P_T, P_R), the transmission and reflection probabilities, depending on the polarization angle of the incoming single photons, we will have predictable trajectories for (1, 0) and (0, 1). For any other case, we will have some degree of unpredictability, which is maximal for (0.5, 0.5). We use 'wave behavior' to refer to this maximal unpredictability and 'particle behavior' to refer to the predictable trajectories of both cases (1, 0) and (0, 1). A general quantum superposition of both limits represents a certain degree of unpredictability that can be measured by photon-counting fluctuations under the shot-noise limit. Because the Grangier experiment lacks the second beam splitter in comparison with the delayed-choice experiment, apparently the behavior of each photon will always be detected as 'particle', but we can show that there is an equivalence. Rotating the linear polarization of the incoming single photons (using a λ/2 plate), the probabilities in the two output ports of the PBS are P_T = cos²(φ/2) and P_R = sin²(φ/2), where φ/2 is the angle of the λ/2 wave plate. These probabilities are equivalent to the output probabilities of the Mach-Zehnder interferometer if the usual phase e^{iθ} in one of the two arms is θ = φ/2. The trigonometric identity for the double angle indicates that P_T = (1/2)(1 + cos(φ)) and P_R = (1/2)(1 − cos(φ)), the interference pattern of single photons [21]. Then, the wave or particle behaviors of single photons must also be codified in the photon statistics, particularly in the noise associated with the counting of single photons.
Sub Shot-Noise, Second-Order Correlation Function, and Fano Factor
The shot noise of electrons emitted in vacuum tubes has a variance equal to the average number of electrons emitted in a fixed time ∆t. Therefore, the Fano factor in this case is F = 1. This is the Poisson limit. The photons emitted by a laser fulfill this condition. To obtain noise with statistics below the shot noise, control over the emission of photons is necessary; in other words, a single-photon source is necessary. The second-order correlation function g^(2)(τ) for photons crossing a beam splitter is defined in terms of the annihilation and creation operators â and â†, where ⟨ ⟩ implies a temporal average in the interval ∆T and τ is the delay produced by the difference in the optical path between the two detected signals [21]. The Fano factor is then defined as the ratio of the photon-counting variance to the average photon number, F = (∆n)²/n̄ [16,22,23]. From Equations (4) and (5) we find that there is a relation between the second-order correlation function and the Fano factor, F = 1 + n̄ (g^(2)(τ) − 1). When n̄ = 1, the second-order correlation function has the same value as the Fano factor. In this case, Grangier's anti-correlation experiment [20] allows us to control the noise in the interval 0 ≤ g^(2)(τ) ≤ 1. One way to control the transition from quantum to classical statistics is by increasing the coincidence window size τ, or by increasing the average number of photons n̄ in the interval ∆T. In both cases g^(2)(0) = 0 must be satisfied. To know the dependence between the transmitted Fano factor and the angle of the half-wave plate (HWP), we consider that g^(2)(τ) = 0 for single photons and that n̄_T,R = P_T,R in Equation (7). Then, using Equations (2) and (3), we find that F_T = sin²(φ/2) and F_R = cos²(φ/2). Moreover, from Equation (6) the variance can be expressed as (∆n)² = sin²(φ/2) cos²(φ/2).
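As a quick numerical check of these relations (a sketch under the assumption of ideal single-photon, i.e., binary Bernoulli, counting statistics; this is not the experimental acquisition code), one can simulate a transmitted bit series for a few values of P_T and compare the empirical Fano factor and variance with F_T = 1 − P_T and (∆n)² = P_T(1 − P_T):

```python
# Bernoulli check of F_T = 1 - P_T (= sin^2(phi/2)) and var = P_T (1 - P_T).
import numpy as np

rng = np.random.default_rng(1)
for p_t in (0.995, 0.75, 0.5, 0.25):           # particle-like, chaotic, wave-like, chaotic
    bits = rng.binomial(1, p_t, size=102_400)  # simulated transmitted time series
    mean, var = bits.mean(), bits.var()
    print(f"P_T={p_t:5.3f}  F_sim={var / mean:5.3f}  F_theory={1 - p_t:5.3f}  "
          f"var_sim={var:6.4f}  var_theory={p_t * (1 - p_t):6.4f}")
```

For P_T = 3/4 and P_T = 1/4 this reproduces the chaotic values F ≈ 1/4 and F ≈ 3/4 with the common variance 3/16, in line with the experimental figures quoted below.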
Theoretically, if the single photons are horizontally polarized, the probability of being transmitted at a PBS is P_T = 1. In this case, we say that the trajectory of the photons is well-defined, and it presents 'particle' behavior. The same argument is valid when vertically polarized photons are reflected at the PBS with P_R = 1. If the polarization of the photons is controlled by one HWP, the photons' particle-like states outcoming from the PBS can be written as |ψ^φ_p⟩, where we label each state with the polarization angle φ, and the labels inside the ket, |H, V⟩, correspond to the transmitted and reflected states, respectively. When φ = 0, the photon that goes through the HWP is horizontally polarized and keeps its original polarization, |ψ^{φ=0}_p⟩ = |H, 0⟩. If φ = π/2, the original state |H, 0⟩ is converted to |0, V⟩. On the other hand, when the horizontal polarization is rotated by π/4, the final state is |ψ^{φ=π/4}_w⟩ = (1/√2)(|H, 0⟩ + |0, V⟩). In this case, each photon has the same probability of being transmitted or reflected, so it has the maximum delocalization and presents wave-like behavior. Hence, the superposition of wave and particle states can be rewritten as |ψ⟩ = C_p|ψ_p⟩ + C_w|ψ^{φ=π/4}_w⟩, where C_p and C_w are the probability amplitudes with which the photon is registered as a particle or as a wave; however, we need to be careful here, because a wave-like state |ψ^{φ=π/4}_w⟩ corresponds to a single photon that can be detected with equal probability in one of two places, positioned at the detectors for the transmitted and reflected photons (D_T, D_R). In the case where we use only one detector, D_T or D_R, we can characterize both wave and particle behaviors through the statistical noise, as we will see.
The fundamental step in the realization of this experiment consists of analyzing separately the photon counting of the transmitted and reflected time series of photons. The reason is simple: we have anti-correlation at each moment, which implies g^(2) = 0 and thus F = 0, i.e., the noise associated with the photon counting when adding both outputs (T and R) is (∆n)² = 0. When analyzing the outputs separately (without correlations), this noise suppression is no longer valid. We then analyze one (or the other) of the two output time series of photons. The superposition of particle and wave photon behavior expressed in Equation (11) can be written as a function of the particle trajectory (transmitted or reflected) in Equations (12) and (13). We place particular focus on the transmitted states in Equation (12), where it must be clear that |H, 0⟩_{φ=π/4} and |H, 0⟩_{φ=0} are not statistically equivalent and, as a consequence, cannot be factored as (C_p + C_w/√2)|H, 0⟩. The same argument is valid for the reflected state.
Experiment
The proposed experiment is a version of the Grangier experiment [20,24]. We send individual photons with linear polarization to a polarizing beam splitter. The photons that cross the HWP, whose fast axis is at an angle φ/2, are prepared in the state |ψ⟩ = cos(φ)|H⟩ + sin(φ)|V⟩. The probabilities of detecting photons that exit the transmitted and reflected output ports are P_T = cos²(φ) and P_R = sin²(φ), respectively. Figure 1 shows the experimental setup. We used a photon pair source based on type-I spontaneous parametric down-conversion (SPDC). A violet laser (CrystaLaser, λ = 405 nm) excites a type-I non-linear crystal (Newlight Photonics) with a thickness of 2 mm. The infrared photons come out at a 3-degree angle with respect to the axis of the experiment. The signal photons are directed towards an HWP and a PBS. At the output ports of the PBS, we placed two polarizers to reinforce the polarization selection of the PBS. At the output ports of the beam splitter and after the two polarizers, two avalanche photodiodes (APDs, Excelitas) with a quantum efficiency of 60% register the photons (D_T and D_R, respectively). The idler photons are sent directly to the idler APD (D_I).
The three APDs are connected to homemade electronics that count individual events and coincidences. Our electronics are prepared to count only coincidences between the idler photon and the signal photon at the beam splitter output ports T and R, which verifies that the probability of the presence of the photon is preserved. The coincidences are detected in a time window of τ = 10 ns. We start with g^(2)(τ) = 0.025 ± 0.015, which implies anti-correlation in the photon detection over the T and R outputs. Our electronics detect all the photons that satisfy the anti-correlation condition. For this to happen, the presence of the idler photon is necessary. Therefore, two binary sequences are stored for a given φ, for example, (1, 0, 0, 1, 1, 0, ...)_T and (0, 1, 1, 0, 0, 1, ...)_R.
Fig. 1 Signal photons are sent to a λ/2 wave plate followed by a PBS. In order to compensate for the polarizing beam splitter imperfections, we placed two polarizers with their horizontal and vertical axes on the transmitted and reflected outputs, respectively. Finally, interference filters (F) for 810 nm were placed before the avalanche photodiodes (D_R, D_T).
Figure 2 shows the counting tests of 1s and 0s registered by D_T, translated into the probabilities P_1 and P_0 of detecting or not detecting photons. For each phase φ/2 of the HWP, we take 102,400 bits. Theoretically, for φ = 0, all the bits detected by D_T should be one, while for φ/2 = π/4, all the bits detected by D_T should be zero. We draw special attention to the cases in which the 1s and 0s curves intersect, because at these points we expect a wave-like behavior and equal probability of transmission and reflection, P = 1/2.
Fig. 2 Probabilities to detect (P_1) or not detect (P_0) single photons in the detector D_T. P_0 (green x symbols) and P_1 (red + symbols) as a function of the HWP angle φ/2 can be interpreted as the normalized proportions of 1s and 0s stored in the time series used for the statistical analysis of the photon counting.
Figure 3 shows the Fano factor F for the output ports T and R, as a function of the probabilities P_T = cos²(φ/2) and P_R = sin²(φ/2). The statistics curve goes from F = 0 to F = 1, showing a transition from particle to wave behavior in the sector 0 ≤ F ≤ 1/2. The maximal transmitted probability, P_T = 1, gives us F = 0, described by the state |ψ^{φ=0}_p⟩ = |H, 0⟩_{φ=0}. In this case, the photons are localized in D_T and have a particle-like behavior. The Fano factor F = 1/2 is found when the probability that the photon is transmitted (or reflected) is P_T = 1/2, i.e., when photons behave like waves (Equation (10)). We note that the prediction for quantum chaos in the sub-shot-noise sector has a Fano factor of F = 1/4 [13][14][15][16]. Because the probability P_T is related to the average number of photons detected in D_T, experimentally the chaotic Fano factor for the transmitted time series of photons is located at P_T = 3/4. In such a case, in order to match the probabilities of the quantum chaos statistics, the HWP must rotate the polarization angle of the photons to φ = π/6. In this way, the photons crossing the PBS can be described by a superposition with transmission probability P_T = 3/4 and reflection probability P_R = 1/4.
Results
However, it is interesting to look closely at the value F = 3/4, obtained for P T = 1/4, because it implies quantum chaos for the reflected output.
Here, we have a noise symmetry, because P_T = 3/4 produces quantum chaos statistics (F = 1/4) in the transmitted output but F = 3/4 in the reflected one, and vice versa for P_T = 1/4. We will see below that complementary probabilities have the same variance for photon-counting measurements. This is why we argue that F = 3/4 is also a criterion of quantum chaos. The experimental values that we obtained for the Fano factor at the chaotic probabilities of transmission are the following (see Figure 3): F = 0.25 ± 0.01 for P_T = 0.75 ± 0.01; F = 0.74 ± 0.01 for P_T = 0.26 ± 0.01. On the other hand, the Fano factor obtained for the wave behavior of photons was F = 0.53 ± 0.01 for P_T = 0.47 ± 0.01, while for the particle behavior, F = 0.005 ± 0.001 for P_T = 0.995 ± 0.001 was obtained. We also applied the power spectrum criterion to verify that there is indeed quantum chaos behavior [17,18], beyond the criterion F = 1/4, 3/4. We apply a simplified version of the power spectrum for binary time series. We proceeded as follows: we separated the time series into partitions of the 2^n combinations of n bits, bearing in mind that each bit is detected in the coincidence time τ, so each combination corresponds to the time nτ. Then, the time series of photons has a frequency f related to the number of photons counted during the time t = nτ. For n = 2 we can partition the time series of 102,400 bits into 2-bit combinations, of which there are 4: (0, 0), (1, 0), (0, 1) and (1, 1). Each combination has a frequency based on the number of photons registered. For example, the element (1, 1) has two photons, and therefore it corresponds to f = 2, with the amplitude A_f given by the number of times that f = 2 appears. The power spectrum is defined by PS(f) = |A_f|². Since (1, 0) and (0, 1) have the same number of photons, giving f = 1, they represent the same point in the power spectrum plot, with a degeneracy equal to 2. The state (0, 0) corresponds to f = 0. If the power spectrum follows a power law, ln(PS(f)) as a function of ln(f) outlines a straight line whose slope β is the power of the frequency in PS(f) ∝ f^β. Next, we obtain β as a function of P_T.
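The following is a sketch of this simplified power-spectrum criterion, not the authors' analysis code: the series is split into n-bit blocks, f is the photon count of a block, A_f the number of blocks with that count, PS(f) = |A_f|², and β is estimated from a straight-line fit of ln PS(f) versus ln f. Excluding f = 0 and empty frequencies from the fit is our own assumption.

```python
# Simplified power-spectrum slope for a binary photon time series.
import numpy as np

def power_spectrum_slope(bits, n=4):
    bits = np.asarray(bits)[: len(bits) // n * n]
    counts = bits.reshape(-1, n).sum(axis=1)          # photons per n-bit block (time n*tau)
    freqs = np.arange(1, n + 1)                       # f = 1 .. n (f = 0 left out of the fit)
    amplitude = np.array([(counts == f).sum() for f in freqs], dtype=float)
    keep = amplitude > 0
    log_f, log_ps = np.log(freqs[keep]), np.log(amplitude[keep] ** 2)   # PS(f) = |A_f|^2
    return np.polyfit(log_f, log_ps, 1)[0]            # slope beta in PS(f) ~ f^beta

rng = np.random.default_rng(2)
for p_t in (0.25, 0.75):                              # the two quantum-chaos settings
    bits = rng.binomial(1, p_t, size=102_400)
    print(p_t, round(power_spectrum_slope(bits), 2))
```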
The chaotic 1/f or pink noise appears for P_T = 1/4, while its complementary f noise appears for P_T = 3/4. This complementarity can also be shown through the photon-counting variance, using definition (4). For P_T = 1/4, 3/4, and the respective averages n̄ = 1/4, 3/4, the variance of the quantum chaos signals has the same value, (∆n)² = 3/16. Experimentally, for P_T = 0.75 ± 0.01 and P_T = 0.26 ± 0.01, we obtain (∆n)² = 0.18 ± 0.01 and 0.19 ± 0.01, respectively (see Figure 5), in agreement with the two chaotic time series having the same level of information, as we will see in Section 6.
Complexity, Shannon's Information, and Quantum Chaos
We have shown that there is quantum chaos behavior in the time series of single photons for the probabilities P_T = 1/4, 3/4. Now, comparing Equations (12) and (13) with Equations (14) and (15), respectively, it is easy to see that the superposition of wave and particle behaviors builds the quantum chaos condition F = 1/4, 3/4, reproducing the probability P_T(H) = 3/4 with F = 1/4 and P_T(H) = 1/4 with F = 3/4. In other words, we interpret quantum chaos behavior (F = 1/4, 3/4) as arising when a single photon behaves half of the time as a particle and half of the time as a wave. Moreover, information entropy measures the degree of disorder or randomness in the system. Since our system is binary, we used Shannon's entropy, with the notation p_i(φ), where i = 1, 0 denote the probabilities of each photon being detected or not, for some polarization angle φ. Figure 6 shows the behavior of three quantities: Shannon's entropy, S = −Σ_i p_i(φ) log(p_i(φ)), the complexity, C = 4S(1 − S), and the regularity, 1 − S [25,26]. An intuitive definition of complexity may be associated with composite systems with interacting components, where the balance between regularity and disorder comes from emergent interactions. The complexity invoked here has to do with an 'optimal' mix of regularity and disorder. The experimental behavior of this balance shows two peaks of complexity. There is no mathematical expression as yet that relates complexity and chaos, and we can only say that the two quantum chaos behaviors at F = 1/4, 3/4 can be associated with the two maxima of complexity, C = 1, appearing here at S = 1/2, while we obtain S = 0 for the particle-like behavior and S = 1 for the wave-like behavior. A superposition of both behaviors, wave and particle, is found here when S ≈ 0.81, which corresponds to quantum chaos.
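A short sketch of these quantities, assuming a base-2 logarithm so that S = 1 for the wave-like case P = 1/2 (the logarithm base is not stated explicitly above):

```python
# Shannon entropy S, complexity C = 4S(1 - S) and regularity 1 - S for a binary
# detection record with probability p1 of registering a photon.
import numpy as np

def shannon(p1):
    probs = np.array([p1, 1.0 - p1])
    probs = probs[probs > 0]                  # convention: 0 * log(0) = 0
    return float(-(probs * np.log2(probs)).sum())

for p_t in (1.0, 0.75, 0.5, 0.25, 0.0):
    s = shannon(p_t)
    print(f"P_T={p_t:4.2f}  S={s:5.3f}  C={4 * s * (1 - s):5.3f}  1-S={1 - s:5.3f}")
```

For P_T = 3/4 or 1/4 this gives S ≈ 0.81, the value associated above with the quantum chaos condition.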
Conclusions
We have shown that it is possible to obtain time series of single photons with different statistics, in particular chaotic statistics. This was tested using the Fano factor and power spectrum criteria. In addition, we obtain a complexity maximum near each quantum chaos probability region, as a signature of their relation. Most importantly, we find that quantum chaos can be interpreted as the balanced superposition of particle and wave states of single photons. As a consequence, a single photon can be thought of as behaving in a quantum chaotic way, characterized by 1/f or pink noise for F = 3/4 (P_T = 1/4) and f noise for F = 1/4 (P_T = 3/4). In the same way, maxima of complexity appear for S = 1/2 as an 'optimal' balance between order and disorder. We have also shown that both signals are complementary to each other using photon-counting signals, since their root mean square deviation around the counting average is the same for F = 1/4, 3/4, which entails (∆n)² = 3/16. Finally, we believe that the chaotic time series of superposed states of single photons discussed in this paper may have interesting applications in quantum cryptography, for which pulsed lasers would be needed.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
"Physics"
] |
A pipeline for complete characterization of complex germline rearrangements from long DNA reads
Background Many genetic/genomic disorders are caused by genomic rearrangements. Standard methods can often characterize these variations only partly, e.g., copy number changes or breakpoints. It is important to fully understand the order and orientation of rearranged fragments, with precise breakpoints, to know the pathogenicity of the rearrangements. Methods We performed whole-genome-coverage nanopore sequencing of long DNA reads from four patients with chromosomal translocations. We identified rearrangements relative to a reference human genome, subtracted rearrangements shared by any of 33 control individuals, and determined the order and orientation of rearranged fragments with our newly developed analysis pipeline. Results We describe the full characterization of complex chromosomal rearrangements, by filtering out genomic rearrangements seen in controls without the same disease, reducing the number of loci per patient from a few thousand to a few dozen. Breakpoint detection was very accurate; we usually see ~ 0 ± 1 base difference from Sanger sequencing-confirmed breakpoints. For one patient with two reciprocal chromosomal translocations, we find that the translocation points have complex rearrangements of multiple DNA fragments involving 5 chromosomes, which we could order and orient by an automatic algorithm, thereby fully reconstructing the rearrangement. A rearrangement is more than the sum of its parts: some properties, such as sequence loss, can be inferred only after reconstructing the whole rearrangement. In this patient, the rearrangements were evidently caused by shattering of the chromosomes into multiple fragments, which rejoined in a different order and orientation with loss of some fragments. Conclusions We developed an effective analytic pipeline to find chromosomal aberrations in congenital diseases, using only long read sequencing and filtering out benign changes. Our algorithm for reconstruction of complex rearrangements is useful for interpreting rearrangements with many breakpoints, e.g., chromothripsis. Our approach promises to fully characterize many congenital germline rearrangements, provided they do not involve poorly understood loci such as centromeric repeats.
Background
Various germline DNA sequence changes are known to cause rare genetic disorders. Many small nucleotide-level changes (one to a few bases) in 4209 genes have been reported in OMIM (https://www.omim.org/) (as of Jan 21, 2020), which are known as single gene disorders. In addition to these small changes, large structural variations of the chromosomes can also cause diseases.
Previous studies on pathogenic structural changes in patients with genetic/genomic disorders found chromosomal abnormalities by microscopy, by detecting copy number variations (CNVs) using microarrays [1], or by detecting both CNVs and breakpoints using high-throughput short read sequencing [2]. However, there are difficulties in precisely identifying sequence-level changes, especially in highly similar repetitive sequences (e.g., simple repeats, recently integrated transposable elements), or in finding how these rearrangements are ordered [3]. Long read sequencing (PacBio or nanopore) is advantageous for characterizing rearrangements in such cases and has recently begun to be used for patient genome analysis to identify pathogenic variations [4][5][6]. In addition, if rearrangements are complex (e.g., chromothripsis), long read sequencing (reads often exceed 10 kb in length) has a further advantage, because one read may encompass all or much of a complex rearrangement [7]. Chromothripsis is a chaotic complex rearrangement, where many fragments of the genome are rearranged into derivative chromosomes. Current approaches to analyze chromothripsis usually require manual inspection to reconstruct whole rearrangements. Detection and reconstruction methods for complex rearrangements are needed to characterize pathogenic variations from whole genome sequencing data.
Rearrangements arise in various ways such as gene conversion, processed pseudogene integration, aberrant DNA replication with template switching [8,9], and probably as-yet unknown mechanisms. Regardless, the result is duplicated, deleted, re-ordered and/or reoriented fragments (Fig. 1a). No matter how complex the rearrangement, there is a simple relationship between ancestral and derived sequences: every part of the derived sequence comes from a unique part of the ancestor [10]. (The unusual exception is "spontaneously generated" sequence not descended from an ancestor, a.k.a. non-templated insertion: we allow for it by allowing parts of the derived sequence to not align anywhere.) Thus, a rearrangement can be displayed as in Fig. 1b: the derived sequence is shown vertically, and we can see from top to bottom where each part came from (red diagonal lines).
Unfortunately, we do not have an ancestral genome sequence (further discussed in Additional file 1). The reference genome has its own rearrangements: this makes it qualitatively harder to identify segments descended from the same segment in the most recent common ancestor of the genomes (red diagonal lines in Fig. 1c). Even if we could identify them, the result is hard to understand. To make the problem tractable, we assume the reference is ancestral: though false, this works well enough to be useful.
Concretely, we compare long DNA reads to an assumed-ancestral reference genome, by inferring which part of the reference each part of the read comes from. Thus, we need to accurately divide the read into (one or more) parts and align each part to the genome. To do this, we first learn the rates of small insertions, deletions, and each kind of nucleotide substitution between reads and genome (e.g., Fig. 2) [11], then find the most-likely division and alignment based on these rates [10,12]. We can also calculate the probability that each base is wrongly aligned, which is high when part of a read aligns almost equally well to several genome loci. This approach was previously used to characterize rearrangements that are "localized," i.e., encompassed by one DNA read [10].
Here we extend this approach, to find arbitrary (nonlocalized) rearrangements, subtract rearrangements found in control individuals, then order and orient rearranged DNA reads to fully reconstruct complex rearrangements in derivative chromosomes. To the best of our knowledge, there is no other tool to fully reconstruct complex rearrangements from only long reads and filter out benign changes. Chromothripsis has been analyzed by NanoSV [7], but its website states: "we decided to call only breakpoints instead of SV types (such as inversions, deletions, etc.)." Indeed, it is hard to tell whether (e.g.) a split alignment of a DNA read to both strands of a chromosome indicates a simple inversion, or part of a more complex rearrangement (see examples below).
Recently, long read sequencing was used to detect structural variants in human genomes, but focusing on simple insertions, deletions, and inversions [13]. However, another study used linked-read sequencing to document more complex types of rearrangement such as del-INV-del and del-INV-dup [14]. There have been several approaches to characterize pathogenic complex rearrangements in congenital diseases [7,15,16]. Beck et al. used long reads to detect chr17p11.2 recurrent rearrangement using a targeted approach [15]. Targeted approaches are limited and hard to use for complex chromothripsis. Eisfeldt et al. analyzed three patients with complex chromosomal rearrangement (CCR) [16]. This approach required several different methods to fully understand the CCR (including short read sequencing, optical mapping, and linked-read sequencing). For clinical application, a single method that can characterize complex rearrangements would be useful. Our approach can characterize pathogenic rearrangements using only long read whole genome sequencing and thus should be useful for further clinical applications.
Moreover, we show that complex rearrangements can have emergent properties, such as deletions, that are knowable only after fully reconstructing the whole rearrangement. Finally, we believe our pipeline for long DNA reads is unique in discarding rearrangements shared by other genomes (controls), which is critical for practical utility, because human genomes typically differ by thousands of, presumably benign, large-scale rearrangements.
Fig. 1 Illustration of genome evolution with rearrangements. a Starting from ancestral DNA, evolution with rearrangements results in derived sequences with deletions, duplications, and re-ordered fragments. Each colored block represents a piece of a chromosome, e.g., a few thousand basepairs. The blocks labeled n and s are similar repeated sequences (same color). b Comparison of derived sequence (vertical) to ancestral sequence (horizontal). The diagonal red lines show which ancestral basepair each basepair in the derived sequence is descended from: starting at a derived basepair, go horizontally to the right until hitting a red line, then go up vertically to find the ancestral basepair. The diagonal black line indicates a misleading (paralogous) similarity between the ancestral and derived sequences. c Comparison of the same derived sequence (vertical) to a derived reference genome (horizontal). Diagonal red lines show basepairs in the horizontal and vertical sequences that are descended from the same basepair in the most recent common ancestor of the sequences. The diagonal black line shows similar segments that are not descended from the same part of the most recent common ancestor
Patients
We studied 4 patients whose breakpoints were previously not fully detected by high-throughput analysis, among 9 patients with chromosomal abnormalities [17]. Patients 1 and 2 have primary ovarian failure (detailed clinical information in Additional file 1 and elsewhere [18,19]). Patient 3 has split-hand-foot malformation (detailed information was published elsewhere [20]). Patient 4 has intractable epilepsy and is suspected to have a chromosomal translocation breakpoint in centromeric repeats [21].
Controls
We used 33 human controls to filter out benign rearrangements in the patients. Because genome-wide long read sequencing remains expensive, we re-used data from previous studies [6]. Thus, many of these controls have genetic disorders (Additional file 1: Table S1), which are unlikely to be related to those of the 4 patients.
Data analysis
Our task is to find and fully characterize rearrangements in a patient's genome that are absent in control genomes. By "fully characterize," we mean to determine which part of the reference genome each part of the rearranged sequence comes from and determine the order and orientation of these parts. We do so by these steps (details in Additional file 1: Supplementary Methods and Fig. S1-4), using software named dnarrange that was developed for this study.
Fig. 2 The 4 × 4 matrix shows substitution probabilities: rows correspond to genome bases and columns correspond to read bases. The rates in Fig. 2 are a combination of sequencing errors and real differences
1. Align the DNA reads to the reference genome, by probability-based split alignment. This gives us rearranged reads, but there are two difficulties: (i) There seem to be many artifactually rearranged reads, at least in some datasets [10]. Some putative artifacts are shown in Additional file 1: Fig. S3. These artifacts seem to be mostly sporadic [10], so they can be excluded by requiring at least 2 or 3 reads to cover the same rearrangement. (ii) It is hard to tell whether a rearranged read covers a whole rearrangement, or part of a larger rearrangement, or multiple independent rearrangements. We defer making this judgment, and eventually do so manually. 2. Discard any patient read that has any two rearranged fragments in common with any control read. Ideally, we would discard whole rearrangements rather than reads, but whole rearrangements have not been determined yet due to difficulty (ii). 3. Discard any patient read that has any rearrangement not shared by any other read from the same patient. This aims to remove artifacts. 4. Group reads from one patient that cover the same rearrangement (i.e., have two rearranged fragments in common). Discard groups with fewer than 3 reads: this also aims to remove artifacts.
In the following results, we at first omit step 2 to show the results without control filtering, then re-run steps 2-4 to show the results with filtering. Steps 2-4 can be done with one simple "dnarrange" command (a minimal sketch of this grouping-and-filtering logic is given after the remaining steps below):

dnarrange patient-file : control1 control2 ... > groups
5. Examine dotplots showing how each read group aligns to the reference genome. Manual examination is feasible because the number of groups, after filtering, is typically a few dozen. In practice, we can often tell that a group of reads covers a whole rearrangement of a specific type, e.g., integration of a processed pseudogene, transposable element, or NUMT (nuclear mitochondrial DNA). Other read groups are suspected to cover parts of larger rearrangements.
6. Merge each group of reads into a more accurate consensus sequence, using lamassemble [22], and re-align these consensus sequences to the genome. This step has a chance of characterizing rearranged fragments more accurately, but in practice, it rarely changes the picture and is not critical. In previous work, such consensus sequences were important for revealing the sequences of tandem repeat expansions [6].
7. Infer the order and orientation of read groups that are suspected to cover parts of a larger rearrangement. This is done by a parsimony argument: we find an order and orientation that links the groups into a minimal number of rearranged chromosomes. We could always suggest a trivial solution where the genome is highly aneuploid and each read group is on a separate chromosome, but that is not parsimonious and does not match the patient karyotypes determined by microscopy. There could be more than one most-parsimonious solution (in which case we fail at full characterization), but sometimes it is unique.
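To make steps 2-4 concrete, the following is a minimal Python sketch of the grouping-and-filtering idea, not the actual dnarrange implementation. Each read is represented as a set of rearranged fragments; the fragment tuple layout, thresholds, and toy data are illustrative assumptions. Patient reads sharing two fragments with any control read are dropped, and the surviving reads are grouped by shared fragments, keeping only groups with at least 3 supporting reads.

```python
# Illustrative sketch of dnarrange-style filtering and grouping (not the actual tool).
# A "read" is modeled as a set of rearranged fragments; each fragment is a hashable
# tuple such as (chromosome, start_bin, end_bin, strand) -- an assumed representation.
from itertools import combinations

def shares_two_fragments(read_a, read_b):
    """True if two reads have at least two rearranged fragments in common."""
    return len(read_a & read_b) >= 2

def filter_and_group(patient_reads, control_reads, min_support=3):
    # Step 2: discard patient reads sharing a rearrangement with any control read.
    kept = [r for r in patient_reads
            if not any(shares_two_fragments(r, c) for c in control_reads)]

    # Steps 3-4: group reads covering the same rearrangement (union-find),
    # then keep only groups supported by at least `min_support` reads.
    parent = list(range(len(kept)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, j in combinations(range(len(kept)), 2):
        if shares_two_fragments(kept[i], kept[j]):
            union(i, j)

    groups = {}
    for i in range(len(kept)):
        groups.setdefault(find(i), []).append(kept[i])
    return [g for g in groups.values() if len(g) >= min_support]

# Toy data: three patient reads support one rearrangement; the fourth matches a control.
patient = [
    {("chr2", 10, 20, "+"), ("chrX", 5, 9, "-")},
    {("chr2", 10, 20, "+"), ("chrX", 5, 9, "-"), ("chr16", 1, 2, "+")},
    {("chr2", 10, 20, "+"), ("chrX", 5, 9, "-")},
    {("chr7", 1, 2, "+"), ("chr15", 3, 4, "+")},
]
controls = [{("chr7", 1, 2, "+"), ("chr15", 3, 4, "+")}]
print(len(filter_and_group(patient, controls)))   # -> 1 group of 3 reads
```

In the real pipeline the fragments come from LAST split alignments, and the same two-fragments-in-common criterion is used both for the control subtraction and for the grouping.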
Sanger sequence confirmation of breakpoints
PCR primers for breakpoints estimated from the rearrangements were designed using Primer3Plus software (Additional file 1: Table S2). PCR amplification was done using ExTaq, PrimeSTAR GXL, and LA Taq (Takara), and the amplified products were Sanger sequenced using the BigDye Terminator v3.1 Cycle Sequencing Kit on a 3130xl Genetic Analyzer (Applied Biosystems, CA, USA).
Nanopore sequencing of 4 patients with chromosomal translocations
We sequenced genomic DNA from 4 patients with reciprocal chromosomal translocations using a nanopore long read sequencer, PromethION (Additional file 1: Table S1). We applied newly developed software, dnarrange (https://github.com/mcfrith/dnarrange), to find and characterize DNA sequence rearrangements in these patients. dnarrange finds DNA reads that have rearrangements relative to a reference genome, and groups reads that overlap the same rearrangement (Additional file 1: Supplementary Methods). It also filtered out rearrangements that are seen in any of 33 control individuals (Fig. 3, Additional file 1: Table S1). The number of read groups decreased exponentially with the first several controls, then stabilized, which suggests that there are numerous commonly shared rearrangements in the population (Figs. 4b, 5b, 6b, and 7b; Additional file 1: Table S3). Because we are not interested in simple deletions, we ignored gaps < 10 kb; we also tested a lower gap threshold (100 bp), which produced vastly more output at first, but after discarding rearrangements shared with the controls the output size became closer to that of the default (g = 10 kb), suggesting that many of these gaps are shared with controls (Additional file 1: Fig. S5). Next, we merged (a.k.a. assembled) the reads of each group into a consensus sequence using lamassemble (https://gitlab.com/mcfrith/lamassemble) and realigned it to the reference genome. Representative examples of detected rearrangements are shown with raw reads and consensus sequences in Additional file 1: Fig. S6. Computation time measurements for this method (including filtering with 33 controls) and comparison to different methods are shown in Additional file 1: Tables S4 and S5. Finally, we used dnarrange-link to infer the order and orientation of multiple read groups, to understand the whole rearrangement (Figs. 4c, 5c, e, and 6d, e).

Fig. 3 Schematic diagram of chromosomal rearrangement analysis pipeline. Long DNA reads are aligned to a reference genome using LAST (blue box), then dnarrange finds rearranged reads and groups reads that overlap the same rearrangement (pink box). lamassemble merges/assembles each group of reads into a consensus sequence (yellow box). When there is a "complex" rearrangement (more than one group of rearranged reads is needed to understand the full structure of the rearrangement), dnarrange-link was used to infer the order and orientation of the groups and thereby reconstruct derivative chromosomes (green box).

Patient 1 [18,19] has a de novo reciprocal translocation between chr2 and chrX, 46,X,t(X;2)(q22;p13) (Fig. 4a). The breakpoints were not detected by short read sequencing [17], though they were detected by more-painstaking breakpoint PCR [19], so we tested whether we could find this rearrangement with long reads. We performed PromethION DNA sequencing (112 Gb) and found 2773 groups of rearranged reads compared to human reference genome hg38. After subtracting rearrangements present in 33 controls, we found 80 patient-only read groups, of which two involve both chr2 and chrX (Fig. 4b). These are exactly the reciprocal chr2-X translocation (Fig. 4c, Additional file 1: Fig. S7a). The breakpoints agreed with reported breakpoints determined by Sanger sequencing (Additional file 1: Fig. S7b) [19].
The other 78 groups of rearranged reads are mostly tandem multiplications (duplications, triplications, etc.), tandem repeat expansion/evolution, deletions, retrotransposon insertions (five L1HS, four AluYa5, two AluYb8, three SVA, and one or two ERV-K LTRs), and other non-tandem duplications (Fig. 4d, e, Additional file 1: Table S6, Additional file 2: Table S12, Fig. S8). These types of retrotransposon are known to be active or polymorphic in humans [23][24][25]. We checked three AluYa5 insertions by PCR: all were confirmed (Additional file 1: Fig. S9). One insertion appears to be an orphan 3′-transduction from an L1HS in chr20: the L1HS was transcribed with readthrough into 3′ flanking sequence, then the 3′-end of this transcript (without any L1HS sequence) was reverse-transcribed and integrated into chr10 (Fig. 4e). Such orphan transductions can cause disease [26]. We also found an insertion of mitochondrial DNA (NUMT) into chr2 (Fig. 4e). Some of these rearrangements have been previously found in other humans, e.g., the ERV-K LTR inserted in chr12 [27]. Thus, our subtraction of rearrangements found in other humans was not thorough, especially because patient 1 is Caucasian whereas most of our controls (32/33) are Japanese.
For patient 2, we performed PromethION DNA sequencing (117 Gb) and found 3336 groups of rearranged reads relative to the reference genome, which reduced to 33 groups after control subtraction (Fig. 5b). Only 2 out of 33 groups involve both chr4 and chrX: they show a reciprocal unbalanced chromosomal translocation exactly as described previously and confirmed by Sanger sequencing [17,19] (Fig. 5c, Additional file 1: Fig. S10a,b). We examined DNA of the patient and parents by breakpoint PCR and confirmed that the translocation breakpoints occurred de novo (Additional file 1: Fig. S10b, c). Another of the 33 read groups shows a 43-kb deletion near the translocation site at chrX:107943899-107986412 (Fig. 5c, Additional file 1: Fig. S10a), which eliminates the TEX13B gene (Additional file 1: Fig. S10a) and was not previously described [17]. We found that this deletion is inherited from the father (Additional file 1: Fig. S10b, c). About half of the other rearrangements were tandem multiplications and retrotranspositions (Fig. 5d, Additional file 1: Fig. S11, Table S6, Additional file 2: Table S12). Three of the 33 groups lie near each other in chr11q11 (Fig. 5e): they have a unique order and orientation that produces one linear sequence, whereby we fully inferred the structure of this previously unknown rearrangement (Fig. 5e). This rearrangement has translocated and inverted fragments and three deletions, including a 10-kb deletion that removes most of the TRIM48 gene. Breakpoint confirmation of this rearrangement by PCR and Sanger sequencing showed inheritance from the mother (Additional file 1: Fig. S12a, b).
Patient 3: complex rearrangements at chr7-chr15 translocation
We next analyzed patient 3, whose precise structure of chromosomal translocations was only partly solved before [17,20]. Patient 3 was reported to have reciprocal chromosomal translocations between chr7 and chr15 and also between chr9 and chr14, t(7;15)(q21;q15) and t(9;14)(q21;q11.2) (Fig. 6a), and has 4.6-Mb and ~1-Mb deletions on chr15 and chr7, respectively, which were predicted by microarray, although the precise locations of the breakpoints were not detected in detail. We performed whole genome nanopore sequencing (95 Gb) on this patient and found 3351 groups of rearranged reads relative to the reference genome, which reduced to 43 groups after control subtraction (Fig. 6b).

Fig. 4 Chromosomal rearrangement in patient 1 with 46,X,t(X;2)(q22;p13). a Ideograms showing patient 1's translocation between chrXq22 and chr2p13. Chromosome images are from NCBI genome decoration (https://www.ncbi.nlm.nih.gov/genome/tools/gdp). b Filtering out rearrangements shared with 33 controls. Finally, 80 groups of reads with patient-only rearrangements are found. Two of the 80 groups show reciprocal chr2-chrX translocation. c Dotplot of reconstructed derivative chromosomes shows reciprocal balanced chromosomal translocation (upper panel: horizontal dotted gray lines join the parts of each derivative chromosome; lower panel: vertical dotted gray lines join fragments that come from adjacent parts of the reference genome, showing there is no large deletion or duplication). d Pie chart of the types of rearrangement. TSDel target site deletion, NUMT nuclear mitochondrial DNA insertion. e Examples of retrotransposition and NUMT insertion (the alignments to retrotransposons, e.g., the AluYa5 in chrX, often have low confidence, indicating uncertainty that this specific AluYa5 is the source).

Fifteen out of 43 groups are involved in the two translocations: dnarrange-link found a unique way to order and orient them without changing the number of chromosomes (Fig. 6c, Additional file 1: Fig. S13). At first, there seem to be two read groups involving both chr9 and chr14, which accurately indicate the balanced chr9-chr14 translocation described previously [17]. However, dnarrange-link additionally identified a complex rearrangement for t(9;14)(q21;q11.2): a part of chr4 was unexpectedly inserted into derivative chr9 (Fig. 6d). This rearrangement was not investigated in the previous analyses, as chr7q21 was the primary locus for split-foot. In addition to this, dnarrange identified 8 out of 43 groups involving chr7 and chr15 (Fig. 6c, Additional file 1: Fig. S13). The order and orientation of these groups was difficult to determine by manual inspection, but dnarrange-link found only one possible way to connect them without changing the number of chromosomes (Fig. 6c). Finally, dnarrange-link could automatically reconstruct the whole rearrangements (Fig. 6d, e). The reconstructed rearrangements show that 3 fragments (breakpoint-to-breakpoint, asterisks in Fig. 6d, e) from chr4 and 1 fragment from chr14 were inserted into derivative chr9 (Fig. 6c, d), and 3 fragments from chr7 and 6 fragments from chr15 were inserted into derivative chr15 (Fig. 6c, e). They show 677-kb and 4.7-Mb deletions on chr7 and chr15, respectively, which were detected by microarray (Fig. 6e). Note that these deletions are not present in any part of the rearrangement, but only in the fully reconstructed rearrangement: they are holistic properties of the complex rearrangement.
One candidate gene for split-foot, SEM1, was not disrupted, nor was its expression altered in lymphoblastoid cells (Additional file 1: Fig. S14a, b). A striking feature of these rearrangements is that the rearranged fragments come from near-exactly adjacent parts of the ancestral genome (Fig. 6d, e). This suggests that the rearrangements occurred by shattering of the ancestral genome into multiple fragments, which rejoined in a different order and orientation with loss of some fragments. Such shattering naturally explains why the fragments come from adjacent parts of the ancestor [10].
The other rearrangements are mostly local tandem duplications or insertions (Additional file 1: Table S6, Fig. S17, Additional file 2: Table S12). We found one processed pseudogene insertion, where exons of the MFF gene (chr2) were inserted into chr15 (Fig. 6g). Interestingly, there is also an AluYa5 insertion into chr15 nearby (Fig. 6g). Both Alu and processed pseudogene insertions are thought to be catalyzed by LINE-1-encoded proteins [30]; thus, we speculate that these two insertions did not occur independently.
Patient 4: difficult case with translocation breakpoint in centromere repeat
Patient 4 had a reciprocal translocation between chr1 and chr9 (Fig. 7a). Breakpoints in chr1 were previously described at chr1:206,401,153 and chr1:206,402,729, which disrupted SRGAP2, by intensive investigations using fluorescence in situ hybridization (FISH), Southern hybridization and inverse PCR [21], or short read whole genome sequencing [17]. Chr9 breakpoints had not been found and were suspected to reside in repetitive centromeric heterochromatin. We performed PromethION DNA sequencing (41 Gb) and found 2523 groups of rearranged reads relative to the reference genome, which reduced to 14 after control subtraction, none of which indicates a chr1-chr9 translocation (Fig. 7b, c, Additional file 1: Fig. S18, Additional file 2: Table S12). Dotplot pictures of reads that cross the chr1 breakpoint suggest that there is a reciprocal translocation, but the other half of the read aligns (with low confidence) to satellite or simple repeat sequences at centromeric regions on multiple different chromosomes (Fig. 7d, two example reads are shown). This limitation might be overcome by obtaining reads long enough to extend beyond the centromeric repeats, or perhaps by obtaining a reference genome that is more accurate in centromeric regions.
Comparison to other tools
We also tried two existing structural variant (SV) detection methods: LAST-NanoSV [7] and ngmlr-Sniffles [31] (Additional file 1: Supplementary Methods). These methods mainly detect breakpoints and categorize them into 4 SV types (insertion, deletion, inversion, and duplication) or breakpoints (described as "BND"). Because there is no method to filter SVs that are present in controls using these tools, we manually examined breakpoints in the translocation sites predicted by G-band analysis.
In patient 1, ngmlr-Sniffles called two candidate breakpoints in the translocation site, but they differed by roughly ±600 bp from the Sanger sequence results and the reciprocal change was not detected (Additional file 3: Table S13). LAST-NanoSV could detect the breakpoints accurately, similarly to dnarrange (+lamassemble), with only −1 to +6 bp differences (Additional file 3: Table S13). It is not too surprising that LAST-NanoSV can detect the breakpoints similarly to dnarrange, because they are based on identical LAST alignments. We also examined LAST-NanoSV results for four TE integration examples in Fig. 4e (Additional file 1: Table S8). The AluYa5 integration was described as an insertion (though "duplication" would be more precise); however, the others are reported as BND and it was difficult to know whether these are TE integrations. The AluYb8 integration has two different calls (insertion and BND), which could lead to misinterpretation (Additional file 1: Table S8).
In patient 2, ngmlr-Sniffles called four candidate breakpoints near the translocation sites, but there were ±500 bp discordances from the Sanger sequence results. More critically, the orientations were wrong and could cause misinterpretation of this reciprocal chromosomal translocation (Additional file 3: Table S13). LAST-NanoSV accurately detected the breakpoints, similarly to dnarrange.
In patient 3, ngmlr-Sniffles missed several breakpoints which made it impossible to reconstruct this patient's complex rearrangement (Additional file 3: Table S13). LAST-NanoSV detected all breakpoints (16 out of 18 with high confidence, i.e., "PASS"); however, it has no further function to reconstruct the rearranged genome: so it would be hard to understand this rearrangement, especially without filtering the numerous rearrangements shared with controls (Additional file 3: Table S13). We also checked the processed pseudogene/AluYa5 insertion in patient 3 (Fig. 6g) in NanoSV calls. The MFF gene (chr2) insertion into chr15 was described in 9 calls including deletion, insertion, and BND. The AluYa5 integration was not detected by NanoSV (Additional file 1: Table S8). This also illustrates the importance of understanding the whole rearrangement: NanoSV misleadingly reports deletions in chr2 for some of the removed introns in the processed pseudogene, and the distinctions between "insertion", "BND", etc. may be more confusing than helpful.
Trio analysis
Among the control datasets, controls 1, 2, and 3 are a parent-child trio (Additional file 1: Table S1). By the same filtering as for patients 1-4, but without using controls 1-3, we obtained 27 groups of rearranged reads in the child (control 1). If the mother (control 3) is used for further filtering, nearly half of the groups (n = 14) are removed, and if the father is used as a control, the others (n = 12) are removed, except one (group23, Additional file 1: Table S9, Additional file 1: Fig. S19). The one remaining rearrangement is actually present in the mother, but not automatically filtered. This is an insertion of an SVA repeat, so its alignment to the genome is highly ambiguous and inconsistent between reads; thus, the shared rearrangement was not automatically recognized. We recognized it by manually investigating dnarrange results for reads aligned to this region. In summary, trio analysis is a powerful way to filter rearrangements.
Re-analysis of deletions found from long reads
As a further test and comparison, we checked large deletions (more than 5 kb) in one human genome (NA12878) that were reported previously [13]. We used publicly available nanopore sequencing data (rel6, https://github.com/nanopore-wgs-consortium/NA12878/blob/master/Genome.md). Our pipeline without control filtering found rearrangements at the sites of all 30 reported deletions (Additional file 1: Table S10, Fig. S20). At 20/30 sites, we confirmed the presence of a simple deletion. Two other sites (1 and 13 in Additional file 1: Fig. S20) do not have deletions in NA12878 relative to the ancestral state, but rather have retrotransposon insertions in the reference genome (hg38). Sites 3 and 9 do not have simple deletions: they are more-complex rearrangements that include loss of sequence. Site 28 has a more-complex rearrangement with a larger deletion than reported. Sites 8 and 20 appear to have gene conversions, not simple deletions. At three sites (16, 18, 29), we find extremely complex rearrangements: these are in segmental duplications (large, recent duplications) and near assembly gaps in the reference genome. The rearrangements suggest rampant homologous recombination between the segmental duplicates, which is plausible, but the reference genome may not be reliable at these loci. In summary, we mostly confirm the previous results, but find greater complexity in some cases.

Fig. 7 (caption continued) b Filtering out rearrangements shared with controls produces 14 groups of reads with patient-only rearrangements. There is no group supporting chr1-chr9 translocation. c Pie chart of patient-only rearrangements. d Dotplot of two reads that cross the chr1 breakpoint.
Discussion
We analyzed a variety of chromosomal translocations in 4 patients, who were selected because previous studies had difficulty in determining precise breakpoints by conventional approaches including microarrays and short read sequencing. In particular, the complex rearrangements in patient 3 were not solved even by intensive analysis [17,20]. Our method could not only precisely detect breakpoints but also characterize how shattered fragments were ordered and oriented. To the best of our knowledge, there has been no method to filter patient-only rearrangements and connect them to reconstruct rearranged chromosomes from long read sequencing by an automatic algorithm. As we have shown, existing methods for long read sequencing (e.g., NanoSV) could only find breakpoints instead of SV types, which can be confusing in some cases (e.g., TE insertions; shown in Additional file 1: Table S8). In contrast, our method could semi-automatically find patient-only rearrangements and their types, which is indeed advantageous when looking for a potentially pathogenic rearrangement.
Recently, long read sequencing has become increasingly available for individual genome analysis due to decreasing cost and increasing output data size. Accordingly, there have been a few approaches using long read sequencing to detect structural variations [7,10,31], including tandem repeat changes in rare genetic diseases [6], providing evidence that long read sequencing has a clear advantage in precisely detecting rearrangements. We observed that multiple breakpoints were jointly detected in a single read in patient 3 (Additional file 1: Fig. S15d, e), because long enough reads can cover several breakpoints, which is helpful to phase and order rearrangements. There are continuous efforts to obtain longer nanopore reads; however, in the case of complex rearrangements (e.g., chromothripsis), it is not easy to cover whole rearrangements with current read lengths, as seen in patient 3. Our new tool, dnarrange-link, is useful to infer a complete picture of complex rearrangements. In addition, dnarrange-link can provide a clear visualization of reciprocal chromosomal translocations, inversions, or complex rearrangements with or without loss of sequence, as seen in patients 1, 2, and 3. Most importantly, sequence loss was indicated after the reconstructed derivative chromosomes were compared to the reference genome. We have shown that the sequence losses in patient 3 agree with previously described microarray results. Previous studies on patient 3 predicted an 802-kb deletion (microarray could only suggest a ~1-Mb deletion due to low resolution), because a small inversion (arrow in Fig. 6e) was missed by previous studies using long PCR. We also presented an example in patient 1, who has an inverted duplication on chr16, which was only understood as copy number gain, or simply inversion, by microarray or conventional sequencing technologies (Additional file 1: Fig. S8, Additional file 2: Table S12). In summary, our approach using dnarrange and long read sequencing is superior to conventional approaches (e.g., microarray) because it can (1) connect multiple rearrangements, (2) subtract shared rearrangements, and (3) detect balanced chromosomal rearrangements (e.g., inversions). Recently, our pipeline fully characterized another chromothripsis more complex than that of patient 3, enabling diagnosis [32]: this shows our method is robust and useful in actual medical settings. We also showed a limitation of our method: detecting rearrangements in large repetitive regions beyond the length of long reads, as in patient 4. To date, there is no good method to detect rearrangements in large repetitive regions (e.g., centromeric or telomeric repeats) genome-wide. We hope our understanding of these still-intractable regions will expand as sequencing technologies advance.
Our approach in this study narrowed down patient-only rearrangements using 33 controls. The number of rearrangements decreased exponentially with the first few samples, to a few hundred. This may be due to the presence of common rearrangements in the population. We suspect large numbers of controls will not be needed if there is a target rearrangement locus (e.g., 4p15.2). In all 4 patients, patient-only rearrangements (not present in at least 66 autosomal alleles of 33 controls) were fewer than 100. If we were to further narrow down to ultra-rare variations that may cause rare congenital disorders, a larger number of controls may be considered. Patient 1 has more patient-only groups of rearranged reads (80) than the other patients (33, 43, and 14). This is because the patient is Caucasian and most of the control data used were Japanese (32/33 datasets). Applying ethnicity-matched controls, or parents or other relatives, will be useful to further remove benign rearrangements.
We noticed that large fractions of these rearrangements are insertions or tandem multiplications (Additional file 1: Table S6). Perhaps surprisingly, patient-
| 7,793.8 | 2020-07-31T00:00:00.000 | [ "Biology", "Computer Science" ] |
Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, among others. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC).
Introduction
The origins of Model Based Predictive Control can be traced to Model Algorithmic Control (MAC) [1] and Dynamic Matrix Control [2][3][4]. In this type of controller, a model is used to predict the process's behavior, and control actions are calculated to minimize a cost function, generally the quadratic error between a desired future set-point and the process's output. MBPC is advantageous in relation to other control techniques such as PID (Proportional Integral Derivative) and LQR (Linear Quadratic Regulator) controllers since it can consider actuator constraints and process constraints and handle non-minimum-phase and unstable processes as well as multivariable systems [5,6]. MBPC using Hammerstein models is still being investigated and applied both in numerical and experimental case studies, as depicted in Fig 1 (query used in the Scopus database: TITLE-ABS-KEY("Model Predictive Control" hammerstein)), showing the number of related publications in journals and conferences over the last decades and demonstrating scientific and academic interest in the research and development of this area in process control.
Many processes, in particular distillation columns and chemical reactors, can be modeled by Hammerstein models [7], where a nonlinear and memoryless static function precedes linear dynamics. An appropriate model of the Hammerstein nonlinearity is necessary to achieve adequate process control. Different approaches can be found in the literature regarding this problem. [8] employs an Artificial Neural Network (ANN) to model the inverse function of the Hammerstein nonlinearity in a self-tuning configuration, demonstrating good results regarding representation of the Volterra series expansion of the Hammerstein model. Similarly, [9][10][11][12] present results and methods regarding applications of ANNs in modeling and control of Hammerstein processes. [13] employs two independent DMC controllers, embedding nonlinear equations to deal with static nonlinearities. [14,15] investigate the application of an output compensator based on the nonlinearity's inverse function with DMC; in particular, [15] proposes a decision rule in the event of multiple solutions for the inverse nonlinearity. [16] follows similar approaches but uses fuzzy models identified by recursive least squares (RLS); the resulting fuzzy Hammerstein models are either single-input single-output (SISO) or multiple-input single-output systems, which can be analytically inverted to obtain the inverse nonlinearity. [17] models a solid oxide fuel cell's nonlinearities using a multivariable fuzzy system and defines, similarly to [16], the inverse to be a straightforward analytical procedure since the resulting system is either SISO or MISO. In this paper we present an alternative solution to the control problem of systems with Hammerstein nonlinearities. This solution can be applied to both monovariable and multivariable systems. Our solution is divided into two major steps. First, a general interpolator based on Takagi-Sugeno fuzzy logic is theorized and developed, named the Fuzzy Logic Hypercube Interpolator, or FLHI, motivated by the necessity to adequately model static nonlinearities and their inverses. Second, FLHI is applied to modeling static nonlinearities and their inverses. These inverse models are employed as an output compensator for the predictive controller, resulting in a pseudo-linear system and allowing conventional linear control theory to be applied [8] to SISO, MISO or MIMO problems. Our proposal is as depicted in Figs 2 and 3, where e is the error signal between desired set-points y_r and process outputs y, w* is the control action from DMC, u is the control action modified by FLHI considering the nonlinear static gain from block NL, and w is the output from the static nonlinearity. In ideal situations w* = w; however, modeling uncertainties account for differences between the two signals.
Similar approaches can be found in the literature, such as [16,17]; however, the following innovations are present and differentiate our work: i) fuzzy inference is defined on fuzzy logic operations, allowing changes such as the choice of conjunction operator (t-norm) or choice of membership function to have major impacts on final results; ii) membership functions are used as kernel functions, allowing the interpolator to behave as a nearest-neighbor, linear or cubic interpolator, to name a few; iii) the model's point cloud input space is divided into convex regions and each region is projected (i.e., mapped) to a unitary hypercube space, where interpolation occurs; this standardizes the input space from the fuzzy inference perspective and facilitates both interpolation and obtaining the inverse function; iv) the inverse function is achieved by the solution of a nonlinear optimization problem for a known region, since the resulting fuzzy model is highly nonlinear depending on the choices of conjunction operator or membership functions, and as such its inverse is not a "straightforward analytical procedure" as it is for other approaches in the literature; v) multivalued (or multiset) inverse functions are adequately handled, that is, multiple solutions are obtained if they exist, which allows for greater flexibility in control actions; vi) MIMO systems can be handled by our approach.
Results are presented for three study cases, a SISO system [18], a MISO system with uncoupled nonlinearities [19] and a MIMO system with highly coupled nonlinearities [20], presenting good results regarding control objectives such as reference tracking and minimization of control variation. Robustness considerations are also presented for cases where a mathematical model of the non-linearity is available.
The rest of this paper is organized as follows. The DMC algorithm is presented for both the monovariable and multivariable cases; further considerations are given for constraints as well as the use of an output compensator with DMC. In what follows, the foundations of the proposed Fuzzy Logic Hypercube Interpolator are presented. Then an overview and summary of the control problem using FLHI is given. Next, considerations on robustness and performance metrics are presented. Results are then presented for a SISO system, a MISO system considering uncoupling and coupling of inputs, and a MIMO system. In conclusion, final remarks are presented about the paper, the proposed method and future work.
Dynamic Matrix Control
Dynamic matrix control is one of the first model based predictive controllers, developed by Cutler and Ramaker [2][3][4]. Its internal model, the step response, is easily obtainable, which allowed it to enjoy wide acceptance and industrial application, in particular in the chemical and oil industries [21] but also others such as automotive, food and aerospace [22]. Other advantages which contributed to its popularity are: applicability to multivariable systems; consideration of process constraints on inputs or outputs; prevention of excessive control actions; predictive reference tracking and disturbance rejection; to name a few [22,23]. DMC's Finite Step Response (FSR) internal model limits applications of the controller to open loop stable processes; however, alternatives are presented in the literature [23,24] for unstable processes.
In what follows the DMC algorithm is detailed according to [6,23,25], first for the SISO problem and then extended to the MIMO problem.
SISO DMC Design. DMC aims to reduce future tracking error and control action increments by minimization of the cost function:

$$J = \sum_{j=1}^{N_y} \left[ \hat{y}(t+j|t) - y_r(t+j) \right]^2 + \lambda \sum_{j=1}^{N_u} \left[ \Delta u(t+j-1) \right]^2, \qquad (1)$$

where $\hat{y}$ is the predicted process output j steps ahead given by a process model, $y_r$ is the desired set-point, $N_y$ is the prediction horizon, $N_u$ is the control horizon and $\lambda$ is the move suppression factor. Process output prediction is given by the finite step response model:

$$\hat{y}(t+j|t) = \sum_{i=1}^{j} g_i \, \Delta u(t+j-i) + f(t+j), \qquad (2)$$

where f is the free response, dependent only on past variables:

$$f(t+j) = y(t) + \sum_{i=1}^{N-1} \left( g_{j+i} - g_i \right) \Delta u(t-i). \qquad (3)$$

Eqs (2) and (3) can be combined and rewritten in matrix form as:

$$\hat{\mathbf{y}} = \mathbf{G}\,\Delta\mathbf{u} + \mathbf{S}^{T} y(t) + \mathbf{H}\,\Delta\mathbf{u}_{p}, \qquad (4)$$

where:

$$\hat{\mathbf{y}} = \begin{bmatrix} \hat{y}(t+1|t) \\ \vdots \\ \hat{y}(t+N_y|t) \end{bmatrix}, \quad \Delta\mathbf{u} = \begin{bmatrix} \Delta u(t) \\ \vdots \\ \Delta u(t+N_u-1) \end{bmatrix}, \quad \Delta\mathbf{u}_{p} = \begin{bmatrix} \Delta u(t-1) \\ \vdots \\ \Delta u(t-N+1) \end{bmatrix}. \qquad (5)$$

In Eqs (4) and (5), $\mathbf{S}^{T}$ is a unitary (all-ones) vector with dimensions $N_y \times 1$, $\mathbf{G}$ is the dynamic matrix with dimension $N_y \times N_u$ and $\mathbf{H}$ is a matrix with dimension $N_y \times (N-1)$:

$$\mathbf{G} = \begin{bmatrix} g_1 & 0 & \cdots & 0 \\ g_2 & g_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ g_{N_y} & g_{N_y-1} & \cdots & g_{N_y-N_u+1} \end{bmatrix}, \qquad (6)$$

$$\mathbf{H} = \begin{bmatrix} (g_{2} - g_{1}) & (g_{3} - g_{2}) & \cdots & (g_{N} - g_{N-1}) \\ \vdots & \vdots & & \vdots \\ (g_{N_y+1} - g_{1}) & (g_{N_y+2} - g_{2}) & \cdots & (g_{N+N_y-1} - g_{N-1}) \end{bmatrix}. \qquad (7)$$

The functional of Eq (1) can be rewritten in matrix form as:

$$J = \left( \hat{\mathbf{y}} - \mathbf{y}_r \right)^{T} \left( \hat{\mathbf{y}} - \mathbf{y}_r \right) + \lambda\,\Delta\mathbf{u}^{T}\Delta\mathbf{u}; \qquad (8)$$

optimization of the control law is given by the minimization of this quadratic cost function in terms of control action increments. This is achieved by differentiating J with respect to the control action increments vector $\Delta\mathbf{u}$ and equating to zero, i.e. $\partial J / \partial \Delta\mathbf{u} = 0$. With the free response $\mathbf{f} = \mathbf{S}^{T} y(t) + \mathbf{H}\,\Delta\mathbf{u}_{p}$, the resulting control law is given by:

$$\Delta\mathbf{u} = \left( \mathbf{G}^{T}\mathbf{G} + \lambda \mathbf{I} \right)^{-1} \mathbf{G}^{T} \left( \mathbf{y}_r - \mathbf{f} \right) = \mathbf{K}_{dmc} \left( \mathbf{y}_r - \mathbf{f} \right). \qquad (9)$$

In practice, Eq (9) results in $N_u$ control action increments; however, only $\Delta u(t)$ is used at each instant t. At the next instant t + 1 a new control action is calculated; this is known as sliding horizon control. Hence, only the first row of the gain matrix $\mathbf{K}_{dmc}$ is needed, which helps reduce computational effort.
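As a concrete illustration of Eqs (6)-(9), here is a minimal, unconstrained SISO DMC sketch in Python; the step-response coefficients, horizons, move suppression factor, and function names below are illustrative assumptions rather than values or code from the paper.

```python
import numpy as np

def dmc_gain(step_coeffs, Ny, Nu, lam):
    """Unconstrained DMC gain: K = (G^T G + lam*I)^-1 G^T, with G the dynamic matrix of Eq (6)."""
    g = np.asarray(step_coeffs, dtype=float)
    G = np.zeros((Ny, Nu))
    for j in range(Ny):                      # prediction step j+1
        for i in range(Nu):                  # control move i
            if j - i >= 0:
                G[j, i] = g[j - i]           # Toeplitz structure built from step-response coefficients
    K = np.linalg.solve(G.T @ G + lam * np.eye(Nu), G.T)
    return G, K

def free_response(step_coeffs, y_now, du_past, Ny):
    """Free response of Eq (3): f(t+j) = y(t) + sum_i (g_{j+i} - g_i) * du(t-i)."""
    g = np.asarray(step_coeffs, dtype=float)
    f = np.full(Ny, float(y_now))
    for j in range(1, Ny + 1):
        for i, du in enumerate(du_past, start=1):   # du_past = [du(t-1), du(t-2), ...]
            g_future = g[min(j + i, len(g)) - 1]    # hold the last coefficient beyond the model horizon N
            f[j - 1] += (g_future - g[i - 1]) * du
    return f

# Illustrative (assumed) first-order step response, horizons and move suppression factor:
g = 1.0 - np.exp(-0.2 * np.arange(1, 31))
G, K = dmc_gain(g, Ny=10, Nu=3, lam=1.0)
f = free_response(g, y_now=0.0, du_past=[0.0], Ny=10)
du = K @ (np.ones(10) - f)      # Eq (9): increments toward a unit set-point
print(du[0])                    # only the first increment is applied (sliding horizon)
```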
MIMO DMC Design. For MIMO processes the effect of each input variable on each output variable is described by its FSR. Eqs (1), (2) and (3) are affected and must be rewritten to account for these extra variables. This can be accomplished in the matrix notation of Eq (4), which helps in obtaining a compact solution.

Considering a system with m inputs and n outputs, Eq (5) is rewritten so that the prediction, reference and free-response vectors stack the corresponding SISO vectors of each output, and the control-increment vector stacks the increments of each input:

$$\hat{\mathbf{y}} = \begin{bmatrix} \hat{\mathbf{y}}_1 \\ \vdots \\ \hat{\mathbf{y}}_n \end{bmatrix}, \quad \mathbf{y}_r = \begin{bmatrix} \mathbf{y}_{r,1} \\ \vdots \\ \mathbf{y}_{r,n} \end{bmatrix}, \quad \mathbf{f} = \begin{bmatrix} \mathbf{f}_1 \\ \vdots \\ \mathbf{f}_n \end{bmatrix}, \quad \Delta\mathbf{u} = \begin{bmatrix} \Delta\mathbf{u}_1 \\ \vdots \\ \Delta\mathbf{u}_m \end{bmatrix}. \qquad (10, 11)$$

Eqs (6) and (7) are rewritten in terms of $\mathbf{G}_{ij}$ and $\mathbf{H}_{ij}$, the SISO matrices for the i-th output and j-th input, as:

$$\mathbf{G} = \begin{bmatrix} \mathbf{G}_{11} & \cdots & \mathbf{G}_{1m} \\ \vdots & \ddots & \vdots \\ \mathbf{G}_{n1} & \cdots & \mathbf{G}_{nm} \end{bmatrix}, \qquad (12)$$

$$\mathbf{H} = \begin{bmatrix} \mathbf{H}_{11} & \cdots & \mathbf{H}_{1m} \\ \vdots & \ddots & \vdots \\ \mathbf{H}_{n1} & \cdots & \mathbf{H}_{nm} \end{bmatrix}. \qquad (13)$$

Finally, the control law from Eq (9) can be applied considering a change of the vectors involved in the prediction error:

$$\Delta\mathbf{u} = \mathbf{K}_{dmc} \left( \mathbf{y}_r - \mathbf{f} \right). \qquad (14)$$
Considering a system with m inputs and n outputs, Eq (5) is rewritten to: Eqs (6) and (7) are rewritten in terms of G ij and H ij , the SISO matrices, for the i-eth output and j-eth input, as: ; ð12Þ Finally, the control law from Eq (9) can be applied considering a change of the vectors involved in prediction error: Constraints. When constraints are considered the optimum solution is no longer the analytical solution of Eq (9). In this case, iterative methods for quadratic programming are necessary [26] and the control problem can be rewritten as: A and b can be chosen to reflect limits on system variables such as, for example, control magnitude, process output magnitude or control increments [26].
Fuzzy Logic Hypercube Interpolator
The main goal in the output compensator control approach is precise identification and modeling of a process's nonlinear characteristics so that conventional linear control theory may be applied [8]. This problem motivated the creation of a general interpolator which exhibits desirable characteristics such as: modeling both a function and its inverse function; multivariable inputs and outputs; flexibility regarding interpolation characteristics; and high computational efficiency.
In this section FLHI is presented according to its working algorithm, which is separated into three parts. In the first part, a user-provided point cloud is verified for consistency and defines an internal model which is used to feed subsequent calculations. All pre-calculations occur in the first part, which acts as a setup for the interpolator. In the second part, function interpolation is defined by a Takagi-Sugeno fuzzy inference system in a unitary hypercube space. In the third part, inverse function interpolation is defined as a root-finding problem in hypercube space, posed as an optimization problem.
Interpolant Setup. At this initial stage, the expected user input is a point cloud and the output is a set of regions for interpolation and their respective hypercubes, the main component of the FLHI interpolant. The point cloud is a set $P = \{(\mathbf{xi}, \mathbf{xo})_1, \ldots, (\mathbf{xi}, \mathbf{xo})_N\}$, where N is the number of points in the point cloud, $\mathbf{xi} = (xi_1, \ldots, xi_m)$ is a set of input coordinates of size m and $\mathbf{xo} = (xo_1, \ldots, xo_n)$ is a set of output coordinates of size n, such that the generating function of the point cloud is a mapping $f: \mathbb{R}^m \rightarrow \mathbb{R}^n$. In this context, hypercubes are interpolation regions where input coordinates xi are mapped to a unitary space in the range [0, 1]. The algorithm for interpolation is defined in Algorithm 1.
Algorithm 1: FLHI Interpolant Setup Algorithm
Input: a pointcloud P
Output: a set of regions
  // apply lexicographical ordering to the pointcloud according to input dimensions.
  1 P = lexicographical_order(P)
  // initialize as empty set.

A main characteristic of the point cloud is the distance between points for each dimension. A regular grid is determined by points which are equidistant across dimensions, that is, with predetermined and uniform distances across dimensions. A semi-regular grid is determined by points with predetermined but non-uniform distances across dimensions. An irregular grid, i.e. scattered data, lacks structure or order regarding the relative location of points.
Conversion from problem coordinates to unitary hypercube coordinates can be realized for a region by considering the base point as the null coordinate (0, 0, ...) and adjusting each dimension in all neighbor points in the region to either 1, if the neighbor's dimension moves away from the base point, or 0 if it remains unchanged. This can also be achieved by mapping all points in a region to hypercube coordinates by:

$$point(i).xi(j) = \frac{point(i).xi(j) - base\_point.xi(j)}{xi\_step(j)}, \quad j = 1, \ldots, m, \qquad (16)$$

where m is the number of input dimensions and xi_step is the step size of a dimension; note base_point = point(1) in a region. The current proposal focuses on a point cloud forming either a regular grid or a semi-regular grid. An irregular grid, i.e. scattered data, remains as future work, but some considerations are presented for such a scenario in this paper. Two main challenges follow from irregular data: i) tessellating the necessary regions for interpolation; and ii) mapping an irregular region to a regular hypercube.
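Before turning to those challenges, here is a minimal Python sketch of the regular-grid coordinate conversion of Eq (16); the data layout (plain arrays for the base point and per-dimension step sizes) is an assumption made for illustration.

```python
import numpy as np

def to_hypercube(xi, base_point, xi_step):
    """Map problem-domain coordinates xi into the [0, 1]^m hypercube of one region (Eq (16))."""
    xi = np.asarray(xi, dtype=float)
    return (xi - np.asarray(base_point, dtype=float)) / np.asarray(xi_step, dtype=float)

# Example: a region whose base point is (2.0, -1.0) with per-dimension steps (0.5, 2.0).
print(to_hypercube([2.25, 0.0], base_point=[2.0, -1.0], xi_step=[0.5, 2.0]))   # [0.5 0.5]
```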
Tessellation of regions for surface reconstruction from scattered data is an open research topic [27], and further investigations are necessary to find or develop a suitable algorithm for application in FLHI. This is made further challenging by the fact that current methods in the literature focus on triangulations that require $3^m$ points for a region, while FLHI is based on quadrangulations with a requirement of $2^m$ points.
Mapping of an irregular region to a regular hypercube is feasible by current algorithms applied in finite element methods, such as projective transform or bilinear transform [28] mappings.
Interpolation. FLHI interpolation can be separated into three major procedures, summarized in Algorithm 2, Algorithm 3 and Algorithm 4.
In Algorithm 2, a search occurs to determine which region produced by the FLHI setup (Algorithm 1) delimits the desired interpolation coordinates xi in the problem domain. Once the region is determined, the desired interpolation coordinates must be mapped to the hypercube established by the region.
Algorithm 2: FLHI Interpolate
Input: a set of regions, a set of input coordinates xi
Output: a set of output coordinates xo
  // find the region which contains the desired input coordinates.
  1 region = find_region_input_contains(xi)
  // convert the desired input coordinates to hypercube space.
With all coordinates now in the hypercube domain, bounded by [0, 1], interpolation occurs in the hypercube as presented in Algorithm 2. The main concept of FLHI is that each of the $2^m$ boundary points of the hypercube contains information about local geometry. Conjunction of information from all boundary points allows inference regarding the true function value at any arbitrary position inside the hypercube. Thus, each of these points contributes a moment regarding the whole hypercube area. The influence of each boundary point's moment at any arbitrary position in the hypercube is inversely proportional to the distance between the boundary point and this arbitrary position.
Consider Fig 4, where two boundary points p1 and p2 are represented in the same input dimension x with unitary distance between each other. Each boundary point exhibits maximum, logical unitary, information at its own position, for it is a sampled function value, and this information's contribution diminishes the further away from the point. Information contribution may be represented by a membership function that exhibits a maximum at the point's location and diminishes the further away from the point. Furthermore, it is unnecessary to define two different membership functions on a single dimension, for one can be determined as the logical complement of the other, that is:

$$l\mu_{p_2}(x) = 1 - l\mu_{p_1}(x). \qquad (17)$$

The previous logic can be extended to any number of dimensions in a logical hypercube, such that a point has multiple local membership functions lμ, one for each dimension of which it is composed. Global membership for a boundary point can be given by the logical conjunction of all local membership functions for that point. Thus, global membership μ of a boundary point p is given by:

$$\mu_p = \mathop{T}_{j=1}^{m} \; l\mu_{p,j}(x_j), \qquad (18)$$

where T is the triangular norm (t-norm). The applied norm can be any of Gödel, Łukasiewicz, Hamacher, product, etc. In this paper the applied t-norm for all cases was the product norm. Finally, for each output dimension k, an interpolated value can be obtained by first moment of area defuzzification:

$$xo_k = \frac{\sum_{p=1}^{2^m} \mu_p \, xo_{p,k}}{\sum_{p=1}^{2^m} \mu_p}. \qquad (19)$$

Membership functions can be defined arbitrarily and different interpolators may be obtained by appropriate choice of membership function. In this paper the following membership functions are explored: nearest neighbor, linear, cubic, Lanczos and spline.
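To illustrate the inference step, the following is a minimal Python sketch of interpolation inside a single unit hypercube, using linear kernel membership functions, the product t-norm of Eq (18), and the first-moment defuzzification of Eq (19); with these choices the result reduces to ordinary multilinear interpolation. The corner-value dictionary, function names, and toy data are assumptions made for the example, not the paper's implementation.

```python
import numpy as np
from itertools import product

def linear_kernel(d):
    """Local membership: 1 at the boundary point, decaying linearly to 0 at unit distance."""
    return np.clip(1.0 - np.abs(d), 0.0, 1.0)

def flhi_interpolate(corner_values, x, kernel=linear_kernel):
    """Interpolate at x in [0,1]^m from the 2^m corner values of one hypercube.

    corner_values: dict mapping corner tuples such as (0, 1) to output vectors.
    """
    x = np.asarray(x, dtype=float)
    numerator, denominator = 0.0, 0.0
    for corner, value in corner_values.items():
        # Global membership: product t-norm of the per-dimension local memberships (Eq (18)).
        mu = np.prod([kernel(x[j] - corner[j]) for j in range(len(corner))])
        numerator = numerator + mu * np.asarray(value, dtype=float)
        denominator += mu
    return numerator / denominator          # first moment of area defuzzification (Eq (19))

# Example: a 2-D hypercube (4 corners) with scalar outputs.
corners = {c: np.array([float(c[0] + 2 * c[1])]) for c in product((0, 1), repeat=2)}
print(flhi_interpolate(corners, [0.5, 0.5]))    # -> [1.5], the multilinear value
```

Swapping the kernel function (e.g., for a cubic or Lanczos kernel) changes the interpolation character without touching the inference logic, which is the design point the paper makes about membership functions acting as kernels.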
Inverse Interpolation. Inverse interpolation in FLHI occurs as described in Algorithm 5 and begins by searching which regions may output the desired interpolation set xo. If the desired output set xo fits in maximum and minimum output coordinate boundaries for a region, by the intermediate value theorem this region may produce the desired output. This process can be computationally sped up if, in FLHI setup, maximum and minimum output coordinate boundaries are determined for each region as a priori knowledge.
It is important to note that choices of t-norm and membership functions that lead to a well-defined logical hypercube space, where global memberships are bounded in [0, 1], limit this process to the evaluation of maximum and minimum values of xo for each boundary point. However, in ill-behaved logical hypercube spaces, particularly for parametric membership functions such as cubic, a search must be performed to determine maximum and minimum values for each region.
When a region is determined as being able to interpolate the desired output coordinate xo, a root search procedure is performed in terms of xi. This is defined as the minimization of the sum of squared residual errors between the interpolation in this region and the expected output coordinates:

$$\mathbf{xi}^{*} = \underset{\mathbf{xi} \in [0,1]^{m}}{\arg\min} \; \sum_{k=1}^{n} \left[ FLHI_k(\mathbf{xi}) - xo_k \right]^{2}. \qquad (20)$$
Algorithm 5: FLHI Interpolate Inverse
Input: a set of regions, a set of desired output coordinates xo
Output: a set of input coordinates xi
  // initialize as empty set
  1 xi = ∅
  // check each region to see if its maximum and minimum outputs contain xo; when continuous t-norm and membership functions are applied, the interpolated space is continuous and existence of xo is guaranteed by the intermediate value theorem.
  2 for each region in regions do
  3   if region_output_contains(region, xo) then
      // solve a minimization problem on variable xi using the objective function, the sum of squared residuals.
  4     x = minimize(objective_function, region.hypercube, xo)
      // convert hypercube coordinates to original problem coordinates.
  5     x = convert_to_problem_coordinates(x)
      // add to set of solutions.
  6     xi.add(x)
  7   end
  8 end

Multiple regions may contain the desired output coordinates. As such, inverse interpolation is a multivalued function and may return multiple sets of solutions.
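A minimal sketch of this inverse step is shown below, assuming SciPy is available: every region whose output range brackets the target is screened, the sum of squared residuals of Eq (20) is minimized over the unit hypercube with a bounded local optimizer, and all acceptable solutions are collected, so the result can be multivalued. The region interface (`out_min`/`out_max`), tolerance, and toy example are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize
from types import SimpleNamespace

def flhi_inverse(regions, xo_target, interpolate, n_dims, tol=1e-10):
    """Collect hypercube coordinates xi in [0,1]^m whose interpolated output matches xo_target.

    regions: objects exposing .out_min and .out_max (assumed interface) usable by interpolate(region, xi).
    """
    xo_target = np.asarray(xo_target, dtype=float)
    solutions = []
    for region in regions:
        # Screening step: the region's output range must bracket the target (intermediate value theorem).
        if np.any(xo_target < region.out_min) or np.any(xo_target > region.out_max):
            continue
        objective = lambda xi: float(np.sum((interpolate(region, xi) - xo_target) ** 2))
        res = minimize(objective, x0=np.full(n_dims, 0.5),
                       bounds=[(0.0, 1.0)] * n_dims, method="L-BFGS-B")
        if res.fun < tol:
            solutions.append((region, res.x))   # multivalued: one candidate per admissible region
    return solutions

# Toy 1-D example: one region whose interpolant behaves like w = u^2 on the unit interval.
region = SimpleNamespace(out_min=np.array([0.0]), out_max=np.array([1.0]))
interp = lambda reg, xi: np.array([float(xi[0]) ** 2])
print(flhi_inverse([region], [0.25], interp, n_dims=1))     # xi close to [0.5]
```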
Control Algorithm Summary
This is the set of steps which summarize the proposed control approach in practice:
• Set up a DMC controller with the desired parameters N_y, N_u, N, λ and a step model from the linear block of the Hammerstein model;
• Set up a FLHI interpolant with nonlinearity data from the static nonlinearity block of the Hammerstein model;
• At each control instant, obtain the process output and calculate the necessary linear control action w(t), considering constraints on the upper and lower bounds of the nonlinearity;
• Apply FLHI inverse interpolation with the desired membership function on w(t) to produce the desired control signal u(t);
• In case of multiple solutions from inverse interpolation, choose the one which minimizes the control variation Δu(t);
• The control loop is repeated as necessary.
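The per-instant logic of this loop can be sketched as follows; `dmc_increment` and `flhi_inverse_w` are hypothetical helper names standing in for the DMC control-increment computation and the FLHI inverse model, and the toy nonlinearity in the usage example is not from the paper.

```python
def control_step(dmc_increment, flhi_inverse_w, y, y_ref, w_prev, u_prev, w_bounds):
    """One control instant of DMC with FLHI output compensation (illustrative sketch)."""
    # 1. The linear controller computes the desired intermediate signal w(t),
    #    saturated to the bounds of the static nonlinearity.
    dw = dmc_increment(y, y_ref)
    w = min(max(w_prev + dw, w_bounds[0]), w_bounds[1])

    # 2. Invert the static nonlinearity; this may return several candidate inputs u.
    candidates = flhi_inverse_w(w)

    # 3. Pick the candidate that minimizes the control variation |u - u(t-1)|.
    u = min(candidates, key=lambda c: abs(c - u_prev))
    return w, u

# Toy usage with a known nonlinearity w = u^3, whose inverse is the (single-valued) cube root:
w, u = control_step(dmc_increment=lambda y, r: 0.5 * (r - y),
                    flhi_inverse_w=lambda w: [abs(w) ** (1 / 3) * (1 if w >= 0 else -1)],
                    y=0.0, y_ref=1.0, w_prev=0.0, u_prev=0.0, w_bounds=(-8.0, 8.0))
print(w, u)     # 0.5 and roughly 0.794
```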
Algorithm 6: FLHI Objective Function
Input: a set of test input coordinates xi, a set of expected output coordinates exo, a hypercube
Output: sum of squared residuals error between exo and the evaluation of FLHI at xi
  // interpolate hypercube in desired position.
  1 xo = interpolate_hypercube(hypercube, xi)
  // output sum of squared residuals.
  2 return Σ_k (xo_k − exo_k)²
Considerations on robustness of output compensation-Multiplicative gain uncertainty
Cancellation between the static nonlinearity function f and the inverse model f⁻¹ of FLHI is given by:

$$w(t) = f\left( f^{-1}\left( w^{*}(t) \right) \right), \qquad (21)$$

such that in ideal conditions w(t) = w*(t). However, in practice, the accuracy of FLHI's inverse model is not perfect but rather an approximation of missing information from the point cloud. This model uncertainty can be represented in Fig 3 by re-arranging the blocks as in Fig 5, where Δ_m is an input gain uncertainty:

$$w(t) = \Delta_m \, w^{*}(t), \qquad (22)$$

where ideally the input gain uncertainty Δ_m = 1.

Output Compensation Stability Theorem. A system controlled by output compensation is asymptotically stable if the following necessary and sufficient condition is met:

$$\|\Delta_m\|_\infty < GM, \qquad (23)$$

where GM and PM are respectively the gain margin and phase margin of the open loop gain H = DMC · L, and $\|\Delta_m\|_\infty$ is the H-infinity norm of the input gain model uncertainty.

Proof. A feedback, closed loop, system is asymptotically stable if the Bode-Nyquist stability criterion is met:

$$\left| \Delta_m \, H(j\omega_{pc}) \right| < 1. \qquad (24)$$

Considering GM = 1/|H(jω_pc)|, where ω_pc is the phase crossing frequency and H is the open loop gain of an arbitrary system, Eq (24) becomes:

$$\left| \Delta_m \right| < \frac{1}{\left| H(j\omega_{pc}) \right|} = GM. \qquad (25)$$

Considering the input gain uncertainty Δ_m is itself a static nonlinear gain that does not depend on frequency response, its worst case is given by its H-infinity norm $\|\Delta_m\|_\infty$ and H = DMC · L:

$$\|\Delta_m\|_\infty \left| DMC(j\omega_{pc}) \, L(j\omega_{pc}) \right| < 1. \qquad (26)$$

Finally, by arranging terms, Eq (23) is obtained.

Measuring worst case model error. A definition of stability with output compensation control is given by Eq (23), considering the worst case model error $\|\Delta_m\|_\infty$ as a robustness metric in relation to the gain margin. The model absolute relative error (MARE) can be represented by:

$$MARE(x) = \frac{\left| f(x) - \hat{f}(x) \right|}{\left| f(x) \right|}; \qquad (29)$$

then, the worst case model error becomes a maximization problem:

$$\max_{lb \le x \le ub} \; MARE(x), \qquad (30)$$

where ub and lb are respectively the upper and lower bounds of the input space x_i. In practice the true nonlinearity NL is unknown, but it is either mathematically or computationally modeled. In cases where only a point cloud from a real data set is available, this approach can be useful to measure the trade-off between a simple and a more complex model.
Given the locality nature of FLHI, originating from its regions of interpolation, local optimization techniques are neither capable nor satisfactory in solving Eq (30). Global search methods are necessary, such as [29].
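One simple (assumed, brute-force) way to estimate the worst-case model error of Eq (30) is an exhaustive grid search over the input bounds, as sketched below; the grid resolution, the toy nonlinearity, and the reading of MARE as absolute error relative to the true nonlinearity are illustrative assumptions, and in practice the authors point to global search methods instead.

```python
import numpy as np

def worst_case_mare(true_nl, model_nl, lb, ub, points_per_dim=201):
    """Brute-force estimate of the worst-case MARE over the input box [lb, ub]."""
    grids = [np.linspace(l, u, points_per_dim) for l, u in zip(lb, ub)]
    mesh = np.meshgrid(*grids, indexing="ij")
    worst = 0.0
    for x in zip(*(m.ravel() for m in mesh)):
        fx = true_nl(np.array(x))
        fhat = model_nl(np.array(x))
        if np.abs(fx) > 1e-12:                          # skip points where the true gain vanishes
            worst = max(worst, float(np.abs(fx - fhat) / np.abs(fx)))
    return worst

# Toy 1-D example: a cubic nonlinearity approximated by its linearization at the origin.
print(worst_case_mare(lambda x: x[0] + 0.1 * x[0] ** 3,
                      lambda x: x[0], lb=[-1.0], ub=[1.0]))    # about 0.0909
```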
Performance metrics
In this section a performance metric is proposed to evaluate the effectiveness of different membership functions in FLHI models and their effects on set point tracking and control action. A fair assessment can be realized when the performance metric mimics the cost function Eq (1) of the model based predictive controller.
Set point tracking is evaluated by the Integral Squared Error (ISE) of all outputs and output references:

$$ISE = \sum_{t} \sum_{i=1}^{n} \left[ y_{r,i}(t) - y_i(t) \right]^2. \qquad (31)$$

Control efforts are measured by the Integral Squared Variation of Control (ISVC):

$$ISVC = \sum_{t} \sum_{j=1}^{m} \left[ \Delta u_j(t) \right]^2. \qquad (32)$$

The last performance metric aims to mimic DMC's cost function Eq (1), and its purpose is to provide an overall assessment of results:

$$J = ISE + \lambda \, ISVC. \qquad (33)$$

A final remark of caution is presented in regards to the analysis of results. All results include an ideal case where only the linear process is controlled, disregarding nonlinearities. This ideal linear case is included to provide an estimate of optimal set point tracking and control variation; however, the ISVC and J metrics for ideal cases consider the linear control signal w(t) instead of the nonlinear control signal u(t), which is nonexistent in these scenarios. Therefore, disparities can be observed regarding the ISVC and J metrics of ideal cases in contrast to nonlinear cases, since different control magnitudes are involved due to the effects of static nonlinearities.
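A minimal sketch of these three indices, assuming sampled trajectories stored as NumPy arrays with time along the first axis and reading the overall metric as J = ISE + λ·ISVC:

```python
import numpy as np

def ise(y_ref, y):
    """Integral (sum) of squared set-point tracking error over all outputs."""
    return float(np.sum((np.asarray(y_ref, dtype=float) - np.asarray(y, dtype=float)) ** 2))

def isvc(u):
    """Integral (sum) of squared control variation over all inputs."""
    du = np.diff(np.asarray(u, dtype=float), axis=0)
    return float(np.sum(du ** 2))

def overall_index(y_ref, y, u, lam):
    """Overall metric mimicking the DMC cost of Eq (1): J = ISE + lam * ISVC."""
    return ise(y_ref, y) + lam * isvc(u)

# Toy trajectories: one output and one input over five samples.
y_ref = np.ones((5, 1))
y = np.array([[0.0], [0.5], [0.8], [0.9], [1.0]])
u = np.array([[0.0], [0.4], [0.6], [0.7], [0.7]])
print(ise(y_ref, y), isvc(u), overall_index(y_ref, y, u, lam=1.0))   # 1.3 0.21 1.51
```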
Results
In this section results are presented for three case studies in order to demonstrate the proposed method. The first study case regards a SISO system described in [18,30] where the nonlinearity is a fourth order polynomial. The second is a MISO system described in [19]; its input nonlinearities are described by third order uncoupled polynomials. The third is a MIMO system described in [20,31]; this system exhibits highly coupled input nonlinearities with exponential and polynomial terms.
For all case studies FLHI is used to model the system's nonlinear portion and its inverse for application in output compensation; DMC is then designed considering the model's linear dynamics. Control action and process output dynamics are presented for all study cases, as well as comparisons of the effects of different membership functions on DMC's cost function of Eq (1) and its related performance indices ISE and ISVC. Results include the ideal scenario, where nonlinearities are ignored and only the linear system is controlled, as well as different membership functions such as nearest neighbor, linear, cubic, Lanczos and spline.
SISO Study Case
A distillation column is modeled as a Hammerstein system and presented in [18]. In the original work [18], two models are given, one with a third order polynomial for the input nonlinearity and another with a fourth order polynomial. Both models exhibit a first order linearity.
A typical application of output compensation control [8] in this scenario would focus on the third order polynomial model since this can be trivially inverted and, being of odd order, guarantees at least one real solution. Despite being more accurate, the fourth order polynomial presents a problem in practical applications since its analytical inversion could lead to imaginary roots and an absence of a feasible solution.
Our proposed method in this paper does not suffer from the problem of imaginary roots since model inversion occurs in the problem's universe of discourse. As such, parity of model order is not a problem for our approach. From [18,30], the fourth order polynomial Hammerstein model is:

$$w(t) = 1.04\,u(t) - 14.11\,u(t)^2 - 16.72\,u(t)^3 + 562.75\,u(t)^4.$$

Control results for FLHI with linear membership functions are presented in Fig 7. A comparison of the effects of different membership functions considering DMC's cost function is given in Table 1. For this study case, FLHI with the nearest neighbor membership function fails to achieve reference tracking for all set-points and exhibits control ringing on the first set-point.
Robustness considerations are given for this study case in order to compare the effects of different membership functions on robustness. Robustness indices for this study case are GM = 10.3221, PM = 72.253 and maximum sensitivity M_s = 1.226. Worst case model error considering the MARE metric for each membership function is presented in Table 2, demonstrating nearest neighbor as the worst model and linear as the best. The stability criterion of Eq (23) is well met for all membership functions.
MISO Study Case
A Hammerstein system is proposed and employed in [19] for benchmarking a model identification method. This system, given in Eq (35), presents two inputs with uncoupled third order polynomial nonlinearities. The linear subsystem presents coupling between inputs [32,33] according to its Relative Gain Array (RGA) in Eq (36).
Uncoupled model and control. When knowledge of the uncoupling of nonlinearities is available, it can greatly reduce computational needs. As such, in this first instance, the problem is modeled considering this knowledge. The control block diagram of this first approach is presented in Fig 8. A representation of the nonlinearity in Eq (35) is presented for this uncoupled case, and the corresponding performance indices are summarized in Table 3. Regarding computational load, FLHI required a point cloud of 41 points and a hypercube of unitary dimension.
Coupled model and control. When coupling of nonlinearities is present or unknown, FLHI can be employed to model multivariable relationships. In this second instance, FLHI is applied considering both input variables as coupled even though in practice they are uncoupled. The control block diagram of this approach is presented in Fig 12. A representation of the nonlinearity in Eq (35) is presented in Figs 13 and 14, considering coupling and a nonlinear multivariable model. Process output and control actions are depicted, with performance indices given in Table 4. Regarding computational load, FLHI required a point cloud of 1681 points and a two-dimensional hypercube, corresponding to the system's input dimensions.
MISO Remarks. Set point was tracked in all cases and no ringing or abnormal control actions were observed. High ISE and J indices are explained by the very large jumps between set points.
Results were as expected regarding identical process response and performance indices for both uncoupled and coupled scenarios. A more complex model in this study case does not bring any benefit since the same behavior of uncoupling is modeled in both instances. FLHI's increase in computational load in the second scenario is expected since the necessarily larger point cloud is a combination of both inputs and the internal hypercube mimics input dimensions, adding to the model's complexity.
Discussion
In this paper a novel method for modeling nonlinearities has been presented and applied to the problem of Hammerstein control using output compensation. The fuzzy logic hypercube interpolator, or FLHI for short, builds a fuzzy model based on point cloud data and allows model inversion. Model inversion enables FLHI to be applied directly as an output compensator, transforming the nonlinear control problem into a pseudo-linear problem. Output compensation control, as in [8,16], is not related or in any way similar to linearizing control [34,35].
Results are presented for a SISO process, a MISO process with uncoupled and coupled cases, and a MIMO process. These results include the ideal scenario, where only the linear system is controlled, and practical scenarios where the nonlinear Hammerstein system is controlled. In the practical cases FLHI is applied using different membership functions such as Nearest Neighbor, Linear, Cubic, Lanczos and Spline. These results indicate the applicability of FLHI both in modeling Hammerstein nonlinearities and in output compensation via its model inversion.
FLHI is currently limited to regular or semi-regular grid point clouds and injective data. Multivalued, i.e. non-injective, data and irregular grids are not automatically supported by the current method. Multivalued data can be used with FLHI, but it must first be manually separated into injective sets. Extrapolation is currently unsupported, but the method can be trivially extended to provide it.
Future work includes, but is not limited to: i) support for irregular grids, as a possibility based on kd-trees; ii) support for multivalued data, using branch cuts; iii) study and analysis of multiplicative uncertainties, modeling errors, created by FLHI in its application as an output compensator; iv) investigation of FLHI in modeling unknown Hammerstein nonlinearities; v) investigation of the applicability of FLHI in other control situations where model inversion is necessary; vi) study of different membership functions, in particular parametric ones similar to cubic; vii) study of different fuzzy t-norms, logical conjunction between membership functions. | 7,225.4 | 2016-09-22T00:00:00.000 | [
"Computer Science"
] |
Emergence of complex structures from nonlinear interactions and noise in coevolving networks
We study the joint effect of the non-linearity of interactions and noise on coevolutionary dynamics. We choose the coevolving voter model as a prototype framework for this problem. By numerical simulations and analytical approximations we find three main phases that differ in the absolute magnetization and the size of the largest component: a consensus phase, a coexistence phase, and a dynamical fragmentation phase. More detailed analysis reveals inner differences in these phases, allowing us to divide two of them further. In the consensus phase we can distinguish between a weak or alternating consensus (switching between two opposite consensus states), and a strong consensus, in which the system remains in the same state for the whole realization of the stochastic dynamics. Additionally, weak and strong consensus phases scale differently with the system size. The strong consensus phase exists for superlinear interactions and it is the only consensus phase that survives in the thermodynamic limit. In the coexistence phase we distinguish a fully-mixing phase (both states well mixed in the network) and a structured coexistence phase, where the number of links connecting nodes in different states (active links) drops significantly due to the formation of two homogeneous communities of opposite states connected by a few links. The structured coexistence phase is an example of emergence of community structure from not exclusively topological dynamics, but coevolution. Our numerical observations are supported by an analytical description using a pair approximation approach and an ad-hoc calculation for the transition between the coexistence and dynamical fragmentation phases. Our work shows how simple interaction rules including the joint effect of non-linearity, noise, and coevolution lead to complex structures relevant in the description of social systems.
Introduction
Coevolving or adaptive network models 1 provide a better representation of real-world systems than static or evolving networks. Most empirical networks display both network dynamics (evolution of the network's topology) as well as dynamics of the state of the nodes 2, 3 . Moreover, a nontrivial feedback loop between these aspects renders a simple sum of effects analyzed separately incomplete. Adaptive mechanisms coupling network and node state dynamics give rise to new phenomena absent when the coevolution process is not taken into account [4][5][6][7][8][9][10][11][12][13][14] . Coevolution models incorporate microscopic assumptions in better agreement with empirical observations, and they also produce new macroscopic results.
Another essential feature of many real-world systems is the non-linearity associated with non-dyadic interactions. It is often assumed in network models that an interaction occurs pairwise, only between two selected vertices. From a single node's point of view this means selecting one of its neighbors at random for the interaction. This leads to a linear relation between the number of neighbors in a given state and the probability of choosing one of them. However, in non-dyadic or group interactions, linearity is lost 15 . In contagion or spreading processes, the difference between these two types of interaction goes under the name of simple vs. complex contagion [16][17][18] . A third crucial empirical element in many dynamical processes on networks is noise. This is especially important in social systems, where noise is inevitable 19,20 . It can manifest itself on various levels. First, people choose other people to interact with at random. This randomness can take different forms; nevertheless, the structure's evolution is never hard-coded. But the most fundamental part of randomness probably lies within individual choices. For example, exerting exactly the same influence on two people's opinions, we cannot be sure of the outcome. This mechanism is sometimes referred to as non-conformism 21 . It reflects the ability of agents to change state independently of the states of their neighbors. It is often a model parameter that needs to be calibrated to reproduce empirical data 22 .
Figure 1. Schematic illustration of update rules in the nonlinear coevolving voter model with noise. Every node is in a state +1 or −1, indicated by orange and blue colors. The active node i is chosen randomly. Then with probability (a_i/k_i)^q an interaction occurs and one of the active links (the one to node j) is selected randomly. With probability p the link (i, j) is rewired to the link (i, k), where k is a random node in the same state as i. With probability 1 − p the focal node i copies the state of node j. At the end of the time step, regardless of what happened before, the active node draws a random state with probability ε.
In this paper we aim at exploring the joint effect of these three important aspects -the coevolution of network structure and
node states, the non-linearity of interactions and the noise -on the behavior of the system. As the framework we choose the simple voter model 23,24 . With a binary state it is often used as a model of opinion dynamics, but different forms and extensions of the model have been fruitful in explaining empirical observations in fairly distinct phenomena such as electoral processes 22 , stock market 25 or online communities 26 . The consequences of coevolution [27][28][29] , noise [30][31][32][33] , and non-linearity 15 have been already considered separately in the voter model. The joint effect of these aspects, however, turns out to be more complex than a mere superposition of the results obtained so far. The coevolving voter model (CVM) 27 was among the pioneers in introducing adaptive mechanisms in general. In the standard voter model, node state dynamics follows an imitation rule which is here coupled with link rewiring, introducing the coevolution. This leads to a network fragmentation transition. The effect of noise in the CVM 34 prevents the existence of absorbing configurations so that the different phases of the system are described by dynamically active stationary states. These include a striking new dynamical fragmentation phase. Additionally, a fully-mixing phase is found, as could be expected for large noise levels. Nonlinear interactions have been also considered in the CVM 35,36 . Non-linearity changes the stability of fixed points in the voter model dynamics, leading to a new dynamically trapped coexistence phase. Finally, the joint effects of noise and non-linearity in the voter model have also been considered 37 . It was found that non-linearities transform a finite size transition known from the noisy voter model into a bona fide phase transition that survives in the thermodynamic limit. Here we introduce a CVM in which noise and non-linearities are jointly taken into account.
The model
Our noisy and nonlinear CVM is defined by its time evolution over discrete time steps as follows. First a random graph is generated and every node is assigned a state s_i ∈ {−1, +1} at random 1 . In each time step a node i is chosen at random; we call it the active or focal node. Then, with probability (a_i/k_i)^q ≡ ρ_i^q an interaction occurs, where k_i is the degree of the focal node i, a_i is the number of neighbors of node i in the opposite state, and q is the non-linearity parameter of the model. If an interaction occurs, one of the a_i neighbors in a different state is chosen randomly; call it j. Then, with probability p a link rewiring is performed and with the complementary probability 1 − p a state copying. When rewiring, node i cuts the connection to node j and creates a new link with a randomly selected node in the same state (if there is no such node, nothing happens). When copying the state, node i replicates the state of node j, i.e. s_i → s_j. At the end of the time step, regardless of what happened before, the active node draws a random state with probability ε. Note that this is equivalent to flipping the current state with probability ε/2. The algorithm of the model is illustrated in Figure 1. Our model has three parameters, namely the noise rate ε, the plasticity p and the non-linearity q. The parameter p is a network plasticity parameter measuring the ratio of time scales of node dynamics and network dynamics. The non-linearity parameter q measures the nonlinear effect of local majorities: q = 1 corresponds to the ordinary voter model with a mechanism of random imitation, while q < 1 indicates a probability of imitation above random imitation and q > 1 a probability below random imitation. The ordinary voter model corresponds to p = ε = 0 and q = 1, while p = ε = 0 with arbitrary q corresponds to the nonlinear voter model and p = 0 with q = 1 to the noisy voter model. The CVM is obtained for ε = 0 and q = 1, the noisy CVM for q = 1, and the nonlinear CVM for ε = 0.
Our simulations are run from an initial random network with N nodes and M links, or average degree µ = Σ_i k_i/N = 2M/N, and with random initial conditions for the node states, until a stationary state or a frozen configuration is reached (the latter being possible only for ε = 0).
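The update rule described above can be illustrated with a short simulation sketch. This is not the authors' code; the graph size, mean degree and parameter values below are illustrative only, and the rewiring target is simply drawn from the same-state nodes not already connected to the active node.

```python
# Minimal sketch of one time step of the noisy nonlinear coevolving voter
# model described above (illustrative parameters, not the original code).
import random
import networkx as nx

def step(G, state, p, q, eps):
    """Interaction with prob. (a_i/k_i)^q, then rewiring (p) or copying (1-p),
    then noise: the active node draws a random state with probability eps."""
    i = random.choice(list(G.nodes))
    k_i = G.degree(i)
    if k_i > 0:
        opposite = [j for j in G.neighbors(i) if state[j] != state[i]]
        a_i = len(opposite)
        if a_i > 0 and random.random() < (a_i / k_i) ** q:      # interaction occurs
            j = random.choice(opposite)                          # pick an active link
            if random.random() < p:                              # rewiring
                same = [k for k in G.nodes
                        if state[k] == state[i] and k != i and not G.has_edge(i, k)]
                if same:                                         # else nothing happens
                    G.remove_edge(i, j)
                    G.add_edge(i, random.choice(same))
            else:                                                # state copying
                state[i] = state[j]
    if random.random() < eps:                                    # noise
        state[i] = random.choice([-1, +1])

# Example run: random graph with N = 250 nodes and mean degree mu = 4.
N, mu = 250, 4
G = nx.gnm_random_graph(N, N * mu // 2)
state = {n: random.choice([-1, +1]) for n in G.nodes}
for _ in range(10000):
    step(G, state, p=0.4, q=2.0, eps=0.01)
```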
Results
We explore the space of possible values of the parameters (p, q, ε) by means of computer simulations and analytical approximations. In order to describe the system, we adopt typical order parameters, such as the magnetization m = Σ_i s_i/N, the size of the largest network component S, and the density of active links ρ = Σ_i a_i/(2M). By an active link we mean a connection between two nodes in opposite states. All these values are normalized to fit the range [0, 1]. Additionally, we define a new indicator, the community overlap (ov.), in order to be able to distinguish structural changes. The community overlap is the fraction of nodes assigned to the same community both by their state and by the community detection algorithm applied to the network 38 . Consequently, if a given node is assigned to the same community in both cases it increases ov. by 1/N (where the denominator comes from normalization).
Table 1. Average values of the order parameters in different phases. The border between phases A and B is given by |m| = 0.5; the border between phases B1 and B2 is given by ov. = 0.75, which is approximated by ρ = 0.1; the border between phases B and C is given by S = 0.75. The difference between phases A1 and A2 can be observed in the dynamical behavior of the magnetization and in the system size scaling (see Figures 2 and 7).
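The order parameters defined above are straightforward to compute from a simulated configuration. The helper below is a sketch assuming a networkx graph G and a dict `state` mapping each node to ±1 (names are illustrative).

```python
# Sketch: magnetization, density of active links and largest-component size
# for a graph G with node states in `state` (values ±1).
import networkx as nx

def order_parameters(G, state):
    N = G.number_of_nodes()
    M = G.number_of_edges()
    m = sum(state.values()) / N                                   # magnetization
    active = sum(1 for u, v in G.edges if state[u] != state[v])   # active links
    rho = active / M if M > 0 else 0.0                            # sum_i a_i / (2M)
    S = max(len(c) for c in nx.connected_components(G)) / N       # largest component
    return m, rho, S
```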
Phase diagram
We numerically study the p-ε phase diagram for three different values of the q parameter -the sublinear case q = 0.5, the ordinary linear case q = 1, and the superlinear case q = 2. These phase diagrams are shown in Figure 2 for two different network sizes. Obviously, for any finite amount of noise in the system a frozen configuration does not exist, and every phase is described by a characteristic dynamical stationary state. We can distinguish three general phases in the model. Phase A, indicated by the red area in the figure, is a consensus phase. In this range of parameters the system stays in a consensus state for most of the time, i.e. the magnetization is close to ±1 and the network is connected, having a single large component S = 1 and a small number of active links ρ ≈ 0. If we increase the noise rate ε or the plasticity p sufficiently, we obtain phase B, indicated by the white area in Figure 2 and referred to as a fully-mixing phase in previous work for the noisy CVM 34 . In this phase the magnetization drops to zero, m = 0, hence there is no consensus in the system any more. In addition, the network stays connected most of the time, S ≈ 1. We refer to this phase as a coexistence phase. As we will see, phase B is not homogeneous in its whole range of parameters and it can be either fully-mixing (phase B1) or structured (phase B2), which is indicated by different values of the density of active links ρ and the community overlap ov. Finally, for values of the rewiring probability above
the critical point p_c of the nonlinear CVM 35 , and relatively small noise rates, phase C arises. It is marked by the blue area in the figure. In this region we find dynamical fragmentation -the network consists of two separate components with nodes in each one being in opposite states, so that m ≈ 0, S ≈ 0.5 and ρ = 0. It is possible, however, that the two network components get connected intermittently in the stationary state due to noise and random rewiring, creating again a single component network with m ≈ 0 and a small number of active links ρ ≈ 0. Phase C can be described as dynamical switching between these two arrangements. Values of all analyzed quantities for every phase are summarized in Table 1.
For the linear case (q = 1) phases A and C exist only for a finite size of the network, and the extent of these phases in the parameter space decreases with a growing number of nodes. For the sublinear scenario q < 1, we can see in Figure 2 that the same holds for phase C, while phase A does not exist at all. The only point where the average absolute magnetization rises slightly is at p_c and for ε ≈ 0, but its maximal value is only about 0.3. This rise is due to higher fluctuations close to the transition point. On the other hand, phase C persists for noise rates up to twice as large as in the linear scenario.
In the superlinear case (q = 2) phase C is much smaller in parameter space and disappears faster with growing system size. But phase A prevails for much larger noise than in the linear case q = 1. We observe phase A even for ε larger by almost two orders of magnitude. Additionally, the system size scaling is different for the superlinear scenario. Indeed, non-linearity has significant influence on the nature of phase A. For q < 1 it does not exist, for q = 1 it exists only in finite networks, and for q > 1 phase A persists in the thermodynamic limit.
A closer look at the phases
In order to better understand the behavior of the system and the differences between phases, we analyse horizontal cross-sections of Figure 2 and single-run trajectories, which are presented in Figures 3-6 for the linear, sublinear and superlinear cases. In panel (a) of each of them, a phase diagram with respect to the rewiring probability p for a particular value of ε is presented. The values of the noise rate are chosen so as to show three phases in one panel. For a horizontal cross-section of the full phase diagram it is difficult to capture all phases. Therefore, areas of the middle phase can be narrow, but different values of the order parameters can still be distinguished. Panels (b)-(d) show typical time traces of the order parameters in the different phases.
For the sublinear case (q = 0.5) we can see the differences between phase B1 (fully-mixing, Figure 3b) and phase B2 (structured, Figure 3c). Phases B1 and B2 both have zero average absolute magnetization |m| ≈ 0, but we can distinguish a region with a high density of active links (B1) and one with a small density of active links (B2). Phase C also has a low density of active links, but the largest network component switches between S = 1 and S = 0.5, giving an average value S ≈ 0.5, whereas phase B2 on average stays connected.
Results for the linear case (q = 1) are shown in Figure 4. Phase A is characterized by a magnetization which tends to stay at one of the consensus states, but which can switch from −1 to +1, or the other way around, during the time evolution. Therefore, |m| ≈ 1 while the time-averaged magnetization is m ≈ 0. To distinguish it from the superlinear case, where m ≈ ±1, we call this phase A1. We also observe that for q = 1 in phase B2 (Figure 4c) the network can fragment close to the transition line, although it remains a single-component network most of the time. Figure 5 and Figure 6 correspond, respectively, to small and large noise rates in the superlinear case (q = 2). In this scenario the consensus phase A prevails for much larger noise rates. In panels (b) of Figures 5 and 6 we can see how the system behaves in phase A for q = 2. It quickly reaches a consensus state, either m = 1 or m = −1, and remains at this value of magnetization. Therefore, |m| ≈ 1 and, in contrast to the linear case, the time average also gives m ≈ ±1. To account for this difference we call the consensus phase A2 for q > 1.
The difference between the consensus phase A1 and A2 is also clearly visible from the probability distribution of the magnetization in a given realization of the dynamical process ( Figure 7). In the linear case there is a bimodal distribution for the magnetization with two equal peaks at values +1 and −1 (Figure 7d), while for the superlinear case there is a single peak for a value of the magnetization at either of the boundary values +1 or −1, depending on the run (Figure 7g). For q = 2, once the consensus is reached the system stays there with minor fluctuations (phase A2), while for q = 1 the system goes back and forth between opposite consensus states (phase A1). Furthermore, phase A2 is robust against finite-size fluctuations, while phase A1 disappears in the thermodynamic limit 34 (see Figure 2).
The distribution of the magnetization gives additional insight into the phase diagram. The fact that phase A does not exist in the sublinear case (q < 1) is reflected in a distribution with a single peak at 0 for all values of ε (Figure 7a-c). However, the variance of the distribution takes its maximal value for noise going to zero and p = p_c, i.e. close to the transition point between the coexistence and fragmentation phases in the nonlinear CVM 35 . A different form of the transition between phases A and B is also observed for the linear and superlinear cases. For q = 1 there is a flat distribution at the transition point (Figure 7e), while a trimodal distribution is found for q = 2 (Figure 7h). A trimodal magnetization distribution was reported before in the noisy voter model on a static network 37 , but only for a non-linearity parameter equal to 5 or larger. With coevolution, trimodality is here obtained already for q = 2.
Phase C can be defined in terms of the size of the largest component. In phases A and B it is equal to the size of the whole network (S = 1), while phase C is characterized by a dynamical fragmentation into two components of similar size and opposite states. Due to noise, expressed in random changes of node states and rewiring, the components are constantly being reconnected and disconnected. This can be examined by looking at the trajectory or at the probability distribution of the size of the largest component, which is presented in Figure 8.
Community structure
Phase B is generally defined by zero average magnetization, also zero average absolute magnetization, and by the existence of one large network component. Nonetheless, this description leaves room for different possible configurations. Analysis of the trajectories showed that the density of active links can vary within phase B, but the question is whether this is a sign of a topological change. In the linear case (q = 1) only the fully-mixing phase was reported 34 , with nodes of states +1 and −1 well mixed inside a random graph. We refer to this configuration as phase B1. On the other hand, we can satisfy the conditions for phase B while having two evident communities, highly connected internally and of opposite states, with only a few links bridging them. There is still zero magnetization and one large network component in such a configuration, which is characterized by a small number of active links. We call this phase B2. The difference between phases B1 and B2 is clearly seen in Figure 9.
Although the difference between phases B1 and B2 can be seen in the density of active links, a closer look at Figure 9 suggests that phase B2 has well defined topological communities. Therefore, we propose an alternative quantitative measure of the difference between phases B1 and B2: the overlap between state communities -defined by the state of the nodes -and structural communities found by a community detection algorithm. We use a classical algorithm from 38 , but the result does not differ much when using other algorithms. Each node is assigned to a state community by its state and to a structural community by the algorithm's result. The relative overlap between these two assignments is a new quantitative indicator of the phase of the system 2 . For a random assignment or no community structure the overlap will be close to 0.5, which is the situation in phase B1. For phase B2 the overlap will be close to 1. This means that our dynamical coevolving model generates clear topological communities emerging from local interactions involving only the states of nodes. This result may potentially explain the process of community formation in social networks, where such structures are especially common 39 .
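A rough sketch of this overlap indicator is given below. It uses modularity-based community detection as a stand-in for the algorithm of the cited reference (the original choice of algorithm is not reproduced here), keeps the two largest structural communities, and tries both ways of matching them to the two state groups.

```python
# Sketch of the community-overlap indicator: fraction of nodes whose state
# community matches their detected structural community. Nodes outside the
# two largest detected communities are counted as mismatches.
import networkx as nx
from networkx.algorithms import community

def community_overlap(G, state):
    parts = sorted(community.greedy_modularity_communities(G), key=len, reverse=True)[:2]
    labels = {}
    for idx, part in enumerate(parts):
        for node in part:
            labels[node] = idx
    N = G.number_of_nodes()
    # Try both assignments of state (+1/-1) to the two structural communities.
    match_a = sum(1 for n in labels if (state[n] == +1) == (labels[n] == 0)) / N
    match_b = sum(1 for n in labels if (state[n] == +1) == (labels[n] == 1)) / N
    return max(match_a, match_b)
```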
Identifying phase boundaries
So far, we gave a description of different phases with different qualitative behavior. Transition lines between these different types of behavior are not clearly or unambiguously defined because every analyzed phase indicator changes value significantly across the phase diagram. We do not focus on properties of phase transitions in this work, but rather on the properties of different emerging structures. Therefore we follow here a simple and pragmatic way to identify boundaries between different phases, employing previous approaches 34 . These boundaries have to be understood as an arbitrary way to deal with crossover system behavior.
Each of the order parameters or phase indicators analyzed has a continuous range of values (for a large N) and different phases are described by the extreme values of these quantities, allowed in this range. Therefore, a straightforward way of dividing the phase diagram into separate phases is to use the middle value, i.e. the value in the middle between the maximum and minimum of a given range. For example, the absolute magnetization is defined in the range [0, 1], taking a value close to 1 in phase A and a value close to zero in phase B. Hence, we identify the border between the two phases with |m| = 0.5.
Likewise, the size of the normalized largest network component takes values in the range [0.5, 1]. In phases A and B there is a single-component network (S = 1), while phase C is characterized by the dynamical fragmentation into two components of similar size and opposite states, so that S = 0.5. In this phase, due to noise, expressed in random changes of node states and rewiring, the components are constantly being reconnected and disconnected. We then identify 34 the border between phase B and phase C by the middle value S = 0.75. This is the line at which the network is half of the time fragmented and half of the time contains only one big component. This phase boundary can also be obtained from the probability distribution of the size of the largest component (Figure 8): at the transition line (panel b) the two peaks have the same area.
Finally, the boundary between phases B1 and B2 can be identified in terms of the overlap (ov.), which takes values in the range [0.5, 1]. There is a transition from values around 0.5 in B1 up to 1 in B2, and so we define the transition line at ov. = 0.75. However, this parameter is computationally very demanding. Alternatively, we can approximate the identification of the transition line by a small value of the density of active links, which we arbitrarily fix at ρ = 0.1.
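Taken together, these middle-value thresholds amount to a simple pragmatic labelling of a simulation run from its time-averaged order parameters, as in the sketch below (threshold values are the ones stated above; the function name and example inputs are illustrative).

```python
# Sketch of the pragmatic phase labelling: |m| = 0.5 for A/B, S = 0.75 for
# B/C, and rho = 0.1 as the proxy for ov. = 0.75 separating B1 and B2.
def classify_phase(abs_m, S, rho):
    if S < 0.75:
        return "C (dynamical fragmentation)"
    if abs_m > 0.5:
        return "A (consensus)"
    return "B2 (structured coexistence)" if rho < 0.1 else "B1 (fully mixing)"

print(classify_phase(abs_m=0.95, S=1.00, rho=0.02))  # -> A (consensus)
print(classify_phase(abs_m=0.05, S=0.98, rho=0.45))  # -> B1 (fully mixing)
print(classify_phase(abs_m=0.03, S=0.55, rho=0.01))  # -> C (dynamical fragmentation)
```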
Analytical predictions
The magnetization m and the density of active links ρ obey Equations 3 describing the dynamics of the system, as derived in the Methods section. Several fixed points (m*, ρ*) of these ordinary differential equations can be found depending on parameter values. However, not all of them are stable, and therefore not all of them are observed in numerical simulations. To analyze the stability of these fixed points, we consider flow diagrams of the dynamics in Figure 10. Note that from panel (a) to (b) only the value of q changes, emphasizing the difference between the sublinear and superlinear cases, while from panel (b) to (c) only the value of ε changes, emphasizing the noise effect in the superlinear case. A change in the non-linearity parameter q can reverse the stability of fixed points when going over the boundary value of 1. A change in the noise rate ε can additionally shift the position of fixed points, allowing for fixed points different from m = −1, 0, 1. Since the analytical description is derived in the thermodynamic limit, we do not observe stable fixed points at non-zero magnetization for q ≤ 1. This finding is consistent with the scaling behavior of the numerical results, indicating the existence of only phase B in the limit of a large number of nodes N. For the superlinear case, although the fixed points are placed at the same values of the magnetization m = −1, 0, 1 (for p < p_c, ε < ε_c), their stability is inverted -now only the solutions with |m| = 1 are stable, corresponding to phase A2. This is clearly seen in the analytical prediction for the phase diagram in Figure 11. These results, obtained in the thermodynamic limit, indicate that phase A2 should be observed for any N when q = 2, which is in agreement with our numerical results in Figure 2. A separate mean-field prediction of the disappearance of phase A1 in the thermodynamic limit was given by Diakonova et al. 34 for the linear CVM with noise. Additionally, phase C is not obtained in the thermodynamic limit. The non-linear noisy voter model on a fixed network (p = 0) has been thoroughly studied analytically 37 , showing that q = 1 is the bordering value between a unimodal and a bimodal distribution of the magnetization m. In other words, it is a transition line between existence and nonexistence of phase A. The agreement of our results with previous studies can be seen when analyzing the extreme value p = 0 in the phase diagrams (Figure 11a and b). It is also separately presented in Figure 11c. The transition to phase A, characterized by nonzero absolute magnetization |m|, exists for finite values of ε only when q > 1. In the Methods section we derive a formula for the critical value of the noise rate ε_c(p = 0) at which the system loses the consensus state, that is, the transition from phase A to phase B (Equation 7). This result is in agreement with the numerical solution of Equations 3 presented in Figure 11c, giving a critical value of the density of active links ρ_c = 1/3 ≈ 0.33 and ε_c = 2/11 ≈ 0.18 for the parameter values used in the figure. The analytical solution from Equation 7 also predicts disappearance of the transition at q = 1. In reference 37 a similar prediction of the critical noise rate was given for a complete graph: ε_c(p = 0) = 2^(−q)(q − 1), which would give ε_c = 0.25 for the case of Figure 11c. Therefore, a complete graph gives only a first approximation to the value found here.
There has been no previous attempt at an analytical approximation describing the transition from phase B to the dynamical fragmentation phase C, already found in the noisy linear CVM. We propose here a simple description of this phenomenon. Having two separate, but internally homogeneous, network components (clusters) with nodes in opposite states, the only way of connecting them is by a random change of a node's state followed by link rewiring to the second component. The probability of the first event is independent of q and is simply given by ε/2. When the node that changed state is selected as the active node, the probability of an interaction is ρ_i^q. Since ρ_i ∈ [0, 1], for smaller q the probability of an interaction is higher, except for the boundary cases ρ_i = 0, 1. To reconnect the two clusters, rewiring must occur, but this happens with probability p regardless of the value of q. Therefore, for a single node in a state opposite to the whole cluster, the probability of connecting to the other cluster is constant (since ρ_i = 1). However, once the two clusters are connected, the higher probability of an interaction for lower q means a higher probability of rewiring causing fragmentation again. Consequently, we expect phase C to persist for larger noise when q is smaller. A more detailed description of this process is given in the Methods section, where we derive the formula for the transition line given in Equation 1. Based on this approximation we predict phase C to fade with growing non-linearity parameter q or with growing system size N, as shown in Figure 12. Both predictions are consistent with our numerical results.
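The location of this boundary can be sketched numerically by balancing the reconnection and fragmentation probabilities described in the Methods section below (P_r = P_d). The expressions used in the sketch follow those descriptions; the parameter values are illustrative only.

```python
# Numerical sketch of the phase B/C boundary estimate: critical noise rate
# where P_r = P_d, with P_r and P_d as described in the Methods section.
def critical_noise(p, q, mu, N):
    # P_r = (eps/2) * p / N  (reconnect two clusters after a noise-induced flip)
    # P_d = [ (1/N)(2/mu)^q p + (2/N)(1/mu)^q p ] * (2/N)(1/mu)^q p
    P_d = ((2.0 / mu) ** q / N + 2.0 * (1.0 / mu) ** q / N) * p \
          * (2.0 / N) * (1.0 / mu) ** q * p
    return 2.0 * N * P_d / p          # solve (eps/2) * p / N = P_d for eps

for N in (250, 500, 1000):
    print(N, critical_noise(p=0.6, q=0.5, mu=4, N=N))
# The estimate shrinks with growing N and with growing q, consistent with
# the predicted fading of phase C.
```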
Discussion
In this paper we analyzed the nonlinear coevolving voter model with noise. Depending on the values of the three main parameters -the rewiring probability p, the noise intensity ε, and the non-linearity parameter q -we observed three distinct phases: a consensus phase A, a coexistence phase B, and a dynamical fragmentation phase C. We observed, however, significant internal differences within phases A and B. The first one can be further divided into phases A1 and A2. Phase A1, for q = 1, is a consensus phase with an absolute magnetization equal to 1 on average, but with the instantaneous magnetization switching between the −1 and +1 states, giving rise to a bimodal magnetization distribution within one realization of the stochastic dynamics. In phase A2, observed for q > 1, there is a stable consensus, i.e. the global magnetization states −1 and +1 are stable. Consequently, during one realization of the stochastic dynamics the system remains in a given consensus state, producing a unimodal magnetization distribution with a peak at the maximal (minimal) magnetization. Additionally, phases A1 and A2 have different system size scaling -phase A1 disappears in the thermodynamic limit, while phase A2 is stable against finite size fluctuations. Finally, phase A does not exist for q < 1. Phase B can be similarly divided into phases B1 and B2. Phase B1 is a fully-mixing phase with a random network structure and random states of the nodes, giving zero magnetization. But for larger p and low noise intensity we observe phase B2, which has the same vanishing average magnetization but a different network structure. In the structured phase B2 one can easily distinguish two communities of opposite states connected by just a few inter-community links. This structural difference is confirmed by community detection algorithms.
Phase C is associated with a dynamical fragmentation of the network -two components in opposite magnetization states are constantly being connected and disconnected. We derive an analytical description of this behavior and an approximate value for the transition line between phase B2 and phase C.
The only phases surviving in the thermodynamic limit are phases A2, B1 and B2. The transition line from phase A2 to phase B is largely independent of finite size fluctuations. We have also presented an analytical pair approximation able to describe these findings and the main features of phases A and B in the thermodynamic limit.
Our work fills a gap in the studies of the CVM. It provides a bridge between studies on the CVM with noise 34 and studies on the nonlinear CVM 35 . It reduces to the nonlinear noisy voter model 37 and the ordinary CVM 27 for appropriate configurations of parameter values. We obtain full consistency with those limiting cases and we explore new parameter domains. Our work brings the analysis of the voter model to a greater complexity by taking into account the joint effect of noise, coevolution and non-linearity, which turns out not to be a mere superposition of them. It may provide a tool for the evaluation of the relevance of different mechanisms in the description of opinion dynamics, but it can also be a reference point in the study of coevolving network models. We also show how nonlinear vs. linear interactions can change the stability of a consensus state in the network and how topological communities can arise from non-topological interactions. These results are of relevance for the description of social networks.
Pair approximation
We use the same approach as used for the nonlinear coevolving voter model 35 to describe the dynamics of the magnetization m and the density of active links ρ. Given the network homogeneity due to the random rewiring, we assume each node to have the same average degree µ = 2M/N. Let us denote by n_+ = (1 + m)/2 and n_− = (1 − m)/2 the fraction of nodes in the state +1 and −1, respectively. Then, when we pick a node in the state ±1 as the active node, the probability of choosing a neighbor in the opposite state is given by ρ/(2n_±). In other words, ρ/(2n_±) gives the density of active links ρ_i for a node i being in the state s_i = ±1. Therefore, the probability of an interaction is given by ρ_i^q = (ρ/(2n_±))^q. When q takes integer values this can also be interpreted as the probability of choosing a neighbor in the opposite state q times. Hence, when an interaction occurs with this probability, we can make the approximation that there are at least q neighbors in the opposite state and for the rest of them the probability of being in a different state than the focal node is ρ/(2n_±). Altogether this implies that a_i ≈ q + (µ − q)ρ/(2n_±). To approximate the evolution of the density of active links ρ we must estimate the contributions of the different events that can result in a change of ρ. These events are: (i) rewiring, followed by a change of state through noise, (ii) rewiring, without a change of the node's state due to noise, (iii) changing the state of the node through state copying with no further change due to noise, and (iv) changing the state of the node only as a result of noise, with no previous state copying or rewiring. Let δ_± be the change in the total number of active links given that a node i such that s_i = ±1 changed state. The total change in the number of active links in the four possible events above is: (i) 1 + δ_±, (ii) −1, and for events (iii) and (iv) just δ_±. Magnetization changes only when the state of the node is changed via copying or as a result of noise, in three possible scenarios: state copying with no noise effect, link rewiring followed by a noise effect, and no interaction -neither state copying nor link rewiring -but noise acting alone. When the focal node having a state s_i = ±1 changes its state, the total change in the magnetization is ∓2. Hence, in the thermodynamic limit we obtain the coupled evolution equations for m and ρ (Equations 3), which can be rewritten in a compact form after a few simple algebraic transformations. When node i changes state, all its a_i active links become inactive and all other µ − a_i inactive links become active, therefore the total change in the number of active links is δ_± = µ − 2a_i. Using the previous approximation for a_i we can write δ_± = µ − 2q − 2(µ − q)ρ/(2n_±). The simplest stationary solution (fixed point) of Equations 3 is obtained by taking the magnetization m = 0, which leads to an equation for the stationary value of ρ (Equation 4). A fixed point solution with m = ±1 does not exist for any finite noise rate ε, whilst for ε = 0 a stationary solution is ρ = 0. Setting the noise rate to zero together with m = 0 we recover the stationary solution of the nonlinear CVM 35 , and for ε = 0 and q = 1 the solution of the standard CVM 27 . To compare our results with the nonlinear noisy voter model on static networks we analyze our approximation for the particular case p = 0.
Putting ρ = ρ_c(1 − m²) in the first of Equations 3 and performing a stability analysis of the fixed point solution m = 0, we can find a critical noise value which depends on the critical value of the density of active links ρ_c. The latter can be obtained from Equation 4 as ρ_c = (µ − 2q)/(2µ − 2q). This formula is shown to be in full agreement with numerical solutions of Equations 3 shown in Figure 11c.
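As a quick numerical check of this expression, the sketch below evaluates ρ_c = (µ − 2q)/(2µ − 2q). The value ρ_c = 1/3 quoted above is reproduced, for example, by the combination µ = 8, q = 2; since the mean degree used in the figure is not restated here, this pairing is only an assumed illustration.

```python
# Sketch: evaluate the pair-approximation critical density of active links.
def rho_c(mu, q):
    return (mu - 2 * q) / (2 * mu - 2 * q)

print(rho_c(mu=8, q=2))   # 0.333...  (assumed mean degree, for illustration)
print(rho_c(mu=4, q=1))   # 0.333...
```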
Phase C: Finite size scaling
In order to describe the behavior of the dynamical fragmentation phase we first look for an approximation for the probabilities of reconnecting two separate clusters and of disconnecting two clusters sharing at most two links. In this approach we omit events of probability proportional to (1/N)^3, to ε^2, or of higher order.
Imagine two separate and internally homogeneous components of opposite states, as happens in phase C. The simplest way of connecting them under the rules of the nonlinear noisy CVM involves two steps. First, one of the nodes, call it i, must change its state. This is possible only due to noise and occurs with probability ε/2. Second, node i that has changed its state must rewire one of its links to a node in the opposite cluster, which now shares its state. This can happen with probability p ρ_i^q /N, because we need to select this particular node as the active node (1/N), an interaction has to occur (ρ_i^q), and rewiring must be performed (p). Note that, since node i is the only node in a different state than its cluster, ρ_i^q = 1. This gives the probability of reconnecting the two components: P_r = (ε/2)(p/N). Approaching the transition line between phases B and C now from phase B, so that a fragmentation event occurs, we consider a single component network disconnecting into two equal clusters. As done before, imagine a situation two time steps before a possible fragmentation - the network has two internally homogeneous components in opposite states. One of the nodes, i, is part of a bridge, i.e. it is connected to two nodes in the opposite cluster. Now, for the fragmentation to occur we need both of the links between the components to be rewired. The probability of rewiring the first one is (1/N)(2/µ)^q p(1 − ε/2) + (2/N)(1/µ)^q p. We have to select node i (1/N) or one of its neighbors (2/N). An interaction must occur, which happens with probability (a_j/µ)^q, where the number of active links is 2 for node i and 1 for each of its neighbors in the opposite cluster. Finally, a rewiring must occur with probability p. Additionally, if node i was selected, it cannot change its state due to noise (1 − ε/2), otherwise fragmentation could not be achieved in two steps. The transition occurs, however, for very small values of noise and therefore we can approximate 1 − ε/2 ≈ 1. To rewire the second link we have to select one of the two nodes (2/N) connecting the link, an interaction must occur ((1/µ)^q), and it must be a rewiring event (p). Therefore, the probability of losing the last link between the two clusters is (2/N)(1/µ)^q p. Finally, we obtain the probability of disconnecting two clusters sharing only two links: P_d = [(1/N)(2/µ)^q p + (2/N)(1/µ)^q p] (2/N)(1/µ)^q p. Between phases B and C a continuous fragmenting and reconnecting of the network is observed. We define the transition between the two phases as the point at which connection and fragmentation happen at such a rate that half of the time the system consists of two separate components and half of the time the network is connected. Therefore, at the transition line we expect P_r = P_d, which leads to the equation for the critical density of noise given in the main text (Equation 1). | 9,685.2 | 2020-04-14T00:00:00.000 | [
"Physics"
] |
Hydro-geochemical characterization and Groundwater modelling of the subsurface around Ughelli West Engineered Dumpsite in the Western Niger Delta, Nigeria
Geoelectric and geochemical investigations and groundwater modelling were integrated in areas surrounding the Ughelli West engineered dumpsite in the Western Niger Delta. The study focused on assessing the environmental impact of the dumpsite on its surrounding groundwater. The geochemical analyses revealed that the leachates generated from the dumpsite have significant potential to contaminate the surrounding environment. The BOD5/COD ratio was less than 0.1, indicating that the dumpsite is old, with stabilized leachate of low biodegradability, and in the methanogenic phase of anaerobic degradation. The groundwater chemistry in monitoring wells outside the dumpsite and at other control sites showed no significant impact of the dumpsite on groundwater quality. Groundwater models showed groundwater flow in the north-western direction and significant vertical movement of contaminants up to depths of about 60 m beneath the dumpsite after a period of 30 years. The contaminant plume, however, had not moved considerably laterally away from the dumpsite. The location of the dumpsite within areas of low vulnerability, due to the presence of clay/sandy clay between 1.3 and 6 m thick around the dumpsite, limited the lateral migration of leachate in groundwater.
INTRODUCTION
Groundwater is abundant in the Niger Delta region and is a major source of water for domestic and industrial use; however, the presence of contaminants in groundwater poses significant challenges to its exploitation and use. The Niger Delta is one of the world's largest petroleum regions, and its importance lies in its abundant oil and gas reserves. The presence of oil companies and other related industries has caused the population of the region to increase enormously, leading to a high demand for potable water. The area has sufficiently thick aquifers comprising porous and permeable sand with high transmissivity, giving the Niger Delta region good to excellent groundwater potential. However, the huge volume of waste generated by this growing population has an alarming detrimental effect on groundwater quality (Bate et al., 2018).
Groundwater conditions in any environment are controlled by several factors, such as the chemical composition of the infiltrating water at the point of recharge and the chemical composition of the host rock, including the cementing material of the aquifer matrix. Others include the water table, the porosity/permeability of the aquifer, the groundwater flow rate, and the travel time of water through the vadose zone into the saturated zone. Apart from these natural causes of contamination, anthropogenic activities can also have an adverse effect on groundwater quality (Akpoborie et al., 2015). Leakages from septic tanks, sewage channels, and dumpsites are established sources of subsurface contamination, which has now become a major threat to groundwater resources (Kumari et al., 2017; Rana et al., 2018; Igboama et al., 2022). This is because the subsurface, which serves as the aquifer, is also the site for waste disposal. The prime effect of disposing of waste directly into the subsurface or into dumpsites without suitable liners is the generation of leachate, which constitutes one prominent challenge related to groundwater exploitation (Adeolu et al., 2011; Aweto, 2017; Aladejana et al., 2018). One acute problem associated with dumping of wastes in the open is the spread of diseases. According to the World Health Organization (WHO 2011), about 80 % of all human diseases are water-borne.
Deterioration in the groundwater quality of the Niger Delta has been linked to the shallow nature of the aquifers, which leaves them open to chemical and biological contamination (Aweto 2012; Igboama et al., 2022). Various studies have reported the deterioration of groundwater quality in the Niger Delta. Aweto et al. (2015) reported elevated total dissolved solids (TDS) in a shallow aquifer due to anthropogenic activities; Okpara et al. (2021) observed that groundwater sources close to dumpsites are prone to contamination, while Iwegbue et al. (2023) reported Cd, Fe, and turbidity as major causes of deterioration of water quality. It has been observed that once groundwater quality is negatively affected by the presence of pollutants, it cannot be ameliorated simply by removing the source. This is because pollutants in groundwater may persist for a long period of time even after the source of pollution has been removed (Cozzarelli et al., 2011; Bjerg et al., 2014). Therefore, it has become crucial to monitor groundwater quality frequently and to devise means to protect it.
Modern dumpsites in emerging nations are designed with suitable liners which prevent leachate migration into groundwater. The dumpsite in the study area is lined; however, despite the compactness of the clay liners, they may deteriorate after some time due to numerous setbacks (Cossu & Stegmann, 2019). Shell Petroleum Development Company (SPDC 2005) had earlier reported that the liners were compromised and that intense downward percolation of leachate had been recorded. In the present investigation, the groundwater surrounding the dumpsite was characterized to ascertain the effect of leachate percolation and dispersion using geophysical, hydrogeochemical, and groundwater modelling approaches.
A. Location and Geology
The study area is the Ughelli West engineered dumpsite, located in the Uvwiamuge community, 10 km Southeast of Warri, in the Western part of the Niger Delta (Figure 1). It lies between longitudes 5° 51' E and 5° 53' E, and latitudes 5° 34' N and 5° 36' N. The subsurface geology of the area includes the Holocene Sombreiro–Warri Deltaic Plain deposits, which have been described and summarized in different studies (Akpoborie et al., 2015; Ohwoghere-Asuma et al., 2018; Ohwoghere-Asuma et al., 2020). This deposit, which is between 40 and 150 m thick, comprises laterally homogeneous fine- to medium- and coarse-grained sands with minor occurrences of clay. Beneath the Sombreiro–Warri Deltaic Plain deposits are successions of sedimentary formations that include, from top to bottom, the Oligocene–Pleistocene Benin Formation, the Eocene–Oligocene Agbada Formation, and the Paleocene–Eocene Akata Formation.
B. Resistivity Survey
Field resistivity measurements employed the Schlumberger electrode configuration (Patra & Nath 1998). In the present work, vertical electrical soundings were conducted using the ABEM SAS 1000 resistivity meter at sixteen (16) locations, as shown in Figure 1. The current electrode (AB) separation ranged between 1 m and 200 m. The depth-sounding data were presented as sounding curves, which were interpreted manually (Patra & Nath, 1998) and subsequently by computer iteration with the winResist Version 1.0 software (Vander Velpen, 2004). The electrical resistivity values of the geoelectric layers facilitated the delineation of lithological units and identification of aquifer units (Aweto, 2019) with the aid of drillers' log data from the area.
C. Aquifer Vulnerability
The Aquifer Vulnerability Index method was used to assess the vulnerability of the aquifer in the study area. This method evaluates the hydraulic resistance C of the aquifer, which is equivalent to the travel time of contaminants through the vadose zone (Van Steempvort et al., 1992). The hydraulic resistance, in years, is determined by the expression C = Σ (di / ki), where di and ki are the thickness and hydraulic conductivity of the vadose layers, respectively.
Hydraulic conductivity values of 3650 cm/y for sand, 0.365 cm/y for clayey sand, 0.0365 cm/y for sandy clay, and 0.000365 cm/y for clay as obtained from Aweto & Ohwoghere-Asuma (2018) were used in this study.
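The calculation is a simple sum over the vadose-zone layers, as in the sketch below. The conductivities are those quoted above (in cm/yr); the layer profile and thicknesses are hypothetical, for illustration only.

```python
# Sketch of the Aquifer Vulnerability Index input: hydraulic resistance
# C = sum(d_i / k_i) over vadose-zone layers, with log C used as the index.
import math

K_CM_PER_YR = {"sand": 3650.0, "clayey sand": 0.365,
               "sandy clay": 0.0365, "clay": 0.000365}

def hydraulic_resistance(layers):
    """layers: list of (lithology, thickness in cm); returns C in years."""
    return sum(d / K_CM_PER_YR[lith] for lith, d in layers)

vadose = [("sand", 150.0), ("sandy clay", 300.0)]   # hypothetical 1.5 m + 3 m profile
C = hydraulic_resistance(vadose)
print(C, math.log10(C))   # log C is compared against the vulnerability classes
```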
D. Leachate and Groundwater Analysis
To examine the effect of leachate on the environment surrounding the dumpsite, water samples were collected from monitoring wells surrounding the dumpsite and from other locations far from the dumpsite. At the same time, five (5) leachate samples were collected from the collection systems. All samples were preserved and analysed using standard methods (APHA 2012; USEPA 2007). The parameters analysed in the groundwater and leachate samples include pH, EC, COD, BOD5, Ca, Mg, K, Na, SO4, Cl, Pb, Cu, Cd, Zn, Cr, Ni, and Fe.
E. Groundwater Modelling
The package used for this work is MODFLOW, developed by the United States Geological Survey (McDonald & Harbaugh 1988). MODFLOW is capable of performing both steady-state and transient analyses based on the law of conservation of mass, whereby the flow into an aquifer is balanced by the flow out of it plus the change in storage. The governing equation is ∂/∂x(Kx ∂h/∂x) + ∂/∂y(Ky ∂h/∂y) + ∂/∂z(Kz ∂h/∂z) + Q = Ss ∂h/∂t, where Kx, Ky, Kz are the hydraulic conductivities along the x, y, and z axes, which are assumed to be parallel to the major axes of hydraulic conductivity; h is the piezometric (hydraulic) head; Q is the volumetric flux per unit volume representing source/sink terms; and Ss is the specific storage coefficient, i.e. the volume of water released from storage per unit change in head per unit volume of porous material.
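In a strongly simplified setting (steady state, homogeneous conductivity, two dimensions, no sources), the governing equation reduces to Laplace's equation for the head, which the toy sketch below solves by Jacobi iteration. This is only an illustration of the conservation principle, not the MODFLOW model of the site; the grid and boundary heads are hypothetical, loosely echoing the head range reported later.

```python
# Toy 2-D steady-state head solver (Laplace equation) by Jacobi iteration.
import numpy as np

nrow, ncol = 30, 40
h = np.full((nrow, ncol), 6.7)           # initial guess for the head (m)
h[:, 0], h[:, -1] = 6.78, 6.62           # fixed-head boundaries (m), hypothetical

for _ in range(5000):                     # iterate until quasi-steady
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                            h[1:-1, :-2] + h[1:-1, 2:])
    h[0, :], h[-1, :] = h[1, :], h[-2, :] # no-flow top and bottom rows

print(h.min(), h.max())                   # head varies smoothly between the boundaries
```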
The contaminant plume at the dumpsite was modelled for a stress period of 30 years. Transport of solutes in the saturated zone is controlled by the advection–dispersion equation, which for a porous medium of constant porosity can be written as ∂c/∂t = ∂/∂xi(Dij ∂c/∂xj) − ∂/∂xi(vi c) + Rc, where c is the concentration of the solute; Rc represents sources or sinks; Dij is the dispersion coefficient tensor; and vi is the velocity tensor.
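A one-dimensional, explicit finite-difference version of this equation is sketched below for a conservative solute. The velocity, dispersion coefficient and grid are hypothetical (chosen only to satisfy the explicit stability limits); the source concentration of 11.85 mg/L echoes the Fe value reported for the leachate, but this is not the MT3D model of the site.

```python
# Toy 1-D advection-dispersion sketch: dc/dt = D d2c/dx2 - v dc/dx.
import numpy as np

ncell, L = 200, 400.0               # cells, domain length (m)
dx = L / ncell
v, D = 0.05, 0.5                    # m/day, m^2/day (illustrative values)
dt = 0.4 * min(dx / v, dx * dx / (2 * D))   # conservative explicit time step

c = np.zeros(ncell)
c[0] = 11.85                        # constant-concentration source cell (mg/L)

for _ in range(2000):               # several years of simulated transport
    adv = -v * (c[1:-1] - c[:-2]) / dx                    # upwind advection
    dsp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2      # dispersion
    c[1:-1] += dt * (adv + dsp)
    c[0], c[-1] = 11.85, c[-2]      # fixed source, open outflow boundary

print(c.max(), c[50])               # plume front spreads and attenuates downgradient
```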
A. Isoresistivity and Vulnerability Map
In order to show the subsurface lithological distribution at various depths based on resistivity variations and the vulnerability index, isoresistivity and vulnerability maps were generated with the aid of the SURFER 8 terrain and surface modelling software (SURFER 2002). The vulnerability of the aquifer to contamination was evaluated using the values of the logarithm of hydraulic resistance, as shown in Table 1.
The Aquifer Vulnerability Index defined three zones of vulnerability (Figure 3): low vulnerability (ruby red), moderate vulnerability (brown), and extremely high vulnerability (orange).The areas with low vulnerability constitute about 25 % of the entire area and lie within the Northern (VES 4, 5, and 6) and Southern (VES 12 and 15) parts.The dumpsite is located within the low vulnerability zone in the South, as shown in Figure 4.The aquifer in these areas is adequately protected from the surface and near-surface contaminants and hence not vulnerable.The areas with moderate vulnerability (representing about 19 % of the entire area) envelop the zones of low vulnerability as lithology changes from clays to sandy clays/clayey sand.The aquifer in the remaining 56 % of the area is highly vulnerable to contaminants because these areas are underlain by porous sand.
C. Groundwater flow and Contaminant model
The groundwater of the study area flows in the Northwest direction (Figure 4). The hydraulic head ranged from 6.62 to 6.78 m; groundwater infiltrates into successive layers from the top layer and tends to change course toward the residential area in the Northwest.
The MT3D package was used to model the contaminant plume at the dumpsite for three stress periods: 1 year (365 days), 14 years (5,114 days), and 30 years (10,950 days), using a steady-state condition and an initial Fe concentration of 11.85 mg/L (from geochemical analyses of leachates from the dumpsite). Typical infiltration of iron in leachate in the study area is shown in Figures 5–8.
The models (Figures 5–8) show an increase in the impacted area with depth, indicating that the liners could not limit plume migration. After 365 days (Figure 6), a localized area of leachate had infiltrated into deep layers up to 26 m, with Fe concentrations ranging from 1.0 to 6.3 mg/L; after 5,114 days (Figure 7), it had penetrated to depths of about 40 m, with concentrations ranging from 1.0 to 7.1 mg/L. The plume infiltrated to depths of 60 m after 10,950 days, with concentrations ranging from 1.0 to 7.9 mg/L. The average Fe concentration in the leachate after 10,950 days (7.1 mg/L) has the potential to contaminate the surrounding groundwater. According to Sykes et al. (1982), Cozzarelli et al. (2011), and Bjerg et al. (2014), contaminant plumes can persist in groundwater after the source has been removed.
D. Contaminants spread in the aquifer
The lateral spread of Fe and Pb introduced by the leachate into the aquifer is shown in Figures 10 and 11 as a function of mass. The mass of Fe introduced at the dumpsite is 13,800 mg, while that of Pb is 16,200 mg; at 130 m away from the dumpsite, the masses of Fe and Pb detected were 1,440 mg and 172 mg, respectively. These further reduced, as the plume moved 220 m away, to 56 mg for Fe and 6.8 mg for Pb. At a distance of about 340 m from the dumpsite, the mass of Fe was 0.56 mg, while that of Pb was 0.48 mg. The diminishing mass of Fe and Pb with distance from the dumpsite shows the impact of attenuation processes on the transport of the leachate from the plume. The soil (clays/sandy clays/clayey sand) surrounding the dumpsite tends to filter the leachate as it spreads, a process known as natural filtration.
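This attenuation can be summarized with a simple descriptive fit of the reported masses to an exponential decay with distance, as sketched below. The fitted length scale is purely illustrative and is not part of the original study.

```python
# Sketch: fit the reported Fe and Pb masses to M(x) = M0 * exp(-x / L).
import numpy as np

x = np.array([0.0, 130.0, 220.0, 340.0])               # distance from the dumpsite (m)
mass = {"Fe": np.array([13800.0, 1440.0, 56.0, 0.56]),
        "Pb": np.array([16200.0, 172.0, 6.8, 0.48])}   # masses quoted above (mg)

for metal, m in mass.items():
    slope, intercept = np.polyfit(x, np.log(m), 1)     # ln M = ln M0 - x / L
    print(metal, "L ~ %.0f m" % (-1.0 / slope), "M0 ~ %.0f mg" % np.exp(intercept))
```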
The pH value of the leachate indicates alkaline conditions; the alkaline nature of leachate from dumpsites has been reported by Obasi et al. (2012) and Agbashie et al. (2020). According to Gautman & Kamar (2021) and Lindamulla et al. (2021), alkalinity is typical of dumpsite leachate during the phase of waste stabilization. The leachate's BOD5 (75 mg/L) and COD (881 mg/L) are relatively low. The BOD5/COD ratio measures the biodegradability (Dincer, 2020) and age of the dumpsite (Alvarez-Vazquez et al., 2004; Lindamulla et al., 2021). The BOD5/COD ratio of less than 0.1 indicates that the dumpsite is old and in the methanogenic phase of anaerobic degradation, which is usually characterized by old, stabilized leachate of low biodegradability. The dumpsite in the study area is over 28 years old, and according to Yadav & Dikshit (2017), old dumpsites produce stabilized leachate with relatively low COD and low biodegradability.
The chemical composition of groundwater from monitoring wells and other wells at the control sites showed a great contrast with the leachate composition. Lower concentrations of major cations, anions, and heavy metals were observed in groundwater. The results show that groundwater in wells around the control sites had very low electrical conductivity and COD values, which may be due to the absence of leachate. Heavy metal concentrations in the four monitoring wells outside the dumpsite and at other control sites were below the maximum contamination limits for drinking water set by the Standard Organization of Nigeria (SON, 2007), except iron, which exceeded the limit of 0.3 mg/L in some localities. This may be unrelated to any impact from the dumpsite: studies by Etu-Efeotor & Odigi (1983), Ngah & Nwakwoala (2013), Nwakwoala et al. (2016), and Okiongbo et al. (2020) revealed a preponderance of high iron content in some aquifers of the Niger Delta, which is linked to the geology of the area; according to Etu-Efeotor & Odigi (1983) and Okiongbo et al. (2020), the high iron content in these aquifers is a sequel to leaching of the ferruginised regolith overlying the aquifers. This study showed that the leachate had concentrations of chemical constituents several orders of magnitude above those in groundwater at the control sites. Thus, it is evident that the decomposed wastes and leachate have the potential to contaminate the underlying aquifer.
The groundwater models show infiltration of contaminants into successive layers beneath the dumpsite and lateral spread to the northwest in the direction of groundwater flow. However, the contaminant plume had not moved considerably far from the dumpsite. Substantial differences were observed in the distribution of the plume at different distances from the dumpsite after a stress period of 30 years. The masses of Fe and Pb reduced considerably from 13,800 mg and 16,200 mg at the dumpsite to 0.56 mg and 0.48 mg, respectively, at 340 m away from the dumpsite. When leachate from a dumpsite mixes with groundwater in the aquifer, it gradually mixes with the non-contaminated flow. This results in dilution of the contaminant plume, leading to a reduction in the concentration or mass of contaminants owing to hydrodynamic dispersion.
Studies by Akudo et al. (2010), Aladejana et al. (2018), and Ameloko et al. (2018) have shown that leachate from dumpsites is a known source of chemical loading to domestic groundwater. Comparatively low contents, or none, of these chemicals have been introduced into the groundwater surrounding the dumpsite. This suggests that there is no noticeable influence of leachate from the dumpsite on the concentration of chemical constituents of groundwater, which can be attributed to the contents of major cations, anions, and heavy metals in groundwater being below the standards permitted by regulatory bodies. The clayey lithology (1.3–6 m thick) around and beneath the dumpsite probably acted as an effective aquiclude that prevented lateral migration of possible contaminants into the groundwater zone.
IV. CONCLUSION
The geophysical study at Uvwiamuge revealed that the lithological successions are mostly sand, giving 56 % of the study area an extremely high vulnerability status. The chemical composition of the leachate showed high cation, anion, and heavy metal contents, several orders of magnitude higher than in groundwater. This indicates the potential of leachate from the dumpsite to contaminate the surrounding groundwater. However, the concentrations of these chemical constituents in groundwater were below the maximum contaminant limits set by the World Health Organization (WHO), revealing no significant groundwater contamination. Based on the modelling, the contaminant plume had neither expanded nor moved considerably from the dumpsite. The clays, between 1.3 and 6 m thick, around the dumpsite in the southern part probably acted as an effective aquiclude preventing significant vertical migration of contaminants into the groundwater.
Figure 1: Map of the study area from digital elevation model
F. Conceptual model
The conceptual model grid approach was used to produce the groundwater-flow model. The model's grid consists of x, y, and z axes indicating width, length, and depth; x = 1104 m and y = 1582 m were estimated from the Digital Elevation Model (DEM) of the study area, and z = 80 m from the borehole log of the area. The groundwater flow direction was determined from known hydraulic heads in wells. The DEM of the area was used as the top elevation, and the borehole log was used to assign the remaining two layers. The hydraulic conductivity values of each layer were assigned from the hydraulic conductivity values of the different formations as provided by Guideal (2011). A recharge rate of 638.46 mm/yr (0.001749 m/day), estimated from the average annual rainfall of Warri between 2005 and 2015 (Oyerinde 2021), was used for the model. Longitudinal, transverse, and vertical dispersivities as estimated by Schulze-Makuch (2005) were used in this study: longitudinal dispersivity was taken as 8.5 m, while transverse and vertical dispersivities were taken as 0.85 m and 0.085 m, respectively.
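For reproducibility, the model inputs quoted above can be collected in a small script. The sketch below (Python, with illustrative variable names) only checks the recharge-rate conversion and the dispersivity ratios; it does not reproduce the flow and transport solver itself.

```python
# Minimal sketch of the conceptual-model inputs quoted above (names are illustrative).
grid = {"x_m": 1104.0, "y_m": 1582.0, "z_m": 80.0}    # from DEM and borehole log

annual_recharge_mm = 638.46                            # mm/yr (Oyerinde 2021)
recharge_m_per_day = annual_recharge_mm / 1000.0 / 365.0
print(f"recharge = {recharge_m_per_day:.6f} m/day")    # ~0.001749 m/day, as used in the model

alpha_L = 8.5                                          # longitudinal dispersivity, m
alpha_T = 0.1 * alpha_L                                # transverse dispersivity  = 0.85 m
alpha_V = 0.01 * alpha_L                               # vertical dispersivity    = 0.085 m
stress_period_days = 30 * 365                          # 30-year simulation period
```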
The isoresistivity maps were contoured using Surfer software (SURFER 2002). The results are shown in Figures 2 & 3. The colours indicate the various lithologies and their resistivity ranges. The areas in blue, with resistivity values between 21 and 90 Ωm, indicate clay lithology. The areas in yellow, with resistivity values from 101 to 130 Ωm, indicate sandy clay and clayey sand lithology. In contrast, the areas in red, with resistivity values between 180 and 1340 Ωm, represent sandy lithology. The isoresistivity map of Uvwiamuge at 5 m (Figure 2) showed that about 30 % of the area in the southwest, central, northwest, and northeast, delimited in blue (VES 4, 5, 6, 11, 12, 15, and 16), is underlain by clay. The areas in yellow around VES 7 and 10, representing 26 %, are underlain by sandy clay and clayey sand, while the remaining 46 % (red) is underlain by sand. The isoresistivity maps at 10 m and 20 m show that the entire area is underlain by pervious sand, which serves as an aquifer.
Figure 4: Groundwater flow direction in the study area
 | 4,379.2 | 2023-10-10T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
An improved Baldwin–Lomax algebraic wall model for high-speed canonical turbulent boundary layers using established scalings
Abstract In this work, we employ well-established relations for compressible turbulent mean flows, including the velocity transformation and algebraic temperature–velocity (TV) relation, to systematically improve the algebraic Baldwin–Lomax (BL) wall model for high-speed zero-pressure-gradient air boundary layers. Any new functions or coefficients fitted by ourselves are avoided. Twelve published direct numerical simulation (DNS) datasets are employed for a priori inspiration and a posteriori examination, with Mach numbers up to 14 under adiabatic, cold and heated walls. The baseline BL model is the widely used one with semilocal scalings. Three targeted modifications are made. First, we employ a total-stress-based transformation (Griffin et al., Proc. Natl Acad. Sci. USA, vol. 118, issue 34, 2021, e2111144118) to the inner-layer eddy viscosity for improved scaling up to the logarithmic region. Second, we utilize the van Driest transformation in the outer layer based on the compressible defect velocity scaling. Third, considering the difficulty in modelling the rapidly varying and singular turbulent Prandtl number near the temperature peak in cold-wall cases, we design a two-layer strategy and use the TV relation to formulate the inner-layer temperature. Numerical results prove that the modifications take effect as designed. The prediction accuracy for mean streamwise velocity is notably improved for diabatic cases, especially in the logarithmic region. Moreover, a significant improvement in mean temperature is realized for both adiabatic and diabatic cases. The mean relative errors of temperature to DNS for all cases are down to 0.4 % in the logarithmic wall-normal coordinate and 3.4 % in the outer coordinate, around one-third of those in the baseline model.
Introduction
In high-speed flows, turbulent boundary layers are known to severely affect the surface drag and heat transfer, so accurate predictive models are strongly desired for reliable vehicle design and flow control (Bradshaw 1977).Among various simulation strategies, the Reynolds-averaged Navier-Stokes (RANS) models are long established yet still prevailing, especially for engineering problems, owing to their simplicity, efficiency and robustness (Wilcox 2006).Compared with the incompressible counterpart, however, compressible RANS models have weaker theoretical foundations, and suffer from the complications brought about by intrinsic compressibility, heat transfer, shocks, high-enthalpy effects and other factors (Gatski & Bonnet 2013;Cheng et al. 2024).
The RANS models are usually divided into four categories by the number of additional equations introduced: the algebraic (zero-equation), one-equation, two-equation and stress-transport models.The algebraic models are the simplest ones, which directly model the eddy viscosity μ t (and eddy diffusivity κ t ) using theoretical/empirical algebraic relations.Two standout models are the Cebeci-Smith (CS) model (Cebeci & Smith 1974) and the Baldwin-Lomax (BL) model (Baldwin & Lomax 1978), both of which formulate μ t into a two-layer structure.The inner layer part is based on the mixing length model with a viscous damping correction devised by van Driest (1956).The outer portion is built on the defect layer scaling by Clauser (1956) and the intermittent function by Klebanoff (1955).In incompressible applications, the algebraic models can faithfully reproduce mean velocity profiles and skin friction for attached boundary layers, though they become unreliable when subject to strong pressure gradient and separation (Wilcox 2006).Furthermore, the CS and BL models can attain comparable accuracy levels.The latter is more commonly considered for complex flows since it avoids directly using the boundary layer thickness.When extended to compressible flows, no special compressibility correction was considered in early investigations, observing the insensitivity of classical mixing length to the Mach number Ma (Maise & McDonald 1968;Baldwin & Lomax 1978).To close the problem, κ t in the energy equation is related to μ t through a prescribed turbulent Prandtl number Pr t .The resulting compressible models can reproduce well the mean flows in high-speed adiabatic flows with minor pressure gradients, but they deteriorate under diabatic walls (with surface heat transfer; (Maise & McDonald 1968;Shang, Hankey & Dwoyer 1973;York & Knight 1985)).As one improvement, the wall viscous unit for the inner layer scaling can be replaced by the semilocal one (though not in this terminology originally) based upon local density and viscosity (Gupta et al. 1990;Cheatwood & Thompson 1993).Dilley & McClinton (2001) showed that this modification in BL largely improved the mean flows in hypersonic cold-wall cases, and the predicted surface friction and heat flux agreed well with experiments.Further improvements for complex three-dimensional boundary layers were contributed by, for example, Degani & Schiff (1983) and Panaras (1997), among others.Consequently, the BL models are extensively adopted in high-speed applications and numerous commercial solvers (Cheatwood & Thompson 1993;Srinivasan, Bittner & Bobskill 1993;Rumsey, Biedron & Thomas 1997;Townend et al. 1999;Candler et al. 2015).
On the other hand, recently accumulated direct numerical simulation (DNS) data for high-speed canonical flows provides a chance to reassess the behaviour of the BL model.As analysed by Hendrickson et al. (2023), and as will be shown below, even for zero-pressure-gradient (ZPG) flat-plate boundary layers, there are clear disparities in mean profiles between BL and DNS under diabatic conditions, especially for the temperature.Also, there is room for improvement in adiabatic flows.Therefore, the objective of this work is to improve the velocity and temperature prediction by the BL model for canonical supersonic/hypersonic boundary layers, based on recently advanced knowledge of mean flow properties.
The established relations of mean velocity and temperature in compressible wall-bounded turbulence are briefly reviewed, to set the grounds for later discussions.First, the hypothesis of Morkovin (1962) earns wide support, which states that at moderate free stream Mach numbers (Ma ∞ 5), the dilatation effect is small, so any differences from incompressible turbulence can be accounted for by variations of mean properties (Coleman, Kim & Moser 1995;Pirozzoli, Grasso & Gatski 2004;Duan, Beekman & Martín 2010;Lagha et al. 2011).As a result, velocity transformation can be built using only mean flow variables, expecting that the transformed streamwise velocity reproduces the incompressible law of the wall and outer-layer scalings.More attention has been paid to the former, i.e. the compressible law of the wall.Pioneering work is the transformation by van Driest (1951) (denoted as VD hereinafter) built upon the mixing length assumption.This widely used transformation performs well for high-speed adiabatic flows, but deteriorates in diabatic conditions.Trettel & Larsson (2016) designed a transformation based on viscous stress and semilocal units (denoted as TL), which is particularly accurate for pipe and channel flows, but can also become less accurate in diabatic boundary layers (logarithmic region).Recently, Griffin, Fu & Moin (2021) proposed a total-stress-based transformation (denoted as GFM), combining the advantages of the near-wall relation by TL and a modified version of the equilibrium arguments of Zhang et al. (2012).The GFM transformation performs remarkably well in a wide range of air flows, particularly diabatic flows, hence successfully collapsing the channel, pipe and boundary layer cases within and below the logarithmic region.Very recently, Hasan et al. (2023b) proposed a transformation (termed HLPP) by introducing a correction to the TL transformation to interpret intrinsic compressibility effects (hence questioning the validity of Morkovin's hypothesis), so the logarithmic scaling in diabatic flows can be reasonably formulated.Besides, the non-air-like and supercritical flow cases can be accounted for, which can be a challenge for the GFM transformation (Bai, Griffin & Fu 2022).On the other hand, fewer transformations are available for the outer-layer velocity, presumably due to the greater reliance on flow configurations.Maise & McDonald (1968) demonstrated that a compressible law of the wake was attainable for adiabatic boundary layers (Ma ∞ from 1.5 to 5) using the VD transformation.Duan, Beekman & Martín (2011) (and also Guarini et al. (2000), Pirozzoli et al. (2004) and Wenzel et al. (2018)) suggest that the VD-transformed velocities collapse in the outer layer for adiabatic boundary layers with Ma ∞ from 0 to 12, provided comparable Re δ 2 (defined later).Pirozzoli & Bernardini (2011) also noted that for supersonic adiabatic boundary layers, the VD-transformed defect velocity matched the incompressible counterpart well.
In terms of temperature, the classical Crocco-Busemann relation (e.g.White 2006) shows that, after assuming unity Prandtl numbers (Pr), the mean temperature is almost a quadratic function of the mean streamwise velocity.A less restrictive relation was proposed by Walz (1969) to incorporate non-unity Pr effects by introducing the recovery temperature.Although this relation holds in high-speed adiabatic flows, the accuracy degrades severely in case of significant surface heat transfer.A crucial modification was contributed by Duan & Martín (2011), who introduced a semiempirical quadratic function of the velocity.The resulting quadratic temperature-velocity (TV) relation was shown to be highly accurate for a wide range of boundary layer, channel and pipe flows (Zhang, Duan & Choudhari 2018;Modesti & Pirozzoli 2019;Fu et al. 2021;Griffin, Fu & Moin 2023), even with high-enthalpy effects (using enthalpy instead; Passiatore et al. 2022).Subsequently, Zhang et al. (2014) recast the above relation in terms of a generalized Reynolds analogy, where the Reynolds analogy factor is present for further physical interpretations of the closure constant.
The success of these mean flow relations makes it possible to recover the mean velocity and temperature by solving an inverse problem, which helps improve turbulence modelling.Pioneering work is the generalized velocity derived by van Driest (1951) through combining the VD transformation and the quadratic TV relation throughout the boundary layer.This framework enables efficient computation of the mean profiles and skin friction (Huang, Bradshaw & Coakley 1993;Kumar & Larsson 2022).Owing to the continuously increased accuracy of these mean flow relations, more and more attention has been paid to the modelling aspect in recent years.For channel and pipe flows, the combination of the velocity transformation and TV relation leads to ordinary differential equations (ODEs) for the mean flow, which achieves a relatively high accuracy (Chen et al. 2023a;Song, Zhang & Xia 2023).For ZPG boundary layers, Hasan et al. (2023a) supplemented a Re-dependent function for Coles' wake parameter (Coles 1956).The ODE set for the inverse problem is thus formulated, and the results are in close agreement with DNS.In a more general set-up, Hendrickson, Subbareddy & Candler (2022) and Hendrickson et al. (2023) used velocity transformations to improve the inner-layer scaling of the BL model, also for ZPG boundary layers.Although the mean profile prediction is improved for the two cases displayed, there are still noticeable deviations in temperature from DNS for cold-wall cases.More encouragingly, the established relations can help improve the wall-modelled large-eddy simulations (WMLES).In a very recent work, Griffin et al. (2023) proposed a near-wall model using the GFM transformation and the TV relation, with the outer boundary conditions provided by large-eddy simulations (LES).This model was shown to be significantly more accurate than the classical ODE wall model for a wide range of canonical cases examined.Hendrickson et al. (2023) made similar explorations, while the temperature prediction was less accurate for cold-wall boundary layers when velocity transformations alone were taken into account.
As aforementioned, we aim to improve the compressible BL model for canonical boundary layers using the established relations for mean velocity and temperature.To make the improvement clean and solid, we strictly adhere to the following three principles.
(i) First, the BL model for incompressible flows is not altered.The compressible version is modified to achieve the same accuracy level as the incompressible one.(ii) Second, only well-established relations are used, which have been widely verified.
We avoid introducing any new functions or coefficients fitted by ourselves.(iii) Last, the modification is made as simple as feasible.
For a priori inspiration and a posteriori examination, a wide set of published DNS databases for ZPG boundary layers is employed, containing 12 cases from different sources with Ma_∞ ranging from 2 to 14 under adiabatic, cold-wall and heated-wall conditions. Of particular focus are the cold-wall cases, which are ubiquitous and even unavoidable in practical hypersonic applications. The remaining parts are organized as follows. Section 2 describes the governing equations, the DNS database and the baseline BL model. Section 3 presents how the established relations are implemented in the wall model, and provides a priori examination using the DNS data. The resulting modified BL model is examined in § 4 for all the cases, and the work is summarized in § 5. Although only ZPG boundary layers are considered, we believe that the present framework is promising. The applications, limitations and future steps are discussed in § 5.1.
Governing equations and DNS datasets
The ZPG turbulent boundary layers are considered, as illustrated in figure 1. Assuming a calorically perfect gas, the zero-equation RANS equations are written as system (2.1), where ρ, u, T and p = ρRT are the fluid's density, velocity, temperature and pressure; R and c_p are the gas constant and isobaric specific heat; μ and κ are the molecular viscosity and thermal conductivity. The Reynolds and Favre averages are denoted as φ̄ and φ̃ (fluctuations as φ′ and φ″), respectively. The quantities μ_t and κ_t model the Reynolds stress −ρ̄⟨u″u″⟩ and the turbulent heat flux −ρ̄c_p⟨u″T″⟩, whose formulations are described in § 2.3. The wall is set no-slip and isothermal or adiabatic. Variables in wall viscous units are expressed with a superscript +, as x+ = x/δ_ν, ρ̄+ = ρ̄/ρ_w, ũ+ = ũ/u_τ and μ̄+ = μ̄/μ_w, where the subscript w denotes wall variables, δ_ν = μ_w/(ρ_w u_τ) is the viscous length unit, u_τ = (τ_w/ρ_w)^(1/2) is the friction velocity and τ_w is the wall shear stress. Correspondingly, the friction Reynolds number is Re_τ = δ_99/δ_ν, with δ_99 the nominal thickness based on the streamwise velocity. Furthermore, semilocal units are adopted, denoted with a superscript * and based on the local mean density and viscosity, e.g. u*_τ = (τ_w/ρ̄)^(1/2); the Reynolds number Re_δ2 = ρ_∞U_∞θ/μ_w is used following Fernholz & Finley (1980), where θ is the momentum thickness and the subscript ∞ denotes free stream variables.
To comprehensively examine the modified BL model, wide elaborated DNS databases are employed for ZPG boundary layers from different sources, with Ma ∞ from 2 to 14 under adiabatic, cold and heated walls.The published data are from Pirozzoli & Bernardini (2011, 2013), Zhang et al. (2018) and Volpiani, Bernardini & Larsson (2018, 2020), as summarized in table 1.Though Ma ∞ = 14 is reached, the high-enthalpy effects (e.g.Chen et al. 2022b) are not considered following the reference set-up.Besides, the incompressible data from Schlatter & Örlü (2010) are included, as a reference for the incompressible BL model.The viscous parameters in each case are computed according to the references.The viscosity is from Sutherland's law, the power law or the formula for N 2 (the working fluid in case ZDC-M8Tw048R20).Constant Pr = c p μ/κ are adopted for the thermal conductivity.
Notably, (2.1) is expressed using the Favre-averaged variables (except for ρ) to form a closed system (Gatski & Bonnet 2013).The difference between the Reynolds-and Favre-averaged results cannot be accounted for using the algebraic RANS models, so we simply use the Favre averages throughout for consistency of notation, though the DNS data mostly adopt the Reynolds averages.This simplification will not affect the main conclusions of this work because even for the M14Tw018R24 case of the highest Ma, there are only slight differences between the DNS statistics from these two averages (Zhang et al. 2018).
Solution of boundary layer equations
Based on the hypersonic interaction parameter (White 2006), the effects of shock-boundary-layer interaction at the leading edge are evaluated to be minor on the downstream locations for the cases in table 1.Therefore, the boundary layer equations can be used for efficient computation of the mean flow, in the absence of impinging shock and separation.From (2.1), the boundary layer equations are written as (e.g.White 2006) where Ũ and Ṽ are the streamwise (x) and wall-normal (y) velocities.The pressure is assumed invariant along the y-direction (P e ), so ρ = ρ e T e / T is satisfied, where e denotes values at the boundary layer edge.More general formulations accounting for geometrical curvature, far-field shock and cross-flow can be found in Degani & Schiff (1983) and Gupta et al. (1990).Due to the complex formulation of μ t and κ t ( § 2.3), the boundary layer profiles are not self-similar, even with ZPG.To solve the non-self-similar flow, the Mangler-Levy-Lees transformation can be used to remove the singularity at the leading edge x = 0 (Probstein & Elliott 1956).The transformed coordinate (ξ, η) is defined as dξ = ρ e μ e U e dx, dη = ρU e √ ξ dy. (2.3a,b) The continuity equation is then eliminated and the transformed momentum and energy equations are where the normalized streamwise velocity and temperature are F = Ũ/U e and G = T/T e and Π = F dη.The other parameters are C 1 = ρ( μ + μ t )/(ρ e μ e ), C 2 = ρ( κ + κ t )/(ρ e c p μ e ) and the Eckert number Ec e = U 2 e /(c p T e ).The streamwise pressure gradient is reflected in dU e /dξ through the Bernoulli equation, taken to be zero in the flows considered.The wall-normal boundary conditions at the wall and in the free stream are (2.5) Equation (2.4) is solved using the streamwise marching procedure (Blottner 1963;Chen, Wang & Fu 2021).At ξ = 0, (2.4) degenerates into two ODEs in terms of η, which are solved using the shooting method and serve as the initial profile.Afterwards, the solution at ξ > 0 is feasible through streamwise marching.The streamwise (ξ ) derivatives are discretized using the third-order finite-difference scheme.The Chebyshev collocation method is adopted for the wall-normal direction (η), with more points clustering near the wall.A grid number N y = 241 is adequate to provide grid-independent results.A uniform ξ mesh is adopted, while the grid with increasing spacing can be used for better robustness.At each ξ i , Newtonian iteration is used for quick convergence of the nonlinear equations.Second-order convergence can be realized for laminar flows (e.g.Chen, Wang & Fu 2022a), while for turbulent flows here, the derivatives of μ t and κ t may not be smooth (due to the maximum and intersection functions, see § 2.3), so the convergence is only first order.The convergence criterion for [F, G] T is set to 10 −9 and at most 50 iterations are allowed at each streamwise location.The above procedure is substantially more efficient than solving (2.1).For the cases in table 1, the mean flow at the target x can be obtained within minutes on a desktop computer.The solver is verified (detailed in Appendix A) through comparing with Hendrickson et al. (2023), who solved the full Navier-Stokes (NS) equation with BL models using the US3D code.
It is worth mentioning that μ t and κ t from the BL model are zero at ξ = 0 (since y = 0), so the initial profile at ξ = 0 is exactly the laminar counterpart.Thereby, there is a numerical (not physical) transition process downstream as μ t increases.If the transition is not desirable, the start point for marching ξ 0 can be placed somewhere downstream, where the maximum μ t /μ ∞ is already high and the flow is turbulent.The initial profile at ξ 0 > 0 can still be obtained by solving the ODEs with the streamwise derivatives (left-hand sides in (2.4)) artificially dropped.The regime of streamwise adjustment is short due to the parabolic nature of (2.2), so the results downstream are not sensitive to ξ 0 .
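As a concrete illustration of the set-up above, the short sketch below evaluates the transformed coordinates numerically for prescribed edge conditions. It assumes (2.3a,b) reads dξ = ρ_e μ_e U_e dx and dη = (ρ̄ U_e/√ξ) dy, uses simple trapezoidal quadrature, and does not reproduce the Chebyshev/Newton marching solver.

```python
import numpy as np

def cumtrapz0(f, s):
    """Cumulative trapezoidal integral of f(s), equal to 0 at s[0]."""
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(s))
    return out

def transformed_coordinates(x, rho_e, mu_e, U_e, y, rho_bar, i_station):
    """Transformed coordinates, assuming d(xi) = rho_e*mu_e*U_e dx and
    d(eta) = (rho_bar*U_e/sqrt(xi)) dy, evaluated at streamwise index i_station."""
    xi = cumtrapz0(rho_e * mu_e * U_e, x)                 # xi along the plate
    eta = U_e[i_station] / np.sqrt(xi[i_station]) * cumtrapz0(rho_bar, y)
    return xi, eta

# Toy usage: uniform edge conditions and a crude cold-wall-like density profile.
x = np.linspace(0.01, 1.0, 200)
y = np.linspace(0.0, 0.05, 100)
rho_e, mu_e, U_e = np.ones_like(x), np.full_like(x, 1.8e-5), np.full_like(x, 600.0)
rho_bar = 0.4 + 0.6 * y / y[-1]
xi, eta = transformed_coordinates(x, rho_e, mu_e, U_e, y, rho_bar, i_station=100)
```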
Baseline BL model
As mentioned in § 1, the BL model using semilocal units for the inner layer construction outperforms the one using wall-viscous units in high-speed applications (Dilley & McClinton 2001).Therefore, the semilocal version is selected as the baseline BL model for modification.For future reference, it is termed the BL-local model in this work.The formulations are specified below, primarily following Cheatwood & Thompson (1993) for the LAURA code.The two-layer formulation of μ t in BL (and also CS) is where y mμ is the matching (intersection) point between the inner layer μ t,i and outer layer μ t,o .The former is based on the mixing length concept, as where ω is the vorticity, l mix is the mixing length corrected by the exponential damping function of van Driest ( 1956) and κ c is the von Kármán constant, taken as 0.40.There are two differences in (2.7a-c) from the original version of Baldwin & Lomax (1978).First, y * is used in the exponent of l mix , instead of y + , which was also adopted in recent works on WMLES (Yang & Lv 2018;Fu, Bose & Moin 2022;Kamogawa, Tamaki & Kawai 2023).Second, A + is not a constant, but dependent on the local total shear τ + = τ/τ w .For thin layer flows, | ω| can be simplified to |∂ Ũ// /∂n|, where Ũ// is the velocity parallel to the wall, and n is the wall-normal direction.For the configurations in figure 1, we simply have The outer-layer viscosity is evaluated by the Clauser-Klebanoff formulation (Klebanoff 1955;Clauser 1956).First, Clauser reasoned that for boundary layers, the eddies sufficiently away from the wall are no longer constrained by the wall, so their sizes should be proportional to the overall boundary layer thickness.The resulting maximum kinematic eddy viscosity in the outer layer is ν t,max ∼ U e δ * k , or ν t,max = αU e δ * k , where δ * k = (1 − Ũ/U e ) dy is the kinematic displacement thickness (equals the displacement thickness in incompressible cases) and α is a closure coefficient.Farther away from the wall, the flow becomes intermittent.An empirical intermittency factor F Kleb (specified later) was introduced by Klebanoff, to model the diminishing of ν t,o with increasing height.Consequently, μ t,o is computed as ρν t,max F Kleb , which leads to the prevailing CS model.To avoid determining the boundary layer thicknesses, which is beneficial for complex flows, U e δ * k in ν t,max is replaced by the wake function C cp y max F max in the BL model, which can be justified from the defect velocity scaling (detailed in § 3.2).Therefore, the outer-layer μ t , adopted without explicit compressible corrections, is where the vorticity function and the intermittency function are Notably, y max is the peak position of the vorticity function following common usage, not the height of the computational domain.Compared with the CS model, the boundary layer thickness in F Kleb is replaced by y max /C Kleb .For more general flows, μ t,o can be further restricted by a wake relation designed for free shear flows (see Wilcox 2006).It is inactive in the present wall-bounded cases, thus not displayed for conciseness.
Besides κ c and A + , there are three closure coefficients α, C cp and C Kleb in the baseline model.Some works (e.g.Gupta et al. 1990) suggest their Ma dependence, but following the principles in § 1, we adopt the original constant values, α = 0.0168, C cp = 1.6 and C Kleb = 0.3.After the numerical discretization, the intersection and maximum operations in (2.6) and (2.9a,b) are conducted after a third-order interpolation on adjacent grid points, to ensure smoothness and accuracy.After obtaining μ t , κ t = c p μ t /Pr t is calculated through a prescribed Pr t .Although Pr t can be designed as a function of the wall-normal height (Subbareddy & Candler 2012), the simplest choice of a constant Pr t is adopted, equal to 0.9 as a common choice.The effects of Pr t variations will be discussed in § 3.3.
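For concreteness, a minimal sketch of the baseline two-layer eddy viscosity is given below, using the constants just listed. It follows the formulation described above except that a constant A+ = 26 replaces the shear-dependent damping coefficient, and the interpolation refinements around the maximum and intersection points are omitted. The closure is completed by κ_t = c_p μ_t/Pr_t with Pr_t = 0.9.

```python
import numpy as np

KAPPA_C, ALPHA, C_CP, C_KLEB, A_PLUS = 0.40, 0.0168, 1.6, 0.3, 26.0

def bl_local_mu_t(y, y_star, rho, dudy):
    """Two-layer Baldwin-Lomax eddy viscosity with semilocal inner scaling.
    y      : wall-normal coordinate [m]
    y_star : semilocal coordinate y* (dimensionless)
    rho    : mean density profile [kg/m^3]
    dudy   : |dU/dy|, standing in for |vorticity| in a thin shear layer [1/s]
    A constant A+ = 26 is used here instead of the shear-dependent value."""
    damping = 1.0 - np.exp(-y_star / A_PLUS)

    # Inner layer: mixing length with van Driest-type damping.
    l_mix = KAPPA_C * y * damping
    mu_t_inner = rho * l_mix**2 * np.abs(dudy)

    # Outer layer: Clauser constant-eddy-viscosity with Klebanoff intermittency.
    F = y * np.abs(dudy) * damping                 # vorticity function F(y)
    i_max = np.argmax(F)
    y_max, F_max = y[i_max], F[i_max]
    F_kleb = 1.0 / (1.0 + 5.5 * (C_KLEB * y / y_max)**6)
    mu_t_outer = ALPHA * C_CP * rho * y_max * F_max * F_kleb

    # Inner value up to the first crossover, outer value beyond it.
    if np.any(mu_t_inner > mu_t_outer):
        cross = np.argmax(mu_t_inner > mu_t_outer)
    else:
        cross = len(y)
    return np.where(np.arange(len(y)) < cross, mu_t_inner, mu_t_outer)
```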
Two cases are employed to demonstrate the behaviour of the baseline BL-local model.First is an incompressible case from SO (Ma ∞ set to 0.01 in our solver).The mean velocity and eddy viscosity are compared with the DNS data at Re θ = 2540 in figure 2. Note that μ t from DNS is evaluated by definition as μ t = − ρ u v /(∂ Ũ/∂y); μ t near the boundary layer edge is not displayed since both the numerator and denominator tend to zero.The predicted streamwise velocity is basically in line with DNS, showing the good capability of BL for incompressible flows.In the inner layer, μ + t faithfully follows the incompressible scaling by Johnson & King (1985) as where the last multiplier is termed the damping function.Note that (2.10) is more convenient than (2.7a-c) for comparisons between cases because ω does not explicitly appear.Away from the wall, the damping function is nearly unity, so the logarithmic scaling is formulated.In the outer layer, the peak value μ t,max = ρν t,max from the BL model is also close to the DNS (note that ρ is invariant), so the matching point y + mμ = 152 (y mμ = 0.18δ 99 ) is near the upper bound of the logarithmic region, above which the outer-layer scaling is formulated.Although μ t,o damps (by F Kleb ) more slowly than the DNS in the intermittent region, only a minor difference in Ũ appears due to the diminishing ∂ Ũ/∂y.
For the hypersonic cold-wall case M8Tw048R20 from ZDC, however, Ũ and T from the BL-local model have clear deviations from the DNS data, especially for T, as shown in figure 2(d).The discrepancy can be explained from figure 2(c).Compared with DNS, μ t,o is severely underestimated, so the matching point y * mμ = 67 (y mμ = 0.09δ 99 ) is quite low.Consequently, the region 67 < y * 300 (0.09 < y mμ /δ 99 0.27) is not formulated by the logarithmic scaling in (2.7a), leading to the errors in Ũ and T there.Meanwhile, T around the temperature peak (y * ∼ 7) is over-predicted, possibly due to the inaccurate Pr t there (specified in § 3.3).
In the following, the inner and outer-layer scalings of μ t and κ t are investigated separately using the DNS data.Three targeted modifications are proposed based on established relations.
Established relations and a priori examination
3.1. Inner-layer scaling
First, we demonstrate that the formulation of μ_t,i in the BL-local model is equivalent to applying the TL transformation. An analogous derivation was presented by Yang & Lv (2018) within the WMLES framework. Using the definition of μ_t and (2.7a-c), the mixing-length relation can be expressed in wall-viscous units and recast in semilocal form as (3.2). The left-hand side and the right-hand side of (3.2) are both functions of y*, so if the transformed inner-layer shear matches the incompressible counterpart well, then (2.7a) will be a highly accurate model of μ_t,i. For diagnostic purposes, a semilocal eddy viscosity μ*_t,TL is defined in (3.3), which is expected to match the incompressible μ_t,i (or ν_t,i). Equation (3.3) is examined in figure 3(a) using all the DNS data in table 1. Since we are concerned with the inner-layer part, μ*_t,TL is plotted in dotted lines after reaching its maximum. The reference line is the counterpart of (2.10) with y+ replaced by y* (Yang & Lv 2018), i.e. (3.4). There is only a small scattering of μ*_t,TL(y*) within and below the buffer layer, showing the accuracy of the TL transformation for that region. This also explains the well-predicted surface quantities of the BL-local model for hypersonic cases (Dilley & McClinton 2001). In the logarithmic region, μ*_t,TL tends to be lower than (3.4), especially for diabatic cases, suggesting an underestimated μ_t,i in BL-local. This is consistent with the behaviour of U+_TL, which can lead to over-prediction in the logarithmic region in diabatic cases. Nevertheless, figure 2(c) indicates that μ_t,i in the BL-local model is somewhat higher than DNS, rather than lower. This inconsistency is due to the discrepancy in T, on which the variables constructing the semilocal units (ρ̄ and μ̄) depend. As an alternative, we employ the GFM transformation to construct μ_t,i, an idea recently implemented by Griffin et al. (2023) and Hendrickson et al. (2023) for model improvement. The results using the very recent HLPP transformation will be discussed in Appendix B. The GFM transformation is shown to have better overall performance than TL for canonical air flows, particularly diabatic boundary layers in the logarithmic layer, though it can degrade for supercritical or non-air-like flows (Bai et al. 2022; Hasan et al. 2023b). This transformation is defined through two kernels: S+_TL, the TL transformation kernel defined above, and S+_eq, which results from the approximate balance of turbulence production and dissipation in the logarithmic region, as a modification to the arguments of Zhang et al. (2012). Similar to (3.2), the GFM-based mixing-length relation in the semilocal coordinate takes the form of (3.6). Accordingly, the semilocal eddy viscosity using GFM is given by (3.7). Notably, the GFM and TL transformations are designed only for the region within and below the logarithmic layer, so their accuracy for the outer region is not guaranteed. Consequently, they may not be directly used to modify the outer-layer μ_t. Our exploration to improve the μ_t,o modelling is presented below.
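The inner-layer diagnostic described above can be reproduced with a few lines of post-processing. In the sketch below, the eddy viscosity estimated from DNS-style profiles is normalized by the local mean viscosity (an assumption standing in for the precise definition of μ*_t,TL in (3.3)) and compared against a reference curve of the form κ_c y*[1 − exp(−y*/A+)]², our reading of (3.4); A+ is again taken constant.

```python
import numpy as np

KAPPA_C, A_PLUS = 0.40, 26.0   # A+ assumed constant here

def semilocal_diagnostics(y, rho, mu, tau_w, uv_flux, dudy):
    """Compare a DNS-style eddy viscosity with the incompressible reference curve
    in semilocal units. All inputs except tau_w are wall-normal profiles.
    uv_flux : Reynolds shear stress <u''v''> (negative in the buffer layer).
    The normalization mu_t/mu below is an assumption standing in for (3.3)."""
    u_tau_star = np.sqrt(tau_w / rho)          # semilocal friction velocity
    y_star = y * rho * u_tau_star / mu         # semilocal coordinate y*

    mu_t_dns = -rho * uv_flux / dudy           # mu_t = -rho<u''v''>/(dU/dy)
    mu_t_star = mu_t_dns / mu                  # semilocal normalization (assumed)

    damping = 1.0 - np.exp(-y_star / A_PLUS)
    mu_t_ref = KAPPA_C * y_star * damping**2   # incompressible-style reference curve
    return y_star, mu_t_star, mu_t_ref
```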
Outer-layer scaling
The derivation of the outer-layer scaling in the baseline BL model is briefly reviewed first, to set the grounds for possible modifications.
The crucial part of modelling μ t,o is the estimation of its peak value, as learned from figure 2. In (2.8), the peak value ν t,max is estimated first, then μ t,o is formed as ρν t,max before the diminishing by F Kleb .Consequently, μ t,o can continue to increase at y > y mμ due to the rising ρ (see figure 2c), rather than monotonically decreasing as in incompressible cases.Therefore, it is first a question whether ν t,max (then μ t,o = ρν t,max F Kleb , as in BL-local) or μ t,max (the maximum μ t , then μ t,o = μ t,max F Kleb which monotonically decreases with y) should be modelled.In hypersonic cases, ρ can vary considerably in the outer region, so the two strategies can lead to significant differences.From existing works, the scaling for μ t,max is scarce while that for ν t,max is prevailing, so the following investigation is mainly on ν t,max .As mentioned in § 2.3, Cebeci & Smith (1974) argued that ν t,max was proportional to the boundary layer thickness, and suggested two scalings, which also hold in compressible flows.The first closure constant α has been introduced in § 2.3, and the second one α 2 is 0.06-0.075.Although widely used, (3.8) is re-examined here using the DNS data for future reference.The ratios α 2,DNS = ν t,max /(u τ δ 99 ) and α DNS = ν t,max /(U e δ * k ) for all cases are shown in figure 4(a,b), as functions of the corresponding Reynolds numbers.For incompressible cases, α 2,DNS varies more slightly than α DNS with increasing Re.When compressible cases are included, however, α 2,DNS is the less robust one.In particular, α 2,DNS varies more than twice between the diabatic cases, which is not surprising due to its higher sensitivity to wall quantities by definition.
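The two ratios plotted in figure 4(a,b) follow directly from a mean velocity profile; the sketch below forms them from the kinematic displacement thickness, with ν_t,max assumed to be supplied (for instance as the maximum of the DNS eddy viscosity divided by the local density).

```python
import numpy as np

def outer_scaling_ratios(y, u_tilde, U_e, u_tau, delta99, nu_t_max):
    """Ratios alpha = nu_t,max/(U_e*delta*_k) and alpha_2 = nu_t,max/(u_tau*delta99),
    cf. the scalings (3.8a,b); delta*_k is the kinematic displacement thickness."""
    delta_k_star = np.trapz(1.0 - u_tilde / U_e, y)   # integral of (1 - u/U_e) dy
    alpha = nu_t_max / (U_e * delta_k_star)
    alpha_2 = nu_t_max / (u_tau * delta99)
    return alpha, alpha_2, delta_k_star
```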
For incompressible flows, the well-known velocity defect law provides another way to estimate ν_t,max without using the boundary layer thicknesses, as adopted in the BL model. Specifically, the Clauser defect law reads U+_e − Ũ+ = U+_df(η), where U+_df is the defect velocity and the outer scale η = y/Δ is based on the Rotta-Clauser boundary layer thickness Δ = U+_e δ*_k = ∫(U+_e − Ũ+) dy (note that η is no longer the transformed coordinate defined in § 2.2). Consequently, a collapse of the function in (3.9) is suggested, where the approximation holds in the outer layer where y* ≫ A+ (see (2.7c)). The last term is the scaled vorticity function in (2.9a), and the first three terms form the diagnostic function commonly used to evaluate κ_c, though the main focus here is on the outer layer. For a quantitative evaluation of (3.9) and then y_max F_max, the semiempirical relation by Monkewitz, Chauhan & Nagib (2007) for incompressible boundary layers in the large-Re limit is employed, which gives one explicit expression of U+_df(η). As a result, the outer-layer maximum of y∂Ũ+/∂y is equal to 5.55 at η = 0.155. Therefore, the ratio of ν_t,max to y_max F_max using (3.8a), i.e. (3.10),
is a constant (namely C cp , though not exactly the same value), which leads to (2.8) directly used in the compressible baseline BL model.Analogously, if ν t,max is from (3.8b), then the constant in (3.10) can be obtained from another form of incompressible defect law U + e − Ũ+ = U + df (y/δ 99 ).Inspired by (3.9) and (3.10), the compressible defect velocity scalings and their derivatives are examined below, to explore possible compressible corrections to (2.8).
As introduced in § 1, most of the existing compressible defect velocity scalings are based on the VD transformation. The VD transformation is built upon the mixing length assumption for the region within and below the logarithmic layer, so the outer layer is not the region it is designed for, like the other transformations (TL, GFM, etc.). Nevertheless, previous investigations actively support its usage for compressible defect velocity scalings (Maise & McDonald 1968; Fernholz & Finley 1980; Guarini et al. 2000; Pirozzoli et al. 2004; Duan et al. 2011; Pirozzoli & Bernardini 2011; Wenzel et al. 2018), suggesting its fundamentality in transforming compressible flows, though the diabatic cases are somewhat insufficiently examined. Therefore, the VD transformation is adopted here until more advanced ones are reported. Following the incompressible procedure above, a VD-based displacement thickness δ*_VD can be defined, following Smits & Dussauge (2006), as (3.11). The corresponding outer scale is η_VD = y/Δ_VD, where the transformed Rotta-Clauser thickness is Δ_VD = ∫(U+_VD,e − U+_VD) dy. Since VD may not be accurate enough in the inner layer, the transformed displacement thickness can alternatively be defined in a mixed manner, δ*_mix, where the integrand in the inner region (say y < 0.15δ_99) is replaced by the TL- or GFM-based defect velocity. Taking GFM as an example, the difference between δ*_mix and δ*_VD turns out to be less than 2.5 % and 4.5 % for the supersonic and hypersonic cases, respectively. Since δ*_VD is not finally used in the new model but only to inform the modification, we employ δ*_VD instead of δ*_mix hereafter for simplicity and consistency. Similar to (3.8), ν_t,max can be evaluated as ν_t,max = α_VD U_e δ*_VD = α_VD ν_e Re_δ*_VD, which is examined in figure 4(c) by computing α_VD,DNS = ν_t,max/(U_e δ*_VD) for all cases. The ratio α_VD,DNS experiences a comparably small variation to α_DNS. Besides, there is generally a decreasing trend of α_VD,DNS as Re_δ*_VD rises, which awaits future examination using higher-Re data. Notably, the diabatic cases exhibit no larger departure than the incompressible and adiabatic cases, even though the VD transformation tends to deteriorate for them. A possible reason is that (3.11) is defined in an integral manner, allowing error cancellation.
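The VD-based quantities used in this comparison are straightforward to evaluate numerically. The sketch below applies the standard van Driest transformation U_VD = ∫(ρ̄/ρ_w)^(1/2) dŨ and forms the transformed Rotta-Clauser thickness Δ_VD; the exact integrand of the displacement thickness in (3.11) is not reproduced.

```python
import numpy as np

def van_driest_defect_quantities(y, u_tilde, rho, rho_w, u_tau):
    """VD-transformed velocity and transformed Rotta-Clauser thickness.
    U_VD = int sqrt(rho/rho_w) dU (standard van Driest transformation);
    Delta_VD = int (U+_VD,e - U+_VD) dy, with 'e' taken at the last grid point."""
    dudy = np.gradient(u_tilde, y)
    integrand = np.sqrt(rho / rho_w) * dudy
    u_vd = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y))))
    u_vd_plus = u_vd / u_tau
    delta_vd = np.trapz(u_vd_plus[-1] - u_vd_plus, y)   # transformed Rotta-Clauser thickness
    return u_vd_plus, delta_vd
```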
Afterwards, the compressible defect velocity scalings are studied, to establish connections with the vorticity function.The form examined by Pirozzoli & Bernardini (2011) for supersonic adiabatic boundary layers is (U VD,e − U VD )/U VD,e = g 1 (y/δ 99 ), where g 1 is a universal function.This function is computed in figure 5(c) for all the DNS cases, and the results before the VD transformation are plotted in figure 5(a) for comparison.As can be seen, g 1 for diabatic cases deviates more significantly from the incompressible counterpart.As an alternative to g 1 , the VD-based analogy to the Clauser defect law takes the form of U + VD,e − U + VD = g 2 (η VD ), which is examined in figure 5(b) between cases.For the present datasets, g 2 tends to achieve a somewhat better collapse than g 1 .We proceed to study the derivatives of these defect velocities, to connect with the diagnostic function and then the vorticity function.The diagnostic function used in (2.8) is displayed in figure 5(d), where a few strongly oscillating curves are not shown.Note that y∂ Ũ+ /∂y is plotted instead of y∂ Ũ/∂y for direct connection with ν t,max (see (3.10), there is a factor u τ in (3.8) or Δ).The main focus is on the outer layer, so the region y < 0.15δ 99 is plotted in dotted lines.If the outer-layer peaks of y∂ Ũ+ /∂y are collapsed between cases, then the ratio ν t,max /(y max F max ) would be nearly invariant, which would then provide a robust modelling for μ t,o , as derived in (3.10) for incompressible cases.First, the incompressible cases in figure 5(d) suggest a rough outer-layer similarity at this Re range (Re τ 400, Re δ 2 1400), though the collapse is not as close to perfect as in the large-Re limit (Nagib, Chauhan & Monkewitz 2007).For the compressible cases, the peak locations in figure 5(d) all concentrate around 0.7δ 99 , but the maximums vary considerably, especially for some cold-wall cases, even if the low-Re cases are excluded.Consequently, μ t,o can be severely under-predicted, as is observed in figure 2(c).For the VD-based defect velocity g 2 (η VD ), the outer-layer relation analogous to (3.9) is derived as , where y * A + . (3.12) As noted by Smits & Dussauge (2006), (3.12) indicates a proper velocity scale in the outer layer to be u τ / ρ+ , which can be used to extend Coles' law of the wake (Maise & McDonald 1968;Huang et al. 1993;Hasan et al. 2023a).Furthermore, the factor ρ+ turns out to be fundamental to account for compressibility effects, and was suggested by Catris & Aupoix (2000) and Otero Rodriguez et al. 
(2018) to improve the eddy viscosity and diffusivity in sophisticated RANS models. The transformed function y∂U+_VD/∂y is plotted in figure 5(e) in terms of η_VD. Although obvious scattering in peak values and locations is still observable, a better collapse is attained compared with figure 5(d), demonstrating the effectiveness of the VD density weighting. Therefore, it is suggested that the VD-based vorticity function can be utilized in the outer-layer scaling of the BL model, to replace the original one without compressible corrections. From (3.12), the vorticity function in (2.9a) is modified by simply adding a factor ρ̄+, giving (3.14). Turning to the temperature closure, the turbulent Prandtl number Pr_t is examined in figure 6(a) using the DNS data from ZDC. At y* ≳ 25, the three Pr_t profiles are quite close to each other, slowly decreasing towards the boundary layer edge. Near the temperature peak, however, ∂T/∂y and the wall-normal turbulent heat flux both change signs, so Pr_t can be negative or even singular. This singularity is also observable in the adiabatic case M2p5Tw10R17, though very close to the wall (y* ≈ 2). Meanwhile, Pr_t between the singular point and the wall is generally lower than 0.5 for the three cases. The rapid variation of Pr_t can lead to prediction errors for the temperature peak, as discussed for figure 2(d). For further demonstration, figure 6(b) compares three runs with different Pr_t using the BL-local model (case M8Tw048R20). At y* ≳ 100, T is quite insensitive to Pr_t, but at y* ≲ 100, a continuous decrease of T is observed as Pr_t drops, owing to the enhanced turbulent heat flux. The case Pr_t = 0.8 seems to predict the temperature peak best. For the colder-wall case M6Tw025R11, an even lower Pr_t = 0.6 is required in a similar test (not shown) to capture the peak temperature reasonably. Therefore, accurate modelling of Pr_t is significant for near-wall temperature prediction. A constant Pr_t = 0.9 can lead to an over-predicted peak temperature for cold-wall cases.
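The sign changes and the singularity described above follow directly from the definition Pr_t = μ_t c_p/κ_t once both eddy coefficients are written in terms of the turbulent fluxes and mean gradients; the sketch below evaluates this diagnostic from DNS-style profiles (variable names are illustrative).

```python
import numpy as np

def turbulent_prandtl(y, u_tilde, T_tilde, uv_flux, vT_flux):
    """Pr_t = (<u''v''> dT/dy) / (<v''T''> dU/dy), i.e. mu_t*c_p/kappa_t with
    mu_t = -rho<u''v''>/(dU/dy) and kappa_t = -rho*c_p*<v''T''>/(dT/dy).
    Pr_t changes sign where dT/dy does and blows up where <v''T''> -> 0, which is
    the behaviour seen near the temperature peak in cold-wall cases."""
    dudy = np.gradient(u_tilde, y)
    dTdy = np.gradient(T_tilde, y)
    return (uv_flux * dTdy) / (vT_flux * dudy)
```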
Besides constancy, Pr t can be modelled as a function of the wall-normal height (Abe & Antonia 2017;Huang, Duan & Choudhari 2022;Huang et al. 2023); for example, the expression by Subbareddy & Candler (2012) for boundary layers is Pr t = 1 − 0.25y/δ 99 .This formula is also examined in figure 6(b) within the BL model.The resulting temperature is close to the Pr t = 1 case and thus inaccurate for near-wall temperature prediction.Other available Pr t formulae are expected to have analogous behaviours because Pr t are all designed to monotonically decrease away from the wall, though most of the formulae are modelled based on channel flows.As discussed above, capturing the non-monotonic and singular behaviour of Pr t near the wall is crucial, but it seems difficult to summarize a universal expression of Pr t in the near-wall region.The location of the singular point may correlate with the temperature peak, but the Pr t farther below towards the wall also differs considerably from case to case.Ad hoc fitting of Pr t for each case is plausible, but no universality is guaranteed.Consequently, we seek alternatives to specifying Pr t .As mentioned in § 1, the algebraic TV relation by Duan & Martín (2011) and Zhang et al. (2014) is remarkably accurate for adiabatic and diabatic boundary layers, and channel and pipe flows.It is thus incorporated for efficient and accurate temperature prediction in several recent works within the frameworks of ODE solvers and WMLES (Chen et al. 2023a;Griffin et al. 2023;Hasan et al. 2023a;Song et al. 2023).Thereby, the TV relation is utilized to improve the temperature prediction in the BL model.
The quadratic algebraic relation is written as (3.16), where Ũ_δ and T̃_δ are the values at the boundary-layer edge δ (specified later), and T_r = T_∞(1 + rEc_∞/2) is the recovery temperature, with r = Pr^(1/3) the recovery factor. The closure constant C_T is determined to be 0.8259 by Duan & Martín (2011) and is recast as sPr by Zhang et al. (2014), where the Reynolds analogy factor s is 1.14 following convention. The examination of (3.16) for the deployed DNS datasets is not presented since it has been extensively verified (e.g. Zhang et al. 2018; Modesti & Pirozzoli 2019; Fu et al. 2021; Zhang et al. 2022; Griffin et al. 2023). Its implementation into the model is discussed below. As in the works of Song et al. (2023) and Griffin et al. (2023), the TV relation is incorporated here; for the present two-layer strategy it is recast, with the edge quantities accompanied by samples at the matching height y_mT, to give (3.17), where Ũ_m is the velocity sample at y_mT. Within the framework in § 2.2, the energy equation (2.4b) with constant Pr_t is solved first throughout the boundary layer. As a second step, (3.17) is used to update T at y < y_mT once within each iteration, based on the quantities at y_mT and δ. If solving the full RANS equations (2.1), then (3.17) can be directly imposed during the temporal stepping, subject to an appropriate initial field. Note that only zero-order continuity of T at y_mT is guaranteed by (3.17). Actually, such a high-order discontinuity also exists in μ_t at y_mμ (see figure 2). Numerical results prove negligible effects of the discontinuities on the mean profiles after the computation converges. Inspired by figure 6(b), y*_mT is fixed at 100 throughout. A sensitivity study of y*_mT is conducted in Appendix A, suggesting minor differences in the mean temperature with y*_mT varied from 50 to 150. In the spirit of the BL model, the explicit determination of the boundary layer edge, required by Ũ_δ and T̃_δ, should be avoided. Consistent with (2.9a), δ is set to y_max/C_Kleb (y_VD,max/C_Kleb if using (3.14)), and the final results turn out to be quite insensitive to δ. As a final remark, (3.17) is designed for fully developed turbulence and becomes invalid for laminar flows (however, see the recent results of Mo & Gao (2024)), so the numerical transition process mentioned in § 2.2 should be avoided; in other words, the initial profile should be placed downstream in the turbulent regime.
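A minimal sketch of the TV relation is given below. The quadratic form used here, with closure constant C_T = 0.8259 = sPr, is our reading of (3.16); the recovery temperature follows the expression quoted above, and the sample-based recasting of (3.17) is not reproduced, so the reference state is left generic (edge or matching-point values).

```python
import numpy as np

C_T = 0.8259   # closure constant of Duan & Martin (2011), recast as s*Pr by Zhang et al. (2014)

def recovery_temperature(T_inf, Ec_inf, Pr):
    """T_r = T_inf*(1 + r*Ec_inf/2) with recovery factor r = Pr**(1/3), as in the text."""
    return T_inf * (1.0 + Pr**(1.0 / 3.0) * Ec_inf / 2.0)

def tv_temperature(u, u_ref, T_ref, T_w, T_r):
    """Quadratic temperature-velocity relation (our reading of (3.16)):
    T = T_w + (T_r - T_w)*f(u/u_ref) + (T_ref - T_r)*(u/u_ref)**2,
    with f(x) = (1 - C_T)*x + C_T*x**2. Reference values may be taken at the
    boundary-layer edge or, in the two-layer strategy, at the matching height y_mT.
    For an adiabatic wall (T_w = T_r) this reduces to the Walz relation."""
    x = np.asarray(u) / u_ref
    f = (1.0 - C_T) * x + C_T * x**2
    return T_w + (T_r - T_w) * f + (T_ref - T_r) * x**2
```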
Modification summary
As described in § § 3.1-3.3,three well-established mean flow relations are employed to modify the BL-local model.Two modifications are made to the two-layer μ t in (2.6).To be specific, the GFM transformation is utilized for the inner layer μ t,i (3.7), and the VD transformation is adopted for the outer-layer μ t,o (3.14).This version, without modifying the temperature relation, is termed the BL-GFM-VD model.Furthermore, one modification is made to the inner layer temperature.The algebraic TV relation (i.e.(3.17)) is enforced at y < y mT , instead of specifying Pr t and solving the energy equation.This final version is called the BL-GFM-VD-TV model.For incompressible flows with constant thermodynamic properties, the two modified models (and also BL-local) degenerate to the original version by Baldwin & Lomax (1978).
Results of the modified BL models
The two modified BL models are comprehensively evaluated using the 12 DNS cases.The primary quantities of interest are the mean profiles, and the wall and integral quantities.The fluctuation statistics are not expected to agree well with DNS considering the great simplicity in RANS.
The cold-wall Ma8Tw048R20 case from ZDC is focused on first, as examined in figure 2. The mean streamwise velocity and temperature predicted by the three BL models are shown in figure 7 in the inner scale.The distributions of μ t and Pr t are also displayed for diagnostic purposes.As mentioned above, the BL-local model underestimates the outer-layer μ t for this case, leading to a smaller u τ and an upward tilting of Ũ+ in the outer region, compared with the DNS data.This tilting of Ũ+ suggests an under-predicted Ũ/U ∞ = Ũ+ /U + ∞ in the logarithmic region, as displayed in figure 2(d).A notable improvement for the outer-layer Ũ+ is observed in the BL-GFM-VD model, and as a step forward, Ũ+ (and also Ũ/U ∞ in the outer scale) from BL-GFM-VD-TV is closely in line with DNS.From figure 7(c), the matching point y * mμ = 242 (y mμ = 0.24δ 99 ) is much higher than that in BL-local (y * mμ = 67 in figure 2c), owing to the lifted μ t,o after utilizing the VD transformation.Consequently, the region y * 242 obeys the logarithmic scaling formulated by the GFM transformation.In tandem with the more accurate Ũ, clear improvement in predicting T is observed in figure 7(b) from BL-local to BL-GFM-VD.Nevertheless, the predicted peak temperature and that in the logarithmic region are still higher than DNS.Further improvement coinciding with DNS is realized in the BL-GFM-VD-TV model, where the inner layer T is formulated by (3.17).The excellent agreement of T in the BL-GFM-VD-TV model can be explained from a posteriori diagnosis of Pr t , which is computed by substituting the BL results back to the energy equation (2.1c) or (2.4b).As shown in figure 7(d), Pr t at y * > y * mT is 0.9 by construction.At y * < y * mT , Pr t is not a constant but varies in a qualitatively consistent manner with DNS; a singular point also appears at y * ≈ 8.The slight discontinuity in Pr t around y * mT is due to the matching procedure, as noted in § 3.3.In short, by employing the TV relation, the intricate variation of Pr t near the wall can be modelled, leading to an accurate temperature prediction combined with an appropriate μ t .
Similar intermodel comparisons are made for the remaining DNS cases.For conciseness, we demonstrate below the mean-flow profiles of three representative cases from different sources: one cold-wall case from ZDC (M6Tw025R11); one adiabatic-wall case from PB (M4Tw10R21); one heated-wall case from VBL (M2Tw19R3).Afterwards, more attention will be paid to diabatic cases, and the prediction errors of the remaining cases will be summarized collectively.The results of the cold-wall M6Tw025R11 case are shown in figure 8 in both the inner and outer scales.The velocity prediction from the BL-GFM-VD model is close to BL-GFM-VD-TV, especially in the outer scale, so the former is not displayed.The model comparisons for velocity exhibit different features in the inner and outer scales.Both models tend to under-predict u τ , thus leading to higher Ũ+ in the outer region; Ũ+ by BL-GFM-VD-TV tends to deviate more.If expressed in the outer scale, nevertheless, Ũ by BL-GFM-VD-TV follows the DNS trend more closely, especially in the logarithmic region due to the elevated μ t,o and hence a higher y mμ (y * mμ rises from 80 in BL-local to 224).The reversed trends in the inner and outer scales will be revisited later, along with more quantitative measurements of the prediction errors.The temperature predictions of the three BL models strongly resemble those in figure 7. The BL-local model severely over-predicts T in most regions (y < 0.7δ 99 ) within the boundary layer, especially when expressed in the outer scale (figure 8d).Though the error in the logarithmic region is reduced in the BL-GFM-VD model owing to the improved logarithmic scaling, clear over-prediction still exists due to the overrated Pr t in the near-wall region.After employing the TV relation, excellent agreement with DNS is obtained in the BL-GFM-VD-TV model.
The adiabatic case M4Tw10R21 is considered in figure 9.As extensively examined in previous works (see § 1), the BL-local model well reproduces the mean velocity, but T in the logarithmic region is over-estimated.After using BL-GFM-VD and BL-GFM-VD-TV, the high accuracy for the velocity is retained with even a better prediction for Ũ+ , and the prediction for T is significantly improved.Again, this is attributed to the elevated μ t,o and y mμ , so the logarithmic scaling in (3.6) is more strictly followed up to y * mμ = 210 for this case.The TV relation is of limited help for this adiabatic case because the singular point of Pr t , if it exists, is down to the viscous layer (see figure 6a), where κ is dominant over κ t .Away from the singular point, Pr t varies moderately and can be well approximated by a constant.For the heated-wall case in figure 10, the same conclusion as for figures 8 and 9 is drawn.Good agreement with DNS is realized for Ũ and T in the BL-GFM-VD-TV model.The improvement for velocity is more obvious when expressed in the outer scale.The mild under-prediction in u τ is presumably related to the relatively low Re δ 2 in the M2Tw19R3 and also M6Tw025R11 cases, so the outer-layer similarity of the defect velocity diminishes (see figure 5).
More attention is paid to the diabatic cases considering their ubiquity in hypersonic applications and difficulty in prediction by the baseline model.The temperature predictions for four more diabatic cases (VBL-M2Tw05R13, VBL-M5Tw19R7, ZDC-M6Tw076R17 and ZDC-M14Tw018R24) are shown in figure 11, covering the lowest and highest Ma ∞ in the dataset.In addition to the same conclusions as for figures 8-10, two more points are concluded.For the heated and moderately cooled wall cases (figure 11b,c, T w /T r > 0.5), abundant improvement in T is achieved by BL-GFM-VD, over the BL-local model.For the highly cooled wall cases (figures 11a,d and 8c, T w /T r ≤ 0.5), however, modifying only μ t is insufficient due to the intricate variation of Pr t near the wall.The TV relation should be further incorporated for accurate temperature prediction.As demonstrated in figure 11(e, f ), the Pr t profiles from BL-GFM-VD-TV well match the DNS trends for these highly cooled wall cases, so the non-monotonic behaviour and the singularity of Pr t can be reasonably modelled.For reference, the velocity predictions for the four diabatic cases are also demonstrated in figure 12.In both the inner and outer scales, the velocities by the BL-GFM-VD-TV model are in good agreement with the DNS data.
Quantitative measurements of the prediction accuracy are presented using two types of relative errors to DNS, which are defined in terms of the inner-scale logarithmic coordinate and the outer-scale normal coordinate, respectively, as The two errors for temperature, lg,T and n,T , are defined likewise.The upper limits of the integral y up for all four are fixed at 1.1δ 99 .The four errors for all cases are displayed in figure 13 in the order of case numbers.Note that the algebraic averaged errors between locations are displayed for the cases with multiple streamwise locations.
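The exact error integrals are elided in the text above; the sketch below implements one plausible reading, namely profile-averaged absolute relative deviations weighted by d(lg y+) for the inner-scale error and by dy for the outer-scale error, integrated up to y_up = 1.1δ_99.

```python
import numpy as np

def relative_errors(y, y_plus, q_model, q_dns, delta99):
    """Inner-scale (logarithmic) and outer-scale relative errors of a profile q
    against DNS, integrated up to y_up = 1.1*delta99. The precise weighting used
    in the paper is not reproduced; the forms below are one plausible reading."""
    mask = (y <= 1.1 * delta99) & (y_plus > 0.0)
    rel = np.abs(q_model[mask] - q_dns[mask]) / np.abs(q_dns[mask])

    lg_y = np.log10(y_plus[mask])
    err_lg = np.trapz(rel, lg_y) / (lg_y[-1] - lg_y[0])           # inner, d(lg y+) weighting
    err_n = np.trapz(rel, y[mask]) / (y[mask][-1] - y[mask][0])   # outer, dy weighting
    return err_lg, err_n
```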
As mentioned above, the BL-local model satisfactorily reproduces Ũ for the adiabatic cases (numbers 1-4), with lg,U < 1.0 % and n,U < 1.6 % for all examined.These are also the accuracy levels of the incompressible BL model.In the presence of surface heat transfer, the two U rise a bit, and n,U reaches over 3 % for the four hypersonic cases.Meanwhile, the temperature prediction has poor performance, where n,T exceeds 10 % for the five hypersonic diabatic cases and even surpasses 20 % for case 12 with a heated wall.In comparison, the velocity prediction is moderately improved using the BL-GFM-VD-TV model.The mean lg,U for all cases is reduced from 1.2 % to 0.7 %, and the mean n,U is from 2.5 % to 1.3 %.More importantly, the two U from BL-GFM-VD-TV do not exhibit an obviously increasing trend with Ma lifted or wall-cooling strengthened, indicating enhanced model robustness by employing established relations.It is worth mentioning that case 7 (M6Tw025R11) is the only one where lg,U is not improved, as discussed for figure 8.This is acceptable considering the imperfect collapse of the outer-layer scaling in figure 5 due to compressibility and low-Re effects.Compared with the velocity counterpart, a more significant improvement is realized in the temperature prediction.After deploying BL-GFM-VD-TV, lg,T and n,T for all cases are notably decreased.As an overall measure, the mean lg,T is reduced from 1.4 % to 0.4 %, and the mean n,T is reduced from 8.8 % to 3.4 %.Moreover, we plot in figure 13(b) the lg,T from BL-GFM-VD.It is between the errors of BL-local and BL-GFM-VD-TV for each case, demonstrating that the improved temperature prediction is jointly contributed by the velocity transformations and the TV relation.
Besides the mean profiles, the wall and integral quantities are also of interest in RANS. The comparison between the BL models and DNS is listed in table 2 for the five ZDC cases at specific Re_τ, where H is the shape factor, and C_f and B_q are the non-dimensional surface friction and heat flux. First, much better predictions of integral quantities, such as H, are realized by the BL-GFM-VD-TV model, attributable to the overall improvement in the mean profile shapes. Regarding the wall quantities, the BL-local model employs the TL transformation for the inner-layer μ_t (see § 3.1). Since TL provides accurate scaling in the viscous layer, the wall quantities can be well predicted by the BL-local model, including for diabatic cases, as demonstrated by Dilley & McClinton (2001). After using BL-GFM-VD-TV, some improvements in the wall quantities can be observed, especially for u_τ and C_f. Nevertheless, not all cases are improved at the specific, limited streamwise locations, in particular the M6Tw025R11 case of relatively low Re_δ2. A more comprehensive evaluation of the wall quantities is anticipated based on data at a set of locations in each case.
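For reference, the integral and wall quantities compared in table 2 can be evaluated directly from a mean profile. The sketch below assumes the conventional compressible definitions of the density-weighted displacement and momentum thicknesses and of C_f based on edge quantities; the exact conventions of the DNS references may differ in detail.

```python
import numpy as np

def shape_factor_and_cf(y, rho, u, mu_w, rho_e, u_e):
    """Shape factor H and skin-friction coefficient C_f from a mean profile.

    Assumes the first grid point is at the wall (y[0] = 0, u[0] = 0) and uses
    standard compressible integral definitions; B_q and other conventions are
    not reproduced here.
    """
    f = rho * u / (rho_e * u_e)
    delta_star = np.trapz(1.0 - f, y)            # displacement thickness
    theta = np.trapz(f * (1.0 - u / u_e), y)     # momentum thickness
    H = delta_star / theta                        # shape factor
    tau_w = mu_w * u[1] / y[1]                    # wall shear from first point
    cf = 2.0 * tau_w / (rho_e * u_e ** 2)
    return H, cf
```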
5. Discussions and summary
5.1. Discussions

The present work is designed for ZPG boundary layers, so its applications and limitations are discussed further. First, we demonstrate a promising framework for incorporating well-established mean-flow relations to improve the BL wall model. The module-style modification allows direct substitution of more accurate relations in the future. Second, a highly efficient and accurate mean-flow solver is feasible using the modified BL model, which can be further combined with, for example, resolvent analysis to obtain the fluctuation characteristics over a wide parameter space (Ma, Re, wall cooling, etc.), as done by Cossu, Pujals & Depardon (2009) for incompressible boundary layers and by Chen et al. (2023a,b) for compressible channels. Third, the two modified models can be applied to high-speed channel and pipe flows, though we anticipate that the improvement over the BL-local model will not be as large as for the boundary layer cases. Two possible reasons are that, for channel and pipe flows, the TL transformation used for the inner layer is particularly accurate, and Pr_t has no singularities in the near-wall region.

As discussed in § 3, the present modifications are limited to attached thin-layer flows (where |ω| = |∂Ũ/∂n| or |∂Ũ/∂y|), so they become inapplicable when other components of ω are non-negligible. In this sense, the modified BL model is less general than the baseline one. On the one hand, it is known that computing attached thin-layer flows is the strength of the BL model (and other algebraic ones) over more sophisticated turbulence models, so the present modifications can help maximize this strength. On the other hand, further extensions to more general flows with pressure gradients, separation, etc., are under investigation and will be reported separately. Taking the pressure-gradient effects as an example, model improvement is anticipated owing to the following evidence. A valuable DNS database was elaborated by Wenzel et al. (2019) for supersonic boundary layers (Mach 2) under both favourable and adverse pressure gradients. Their results, and also those of Bai et al. (2022), actively support the use of various velocity transformations for the inner layer. With rising adverse-pressure-gradient strength, the VD transformation increasingly underestimates the velocity in the outer region, but it significantly reduces the compressibility effects. Moreover, Gibis et al. (2019) suggested appropriate compressible scalings for the outer-layer self-similarity under pressure gradients, which could, for example, improve the Rotta-Clauser parameter used in the CS model. Nevertheless, more non-ZPG DNS databases in a range of Ma and wall-cooling conditions are highly desired.
5.2. Summary
Different forms of mean-flow relations for high-speed wall-bounded turbulence, including the velocity transformations and the algebraic TV relation, have been established in previous works with increasing accuracy. The combination of these relations enables an efficient and accurate recovery of the mean flow as an inverse problem, which is a solid means of accommodating compressibility effects. This idea is utilized in the present work, whose core objective is to systematically improve the BL wall model for ZPG boundary layers using various established scalings, so that BL can achieve accuracy comparable to its incompressible counterpart. Only well-established relations are adopted, and we avoid introducing any new functions or coefficients fitted by ourselves. Twelve published DNS datasets are employed for a priori inspiration and a posteriori examination. A large parameter space is covered, with Ma_∞ ranging from 2 to 14 under adiabatic, cold and heated wall conditions (T_w/T_r from 0.18 to 1.9).
The baseline BL-local model is the classical one widely used in numerous commercial solvers; it uses semilocal units in the inner-layer damping function and assumes a prescribed Pr_t distribution. The baseline model reproduces the velocity well for adiabatic cases, but deteriorates in the presence of surface heat transfer. Meanwhile, the temperature prediction deviates appreciably from DNS in both adiabatic and diabatic cases.
Three modifications are made to the formulations of μ_t and T, corresponding to the three shortcomings of the BL-local model. First, we show that the inner-layer scaling of μ_t in BL-local is equivalent to applying the TL transformation, which degrades in the logarithmic region of diabatic boundary layers. Therefore, we adopt the GFM transformation instead, for improved logarithmic scaling of μ_t. Second, the outer-layer μ_t, and thus the matching location, can be severely underestimated (y*_mμ down to 40-60) in BL-local, so the logarithmic scaling above this low y*_mμ is not followed. For improvement, we adopt the VD transformation in the outer layer based on the compressible defect-velocity scaling. Third, the inner-layer temperature in cold-wall cases is quite sensitive to Pr_t, and Pr_t varies considerably near the wall and exhibits a singularity around the temperature peak. Since a unified model of Pr_t near the wall is lacking, we design a novel two-layer formulation of T. The energy equation with constant Pr_t is solved only in the outer layer (y > y_mT), and the inner-layer (y < y_mT) temperature is given by the algebraic quadratic TV relation. The modified model is termed BL-GFM-VD-TV, where the latter three acronyms denote the three modifications, respectively; see § 3.4.
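A minimal sketch of this two-layer temperature construction is given below. It assumes the widely used quadratic (generalized Reynolds-analogy) form of the TV relation with an effective recovery factor, and a simple edge-based matching; the exact coefficients and the matching treatment adopted in § 3.3 may differ.

```python
import numpy as np

def quadratic_tv_temperature(u, u_e, T_w, T_e, c_p, r_g=0.87):
    """Quadratic temperature-velocity (TV) relation for the inner layer.

    Walz/Zhang-type generalized Reynolds analogy with an effective recovery
    factor r_g; the coefficients used in the paper's (3.16) may differ.
    """
    T_rg = T_e + r_g * u_e ** 2 / (2.0 * c_p)     # recovery temperature
    ubar = u / u_e
    return T_w + (T_rg - T_w) * ubar + (T_e - T_rg) * ubar ** 2

def two_layer_temperature(y, u, y_mT, solve_outer_energy, **tv_kwargs):
    """Assemble the two-layer mean temperature described in the text:
    the energy equation is solved only for y > y_mT (outer layer), while the
    algebraic TV relation fills the inner layer y <= y_mT.
    `solve_outer_energy` is a placeholder for the outer-layer energy solve;
    in the paper the TV relation is rewritten so that it matches the
    temperature sample at y_mT, which this sketch does not reproduce.
    """
    T = np.empty_like(y)
    outer = y > y_mT
    T[outer] = solve_outer_energy(y[outer])       # outer layer: energy eq.
    T[~outer] = quadratic_tv_temperature(u[~outer], **tv_kwargs)
    return T
```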
Numerical results for all the DNS cases demonstrate that the three modifications take effect as expected. For the mean streamwise velocity, the high accuracy of the BL-local model for adiabatic cases is retained, while that for diabatic cases is improved, especially in the logarithmic region. Meanwhile, a significant improvement in the temperature is realized for both adiabatic and diabatic cases, such that T is in close agreement with DNS in most cases, which has not been achieved before. The mean relative errors of T to DNS over all cases are down to 0.4 % measured in the logarithmic wall-normal coordinate and 3.4 % in the outer coordinate, only around one-third of those of the baseline model. Furthermore, a posteriori diagnosis suggests that the non-monotonic and singular behaviours of Pr_t in cold-wall cases can be modelled. We emphasize that modifying only μ_t is insufficient for an accurate temperature prediction in highly cooled wall cases; the TV relation should be further incorporated in the near-wall region. Future work will address possible extensions of the modified models to more complex configurations and their behaviour in flows with moderate pressure gradients.

The transformed velocities U⁺_tsf using TL, GFM and HLPP are shown in figure 16 for all the DNS cases (Reynolds-averaged values), and the relative errors to DNS are displayed as a function of Ma_τ, following Hasan et al. (2023b). The mean errors (also defined based on Reynolds averages) of GFM and HLPP over all cases are approximately the same, equal to 2.2 % and 2.4 %, respectively. Note that the errors in figure 16 are defined somewhat differently from Hasan et al. (2023b) (their figure 2), because the present errors are based on the absolute value of the velocity difference and are hence non-negative by definition (see equation (8) in Griffin et al. (2021) and (4.1a,b) above).
The HLPP transformation can also be employed, following the procedures in § 3.1, to improve the inner-layer scaling in the BL-local model. After incorporating VD for the outer layer and the TV relation, the final model can be termed BL-HLPP-VD-TV. We have also implemented this model and find that the mean relative errors in predicting velocity and temperature are comparable with those of BL-GFM-VD-TV, as expected from figure 16. Specifically, the four mean errors are ε_lg,U = 0.6 %, ε_lg,T = 0.4 %, ε_n,U = 1.3 % and ε_n,T = 3.5 %.
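For concreteness, the two classical velocity transformations referred to above, VD and TL, can be evaluated from a mean profile as sketched below; the GFM and HLPP transformations require additional stress-based modelling and are not reproduced here, and the trapezoidal integration is simply an implementation choice.

```python
import numpy as np

def van_driest(u_plus, rho, rho_w):
    """Van Driest (VD) transformation: U_VD+ = integral of sqrt(rho/rho_w) dU+."""
    fac = np.sqrt(rho / rho_w)
    increments = 0.5 * (fac[1:] + fac[:-1]) * np.diff(u_plus)
    return np.concatenate(([0.0], np.cumsum(increments)))

def trettel_larsson(u_plus, y, rho, mu, rho_w):
    """Trettel-Larsson (TL) semilocal transformation in its standard form."""
    drho = np.gradient(rho, y)
    dmu = np.gradient(mu, y)
    fac = np.sqrt(rho / rho_w) * (1.0 + 0.5 * drho / rho * y - dmu / mu * y)
    increments = 0.5 * (fac[1:] + fac[:-1]) * np.diff(u_plus)
    return np.concatenate(([0.0], np.cumsum(increments)))
```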
Figure 2. (a,c) Eddy viscosity and (b,d) mean streamwise velocity and temperature (only compressible case) from the baseline BL-local model and DNS for the (a,b) incompressible SO-M0R25 case and (c,d) hypersonic ZDC-M8Tw048R20 case.
Figure 3. Semilocal eddy viscosity using the (a) TL (as in the BL-local model) and (b) GFM transformations from the DNS datasets. The legends for panels (a,b) are the same, separately shown in the two boxes.
Figure 4. Maximum kinematic eddy viscosity scaled by different variables (see (3.8a,b) and (3.11)) from the DNS data. The diamonds are for incompressible cases, and the circles and triangles are for adiabatic and diabatic ones, respectively. The symbol colours follow the usage in figure 3.
Figure 5. Different forms of (a-c) defect velocities and (d-f) diagnostic functions for all cases. The reference lines in panels (d-f) are from (3.10). The legends for all the panels are the same, separately shown in the three boxes.
Figure 8. Mean (a,b) streamwise velocity and (c,d) temperature in the (a,c) inner and (b,d) outer scales from different BL models for case ZDC-M6Tw025R11 (cold wall).
Figure 13. Relative errors of different BL models to DNS for the mean (a,c) streamwise velocity and (b,d) temperature, measured in terms of (a,b) logarithmic and (c,d) normal coordinates, respectively, where (a) ε_lg,U, (b) ε_lg,T, (c) ε_n,U, (d) ε_n,T. The horizontal lines are their average errors. For reference, the Ma_∞ and T_w/T_r in each case are plotted in panels (e,f). Cases 1-4 are adiabatic walls, 5-10 are cold walls and 11-12 are heated walls, as categorized in the shaded areas.
Figure 14. Mean streamwise velocity and temperature from the standard BL model for cases (a) M6Tw025R11 and (b) M14Tw018R24. The reference data are from (a) Hendrickson et al. (2023) and (b) our RANS solver.
As shown in figure 3(b), μ*_t,GFM experiences smaller scattering than μ*_t,TL at y* below about 70. Also, μ*_t,GFM in the logarithmic region follows (3.4) more closely, demonstrating better robustness for modelling μ_t,i than (2.7a).

Figure 6. (a) Turbulent Prandtl numbers in different DNS cases from ZDC and (b) the mean temperature using the BL-local model with different Pr_t for case M8Tw048R20.

As stated in § 2.3, the eddy diffusivity in RANS models for temperature prediction is obtained mostly through a specified Pr_t. Pr_t is within 0.7-1.1 in most regions above the buffer layer and is typically equal to 0.85 in the logarithmic region, insensitive to Ma, Re and wall cooling (Huang, Coleman & Bradshaw 1995; Duan et al. 2010; Pirozzoli & Bernardini 2011; Lusher & Coleman 2022). However, Pr_t can experience complicated variations within and below the buffer layer, especially in cold-wall boundary layers.

The final form (3.14) also turns out to reach a compromise between the modelling of ν_t,max and μ_t,max, so μ_t,o ~ ρ^{1/2} F_Kleb and ν_t,o ~ ρ^{-1/2} F_Kleb. As a natural consequence, (3.14) produces the same results as (2.8) for incompressible flows with constant density. The empirical function F_Kleb(y/y_VD,max) remains unmodified as in (2.9b) before acquiring more theoretical guidance. Also, the closure constants α, C_cp and C_Kleb remain unaltered, consistent with the incompressible version. Finally, since the quantity y_VD,max/C_Kleb used in F_Kleb represents an estimate of the boundary layer thickness, its relation to δ_99 is examined in figure 5(f) by plotting the transformed function in terms of y/δ_99. For most cases, y_VD,max/δ_99 is within 0.6-0.8, but it seems closer to the boundary layer edge than y_max/δ_99, suggesting slower damping of μ_t,o in the intermittent region.

3.3. The TV relation

In Hasan et al. (2023a) and related works, (3.16) is used throughout the boundary layer. In comparison, Griffin et al. (2023) adopt it only in the near-wall region with the upper boundary condition matched by LES. The latter strategy is preferred for two reasons. First, T in the outer layer is already insensitive to Pr_t, and Pr_t also varies mildly there. Second, by solving the original energy equation (2.1c) in the outer layer, more flow information is retained, which is more applicable to general flows. To realize the multilayer formulation, a matching location y_mT < δ within the boundary layer is required, above which (2.1c) is solved and below which (3.16) is enforced. The resulting temperature formulation is thus a two-layer structure, analogous to μ_t in (2.6). The outer boundary condition for (3.16) is now the temperature sample at y_mT. To enforce the matching temperature T̃_m at y_mT, the original outer boundary condition T|_{y=δ} = T̃_δ is relaxed, and (3.16) is rewritten, following Chen et al. (2023a).
Table 2. Some wall and integral quantities from different BL models and DNS for the ZDC cases. The significant figures of the DNS data are the same as in the reference. Note that C_f was not listed in the reference, so it is inferred here using ρ_w and u_τ.
"Engineering",
"Physics"
] |
Precise and low-power closed-loop neuromodulation through algorithm-integrated circuit co-design
Implantable neuromodulation devices have significantly advanced treatments for neurological disorders such as Parkinson's disease, epilepsy, and depression. Traditional open-loop devices such as deep brain stimulation (DBS) and spinal cord stimulators (SCS) often lead to overstimulation and lack adaptive precision, raising safety and side-effect concerns. Next-generation closed-loop systems offer real-time monitoring and on-device diagnostics for responsive stimulation, presenting a significant advancement for treating a range of brain diseases. However, the high false alarm rates of current closed-loop technologies limit their efficacy and increase energy consumption due to unnecessary stimulations. In this study, we introduce an artificial intelligence-integrated circuit co-design that targets these issues, and we use an online demonstration system for closed-loop seizure prediction to showcase its effectiveness. First, two neural network models are obtained with neural-network search and quantization strategies: a binary neural network optimized for minimal computation with high sensitivity, and a convolutional neural network with a false alarm rate as low as 0.1/h for false alarm rejection. Then, a dedicated low-power processor is fabricated in 55 nm technology to implement the two models. With a reconfigurable design and event-driven processing, the resulting application-specific integrated circuit (ASIC) occupies only 5 mm² of silicon area and consumes 142 μW on average. The proposed solution achieves a significant reduction in both false alarm rates and power consumption when benchmarked against state-of-the-art counterparts.
Introduction
With the prolongation of human life expectancy and the emergence of aging societies, brain disorders such as epilepsy, Parkinson's disease, and depression have inflicted suffering on a significant portion of the global population. Brain disorders not only pose a severe threat to human health but also impose a substantial medical and societal burden, ranking as a leading cause of overall disease burden (Poo et al., 2016). As shown in Figure 1, the latest statistics from the World Health Organization (WHO) indicate that the numbers of individuals affected by brain disorders such as epilepsy, Parkinson's disease, and depression have exceeded 70, 10, and 350 million, respectively (Lee et al., 2020).
Traditional treatments for brain disorders primarily involve medication and surgical procedures. However, medication-based treatments often come with significant side effects, slow progress, and the risk of developing drug resistance; for instance, approximately 30% of epilepsy patients exhibit drug resistance or adverse reactions. Moreover, irreversible surgeries can lead to unpredictable adverse consequences for patients, including impairments in memory, vision, and motor function (Kuhlmann et al., 2018b).
In recent years, the use of implantable medical devices for neuromodulation has emerged as one of the most effective approaches for treating various brain disorders, and it has benefited hundreds of thousands of patients worldwide (Won et al., 2020). Figure 2A illustrates the forms and implantation of current neuromodulation devices. The devices are typically implanted in the chest region through invasive surgery and connected via wires to electrodes implanted near the target brain regions. Most neuromodulation systems employ an open-loop design, as depicted in Figure 2B, where the device delivers continuous or periodic stimulation to the nerves. For example, deep brain stimulation (DBS) for Parkinson's treatment delivers electrical stimulation to target brain regions to regulate the faulty nerve signals causing tremors, rigidity, and other symptoms. Although these open-loop systems make treatment of many brain disorders possible, their lack of adaptability to dynamic neural activity or to the changing needs of patients limits their ability to deliver personalized and optimal treatment outcomes. Moreover, open-loop control suffers from inefficiency, because continuous nerve stimulation can lead to habituation and neurochemical changes, resulting in decreased treatment efficacy and safety issues. Side effects such as dyskinesia have been repeatedly reported in epilepsy and Parkinson's patients due to open-loop stimulation (Dembek et al., 2017).
To address the issues associated with open-loop control, a natural solution is to monitor the activity of the nervous system in real time and determine whether disease-related features exist in the neural signals before initiating nerve stimulation. This approach is known as closed-loop control, as illustrated in Figure 2C. Closed-loop systems integrate real-time data processing and responsive stimulation functions, enabling a more sophisticated and adaptive approach to neuromodulation. By continuously monitoring neural activity and adapting stimulation parameters accordingly, closed-loop neuromodulation devices offer enhanced precision, efficacy, and patient-specific therapy (Scangos et al., 2021). Algorithms used for biomarker detection are crucial for achieving closed-loop neuromodulation. Currently, the algorithms used in closed-loop neuromodulation systems mainly involve neural signal feature extraction and classification. Features such as the phase locking value (PLV; O'Leary et al., 2018), spectrum energy (SE; Cheng et al., 2018; O'Leary et al., 2018; Huang et al., 2019; Zhang et al., 2022), and line length (Shin et al., 2022), which reflect the amplitude, phase, and frequency of the neural signal, are commonly used. The classification mainly relies on linear regression and support vector machine classifiers. However, due to limited on-device battery and computing resources, the algorithms used in existing neuromodulation devices still suffer from a high false positive rate. The reported false positives can exceed 2,800 per day (Bruno et al., 2020), greatly undermining the effectiveness of closed-loop neuromodulation.
In recent years, artificial neural networks have begun to reshape closed-loop neuromodulation, substantially reducing false positives. In 2018, an epilepsy detection algorithm that used short-time Fourier features and convolutional neural networks was reported (Truong et al., 2018). The algorithm reduced the false positive rate to below 0.21 fp/h, but due to the use of large convolutional kernels and a neural network with over seven layers, the model's parameter size reached 0.76 MB, and the associated computation exceeds the capacity of implantable medical devices. In 2019, a comparative study of different classification algorithms was conducted, validating the superiority of neural networks in closed-loop control (Daoud and Bayoumi, 2019). In 2020, the differences between regression, SVM, and CNN for closed-loop control were reviewed and compared in Yang and Sawan (2020). An approach that combined direct transfer functions and neural networks reduced the false positive rate to 0.08/h, but it involved a large number of multiplicative and convolutional calculations, and the model size exceeded 1 MB (Wang et al., 2020). Recently, research such as EEGNet (Lawhern et al., 2018) and its variant (Schneider et al., 2020) has been proposed to optimize neural networks for use in embedded systems. Although the network size is reduced, these models still cannot be migrated to implantable devices with limited storage and computing capacity.
To solve the computing and power consumption issues associated with closed-loop control, dedicated integrated signal processing chips have been developed. A closed-loop chip with a power consumption of 3.12 mW and an area of 25 mm² was reported; its frequency features and linear regression classification method achieved a sensitivity of 97.8% and a false positive rate of 2 fp/h (Cheng et al., 2018). O'Leary et al. (2018) reported an integrated chip with a sensitivity of 100% and a false positive rate of 0.81 fp/h; the chip had an area of 7.6 mm² and consumed 1.07 mW. In 2019, a dedicated neural signal processing chip with a 4.5 mm² area and 1.9 mW power consumption was reported (Huang et al., 2019). It achieved a sensitivity of 96.6% and a false positive rate of 0.28 fp/h. More recently, an integrated chip with an area of 4.5 mm² and a power consumption of 1.2 mW, achieving a sensitivity of 97.8% and a false positive rate of 0.5 fp/h, was reported (Zhang et al., 2022). An SVM-based processor has also been proposed, achieving 92.0% sensitivity and a 0.57/h false alarm rate in a seizure prediction task (Hsieh et al., 2022). With an optimized SVM algorithm and a customized circuit implementation, the chip consumes less than 4 mm² of silicon area, and the power is reduced to 2.3 mW, exhibiting reduced power consumption when compared to embedded microprocessors for computing of the same complexity.
In this study, we undertake an algorithm-integrated circuit co-design approach for closed-loop control with dedicated integrated circuits to address the issues related to false alarm rates and power consumption. Initially, we employ a neural-network search strategy to acquire neural network models that not only demand less computational effort but also exhibit a low false alarm rate. These models are then quantized to minimize memory and computation resource requirements. A low-power, event-driven processor was designed to implement these models, allowing the system to continuously monitor events in a low-power state and transition to a high-precision state for eliminating false alarms. The performance and efficiency of the proposed method are validated with a real-time seizure prediction demonstration system.
The organization of this paper is as follows. Section 2 describes the details of the algorithm design and the optimization strategies. The chip architecture and circuit design are given in Section 3. Experiments and evaluation results are summarized in Section 4. The last section concludes this paper.
Algorithm designs
To minimize the false positive rate and power consumption, our study adopts a two-stage optimization approach. Initially, a network search space is defined with consideration of the sensitivity and false positive rate requirements specific to closed-loop neuromodulation. A targeted network search strategy yields a baseline model that meets these criteria. Subsequently, the baseline model is refined using network quantization techniques, which reduce the model to sizes and computational requirements appropriate for integration within the limited resources of implantable systems.
Network search
Traditional neural signal processing techniques such as line length, although simple to implement in hardware, exhibit limited recognition accuracy. Consequently, they result in a high false positive rate during closed-loop control, making them unsuitable for precise treatment. On the other hand, currently available algorithms with high accuracy and low false alarm rates rely on neural networks, but their sizes typically exceed several hundred kilobytes (kB), which surpasses the storage capacity and computing resources of many implantable chips. In this study, we first define an approximate network structure space (Figure 3A) based on the available on-chip storage and computational resources of implantable chips (Yang and Sawan, 2020; Martínez et al., 2022). In neuromodulation systems, there are a few to tens of channels, and each channel is sampled at a rate of at least a few hundred hertz. Hence, the sampled data have an extremely unbalanced X (time) axis and Y (channel) axis, and the Y axis would diminish rapidly if 2D convolution were applied from the beginning. Moreover, classification performance may decrease if the time (X) and channel (Y) axes are mixed up (Zhao et al., 2020; Wang et al., 2021). Therefore, a channel-wise neural network structure is proposed in this work to avoid mixing data features from different channels. Using one-dimensional convolutions, data processing occurs independently within each channel, ensuring that there is no mixing or interaction of data across channels during these operations. It also reduces the memory and computing overhead for hardware implementation.
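A minimal PyTorch sketch of such a channel-wise structure is shown below (the block description that follows gives the actual search space); the layer sizes here are illustrative placeholders, not the values selected by the search.

```python
import torch
import torch.nn as nn

class ChannelwiseNet(nn.Module):
    """Illustrative channel-wise CNN: input shaped (batch, 1, channels, time).
    The first blocks convolve only along time (kernel height 1); the last
    blocks convolve only across channels (kernel width 1).
    """
    def __init__(self, n_classes=2):
        super().__init__()
        def block(cin, cout, k, pool):
            return nn.Sequential(nn.Conv2d(cin, cout, kernel_size=k),
                                 nn.BatchNorm2d(cout), nn.ReLU(),
                                 nn.MaxPool2d(kernel_size=pool))
        self.temporal = nn.Sequential(block(1, 8, (1, 16), (1, 8)),
                                      block(8, 8, (1, 8), (1, 4)),
                                      block(8, 16, (1, 4), (1, 4)))
        self.spatial = nn.Sequential(block(16, 16, (4, 1), (2, 1)),
                                     block(16, 16, (4, 1), (2, 1)))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(16, n_classes))

    def forward(self, x):
        return self.head(self.spatial(self.temporal(x)))

# usage (illustrative): logits = ChannelwiseNet()(torch.randn(4, 1, 23, 1280))
```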
As depicted in Figure 3B, the network structure comprises five distinct blocks: the first three blocks are responsible for temporal information extraction, while the latter two blocks perform spatial convolution. Each convolutional block consists of a convolutional layer and a pooling layer, with a batch normalization (BN) layer and a rectified linear unit (ReLU) activation function applied after the convolutional layer. A global average pooling layer is applied to further reduce the number of parameters. The final output is obtained using a dense layer with the Softmax function. During the network search, the convolution kernels are restricted to one dimension to reduce the number of parameters and the associated computation. For the first three blocks, which operate along the time axis, the filter and pooling heights are fixed at 1, and an RNN-based search controller is employed to explore the sizes and quantities of the convolution kernels and the pooling widths. It selects the filter width from the options [1, 2, 8, 16], the number of filters from the choices [4, 8, 16], and the pooling width from the set [1, 4, 8, 16]. For the last two convolutional blocks, which operate on the channel dimension, the filter width and pooling width are fixed at 1. Similarly, the controller RNN also makes decisions regarding the filter height (chosen from [1, 2, 8, 16]), the number of filters (from [4, 8, 16]), and the pooling height (from [1, 2, 4]). All strides of the convolutional layers are fixed at 1. The RNN controller generates a description of the target neural network, including the number and sizes of the filters. Here, α_{1:T} represents the string of decisions generated by the RNN controller, following the probability distribution P, and R is the validation accuracy of the network generated by the controller, serving as the reward signal for training the controller. We update the RNN controller parameters with the reinforcement learning rule of Eq. 2 (Williams, 1992), which takes the form

∇_θ J(θ) ≈ (1/K) Σ_{k=1..K} Σ_{t=1..T} ∇_θ log P(α_t | α_{1:t−1}; θ) (R_k − b),

where K is the number of network structures generated by the RNN controller in a batch, and T is the number of hyperparameters predicted by the controller in the string. R_k represents the accuracy of the k-th network structure, and b is the exponential moving average of the accuracies of the previous architectures. By constraining the search space, we can identify the best-performing model under the specified conditions as the baseline model for further compression.
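The controller update can be sketched as follows. This is a simplified illustration: it uses independent categorical decisions rather than the paper's sequential RNN controller, and the reward function is a dummy placeholder standing in for training and validating each sampled architecture.

```python
import torch

# Decision options for the temporal blocks, as described above.
SPACE = {"filter_w": [1, 2, 8, 16], "n_filters": [4, 8, 16],
         "pool_w": [1, 4, 8, 16]}
K = 8                                   # architectures sampled per update
logits = {k: torch.zeros(len(v), requires_grad=True) for k, v in SPACE.items()}
opt = torch.optim.Adam(list(logits.values()), lr=0.05)
baseline = 0.0

def train_and_score(arch):
    # Placeholder reward: in practice this trains `arch` and returns its
    # validation accuracy R_k; a dummy value keeps the sketch runnable.
    return 0.5 + 0.03 * SPACE["n_filters"].index(arch["n_filters"])

for step in range(50):
    opt.zero_grad()
    for _ in range(K):
        log_p, arch = 0.0, {}
        for name, options in SPACE.items():
            dist = torch.distributions.Categorical(logits=logits[name])
            idx = dist.sample()
            log_p = log_p + dist.log_prob(idx)
            arch[name] = options[idx.item()]
        R = train_and_score(arch)
        ((baseline - R) * log_p / K).backward()   # REINFORCE with baseline
        baseline = 0.9 * baseline + 0.1 * R       # exponential moving average
    opt.step()
```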
Baseline model compression
Network quantization represents a highly effective approach for compressing and accelerating neural networks. In this method, the neural network's weights and activation values, originally stored as high-bit floating-point numbers, are converted into low-bit integers or fixed-point numbers. This transformation reduces the number of operations required by the neural network and lowers the hardware implementation cost, as evidenced by previous studies (Alyamkin et al., 2019). In some extreme cases, the parameters of a neural network can be quantized to just 1 bit, taking values of −1 or 1 (Courbariaux et al., 2016). This leads to a significant reduction in multiply-accumulate operations, which are replaced by the more efficient 1-bit XNOR operation in hardware. While this reduces the amount of memory access required, it is worth noting that such extreme quantization can lead to certain performance compromises, including an increased false alarm rate and diminished sensitivity.
In our study, we assess the performance of different bit-width quantization techniques in the context of epileptic seizure prediction networks. We aim to determine whether the reduction in performance resulting from these quantization methods falls within acceptable limits. We use a fixed-point representation to quantize both weights and activations to the same number of bits. Specifically, we employ the weight quantization method described in Eq. 3 (Moons et al., 2017).
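A generic Q-bit fixed-point quantizer trained with a straight-through estimator is sketched below for illustration; the exact scaling of Eq. 3 and the gradient clipping of Eq. 4 are not reproduced verbatim, so the details here are assumptions rather than the paper's formulation.

```python
import torch

class QuantizeSTE(torch.autograd.Function):
    """Symmetric Q-bit fixed-point quantization with a straight-through
    estimator. An assumed generic form, not necessarily identical to Eq. 3.
    """
    @staticmethod
    def forward(ctx, w, Q):
        ctx.save_for_backward(w)
        scale = 2.0 ** (Q - 1)
        w_c = torch.clamp(w, -1.0, 1.0 - 1.0 / scale)
        return torch.round(w_c * scale) / scale

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # STE: pass the gradient through within the quantizer's range,
        # independent of Q (one common reading of Courbariaux et al., 2016).
        return grad_out * (w.abs() <= 1.0).float(), None

def quantize(w, Q=8):
    return QuantizeSTE.apply(w, Q)

def binarize(w):
    # 1-bit case: -1/+1 weights in the forward pass, identity gradient.
    sign = torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))
    return w + (sign - w).detach()
```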
In Eq. 3, ω represents the weights before quantization, and Q denotes the number of bits used in the quantization process. For the quantization of activations, we use a quantized tanh function. Figure 3C shows the quantization process. When training the quantized model, the gradient is propagated by the straight-through estimator (STE) of Eq. 4 (Courbariaux et al., 2016), regardless of the number of quantization bits Q. This estimator is used because quantization is a non-differentiable operation during the backward pass.

Complex neural signal processing is the primary contributor to system power consumption. While common low-power techniques such as periodic wake-up and frequency reduction can decrease power usage, they come at the cost of compromising the real-time performance of closed-loop neuromodulation. The proposed chip features low-power, event-driven real-time processing by exploiting the sparsity inherent in brain disease occurrences. Figure 4 illustrates the motivation behind the event-driven processing approach. Neural signals typically exhibit a consistent pattern and lack distinct characteristics most of the time, and devices designed for detecting or predicting disease biomarkers are only effective for a brief window of time (Liu et al., 2022). For instance, with conditions like arrhythmia, various forms of rapid supraventricular arrhythmia occur infrequently, once every few hours or less, and these episodes are of short duration, often just seconds or minutes. In the case of epilepsy patients, seizures represent merely 0.01% of their overall life span. The proposed event-driven chip employs an extremely compressed binary neural network (BNN), derived from the baseline model, for continuous event detection and a moderate convolutional network for precise biomarker detection. The binary neural network is obtained by quantizing weights to binary values, as described in Section 2. As it only requires 1-bit weights and simple logic operations, the reduction in computational demands significantly decreases the required power consumption. The BNN is engineered to preserve a high level of sensitivity to guarantee that no potential onset goes undetected. However, due to the inherent limitations in the classification capability of the BNN, a low false alarm rate cannot be assured with the BNN alone. To reject false alarms, a high-precision convolutional neural network is activated after an event has been detected by the BNN. The convolutional neural network eliminates additional false alarms, ensuring a low false alarm rate at the system level. As illustrated in Figure 4, when employing the BNN model for event detection, the system successfully filters out most false alarms and operates in a low-power state. The CNN mode is briefly engaged to confirm alarms that surpass the event threshold. The BNN model predominantly governs the system's power consumption, whereas the CNN model dictates the false alarm rate. To seamlessly integrate the BNN and CNN networks and conserve silicon area, the chip has been designed in a reconfigurable fashion without using any multiplier.
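The resulting cascade can be summarized in a few lines; the threshold values below are placeholders rather than the settings used on the chip, and `bnn`/`cnn` stand for the two quantized classifiers.

```python
def closed_loop_step(window, bnn, cnn, event_threshold=0.5, confirm_threshold=0.9):
    """One step of the event-driven cascade: the always-on BNN screens every
    window, and the CNN is woken up only to confirm a candidate alarm.
    Returns True if stimulation should be triggered.
    """
    # Low-power stage: most windows are rejected here.
    if bnn(window) < event_threshold:
        return False
    # High-precision stage: the CNN rejects the remaining false alarms.
    return cnn(window) >= confirm_threshold

# usage (illustrative): stimulate = closed_loop_step(x, bnn_model, cnn_model)
```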
Neural signal processing architecture
Figure 5 provides the architecture of the proposed processor. It consists of four reconfigurable cores interconnected via a system bus. The sensor interface is responsible for receiving external input neural signals. The top controller fetches instructions from a 32 kB instruction memory through the system bus and controls the overall system while streaming data from the sensor interface to the four reconfigurable cores. Each core includes an array of processing elements (PEs), data reorder logic, a 32 kB local data memory, and inter-PE logic that governs the behavior of the PE array. The data reordering mechanism contains two parts, the input data reorder and the output data reorder, as illustrated in Figure 5. The input reorder logic connects 16 source ports from the data SRAM within the core to 16 destination ports at the PEs. Data are received at each source port from a designated SRAM address (e.g., i0), with an enable signal (e.g., ie0) activated based on the decoded instruction pattern. This signal guides the routing of input data to any of the PEs for subsequent processing. For the output reorder logic, 16 source ports from the PEs are linked to 16 destination ports at the data SRAM. These ports handle the processed data output from the PEs (e.g., o0), with each piece of output data linked to an output enable signal (e.g., oe0). This signal specifies which data bits should be stored back into the SRAM at the appropriate addresses. Each PE consists of eight pipelined computation units (CUs) and intra-PE logic controlling the pipeline configuration, the control signals, and the inputs of each CU. The proposed processor is designed to support various sizes of CNN and BNN computation. The limit on model size is determined by the memory requirement of the largest intermediate-layer feature map values and the weights.
Figure 6 shows the BNN and CNN mapping onto the proposed architecture. During the always-on event detection mode, each PE can work independently to compute the multiply-accumulate result of eight pairs of binary numbers. Taking a 4-channel 2 × 4 convolution operation as an example, every 2 × 4 binary weight kernel can be treated as an 8-bit integer. Every such 8-bit integer is copied into the weight registers of the PEs.

FIGURE 5. The architecture of the proposed event-driven neural signal processor and the structure of the data reorder logic.
FIGURE 6. Weight and feature mapping methods of the proposed architecture: (A) BNN mapping, (B) CNN mapping.
Reconfigurable processing element design
The computation of a BNN mainly involves XNOR-popcount operations. As shown in Figure 7A, during inference, multiple bits from weights and activations are grouped into two sets. The XNOR operation is performed on these two groups of values, and then the number of resulting ones is counted. The popcount operation can be completed by right-shifting the XNOR result in each cycle and adding the extracted least significant bit (LSB) to the left spare position of the register. The computation of a CNN mainly involves multiplication and accumulation operations. To avoid area- and power-consuming multipliers, all multiplication operations in this work are replaced with Booth-encoded accumulations. As shown in Figure 7B, an 8-bit multiplier can be encoded into four values, each representing a value in [−1, −2, 0, 1, 2]; the final multiplication can then be implemented by shifting and adding the multiplicand. Unlike conventional multiplication, where the multiplier and the multiplicand can vary frequently, in CNN inference either the feature map or the weight can stay stationary during the multiplication process.

Figure 8A shows the BNN circuit-level configuration of the PE, in which it is set up to perform popcount operations. In this mode, binary weights and feature maps originally represented as −1 or 1 are converted to 0 or 1. The conversion process begins with the PE employing its internal logic to compute the XNOR result of the weights and input feature maps. The CUs within each PE are interconnected in a sequential manner to execute popcount operations. Each XNOR result undergoes a one-bit shift to the right, and the bit that is shifted out is added to the rightmost spare bits in the register. These spare bits serve as temporary storage to tally the number of "1"s counted. In the following cycle, the subsequent CU carries on with this operation, continuously updating this temporary count until all bits have been accounted for. This repetitive process enables the efficient computation of popcounts during BNN inference. As illustrated in Figure 8A, after this iterative process, the register's value updates to "00000101", indicating that there are five "1"s in the original XNOR result of "11010011".

In CNN mode, the PE is reconfigured to operate as a multiply-accumulate (MAC) unit using Booth encoding, eliminating the need for a multiplier. Figure 8B demonstrates how this mode facilitates the MAC operation at the circuit level, where two 8-bit input numbers, a multiplicand "x" and a multiplier "y", interact to complete the MAC operation. The process begins with the extension of the least significant bit (LSB) of "y", transforming it into a 9-bit multiplier. This extended multiplier "y" is then divided into four 3-bit Booth multipliers. Each of these Booth multipliers encodes the multiplicand "x" using Booth encoding, resulting in four Booth products. These four Booth products are subsequently assigned to four consecutive computation units (CUs) in a specific order. In contrast to the BNN mode, where one of the inputs of each CU's ALU is connected to the least significant bit of the preceding CU's register, one ALU input is now linked to the predetermined Booth product values. Within each CU, the Booth products are added to the other input of the ALU, namely the old partial sums (psums) obtained from the previous CU. This addition yields new partial sums, which are then stored in the CU's register and subsequently transmitted to the next CU. This iterative method of combining Booth products with propagated partial sums throughout the chain of CUs enables the efficient execution of MAC operations. In contrast to general multiplication scenarios, where both the multiplicand and multiplier are arbitrary, in convolutional operations the multiplicand remains constant because it represents the kernel value. Consequently, the Booth products can remain unchanged throughout the computation of an entire kernel.
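The arithmetic behind the popcount PE is the standard identity for {−1, +1} vectors packed as bit words (0 encoding −1, 1 encoding +1): the dot product equals 2·popcount(XNOR) − n. The bit-serial loop below mirrors the shift-and-accumulate behaviour of the chained CUs; it is an illustration, not a hardware description.

```python
def binary_dot(w_bits, x_bits, n_bits):
    """Dot product of two packed {-1, +1} vectors: 2 * popcount(XNOR) - n."""
    xnor = ~(w_bits ^ x_bits) & ((1 << n_bits) - 1)
    popcount = 0
    for _ in range(n_bits):          # right-shift and accumulate the LSB,
        popcount += xnor & 1         # as the chained CU registers do
        xnor >>= 1
    return 2 * popcount - n_bits

# Example tied to the text: an XNOR pattern of 0b11010011 contains five ones,
# so the 8-bit dot product is 2*5 - 8 = 2.
assert binary_dot(0b11111111, 0b11010011, 8) == 2
```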
The intra-PE logic and the CU circuit diagram are shown in Figure 9. In BNN mode, the top controller oversees the operations, reads the binary weights from the data memory, and reorganizes them into suitable weight groups through the data reorder logic within the reconfigurable core. Once organized, these weights are written into the registers using the intra-PE logic and await the input feature map to perform XNOR operations. The results of these XNOR operations are then sent to the first CU to start the popcount calculations. The final CU within the PE is responsible for propagating the results back.

In CNN mode, multipliers are fetched from the data memory and stored in the registers of the intra-PE logic. These multipliers serve to encode the multiplicand using the Booth control logic. The encoded Booth products are then directed to the respective CUs. Depending on whether the output is a partial sum or a final result, the final CU either directs the data to the next PE for further accumulation or triggers the execution of the rectified linear unit (ReLU) activation function and the max pooling operation by the respective modules. The data sent to the data memory need to be quantized to 8 bits. The quantization process is managed by the quantizer module, and its configuration is determined by the immediate instructions decoded by the top controller.
Experiments and evaluation results
Evaluations in this study are conducted in terms of model classification accuracy, false alarm rate, and chip power consumption. An experimental setup including a graphical user interface (GUI), a PCB, and an FPGA test board was designed to facilitate testing of the proposed algorithm and the chip.
Closed-loop control performance evaluation
This paper employs three datasets to assess the performance of the proposed model: the American Epilepsy Society Seizure Prediction Challenge (AES) intracranial electroencephalogram (iEEG) dataset (Brinkmann et al., 2016), the Melbourne University iEEG dataset (Kuhlmann et al., 2018a), and the CHB-MIT electroencephalogram (EEG) dataset (Goldberger et al., 2000). The AES dataset comprises iEEG recordings collected from five dogs and two human subjects. The dog data were sampled at a rate of 400 Hz, with 16 electrodes used for four dogs and 15 electrodes for one dog (Dog5). The human subjects' iEEG data were sampled at 5,000 Hz, using 15 electrodes for one patient and 24 electrodes for the other. The Melbourne University dataset, accessible through the Melbourne-University AES-MathWorks-NIH Seizure Prediction Challenge, contains iEEG signals from three patients. Each patient had 16 electrodes implanted, and all measurements were sampled at a frequency of 400 Hz. The CHB-MIT dataset comprises EEG data from 22 patients, recorded over multiple days, resulting in a total of 637 recordings including 163 seizures. Most measurements were obtained using 23 fixed electrodes, and the sampling rate for all subjects was 256 Hz. There were variations in the data, including the number of seizure events and the data length. For the seizure prediction task, it is essential to consider the seizure prediction horizon (SPH) and the preictal interval length (PIL) during the data preprocessing stage (Yang and Sawan, 2020; Wang et al., 2021). The SPH denotes the time interval between the onset of a seizure and the preictal phase, while the PIL quantifies the duration of the preictal state. For the two iEEG datasets, we configured the SPH and PIL to be 5 min and 1 h, respectively; for the CHB-MIT EEG dataset, these values were set to 5 and 30 min, respectively. In accordance with the specified SPH and PIL parameters, the training data are segregated into preictal and interictal samples. To address the data imbalance between these two sample types, preictal samples are extracted with 5-s overlaps, while interictal samples are extracted without any overlap. Using the proposed method of Section 2, we obtain a baseline model and quantize it to 8-bit or binary models. The CNN model obtained with the CHB-MIT dataset is shown in Figure 10. The architecture comprises five convolutional blocks, each consisting of a convolutional layer followed by a pooling layer. A ReLU activation function is applied after each convolution. The output is generated using a global average pooling layer, followed by a dense layer applying the Softmax function. The dimensions of the convolution and pooling for each layer are also depicted in Figure 10.
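A sketch of this preictal/interictal segmentation is given below; the window length, the omission of ictal and post-ictal exclusion margins, and the exact interpretation of the 5-s overlap are assumptions made for illustration.

```python
import numpy as np

def make_windows(signal, fs, seizure_onsets, sph=5 * 60, pil=60 * 60,
                 win=30, preictal_step=5):
    """Cut a multichannel recording (channels x samples) into preictal and
    interictal windows based on the SPH and PIL described above.
    `win` (seconds) and `preictal_step` (stride giving overlapping preictal
    windows) are illustrative, not the paper's exact values.
    """
    n = signal.shape[1]
    preictal_mask = np.zeros(n, dtype=bool)
    for onset in seizure_onsets:                    # onset times in seconds
        start = int((onset - sph - pil) * fs)
        stop = int((onset - sph) * fs)
        preictal_mask[max(start, 0):max(stop, 0)] = True

    def cut(step, region_mask):
        out, w, s = [], int(win * fs), int(step * fs)
        for i in range(0, n - w + 1, s):
            if region_mask[i:i + w].all():
                out.append(signal[:, i:i + w])
        return out

    preictal = cut(preictal_step, preictal_mask)    # dense, overlapping
    interictal = cut(win, ~preictal_mask)           # no overlap
    return preictal, interictal
```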
Because the sampling rate and the number of channels differ, the network architecture can vary across datasets. Throughout the training process, we employ the leave-one-out cross-validation method to mitigate the risk of potential overfitting and to help ensure the robustness of the model's performance. Critical metrics, including sensitivity, false alarm rate, and model size, are presented in Table 1. Our 8-bit quantized model achieved overall better performance when compared with previously reported top-performing models on the different datasets, outperforming them in both sensitivity and false alarm rate. Although the binary model does not achieve a false alarm rate as low as the other models, it significantly reduces computational and memory demands.
Chip implementation and performance
Figure 11A illustrates the prototype system designed for evaluating the proposed processor. The test system consists of several essential components, including an oscilloscope, a power supply, a testing printed circuit board (PCB) that links the FPGA to the chip, and a real-time graphical user interface (GUI) on a desktop computer (PC). The PCB is interconnected with the FPGA via the FMC interface, and the FPGA communicates with the host PC through the PCIe interface. The GUI acts as the control center of the demonstration system, enabling various functions such as displaying neurological signals, loading instructions and data onto the chip, and collecting and presenting the real-time calculation results generated by the chip. During operation, the GUI retrieves and streams electrophysiological signals from the database to the FPGA board via the PCIe interface. The FPGA board then executes the PCIe protocol, decoding the incoming data to align with the standard sensor interface format. Finally, the FPGA retrieves the processed data from the chip, and these results are displayed on the GUI. Figure 11B displays the chip photograph along with its performance summary. The chip was fabricated using SMIC 55 nm CMOS technology and occupies an area of 2.5 × 2.51 mm². It can operate within a clock frequency range from 300 kHz to 20 MHz and with a supply voltage range of 0.75 to 1.1 V. During the seizure prediction task, the power consumption measures 142.9 μW at a 300 kHz frequency and 0.75 V supply voltage, while it reaches 18.86 mW at a 20 MHz frequency and 1.1 V supply voltage. Operating at a frequency of 20 MHz, the energy consumption per inference is 3.74 μJ for the BNN and 11.8 μJ for the CNN. When the frequency is reduced to 300 kHz, the energy consumption decreases to 0.99 μJ for BNN operations and 1.89 μJ for CNN operations. A breakdown of the power distribution of the chip is presented in Figure 12. The four reconfigurable cores account for 91.2% of the total power consumption; the top RISC, the system bus, and other components contribute 2.9% and 5.9% of the power consumption, respectively. The chip achieves peak energy efficiency at 300 kHz and 0.75 V. Table 2 provides a comparison between the proposed chip and existing state-of-the-art works. This chip incorporates event-driven processing, enabling it to handle biomarker detection with low power consumption. The proposed chip represents the first implementation of event-driven processing through a reconfigurable design featuring two neural networks for closed-loop neuromodulation control.
Conclusion
In this paper, we propose an algorithm and integrated circuit co-design strategy to close the loop of implantable neuromodulation devices. A convolutional neural network with low memory and computation demands is first obtained through architecture search and compressed with quantization techniques. The obtained network architecture achieves a low false alarm rate of 0.1/h and high sensitivity while reducing the model size by about 50% compared with state-of-the-art models. A dedicated neural signal processor that implements the networks was designed and fabricated in 55 nm technology. With the event-based processing scheme, the chip can switch between a low-power event detection mode and a high-precision classification mode to maintain both real-time performance and low power consumption. The chip can operate under a 0.75 V supply voltage and a 300 kHz clock frequency with only 143 μW of power. In summary, the proposed algorithm and integrated circuit co-design strategy offers versatility, high accuracy, and outstanding energy efficiency for closing the loop in neuromodulation applications. While our proposed algorithm and integrated co-design strategy showcase significant improvements toward energy efficiency and accuracy, we acknowledge challenges such as catastrophic forgetting and the need for meta-learning capabilities that warrant further investigation in the field of closed-loop neuromodulation. Additionally, clinical validation is an essential next step in our continued research efforts.

FIGURE 10. The obtained network structure based on the constraints and the CHB-MIT dataset.
FIGURE 11. Test system and chip photograph: (A) test system with GUI, FPGA platform and PCB board, (B) microphotograph of the proposed chip.
Performance summary and power breakdown of the proposed processor.
FIGURE 3. The proposed model compression strategy: (A) search space to generate the baseline model with high sensitivity and low false alarm rate, (B) baseline model with the first two blocks for temporal information extraction and the last three blocks for spatial convolution, (C) quantization and straight-through estimators (STE).
FIGURE 4. Motivation of the proposed event-driven neural signal processor.
FIGURE 7. Major operations required in BNN and CNN inference: (A) PE configured to perform the popcount operation, (B) PE configured to perform Booth-encoded multiplication.
FIGURE 8. Reconfiguring the PE to perform different operations: (A) PE configured to perform the popcount operation, (B) PE configured to perform Booth-encoded multiplication.
FIGURE 9. Reconfigurable details of the PE: (A) intra-PE connections in the BNN configuration, (B) intra-PE connections in the CNN configuration.
TABLE 1. Comparison with closed-loop control algorithms for seizure prediction.
TABLE 2. Comparison with closed-loop control algorithms for seizure prediction.
"Engineering",
"Medicine"
] |
Off-axis setup taking full advantage of incoherent illumination in coherence-controlled holographic microscope
Coherence-controlled holographic microscope (CCHM) combines off-axis holography and an achromatic grating interferometer, allowing the use of light sources of an arbitrary degree of temporal and spatial coherence. This results in coherence gating and strong suppression of coherent noise and parasitic interferences, enabling CCHM to reach high phase-measurement accuracy and imaging quality. The achievable lateral resolution matches the performance of conventional widefield microscopes, which allows details up to twice as small to be resolved compared with typical off-axis setups. The imaging characteristics can be controlled arbitrarily by the coherence between two extremes: fully coherent holography and confocal-like incoherent holography. The basic setup parameters are derived and described in detail, and experimental validations of the imaging characteristics are demonstrated. © 2013 Optical Society of America

OCIS codes: (090.0090) Holography; (110.0113) Imaging through turbid media; (110.4980) Partial coherence in imaging; (120.5050) Phase measurement; (170.1790) Confocal microscopy; (180.3170) Interference microscopy.

References and links
1. R. Barer, "Interference microscopy and mass determination," Nature 169(4296), 366–367 (1952).
2. H. G. Davies and M. H. F. Wilkins, "Interference microscopy and mass determination," Nature 169(4300), 541 (1952).
3. H. Janečková, P. Veselý, and R. Chmelík, "Proving tumour cells by acute nutritional/energy deprivation as a survival threat: a task for microscopy," Anticancer Res. 29(6), 2339–2345 (2009).
4. F. Dubois, C. Yourassowsky, O. Monnom, J. C. Legros, O. Debeir, P. Van Ham, R. Kiss, and C. Decaestecker, "Digital holographic microscopy for the three-dimensional dynamic analysis of in vitro cancer cell migration," J. Biomed. Opt. 11(5), 054032 (2006).
5. E. Cuche, F. Bevilacqua, and C. Depeursinge, "Digital holography for quantitative phase-contrast imaging," Opt. Lett. 24(5), 291–293 (1999).
6. L. Lovicar, L. Kvasnica, and R. Chmelík, "Surface observation and measurement by means of digital holographic microscope with arbitrary degree of coherence," Proc. SPIE 7141, 71411S (2008).
7. L. Lovicar, J. Komrska, and R. Chmelík, "Quantitative-phase-contrast imaging of a two-level surface described as a 2D linear filtering process," Opt. Express 18(20), 20585–20594 (2010).
8. T. Colomb, N. Pavillon, J. Kühn, E. Cuche, C. Depeursinge, and Y. Emery, "Extended depth-of-focus by digital holographic microscopy," Opt. Lett. 35(11), 1840–1842 (2010).
9. F. Dubois, L. Joannes, and J.-C. Legros, "Improved three-dimensional imaging with a digital holography microscope with a source of partial spatial coherence," Appl. Opt. 38(34), 7085–7094 (1999).
10. P. Klysubun and G. Indebetouw, "A posteriori processing of spatiotemporal digital microholograms," J. Opt. Soc. Am. A 18(2), 326–331 (2001).
11. G. Indebetouw and P. Klysubun, "Optical sectioning with low coherence spatio-temporal holography," Opt. Commun. 172(1-6), 25–29 (1999).
12. G. Indebetouw and P. Klysubun, "Imaging through scattering media with depth resolution by use of low-coherence gating in spatiotemporal digital holography," Opt. Lett. 25(4), 212–214 (2000).
13. E. N. Leith, W.-C. Chien, K. D. Mills, B. D. Athey, and D. S. Dilworth, "Optical sectioning by holographic coherence imaging: a generalized analysis," J. Opt. Soc. Am. A 20(2), 380–387 (2003).
14. M.-K. Kim, "Tomographic three-dimensional imaging of a biological specimen using wavelength-scanning digital interference holography," Opt. Express 7(9), 305–310 (2000).
15. P. Massatsch, F. Charrière, E. Cuche, P. Marquet, and C. D. Depeursinge, "Time-domain optical coherence tomography with digital holographic microscopy," Appl. Opt. 44(10), 1806–1812 (2005).
16. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, "Tomographic phase microscopy," Nat. Methods 4(9), 717–719 (2007).
17. T. Zhang and I. Yamaguchi, "Three-dimensional microscopy with phase-shifting digital holography," Opt. Lett. 23(15), 1221–1223 (1998).
18. L. Xu, J. M. Miao, and A. Asundi, "Properties of digital holography based on in-line configuration," Opt. Eng. 39(12), 3214–3219 (2000).
19. D. Carl, B. Kemper, G. Wernicke, and G. von Bally, "Parameter-optimized digital holographic microscope for high-resolution living-cell analysis," Appl. Opt. 43(36), 6536–6544 (2004).
20. D. Shin, M. Daneshpanah, A. Anand, and B. Javidi, "Optofluidic system for three-dimensional sensing and identification of micro-organisms with digital holographic microscopy," Opt. Lett. 35(23), 4066–4068 (2010).
21. P. Girshovitz and N. T. Shaked, "Generalized cell morphological parameters based on interferometric phase microscopy and their application to cell life cycle characterization," Biomed. Opt. Express 3(8), 1757–1773 (2012).
22. B. Bhaduri, H. Pham, M. Mir, and G. Popescu, "Diffraction phase microscopy with white light," Opt. Lett. 37(6), 1094–1096 (2012).
23. F. Dubois and C. Yourassowsky, "Full off-axis red-green-blue digital holographic microscope with LED illumination," Opt. Lett. 37(12), 2190–2192 (2012).
24. R. Chmelík and Z. Harna, "Parallel-mode confocal microscope," Opt. Eng. 38(10), 1635–1639 (1999).
25. R. Chmelík, "Three-dimensional scalar imaging in high-aperture low-coherence interference and holographic microscopes," J. Mod. Opt. 53(18), 2673–2689 (2006).
26. H. Janečková, P. Kolman, P. Veselý, and R. Chmelík, "Digital holographic microscope with low spatial and temporal coherence of illumination," Proc. SPIE 7000, 70002E (2008).
27. R. Chmelík and Z. Harna, "Surface profilometry by a parallel-mode confocal microscope," Opt. Eng. 41(4), 744–745 (2002).
28. P. Kolman and R. Chmelík, "Coherence-controlled holographic microscope," Opt. Express 18(21), 21990–22003 (2010).
29. M. Lošťák, P. Kolman, Z. Dostál, and R. Chmelík, "Diffuse light imaging with a coherence controlled holographic microscope," Proc. SPIE 7746, 77461N (2010).
30. E. N. Leith and J. Upatnieks, "Holography with achromatic-fringe systems," J. Opt. Soc. Am. 57(8), 975–980 (1967).
31. T. Slabý, M. Antoš, Z. Dostál, P. Kolman, and R. Chmelík, "Coherence-controlled holographic microscope," Proc. SPIE 7746, 77461R (2010).
32. N. Pavillon, A. Benke, D. Boss, C. Moratal, J. Kühn, P. Jourdain, C. Depeursinge, P. J. Magistretti, and P. Marquet, "Cell morphology and intracellular ionic homeostasis explored with a multimodal approach combining epifluorescence and digital holographic microscopy," J. Biophotonics 3(7), 432–436 (2010).
33. B. Kemper, P. Langehanenberg, A. Höink, G. von Bally, F. Wottowah, S. Schinkinger, J. Guck, J. Käs, I. Bredebusch, J. Schnekenburger, and K. Schütze, "Monitoring of laser micromanipulated optically trapped cells by digital holographic microscopy," J. Biophotonics 3(7), 425–431 (2010).
34. E. Shaffer, N. Pavillon, and C.
Depeursinge, “Single-shot, simultaneous incoherent and holographic microscopy,” J. Microsc. 245(1), 49–62 (2012). 35. M. Born and E. Wolf, Principles of Optics, 6 edition (Pergamon Press, 1986). 36. J. B. Pawley, Handbook of Biological Confocal Microscopy (Springer, 2006), pp. 65, Chap. 4. 37. N. Pavillon, C. S. Seelamantula, J. Kühn, M. Unser, and C. Depeursinge, “Suppression of the zero-order term in off-axis digital holography through nonlinear filtering,” Appl. Opt. 48(34), H186–H195 (2009). 38. T. Kreis, “Digital holographic interference-phase measurement using the Fourier-transform method,” J. Opt. Soc. Am. A 3(6), 847–855 (1986). 39. J. Kühn, F. Charrière, T. Colomb, E. Cuche, F. Montfort, Y. Emery, P. Marquet, and C. Depeursinge, “Axial sub-nanometer accuracy in digital holographic microscopy,” Meas. Sci. Technol. 19(7), 074007 (2008). 40. R. Chmelík, “Holographic confocal microscopy,” Proc. SPIE 4356, 118–123 (2001). 41. E. N. Leith and G. J. Swanson, “Recording of phase-amplitude images,” Appl. Opt. 20(17), 3081–3084 (1981). 42. R. Chmelík, P. Kolman, T. Slabý, M. Antoš, and Z. Dostál, “Interferometric system with spatial carrier frequency capable of imaging in polychromatic radiation,” patent EP2378244B1 (July 4, 2012).
Introduction
Interference microscopes have become established instruments for measurements and study of microscopic samples and found many biological and industrial application areas.These instruments allow obtaining the amplitude and the phase of the wave reflected by or transmitted through the specimen.The reconstructed phase carries information about specimen topography or morphology and is therefore of particular interest.The quantitative phase contrast imaging allows non-invasive, marker-free (non-toxic) analysis of the specimen with nanometer vertical resolution.In biological applications cell dynamics and morphology analyses can be performed such as monitoring of dry mass distribution within the cells [1][2][3] or cell tracking in 3D [4].In technical applications topography measurements are most frequently performed [5][6][7][8].Some interference microscope systems can also provide special features like numerical refocusing [9,10], optical sectioning [11][12][13] or tomographic imaging [14][15][16].
Many interferometric instruments have been proposed for phase measurements; they can be basically classified into two groups. In-line systems [9,17,18] are characterized by a zero angle between the object and the reference beam. The use of low-coherence light sources in these systems makes it possible to suppress coherent noise and also to achieve an optical sectioning effect by coherence gating. However, these systems require capturing more than one interferogram to reconstruct the object wave, which can be a limiting factor when imaging rapidly varying phenomena. This is also disadvantageous because vibrations and medium fluctuations can introduce measurement errors. On the other hand, off-axis systems, frequently called digital holographic microscopes (DHM) [5,19–21], are characterized by a nonzero angle between the object and the reference beam. These systems do not allow the use of incoherent light sources; therefore the optical sectioning property is not available, nor is the coherent noise suppression effect, and the lateral resolution is limited for the same reason. However, only one captured interferogram is needed to reconstruct the object wave, which makes these systems suitable for imaging of rapidly varying phenomena and brings high stability of the phase measurement.
The coherence-controlled holographic microscope (CCHM) combines an off-axis configuration and an achromatic grating interferometer allowing for the use of arbitrarily low-coherent illumination. This enables the CCHM to gain the described advantages of both in-line and off-axis systems while eliminating the disadvantages at the same time. Thus CCHM is capable of providing speckle-free real-time optically-sectioned quantitative phase contrast imaging with lateral resolution fully comparable to conventional optical microscopes. Recently some interesting off-axis setups have emerged employing diffraction gratings and using sources of low temporal coherence [22,23]. However, in [22] spatially coherent illumination is needed, and in [23] spatial coherence is also increased to a certain extent.
To our knowledge, the first achromatic holographic microscope allowing for off-axis holographic imaging with light of arbitrarily low coherence was designed by Chmelík and Harna for reflected light [24].The confocal-like optical sectioning property of CCHM systems was proved theoretically and experimentally for reflected-light configuration [24].The theoretical description of the imaging process of CCHM was carried out and compared to other imaging systems [25].Time-lapse analyses of living cells [3,26] and surface profilometry measurements [6,7,27] were made using the CCHM.A novel method of combined phase and depth-discriminated intensity imaging was proposed [6].Recently some remarkable imaging properties of CCHM were discovered when imaging through scattering media [28,29].In 2010 the name "Coherence-controlled holographic microscope" was firstly introduced [28] as it reflects the crucial ability of the microscope to control its imaging properties by the degree of spatial and temporal coherence of the illuminating light.In this way imaging properties of the microscope can be easily adapted to match the application requirements.
The latest concept of the transmitted-light CCHM was described in detail and its optical properties were discussed in [28].This device provided a remarkable progress in the CCHM design and created a platform for variety of mainly biological observations [3,26].This concept employs a diffraction grating used as a beamsplitter to split the incident light into the object beam and the reference beam.Since the diffraction grating plane is optically conjugated with the output plane of the microscope as proposed by Leith [30], the formation of an achromatic interference fringe pattern in the output plane is ensured for arbitrarily low coherence of illumination.However, this design has some limiting factors.The most significant limitation is introduced to the spectral transmittance of the microscope for wavelengths different from central wavelength.This is the consequence of the dispersive power of the diffraction grating, which produces laterally shifted images of the source in the entrance pupils of condensers.When using wavelengths longer or shorter than the central wavelength, the laterally shifted images of the source are cropped by the aperture of the entrance pupil which results in reducing the amount of interfering light while inducing increased spatial coherence of the source at the same time.Thus signal quality at these wavelengths is affected.This also causes slightly anisotropic transfer of spatial frequencies.Another important disadvantage of this concept is given by the need of four identical microscope objectives (two acting as condensers and two as objectives).Although long working distance lenses are employed, the lack of working space between condensers and objectives is significant especially when working with high NA lenses.The considerable limitation is also given by economical aspects when considering the costs of four identical objectives employed in the setup for each magnification.
The concern of this paper is to describe in detail the novel optical setup of the CCHM which we designed and which overcomes most of the mentioned disadvantages of the previous concept, preserves all the advantages of incoherent off-axis holography and enables multimodal imaging.Some of the preliminary results were already presented in [31].In the following sections the basic setup parameters are derived and described in detail and experimental validations of imaging characteristics are demonstrated.
Optical setup and principles of operation
The novel CCHM setup (Fig. 1) is based on a Mach-Zehnder-type interferometer adapted for achromatic off-axis holographic microscopy. The light passing through the achromatic interferometer propagates through separated optical paths, the object and the reference arm of the interferometer. Both arms are formed by identical microscope setups consisting of condensers (C 1 , C 2 ), infinity-corrected objectives (O 1 , O 2 ) and tube lenses (TL 1 , TL 2 ). The essential component of the CCHM setup is the reflection diffraction grating (DG), which is placed in the reference arm of the interferometer and imaged into the output plane (OP) as proposed by Leith [30]. The diffraction grating plane (DG) and the object planes (Sp, R) of the objectives are optically conjugated with the output plane (OP) by the objectives and output lenses (OL 1 , OL 2 ). Since only the +1st order of the diffraction grating is used for imaging (other diffraction orders are eliminated by spatial filtering in the focal plane of output lens OL 2 ), the image of the grating is not formed directly by the reference beam in the output plane. However, when the object beam and the reference beam recombine in the output plane, an interference fringe pattern appears which corresponds to the image of the diffraction grating grooves as it would be formed directly by the 0th and +1st orders of the diffraction grating. Thus the spatial frequency of the interference fringes f C in the output plane, i.e. the carrier frequency, equals the spatial frequency of the diffraction grating grooves f G reduced by the output lenses' magnification m OL:

f_C = f_G / m_OL.   (1)

The extended and broadband, i.e. spatially and temporally incoherent, light source (S) (e.g. a halogen lamp) is imaged by a collector lens (L) to the front focal planes of the condensers, thus providing Köhler illumination. The secondary image of the source is then formed in the rear focal planes of the objectives and the tertiary image of the source is formed near the rear focal planes of the output lenses. In the reference arm, the tertiary image of the source in the rear focal plane of the output lens OL 2 is spectrally dispersed according to the dispersive power of the diffraction grating, so that the longer the wavelength of light, the further the image of the source is placed from the reference-arm axis. Let us trace the axial ray which comes from the source, passes through the reference arm and hits the grating. Considering the +1st diffraction order of the grating, the incident ray is diffracted by the grating at an angle α according to the grating equation sin(α) = f G λ, where λ is the wavelength of light. The diffracted ray then passes through the output lens OL 2 and enters the output plane at an angle β. The relation between α and β is given by sin(β) = sin(α)/m OL. In the object arm of the interferometer, the light is reflected by mirror M 2 and passes through the output lens OL 1 normally, since there is no diffractive element in this path; the light is not spectrally dispersed in this arm. Thus rays of different wavelengths emitted from corresponding points of the tertiary images of the source in both interferometer arms recombine in the output plane under different angles β. This is caused by the dispersive power of the diffraction grating and gives rise to interference fringes parallel with the grooves of the diffraction grating and of a spatial carrier frequency f C which is constant for all wavelengths, i.e.
the interferometer is achromatic.If a specimen (Sp) is observed, an image plane off-axis hologram with the spatial carrier frequency f C is formed in the output plane.
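To make the geometry above concrete, the short script below (a minimal sketch, not part of the original paper) evaluates the grating equation and the resulting carrier frequency. The grating frequency f_G = 150 mm^-1 and λ = 650 nm are quoted later in the text; the output-lens magnification m_OL = 3 is an assumed round number used only for illustration.

```python
import math

# Illustrative values; f_G and the wavelength are taken from the setup described later
# in the text, m_OL = 3 is an assumed round number for the output lenses.
f_G = 150e3          # grating groove frequency [1/m] (150 mm^-1)
wavelength = 650e-9  # [m]
m_OL = 3.0           # assumed output-lens magnification

alpha = math.asin(f_G * wavelength)       # diffraction angle at the grating, sin(a) = f_G * lambda
beta = math.asin(math.sin(alpha) / m_OL)  # angle at the output plane, sin(b) = sin(a) / m_OL
f_C = f_G / m_OL                          # carrier frequency of the fringes in the output plane, Eq. (1)

print(f"alpha = {math.degrees(alpha):.2f} deg")            # ~5.6 deg
print(f"beta  = {math.degrees(beta):.2f} deg")             # ~1.9 deg
print(f"carrier frequency f_C = {f_C / 1e3:.1f} mm^-1")    # 50 mm^-1 for the assumed m_OL
```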
A proper alignment of the output angle β for all available wavelengths is crucial for achromaticity of the interferometer.When any misalignment θ is introduced to the output angle β, the interferometer produces interference fringes of slightly different carrier frequencies at different wavelengths.The higher values of θ give rise to higher values of f C and vice versa.Also the positive values of θ give rise to higher values of f C at shorter wavelengths while the negative values of θ give rise to higher values of f C at longer wavelengths.This behavior significantly influences achromaticity of the interferometer and consequently the contrast of the interference fringes pattern in the recorded hologram.Therefore the output angle β has to be properly aligned.
Although the use of incoherent illumination brings high demands on precise alignment of optical components, a simple and fully automatable two-step procedure was developed for easy operation of the microscope.To equalize optical paths the mirror M 2 is shifted along the optical axis with the use of piezo-positioner.Second piezo-positioner is used to translate the reference objective O 2 perpendicularly to the optical axis to align precisely images from both interferometer arms formed in the output plane.
Several variations of the proposed setup are possible, e.g. with the use of transmission diffraction grating or with the use of two diffraction gratings, each placed in one arm of the interferometer.The use of transmission grating would be more convenient because of lower light losses, which are otherwise significant when considering the use of reflection diffraction grating together with beamsplitters (BS 2 , BS 3 ).Also a reflected-light setup can be easily achieved by introducing illumination beams into the infinity space between objectives and tube lenses.In the same way, multimodality can be achieved by implementing other imaging or micromanipulation techniques to provide combined imaging [32][33][34].
The following components were used in our experimental setup: light source S (halogen lamp coupled into light guide), collector lens L (achromatic doublet, focal length 50 mm), condensers C (NA 0.52), objectives O (Nikon Plan Achromat 10 × /0.25, infinity-corrected), tube lenses TL (focal length 200 mm), output lenses OL (focal length 35 mm, NA 0.25, plancorrected), detector D (CCD, BW, 14-bit, 1376 pixels × 1038 pixels, pixel size 6.45 μm).It is highly desirable to employ plan-corrected optics for the imaging part of the microscope setup (O, TL, OL) because of the detector D used as a recording device.
Incoherence of the light source
As already mentioned, the above-described setup allows using illumination of an arbitrary degree of coherence. The use of incoherent illumination in CCHM brings advantageous imaging properties and therefore the lowest achievable degree of coherence is of great importance. To demonstrate this achievable degree of coherence, one can estimate the coherence width (CW) and coherence length (CL) of the illuminating light.
We estimated the CW as the diameter of the circular area that is illuminated almost coherently, which we expressed as the full width at half maximum (FWHM) d w of the mutual intensity function. From the formula for this function [35, p. 511] it can be computed that

d_w ≈ 0.7 λ_0 / γ,

where λ_0 is the central wavelength and γ is the angular radius of the tertiary image of the light source as viewed from the output plane. According to [35, p. 319] the CL can be calculated as

d_l ≈ λ_0^2 / Δλ_0,

where Δλ_0 is the FWHM of the spectral function. For the parameters of the real setup (γ ≈ 0.0063 rad, λ_0 ≈ 570 nm and Δλ_0 ≈ 150 nm) we obtained values of CW d w ≈ 63 µm (calculated in the output plane) and CL d l ≈ 2.2 µm. However, these values are only approximate and do not reflect the increase of coherence in the tertiary images of the light source introduced by the imaging process when the light source is imaged by the optical system to the focal planes of the output lenses.
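As a quick numerical check, the sketch below evaluates the two estimates above with the stated setup parameters and reproduces the quoted values.

```python
lambda_0 = 570e-9   # central wavelength [m]
d_lambda = 150e-9   # spectral FWHM [m]
gamma = 0.0063      # angular radius of the tertiary source image [rad]

d_w = 0.7 * lambda_0 / gamma    # coherence width (FWHM of the mutual intensity function)
d_l = lambda_0**2 / d_lambda    # coherence length

print(f"coherence width  d_w ~ {d_w * 1e6:.0f} um")   # ~63 um
print(f"coherence length d_l ~ {d_l * 1e6:.1f} um")   # ~2.2 um
```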
To confront the theoretical values with real conditions and to measure the real values of CW and CL for our experimental setup an experiment was performed.A white (unfiltered) light illumination was provided by 5 mm diameter light source.Objective lenses 10 × /0.25 were used in this experiment.
To find the mutual intensity function a 2-axis piezo-positioner was used to translate the reference objective (O 2 ) in a direction perpendicular to the optical axis.Thus the image formed by the reference arm in the output plane was shifted with respect to the image formed by the object arm.The reference objective was translated in two directions -perpendicular to diffraction grating grooves (x axis) and parallel with diffraction grating grooves (y axis).The average values of the reconstructed amplitude were then computed in area of 5 px × 5 px within the central part of the image and the normalized values were plotted versus the lateral shift d OP of the image formed by the reference arm in the output plane (Fig. 2(a)).The CW was estimated as the FWHM of the mutual intensity function, giving d w,x ≈91 µm and d w,y ≈76 µm for the two directions respectively.
Similar procedure was performed to measure the mutual coherence function and to find the CL.However, at this time 1-axis piezo-positioner was used to translate the mirror M 2 .In this way the optical path difference (OPD) between the two interferometer arms was varied.The averaged and normalized amplitude values versus the OPD value d OPD were plotted and the CL was estimated as the FWHM of the obtained mutual coherence function giving d l ≈4 µm (Fig. 2(b)).
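The FWHM values quoted above can be extracted from the measured normalized-amplitude curves with a simple linear-interpolation routine such as the sketch below. This is only an illustration of the kind of post-processing involved; the actual routine used in the experiment is not described in the text, and the synthetic Gaussian curve serves merely as test data.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked curve y(x), by linear interpolation."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the two half-maximum crossings (guard the array edges)
    x_left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]]) if i0 > 0 else x[i0]
    x_right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]]) if i1 < len(y) - 1 else x[i1]
    return x_right - x_left

# Synthetic example: a Gaussian "mutual intensity" curve with FWHM ~ 91 um
d_op = np.linspace(-200, 200, 401)                  # lateral shift [um]
amp = np.exp(-0.5 * (d_op / (91 / 2.355))**2)       # FWHM = 2.355 * sigma for a Gaussian
print(f"estimated FWHM ~ {fwhm(d_op, amp):.1f} um")
```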
It can be seen that the experimentally determined values of CW and CL are higher when compared to the theoretical values.Partially it is an expected effect due to the increase of coherence by the imaging process as it was mentioned above.Moreover, the reference beam is spectrally dispersed when passing through the output lens contrary to the object beam.For this reason, small phase shifts introduced by residual aberrations of these lenses are not balanced in the output plane and individual interference patterns belonging to different wavelengths are unequally laterally shifted.The patterns belonging to different parts of the spectrum then match and thus contribute to the output signal for different values of the shifts d OP (in x axis) and d OPD , which is probably the reason for higher measured values of d w,x and d l .This effect together with the presence of secondary maxima of the spectrum (caused by the reflection diffraction grating) may explain also the side-lobes of the curve in the Fig. 2(b).
The obtained values demonstrate the extremely low coherence of illumination which the CCHM is capable to utilize.
Lateral resolution
Let the complex amplitude distributions of the object and reference waves in the output plane be o(x, y) and r(x, y), respectively, where r(x, y) = r 0 (x, y) exp(−i2πf C x) and r 0 (x, y) is the complex amplitude distribution of the reference wave expressed in the plane perpendicular to the propagation direction. Then the intensity distribution of the hologram generated in the output plane by interference of the two waves is given by

i(x, y) = |o(x, y) + r(x, y)|^2
        = |o(x, y)|^2 + |r(x, y)|^2 + o(x, y) r*(x, y) + o*(x, y) r(x, y),   (2)

where the asterisk denotes the complex conjugate operator and x, y are coordinates defined in the output plane. The first two terms in the second row of Eq. (2) correspond to the intensities of the object and reference waves, respectively. In the spatial frequency spectrum of the hologram these terms create the so-called zero-order term; or* is the image term and o*r is its complex conjugate, i.e. the twin image. Both the terms or* and o*r can be used for reconstruction of the object amplitude and phase (see section 7). In Fig. 3 the spatial-frequency spectrum support (areas of non-zero values) of a hologram is depicted with all the terms of Eq. (2) for CCHM in comparison with a typical DHM setup. For purposes of the following paragraphs it should be noted that the Fourier transform of the terms |o|^2 and |r|^2 is the autocorrelation of the Fourier transform of o and r, respectively, and the Fourier transform of the term or* is the convolution of the Fourier transforms of o and r* (and analogously for the term o*r). Let us now consider two extreme cases of illumination: fully spatially coherent and fully spatially incoherent.
In typical off-axis DHM setups spatially coherent sources such as lasers or laser diodes are usually used to ensure the proper functionality of the device. Therefore the frequency spectrum of r 0 is nearly a two-dimensional Dirac distribution. Thus the highest frequency produced by the terms or 0 * and o*r 0 in the spatial frequency spectrum is

f_max,or0* = f_max,o = NA_O / (m λ),   (4)

and their spectral supports are therefore circles of radius f_max,o (marked by the dashed line in Fig. 3). It can be seen that the use of spatially coherent light sources in these systems leads to a lateral resolution limit twice as coarse as in conventional optical microscopes, where the cut-off frequency is given by the standard formula f_max = 2 NA_O / (m λ) and where spatially low-coherent sources are used.
In the case of CCHM the use of spatially and temporally incoherent sources is allowed. Therefore an extended and broadband source can be employed, which provides a range of illumination directions in both interferometer arms. When a proper condenser is used so that the aperture of the objective is fully filled by the image of the source, then the highest spatial frequency produced by the terms or 0 * and o*r 0 is

f_max,or0*^CCHM = 2 f_max,o = 2 NA_O / (m λ)   (5)

(marked by the solid line in Fig. 3). It can be seen that the lateral resolution limit achievable by the CCHM with spatially incoherent illumination is fully comparable to conventional optical microscopes and it is half of the value for coherent illumination, the mode used in most current DHMs. Thus the lateral resolution limit of CCHM corresponds to an incoherent imaging process [25]. Since highly incoherent as well as highly coherent illumination is possible, the lateral resolution can be controlled arbitrarily in CCHM between the two extremes described by Eq. (4) and Eq. (5). Moreover, due to the achromatic interferometer design of the CCHM, the wavelength of the illuminating light can be varied arbitrarily (within the range of spectral transmissivity of the CCHM setup) to reach the best resolution achievable in the particular application.
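The difference between the two illumination regimes can be illustrated numerically for the 10x/0.25 objectives used throughout this paper. The sketch below refers the cut-off frequencies to the specimen plane by setting the magnification m to 1 in Eqs. (4) and (5); the 650 nm wavelength is the one used in the resolution experiment.

```python
NA = 0.25            # numerical aperture of the 10x/0.25 objectives
wavelength = 650e-9  # [m]

f_coherent = NA / wavelength        # specimen-plane cut-off for coherent (DHM-like) illumination, Eq. (4) with m = 1
f_incoherent = 2 * NA / wavelength  # specimen-plane cut-off for incoherent (CCHM) illumination, Eq. (5) with m = 1

print(f"coherent cut-off  : {f_coherent / 1e3:.0f} mm^-1 -> smallest resolvable period {1e6 / f_coherent:.2f} um")
print(f"incoherent cut-off: {f_incoherent / 1e3:.0f} mm^-1 -> smallest resolvable period {1e6 / f_incoherent:.2f} um")
```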
To confront these theoretical conclusions with real imaging conditions and to prove the influence of spatially incoherent illumination on the achievable resolution in CCHM, we observed a sample with a broad spectrum of spatial frequencies (the surface of a ground glass). Multiple captured holograms were averaged to increase the SNR and preserve the highest spatial frequencies located near the resolution limit of the objective lens (10×/0.25). Then the modulus of the spatial-frequency spectrum of the averaged hologram was calculated (Fig. 4). The images were captured under two different conditions of illumination: highly spatially incoherent (halogen lamp coupled into a 5 mm diameter light guide with an interference filter, λ = 650 nm, 10 nm FWHM) and highly spatially coherent (HeNe laser, λ = 633 nm). The spatially coherent illumination was provided to allow comparison of results obtained by CCHM in incoherent mode with those for a typical DHM (simulated by CCHM in coherent mode). The circles in Fig. 4 show the expected diameters of the spectral supports of the zero-order term and the image terms corresponding to Eq. (5) and Eq. (4). Although one has to be cautious when directly comparing the diameters because of the slightly different wavelengths used (650 nm vs. 633 nm), the experimental results show a good agreement with the theoretical assumptions. Also one can notice the different shapes of the spatial-frequency transmission profiles in the image terms, where the profile is approximately triangular for spatially incoherent illumination and rectangular for spatially coherent illumination. This is an important fact for understanding the difference in the achievable lateral resolution between spatially incoherent and spatially coherent illumination. While spatially coherent illumination will provide good contrast even at the maximum transmitted spatial frequency, the spatially incoherent illumination will provide contrast approaching zero at its maximum transmitted spatial frequency. However, the maximum transmitted spatial frequency for spatially incoherent illumination will be double that of spatially coherent illumination.
Spatial frequency of the diffraction grating
Conditions derived in the previous paragraphs are essential for determination of the diffraction grating spatial frequency. To perform a hologram reconstruction, total separation of the sideband terms or* and o*r from the zero-order term |o|^2 + |r|^2 is required in the spatial frequency spectrum, as depicted in Fig. 3. No overlap of these terms is allowed. It can be seen from Fig. 3 that the carrier frequency is then given as f_C^DHM ≥ 3a in the case of DHM and f_C^CCHM ≥ 4a in the case of CCHM, where a = f_max,o = NA_O/(m λ). The need for a higher carrier frequency in the case of CCHM is the consequence of the higher lateral resolution. This negatively influences the available field of view (FOV), as will be discussed in the next section. The total magnification m between the output plane and the object plane of objectives O 1 , O 2 is given as m = m_O m_OL, where m_O is the magnification of the objectives and m_OL is the magnification of the output lenses. The condition for the carrier frequency in the output plane of the CCHM is thus given by

f_C ≥ 4 NA_O / (m λ) = 4 NA_O / (m_O m_OL λ).

When considering Eq. (1), we obtain a condition for the diffraction grating grooves' spatial frequency in the form

f_G ≥ 4 NA_O / (m_O λ).

The spatial frequency of the diffraction grating used in our setup is f G = 150 mm−1, which is designed for λ = 650 nm and a ratio NA O /m O ≤ 0.025. When a shorter wavelength or objectives with a higher NA O /m O ratio are to be used, then higher values of f G are required to avoid an overlap of the sideband terms and the zero-order term in the spatial frequency spectrum (e.g. f G = 222 mm−1 is required for λ = 450 nm and NA O /m O = 0.025).
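The wavelength dependence of this requirement can be checked with the short sketch below, which assumes the non-overlap condition f_G ≥ 4 NA_O/(m_O λ) reconstructed above; it reproduces the 222 mm^-1 figure quoted for 450 nm, and for 650 nm gives a value close to the 150 mm^-1 grating actually used.

```python
def min_grating_frequency(na_over_m, wavelength):
    """Minimum grating groove frequency [1/m] so that the sideband and zero-order terms do not overlap."""
    return 4.0 * na_over_m / wavelength

na_over_m = 0.025  # NA_O / m_O for the objectives used in the setup
for wl in (650e-9, 550e-9, 450e-9):
    f_g_min = min_grating_frequency(na_over_m, wl)
    print(f"lambda = {wl * 1e9:.0f} nm -> f_G >= {f_g_min / 1e3:.0f} mm^-1")
# 650 nm -> ~154 mm^-1 (close to the 150 mm^-1 grating used), 450 nm -> ~222 mm^-1
```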
Output lens and the field of view
Output lenses are used in the CCHM setup to relay the images of the object planes (Sp, R) and the diffraction grating plane (DG) into the output plane (OP) of the interferometer. The second important role of the output lenses is to ensure proper sampling of the hologram by a detector. The magnification m OL of the output lenses depends on the maximum spatial frequency f OP,max present in the hologram at the output plane that has to be resolved and recorded digitally. Taking into account the rotation of the detector by 45° around the optical axis with respect to the interference fringes, this frequency can be derived as

f_OP,max = f_C cos(45°) + f_max,or* = f_G / (√2 m_OL) + 2 NA_O / (m_O m_OL λ).   (9)

Since the sampling rate should be at least 2.3 times higher [36], a condition for the spatial frequency f CCD (pixel density) of a CCD detector is given by

f_CCD ≥ 2.3 f_OP,max.   (10)
For f G = 150 mm −1 and a camera pixel size of 6.45 μm Eq. ( 10) gives m OL ≥ 2.7.The higher is f G , the finer is the interference structure in the output plane and the larger magnification is thus needed to resolve the fringes by a detector and consequently the smaller is the field of view.Therefore it is convenient to keep f G as low as possible.When compared to typical DHM, the resulting FOV is smaller in the case of CCHM due to higher lateral resolution (FOV dimensions are approximately 1. to provide better resolution when compared to DHM with the same lens, while with the lowest available magnification lens the DHM is able to provide larger FOV when compared to CCHM with the same lens.To extend the available FOV a larger detector with increased number of pixels can be used.There are also several methods enabling suppression of the zero order term in the frequency spectrum to improve the available bandwidth for the sideband terms (e.g [37].).In this way a lower magnification of output lenses is needed because of lower carrier frequency, which results in larger FOV dimensions.However, these methods are still more or less approximated.There are also several more parameters of the output lenses that should be taken into account such as numerical aperture, lateral resolution or accessibility of the back focal plane.A stronger limiting condition for numerical aperture is given in the reference arm where the reference beam is deflected by the diffraction grating and has to be collected by the output lens OL 2 .The lateral resolution has to be sufficient across the whole FOV to transfer all the spatial frequencies produced by objectives into the output plane.The accessibility of the back focal plane is important to enable elimination of all diffraction grating orders except the imaging order (l = + 1) by spatial filtering.
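The minimum output-lens magnification quoted at the beginning of this section (m OL ≥ 2.7) can be estimated as in the sketch below. It assumes the sampling condition of Eqs. (9) and (10) as reconstructed above, with the cos(45°) factor accounting for the detector rotation with respect to the fringes.

```python
import math

f_G = 150e3          # grating groove frequency [1/m]
pixel = 6.45e-6      # detector pixel size [m]
na_over_mO = 0.025   # NA_O / m_O of the objectives
wavelength = 650e-9  # [m]
oversampling = 2.3   # required samples per fringe period [36]

f_CCD = 1.0 / pixel  # detector sampling frequency [1/m]

# f_OP,max scales as 1/m_OL, so the condition f_CCD >= 2.3 * f_OP,max can be solved for m_OL directly.
f_OP_max_times_mOL = f_G * math.cos(math.radians(45)) + 2.0 * na_over_mO / wavelength
m_OL_min = oversampling * f_OP_max_times_mOL / f_CCD
print(f"minimum output-lens magnification m_OL >= {m_OL_min:.2f}")   # ~2.7
```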
Image processing
The reconstruction of the image amplitude and phase from a captured hologram is based on carrier removal in the Fourier plane [24,38].The hologram is Fourier transformed using the 2D fast Fourier transform (FFT) algorithm.Then the image spectrum in the sideband is extracted by a windowing operation.The window is centered at the carrier frequency f C and the size of the window corresponds to the maximum image term spatial frequency f max,or* .The frequency origin is translated to the center of the window and the spectrum is multiplied by an apodization function.Finally, the image complex amplitude is computed using the 2D inverse FFT and the image amplitude and phase are derived from the complex amplitude as modulus and argument, respectively.
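The carrier-removal procedure described above can be sketched in a few lines of Python/NumPy. This is an illustrative implementation, not the actual software used with the microscope; the carrier frequency, the sideband window radius and the conical apodization are assumptions chosen for the example and would in practice follow from the setup parameters.

```python
import numpy as np

def reconstruct(hologram, carrier, radius):
    """Reconstruct the complex amplitude from an off-axis hologram.

    hologram : 2D array of recorded intensities
    carrier  : (fy, fx) carrier frequency in cycles per pixel
    radius   : radius of the sideband window in cycles per pixel
    """
    ny, nx = hologram.shape
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))

    # frequency coordinates in cycles per pixel
    fy = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]

    # window centered at the carrier frequency (image term), with a simple conical apodization
    r = np.hypot(fy - carrier[0], fx - carrier[1])
    window = np.clip(1.0 - r / radius, 0.0, 1.0)
    sideband = spectrum * window

    # carrier removal: demodulation in the spatial domain is equivalent to shifting
    # the windowed spectrum so that its origin lies at the carrier frequency
    y = np.arange(ny)[:, None]
    x = np.arange(nx)[None, :]
    amplitude_field = np.fft.ifft2(np.fft.ifftshift(sideband))
    amplitude_field *= np.exp(-2j * np.pi * (carrier[0] * y + carrier[1] * x))

    # image amplitude and phase as modulus and argument of the complex amplitude
    return np.abs(amplitude_field), np.angle(amplitude_field)
```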
Phase measurement accuracy and precision
There are several parameters that can influence the accuracy and/or precision of reconstructed phase in holographic microscopy such as coherent noise, parasitic interferences, shot noise, readout noise, quantization noise, wavelength stability, air flow, temperature fluctuations, mechanical robustness, numerical reconstruction algorithm etc. Thanks to the temporally incoherent light source CCHM enables strong suppression of coherent noise and parasitic interferences.This brings high quality and accuracy of the phase measurement.The sensitivity for wavelength stability issues is reduced thanks to the achromatic interferometer configuration.On the other hand the spatial incoherence of the light source causes decrease of interference fringes' contrast in the output plane.However, this can be easily overcome by the use of detector with increased bit depth.
To estimate the phase measurement precision we followed an approach described e.g. in [39]. The temporal standard deviation σ was computed for each pixel of the blank reconstructed phase image throughout the 15 s long sequence of 140 captured images (no averaging used). In this way maps of temporal standard deviations of the reconstructed phase were calculated, providing information on the precision achieved in any particular pixel of the reconstructed image (Fig. 5). The measurement was performed with a 14-bit camera (1376 pixels × 1038 pixels) under two different degrees of temporal coherence of illumination: a halogen lamp filtered with an interference filter (λ = 550 nm, 10 nm FWHM) and unfiltered (white light). Examples of central parts of captured holograms are shown in Fig. 5(a, b). With filtered light the interference fringes utilize 62% of the dynamic range of the detector, while with white light the interference fringes utilize 28%. This gives 2.2× lower contrast for white light when compared to filtered light. The obtained values of temporal standard deviations are in the range of 0.002-0.006 rad for filtered light with a mode of σ̂_φ = 0.003 rad, and 0.005-0.015 rad for white light with a mode of σ̂_φ = 0.0085 rad. Higher values of σ in the case of white-light illumination are probably caused by imperfect alignment of the output angle β, which influences the achromaticity of the interferometer (see section 2) and decreases the contrast of the interference fringes. Higher values of σ at the edges of the FOV are caused by a slight decrease of the interference fringes' contrast in these areas, which is a consequence of spatially incoherent illumination [28]. When assuming a difference of refractive indices between the sample and the surrounding environment Δn = 0.5, the temporal standard deviation values can be converted to a real height: σ̂_h = 0.5 nm for filtered light illumination and σ̂_h = 1.5 nm for white light illumination. In such a case the phase-measurement precision effectively reaches a sub-nanometer regime with filtered light illumination. Since these results were achieved with an experimental setup built on an optical table, we suppose there is space for improvement in alignment and mechanical and thermal stability to further refine the phase measurement precision. There is also the possibility of frame averaging to achieve better precision values.
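The precision figures above can be reproduced from a stack of repeated blank-phase reconstructions as sketched below. The phase-to-height conversion assumes a transmission sample with h = φ λ / (2π Δn); this assumed relation reproduces the quoted sub-nanometer values.

```python
import numpy as np

def precision_map(phase_stack, wavelength, delta_n=0.5):
    """Per-pixel temporal standard deviation of the phase and its height equivalent.

    phase_stack : array of shape (n_frames, ny, nx), reconstructed phase in radians
    """
    sigma_phi = phase_stack.std(axis=0)                        # [rad], per pixel
    sigma_h = sigma_phi * wavelength / (2 * np.pi * delta_n)   # [m], assuming h = phi * lambda / (2*pi*dn)
    return sigma_phi, sigma_h

# e.g. the mode value 0.003 rad at 550 nm and delta_n = 0.5 corresponds to ~0.5 nm:
print(0.003 * 550e-9 / (2 * np.pi * 0.5) * 1e9, "nm")
```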
Coherence gating
Thanks to the spatially and temporally incoherent illumination, the CCHM is capable of coherence gating, i.e. a limited contribution of light scattered in out-of-focus planes of the specimen to the resulting image.Low spatial coherence suppresses influence of scattered light in such a way that it limits interference of low-coherence non-ballistic photons.Temporal incoherence is transformed by the diffraction grating into spatial incoherence; broad-spectrum source then causes a similar effect of coherence gating as a spatially incoherent monochromatic source [28].Limiting both spatial and temporal coherence in CCHM thus results in improved in-focus image contrast especially for objects embedded in a scattering media.In the case of reflected-light CCHM, true confocal-like optical sectioning by coherence gating is achieved [6,24,25].
To prove the coherence gating effect induced by incoherent illumination and the possibility of imaging through a scattering media in our transmitted-light setup, we observed amplitude object hidden behind a strong diffuser (D) -coverslip ground glass (Fig. 6(a)).In the object arm of the CCHM the diffuser placed in the out-of-focus plane spreads the image of the object in many directions.The reference arm then acts as a filter separating always a single image from the object arm image plane, where images spread one over each other are located because of scattering by the diffuser.In this way only the separated image contribute to the interference structure of the resulting hologram.Although the reference arm is usually adjusted to separate ballistic light (providing highest contrast of interference fringes), the diffuse light imaging is also possible with CCHM [29].It can be seen in Fig. 6 that the structure of the specimen in the case of conventional bright-field image (Fig. 6(b)) is completely undistinguishable due to the diffused light, while the reconstructed CCHM amplitude (Fig. 6(c)) and phase (Fig. 6(d)) images still clearly reveal the structure.Although the interference signal is weak in case of such a strong diffuser, it still enables CCHM to acquire high-quality images of objects hidden behind it.However, this is only possible when the diffuser is located in the out-of-focus plane, otherwise the structure of the diffuser would affect the reconstructed image of the observed specimen.
Influence of spatial and temporal coherence on the imaging properties
Imaging properties of the CCHM can be varied to match the requirements of any particular application.This is done by controlling the degree of spatial and temporal coherence of the illuminating light.The degree of spatial coherence is controlled by an aperture diaphragm changing the effective area of the extended light source.The degree of temporal coherence is controlled by the use of bandpass filters together with white light source (e.g.halogen lamp).Higher degree of coherence allows for wider range of numerical refocusing.Lower degree of coherence results in several advantageous imaging properties.Reduction of spatial coherence brings better lateral resolution (see section 4).Reduction of both, spatial and temporal coherence, allows for coherence gating [25,27,28,40], i.e. imaging through scattering media and confocal-like optical sectioning in the case of reflected-light setup.Also strong suppression of coherent noise and parasitic interferences is achieved in this way [41], resulting in high phase measurement accuracy and high imaging quality (Fig. 7, Fig. 8).When working in reflected-light mode, controlling the degree of coherence between coherent and incoherent mode enables a novel method of combined phase and depth-discriminated intensity imaging which overcomes the known 2π phase ambiguity [6].In this way rough surfaces can be measured with nanometer precision.In addition to these features, achromatic off-axis geometry of CCHM allows for adaptation of illumination wavelength to take into account the spectral sensitivity of the specimen, spectral sensitivity of the detector or spectral output power of the light source.Also achievable lateral resolution and phase measurement precision can be varied in this way to optimize imaging performance in particular application.Composite color images of a sample can be obtained if a color camera is employed or by combining separate RGB intensity images.
Conclusions
The proposed setup of CCHM enables off-axis holographic microscopy with completely temporally and spatially incoherent (i.e.broadband and extended) light sources.The degree of coherence influences strongly the imaging characteristics of CCHM, thus controlling the degree of coherence brings the possibility to adapt the imaging characteristics according to particular application requirements.The main optical parameters were derived.Lateral resolution limit fully comparable to conventional widefield optical microscopes and twice smaller when compared to typical DHMs was demonstrated with spatially incoherent illumination.Also the phase measurement precision reaching a sub-nanometer regime was demonstrated.Coherence gating effect was proved when imaging an amplitude object hidden behind a strong diffuser.The influence of spatial and temporal coherence on the imaging properties was discussed pointing out the benefits of incoherent off-axis holography and the high imaging quality was presented.
The imaging characteristics of CCHM in coherent mode are comparable to typical DHM setups including the possibility to refocus numerically.In incoherent mode the CCHM is capable to provide high-quality speckle-free coherence-gated quantitative phase contrast imaging with sub-nanometer phase measurement precision and lateral resolution fully comparable to conventional optical microscopes.On the other hand these benefits of incoherent off-axis holography are at the price of more complex optical design.When compared to typical DHM setups, the CCHM provides better (lower) resolution/FOV ratio.Moreover the achromatic geometry of CCHM allows the illuminating wavelength to be chosen arbitrarily according to any requirement of any particular application or to optimize the imaging characteristics such as lateral resolution or phase measurement precision.It should be also noted that there is a limitation common for all off-axis geometry setups (including CCHM), which is the impossibility to fully exploit the available spatial frequency bandwidth of the detector.However, this limitation is balanced by the one-shot real-time measurement capability, which is the domain of off-axis systems.
When compared to previous generation of CCHM [28], the newly proposed setup eliminates spatial coherence limitation at wavelengths different from the central wavelength thus enabling the use of fully spatially incoherent sources at arbitrary wavelength of the visible spectrum.The number of required objective lenses was reduced from four to two and standard condensers are used with no need for replacement when magnification is changed.The working space and spectral transmittance were substantially improved to levels fully comparable with conventional optical microscopes.
By introducing the illumination beams into infinity spaces between objectives and tube lenses, the proposed setup can be easily adapted for reflected-light mode.Also multimodality can be achieved by implementing other imaging or micromanipulation techniques enabling CCHM to profit from combined holographic imaging.
Thanks to the real-time, non-invasive and marker-free imaging character, the CCHM in transmitted-light mode is very convenient for imaging of living cells [3,26].In such applications imaging through scattering media is highly desirable feature of CCHM, which is enabled by the use of incoherent illumination.The CCHM in reflected-light mode is most frequently used for surface profiling [6,7,27], where the incoherent illumination enables a novel combined phase and depth-discriminated intensity imaging to overcome the 2π phase ambiguity [6].
The proposed CCHM technology is patented by Brno University of Technology [42].
Fig. 2 .
Fig. 2. (a) Normalized amplitude versus the lateral shift d OP of the image formed by the reference arm in the output plane (measured in the output plane) in x and y axes and the corresponding FWHM values giving the estimation of CW.(b) Normalized amplitude versus optical path difference d OPD and the corresponding FWHM value giving the estimation of CL.
Fig. 3 .
Fig. 3. Scheme of an ideal spatial-frequency spectrum support of a hologram captured by CCHM with spatially incoherent illumination (solid circles) and by a typical DHM setup using spatially coherent illumination (dashed circles). The theoretical highest lateral frequency f_max,o carried by o is given by the numerical aperture NA O of the objective, the total magnification m (between the output plane and the object plane of the objectives) and the wavelength of light λ as f_max,o = NA_O/(m λ).
Fig. 4 .
Fig. 4. Modulus of spatial-frequency spectrum of a hologram (average of 30 images) captured by CCHM under condition of (a) spatially incoherent illumination (at 650 nm) and (b) spatially coherent illumination (at 633 nm).The circles show the expected diameters of spectral supports of zero-order term and image terms corresponding to Eq. (5) and Eq.(4).The amplitude values are in logarithmic scale (arbitrary units).Objectives used: 10 × /0.25.
Fig. 6 .
Fig. 6.Observation of an amplitude object hidden behind a strong diffuser.(a) Illustration of the object arm with inserted diffuser (D), (b) conventional bright-field image (captured with shutter closed in the reference arm), (c) reconstructed amplitude, (d) reconstructed phase.The amplitude and phase were reconstructed using a single hologram (no averaging).Objectives used: 10 × /0.25, interference filter λ = 650 nm, 10 nm FWHM.
Fig. 7 .
Fig. 7. Demonstration of imaging quality when observing a resolution target in (a) spatially and temporally low-coherent illumination (halogen lamp coupled into 5 mm diameter light guide with interference filter λ = 650 nm, 10 nm FWHM), (b) spatially and temporally coherent illumination (HeNe laser, 633 nm).Reduction of coherent noise and parasitic interferences is well demonstrated in the case of incoherent illumination as well as higher achieved lateral resolution (although it cannot be directly compared due to slightly different wavelengths used).Objectives used: 10 × /0.25.
Fig. 8 .
Fig. 8. Phase images of well spread cells of human breast adenocarcinoma cell line MCF-7 growing in vitro in eutrophic conditions. (a) Unwrapped phase image, (b) pseudo-color representation of unwrapped phase, (c) pseudo-color 3D representation of unwrapped phase. Images captured by CCHM at 650 nm (10 nm FWHM) with 10×/0.25 objectives.
| 10,682 | 2013-06-17T00:00:00.000 | [ "Physics" ] |
Interferometric distributed sensing system with phase optical time-domain reflectometry
We demonstrate a distributed optical fiber sensing system based on the Michelson interferometer of the phase sensitive optical time domain reflectometer (φ-OTDR) for acoustic measurement. Phase, amplitude, frequency response, and location information can be directly obtained at the same time by using the passive 3×3 coupler demodulation. We also set an experiment and successfully restore the acoustic information. Meanwhile, our system has preliminary realized acoustic-phase sensitivity around −150 dB (re rad/μPa) in the experiment.
Introduction
The distributed optical fiber acoustic sensors (DAS) offer the capability of measurement at thousands of points simultaneously, using a simple and unmodified optical fiber as the sensing element. It has been extensively studied and adopted for industrial applications during the past decades. Up to now, the distributed optical fiber measurements mainly include optical fiber interferometer sensors and optical backscattering based sensors. Interferometer sensors acquire distributed information by integration of the phase modulation signals, and usually two interferometers are used to determine the position, including combining the Sagnac to a Michelson interferometer [1], modified Sagnac/Mach-Zehnder interferometer [2], twin Sagnac [3]/Michelson [4]/Mach-Zehnder [5] interferometers, and adopting a variable loop Sagnac [6]. Another distinguished technique is the use of optical backscattering based sensors. A promising technique is phase sensitive optical time domain reflectometer (φ-OTDR) using a narrow line-width laser [7,8]. Brillouin-based dynamic strain sensors have been researched recently [9]. Recently, a hybrid interferometer-backscattering system is demonstrated [10], but the interferometer and the backscattering parts are working separately.
A major limitation of those distributed sensors above is that they are incapable of determining the full vector acoustic field, namely the amplitude, frequency, and phase, of the incident signal, which is a necessity for seismic imaging. Measuring the full acoustic field is a much harder technical challenge to overcome, but in doing so, it is possible to achieve high resolution seismic imaging and also make other novel systems, for example a massive acoustic antenna.
In this paper, we demonstrate the design and characterization of a distributed optical fiber sensing system based on the Michelson interferometer of the φ-OTDR for acoustic measurement. Phase, amplitude, frequency response, and location information can be directly obtained at the same time. Experiments show that our system successfully restores the acoustic information and has preliminarily realized the acoustic-phase sensitivity around -150 dB (re rad/μPa). Our system offers a versatile new tool for acoustic sensing and imaging, such as through the formation of a massive acoustic camera/telescope. The new technology can be used for surface, seabed, and downhole measurements all by using the same optical fiber cable.
Experimental setup and signal processing
The experimental setup of the Michelson interferometer of the φ-OTDR is shown in Fig. 1. The light source is a narrow linewidth laser with the maximum output power of 30 mW and linewidth of 5 kHz. The continuous wave (CW) light with a wavelength of 1550.12 nm is injected into an acoustic-optic modulator (AOM) to generate the pulses, whose width is 200 ns and the repetition rate is fixed at 20 kHz. The maximum detection length is related to the repetition rate of the pulse. The time interval among the pulses should be larger than the round trip time that the pulses travel in the detection fiber to keep only one pulse inside the detection fiber. For the 20 kHz repetition rate, the detection range is around 5 km which is determined by L<c/2nf. The detection frequency range is also related to the repetition rate. In our case, the highest detection frequency is no more than 20 kHz theoretically.
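The range limit quoted above follows directly from the pulse repetition rate; a quick check is given below, assuming a group index of about 1.47 for standard single-mode fiber (the text does not state the exact value used).

```python
c = 3.0e8     # speed of light in vacuum [m/s]
n = 1.47      # assumed group index of the detection fiber
f_rep = 20e3  # pulse repetition rate [Hz]

# Only one pulse is allowed inside the detection fiber at a time: L < c / (2 n f)
L_max = c / (2 * n * f_rep)
print(f"maximum detection length ~ {L_max / 1e3:.1f} km")   # ~5.1 km, i.e. around 5 km
```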
An erbium-doped fiber amplifier (A) is used to amplify the pulses, and the ASE noise is filtered by an optical fiber Bragg grating filter (F). Then the amplified pulses are launched into a single mode detection fiber (Corning SMF-28e) by a circulator. The Rayleigh back-scattering is amplified (A) and filtered (F) again to obtain a better signal-to-noise ratio (SNR) and then injected into a Michelson interferometer which consists of a circulator, a 3×3 coupler, and two Faraday rotation mirrors (FRMs) [11]. The half arm length of the Michelson interferometer s is set to 5 m. The final interference signals outputting from the 3×3 coupler are collected by three photodetectors (PD1-3), and then the signal processing scheme is accomplished by a software program. Theoretically, there is a 120° phase shift between two adjacent PDs. Accordingly, the outputs of the three PDs can be expressed as

I_k(t) = D + I_0 cos[ϕ(t) − 2π(k − 1)/3],  k = 1, 2, 3,

where D is the DC offset, I_0 is the interference amplitude and ϕ(t) = ϕ_s + ϕ_n + ϕ_0. ϕ_s, ϕ_n, and ϕ_0 are respectively the signal to be detected, the noise, and the intrinsic phase of the system. For each point on the detection fiber, ϕ_s is obtained after the demodulation process shown in Fig. 2. It can directly demodulate all the information from the detected signal at the same time without any Fourier transforms. In our experiment, 2000 periods for detection fiber scanning are recorded by a high-speed oscilloscope with 100 MHz sampling rate, and the total data acquisition time is 0.1 s. Here, we choose a 200 m detection fiber and several individual acoustic frequencies within the detection length and frequency range as a test example. Two piezoelectric transducer (PZT) cylinders, each with 10 m of single mode fiber wound on it, are placed at 100 m and 160 m along the 200 m detection fiber in our system as the acoustic sources. Both PZTs are driven by two function generators. To eliminate the different frequency responses of the detection fiber, we set the two function generators to output the same sine-wave with the same frequency of 200 Hz but with amplitudes of 1 V and 2 V. The amplitude ratio A_2V/A_1V = 1.875/0.913 ≈ 2.054, nearly a factor of two between the 2 V and 1 V signal amplitudes. Also, the background noise of the two demodulated signals is around −60 dB [= 10 lg(A_signal/A_noise), about 1×10⁻³ rad], so that the SNR is 29.6 dB. This result indicates that our system can well recreate the signals in their own proportions. Moreover, we use a water tank system to test the demodulation capability of our system (Fig. 4). An underwater speaker is fixed in the tank and driven by a function generator with a 200 Hz separate sinusoidal signal. We wrap the sensing fiber from the DAS instrument into a 10 m length fiber ring. A commercial piezoelectric hydrophone is also placed close to the fiber ring to measure the acoustic pressure amplitude. The fiber ring and the piezoelectric hydrophone are placed 5 cm away from the underwater speaker so that the sound wave produced by the speaker can be directly transmitted to the fiber. The hydrophone signal is used to relate the phase measurement to the acoustic pressure. So the acoustic pressure amplitudes for different acoustic intensities at 200 Hz are measured using the piezoelectric hydrophone, and the phase-pressure sensitivity of our φ-OTDR-interferometry system is calculated. Table 1 shows the phase-pressure data and results of our system. With a decrease in the acoustic pressure amplitudes, the demodulated phase changes also decrease.
But the phase-pressure sensitivities are almost the same, around 0.026 rad/Pa (about −150 dB re rad/μPa, computed as 20 lg of the sensitivity expressed in rad/μPa), obtained by dividing the demodulated phase amplitude of our system by the actual acoustic pressure amplitude detected by the piezoelectric hydrophone. This indicates that our system can well demodulate the amplitude, frequency, and phase of the acoustic signals with a sensitivity of −150 dB.
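A minimal arctangent-based sketch of the 3×3-coupler demodulation and of the sensitivity calculation is given below. It assumes the three detector outputs have the ideal form written above; the instrument's actual demodulation scheme (Fig. 2) is not spelled out in the text, so this is only one common way to recover ϕ, and the 200 Hz test-signal parameters are illustrative.

```python
import numpy as np

def demodulate_3x3(i1, i2, i3):
    """Recover the optical phase from three 120-degree-shifted interferometer outputs."""
    I = np.vstack([i1, i2, i3]).astype(float)
    theta = 2 * np.pi * np.arange(3) / 3           # nominal phase offsets of the three PDs
    x = I - I.mean(axis=0, keepdims=True)          # the common DC term cancels across the 3 channels
    c = (x * np.cos(theta)[:, None]).sum(axis=0)   # = (3/2) I0 cos(phi)
    s = (x * np.sin(theta)[:, None]).sum(axis=0)   # = (3/2) I0 sin(phi)
    return np.unwrap(np.arctan2(s, c))

# Illustrative test: a 200 Hz phase signal of 0.9 rad amplitude sampled at 20 kHz
t = np.arange(0, 0.1, 1 / 20e3)
phi = 0.9 * np.sin(2 * np.pi * 200 * t)
outs = [1.0 + 0.5 * np.cos(phi - k * 2 * np.pi / 3) for k in range(3)]
phi_rec = demodulate_3x3(*outs)
phi_rec -= phi_rec.mean()
print("recovered amplitude ~", 0.5 * (phi_rec.max() - phi_rec.min()), "rad")  # ~0.9 rad

# Phase-to-pressure sensitivity in dB re rad/uPa, e.g. 0.026 rad/Pa -> about -150 dB
sensitivity_rad_per_uPa = 0.026 / 1e6
print("sensitivity ~", 20 * np.log10(sensitivity_rad_per_uPa), "dB re rad/uPa")
```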
Further discussion
Another point that should be explained is that, due to the participation of the interferometer, the apparent width of a source demodulated by our φ-OTDR-interferometer system is broader than its original width. The extra length between the original source and the demodulated one equals the arm length of the interferometer. On the other hand, the introduction of the Michelson interferometer benefits the signal demodulation and could improve the sensitivity of our system, because the effective detecting fiber is expanded; it could increase the dynamic sensitivity of our system significantly. These parameters should be chosen wisely in real applications. The polarization of the Rayleigh backscattering is also important to our system. The advantage of using a Michelson interferometer rather than a Mach-Zehnder one is that the FRMs keep the polarization states of the input and output light independent of the fiber birefringence. Also, the interferometer is not used as the sensing fiber, so its polarization is not a critical issue. Further experiments will be done to eliminate the polarization influence with a polarizer at the beginning of the detection fiber.
In applications, the mapping of acoustic events is very important, since it directly reveals what is happening around the detection area. Classically, point sensors have been deployed as serial arrays to determine when, what, and where the acoustic event is, which makes such monitoring costly. Distributed sensors are much cheaper, but a major limitation is that they are incapable of determining the full vector acoustic field, namely the amplitude, frequency, and phase of the incident signal, which is a necessity for seismic imaging. Our φ-OTDR interferometry method offers a versatile new tool for acoustic mapping and imaging in one single optical fiber, such as through the formation of a massive acoustic camera/telescope. For example, it is possible to use our system as optical hydrophone or directional accelerometer arrays and even to measure on existing arrays directly with the appropriate wavelength choice. It can also be used in many types of seismic acquisition, encompassing vertical seismic profiling, in both flowing and non-flowing wells, and surface seismic surveys.
Conclusions
In this paper, we demonstrate the design and characterization of a distributed optical fiber sensing system based on the Michelson interferometer of the φ-OTDR for acoustic measurement. The phase, amplitude, frequency response, and location information can be directly obtained at the same time by using the passive 3×3 coupler demodulation. Experiments show that our system successfully restores the acoustic information with the acoustic-phase sensitivity around −150 dB (re rad/μPa). Our system offers a versatile new tool for acoustic sensing and imaging, such as through the formation of a massive acoustic camera/telescope. The new technology can be used for surface, seabed, and downhole measurements. The use of the system in downhole applications allows a continuum of benefits extending to flow profiling and condition monitoring, all using the same optical fiber cable.
| 2,429 | 2016-11-30T00:00:00.000 | [ "Physics" ] |
Evidence of mud diapirism and coral colonies in the Ionian Sea (Central Mediterranean) from high resolution chirp sonar survey
A chirp sonar survey in the Ionian Sea investigated the Calabrian margin, the Calabrian accretionary wedge, the Taranto Trench and the Apulian foreland. Shallow tectonic structures have been related to deeper ones, recognised on CROP seismic profiles. The identified echo characters have been compared with those described in the modern literature and have been related to different kinds of sediments, on the basis of core samples. Based on echo character and morphology we have recognised: 1) A widespread presence of mounds, up to 50 m high, occurring on the Apulian plateau as isolated mounds in the deepest zones (1600-800 m) and in groups in the shallower ones (800-600 m); they have been interpreted as coral mounds, according to a recent discovery of living deep water coral colonies in this zone. 2) Some mud diapirs, isolated or in groups of two or three elements, widespread in the whole study area. Similar to what has been observed on the Mediterranean ridge, their presence suggests the activity of deep tectonic structures (thrusts and faults) and a reduced thickness (or absence) of Messinian evaporites in this part of the Ionian Sea. Mailing address: Dr. Nicoletta Fusi, Dipartimento di Scienze Geologiche e Geotecnologiche, Università di Milano «Bicocca», Piazza della Scienza, 4, 20126 Milano, Italy.
Introduction and geological setting
In March 2002 a cruise onboard the R/V Urania, collected about 1100 km of Chirp 2 sonar profiles in the Ionian Sea, from the inner parts of the Calabrian Arc accretionary prism to the Apulian foreland, crossing the Taranto Trench (fig.1).Part of the chirp data were collected along a trackline which follows CROP seismic line M5, oriented SW-NE (Doglioni et al., 1999;Merlini et al., 2000).A chirp sonar grid has been acquired off-shore Puglia, where living deep water corals (n.d.r.Lophelia pertusa) have been recently dredged (Tursi and Mastrototaro, pers. comm.).
The Apennine belt consists of thrust sheets of the sedimentary cover (Mostardini and Merlini, 1986;Sella et al., 1988;Patacca and Scandone, 2004), later cross-cut by deep normal faults, along which fluids migrate upwards (Doglioni et al., 1996).The Southern Apennines belt is marked by compressive thin-skinned tectonics at the front and to the east and extensional thick-skinned tectonics in the main chain and to the west of it (Merlini et al., 2000).
Seismic line M5, underlying our chirp survey, runs from off-shore Eastern Calabria to off-shore Southern Puglia and is considered one of most representative lines of the Italian project CROP (Doglioni et al., 1999).On the Calabrian margin this section crosses extensional faults, which inland are well known to form grabens (Val d'Agri, Vallo di Diano;Merlini et al., 2000) and are responsible for the high seismicity of the Southern Apennines (Amato and Selvaggi, 1993); on the eastern side the section crosses the Apulian swell which appears rather as a 100 km wide buckling anticline, deforming the entire lithosphere (Merlini et al., 2000).
Objectives of our study are: 1) to identify the main shallow active tectonic structures; 2) to integrate these structures into the regional framework, defined by deeper seismics (CROP seismic line M5); 3) to eventually relate regional fluid flow with the presence of living coral colonies and with active tectonics.
Methods
The chirp survey was carried out using the ship's DATASONIC DSP-661 Chirp 2 Profiler. Trigger rates varied from 2 to 4 s depending on water depth, which ranged from about 600 m to about 2500 m; the frequency spectrum of the sound signal is between 3 and 7 kHz. The survey was run at a rather constant speed of about 10 knots, except for the lines on the Apulian plateau, where our main interest was concentrated, which were run at a lower speed of 4 knots.
Data were displayed on an EPC 9800 and recorded on Magneto-optical disks in SEG-Y format, with the exception of the first part of line Calabrian ridge, which was not recorded due to technical problems.Synchronization of this instrument with the navigation system (NAVPRO) allows precise location of structures on chirp profiles and precise localization for core sampling sites (gravity cores, fig.1).Sediment samples were collected with a 10 cm diameter gravity corer at 6 stations (fig.1); data from multi-corer, used mainly for biological interest, were also taken into account.
The digital chirp data were initially processed with suitable software to provide a useful image with proper geographic locations of the features identified in the imagery. The processing steps included replaying the data in several different vertical distance range windows, applying image corrections, and digitising the sub-bottom. The imagery was then visualised in its proper geographic location and exported from the software package as TIFF images with associated georeference information.
The chirp profiles yield a detailed image of the surficial structure up to 100 ms twt (corresponding to 80 m of sediments, using an interval velocity of 1600 m/s), depending on seafloor dip and characteristics. The seismic facies identified on the chirp profiles were compared with those identified by other authors (Damuth and Hayes, 1977; Damuth, 1980; Lee et al., 2002) and ground-truthed by means of core logs.
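As a quick check of the quoted penetration, the depth equivalent of a two-way travel time follows from the usual conversion; the worked line below simply substitutes the interval velocity and travel time given above.

```latex
z \;=\; \frac{v\, t_{\mathrm{twt}}}{2}
  \;=\; \frac{1600~\mathrm{m\,s^{-1}} \times 0.100~\mathrm{s}}{2}
  \;=\; 80~\mathrm{m}
```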
Results and discussion
The chirp survey (fig. 1) crosses the following structural domains, from SW to NE: the Calabrian margin, the accretionary wedge, the Taranto Trench and the Apulian foreland.
The seismic section identified by means of chirp profiling can be referred to Plio-Pleistocene units on the Calabrian margin, the accretionary wedge and the Taranto Trench, whereas on the Apulian plateau Neogene -Pleistocene units outcrop, forming the Apulian swell; for stratigraphic and structural interpretation of CROP seismic line M5 we refer to Doglioni et al. (1999) and Merlini et al. (2000).Seismic facies were related, if possible, to echo types identified by Lee et al. (2002) and indicated following their nomenclature (roman number + arabic one).
Calabrian margin
In this area the Chirp 2 system has generally a good penetration, ranging from 50 to 75 ms twt.The seismic sequence is characterised by strong reflectors alternating with transparent intervals, which mantle the topography, pinching out on the top of topographic highs.Prevalent echo type of this area can be attributed to type I-2.
This part of the seismic section is characterised by widespread dip-slip faults, which offset the seafloor for several hundred meters, the total throw of the Calabrian margin being of about 1500 m.These faults cross-cut both the Plio-Quaternary sedimentary successions and the buried pre-Messinian back-thrusts (see fig. 3 of Merlini et al., 2000); their activity seems thus to be Plio-Quaternary.This active tectonics results in an articulated topography, with several horsts (off-shore Punta Stilo and Rocella Ionica) mantled by sediments (fig.2).
The steepest flanks of horst structures (about 1.5°gradient) are affected by mass movements: landslide scarps, about 20 m high, can be identified, due to sharply truncated reflectors.Landslide deposits, with chaotic reflectors inside, (type IV-1) extend for about 8.5 km, both on the lower part of the slope and in the basin.Landslides seem to involve only the superficial and probably unconsolidated part of the sedimentary succession.
Several deeply incised canyons (fig.2) are probably related to the outlet of nearby rivers.Off-shore Punta Stilo south-eastward dipping reflectors (fig.3) are truncated by an erosional surface at about 150 m depth.A subhorizontal erosional surface, mantled by reflectors and probably related to the last glacial lowstand, can also be identified in this area at about 80-100 m b.s.l.
At least two non reflective mounds, 100-150 m high and 2500 m across, can be identified in this area, off-shore Rocella Ionica; multi-corer, put down twice on the top of one of these mounds, recovered no sediments at all, thus confirming the total lack of soft sediments on the top of the mounds.On the basis of seismic response and morphology they are interpreted as mud volcanoes.
Accretionary wedge
The accretionary wedge is characterised by a rough topography, ranging between 1400 m and 2000 m depth.The shallow sediments are gently deformed by symmetrical and asymmetrical waves, referred to echo type III-3 and interpreted as the result of creeping on the slope of the accretionary wedge; some patches of debrites (echo type III-2) are also present.Steep flanks are characterised by slumps (fig.4).
A distinctive character of this area is a flat plateau, delineated by 1600 m and 1800 m isobaths and gently dipping south-eastward (0.2°), here called Punta Stilo plateau (fig.4).It is characterised by an abrupt change in seismic character: the sea bottom is highly reflective and reflects nearly all the acoustic energy; only occasionally very few reflectors can be identified below sea bottom (echo type I-1).Micro undulations of the sea bottom of this plateau (echo type III-2) are visible only on Arco Calabro line (SE-NW).Punta Stilo plateau is interpreted as the outcropping of more resistant lithologies.We suggest that these lithologies are coarse grained turbidites, on the basis of «Rocella» core, which recovered several meters of mud intercalated with sandy layers; this interpretation is supported by other seismic surveys in this area, which outline the pinching out of reflectors below the sea bottom, suggesting lobate and basin-fill geometry of deposits (Merlini, pers. comm.).The origin of sediments involved in turbidites is probably the steep continental margin off-shore Calabria.
Geophysical evidence of mud volcanism
Several transparent mounds, 100-150 m high and from 2000 m to 3500 m across, have been identified. They occur in small groups of two or three elements and in many cases they present asymmetric flanks; they can be mantled by sediments, sometimes characterised by slumpings (fig. 4). The dimensions of the mounds are consistent with the mud volcanoes and mud cones identified on the Mediterranean ridge (Camerlenghi et al., 1992, 1995; Limonov et al., 1994; Fusi and Kenyon, 1996, among others). Two of these mounds were investigated by a detailed survey that included sampling. Core GC 19, taken on top of one mound (fig. 1), recovered only soft mud alternating with a tephra layer and some sandy layers; it can thus be inferred that the mud breccia, if present, is buried below several meters of layered sediments, as suggested by the reflectors on top of the mound.
A peculiar morphologic and seismic feature of this area is a U-shaped valley, here called Thorgatten, about 130 m deep (fig. 4); sub-horizontal reflectors can be seen on its flat bottom, which is about 3.5 km wide. It is bordered by two acoustically transparent mounds, interpreted as mud volcanoes, presenting steep internal flanks, whereas the outer flanks are characterised by dipping reflectors, sometimes chaotic due to slumping. The Thorgatten lies on the outcrop of the minor southwestward-verging thrusts identified on CROP line M5. A similar relationship between mud volcanoes and deep compressive tectonics has been delineated on the Mediterranean ridge (Camerlenghi et al., 1995).
Although we did not observe any chimney cross cutting the sedimentary sequence below the mud volcanoes, we did observe (fig.4) a typical character of mud volcanoes worldwide, that is the ring shaped depression filled by re-deposited sediments surrounding the volcano (see fig. 15 of Camerlenghi et al., 1995).
The presence of mud volcanoes on the Calabrian ridge was already known: a few small isolated mud cones were identified by means of a discontinuous side scan sonar survey (GLORIA; Fusi and Kenyon, 1996); they have a round shape and seem to recur as isolated spots, whereas on the Mediterranean ridge mud volcanoes are found in groups, of different shapes and dimensions, on the northern edge of the ridge (Camerlenghi et al., 1992, 1995; Limonov et al., 1994; Fusi and Kenyon, 1996); mud volcanoes on the Calabrian ridge are generally smaller than those on the Mediterranean ridge (see fig. 3 of Fusi and Kenyon, 1996). On the basis of GLORIA data (Fusi and Kenyon, 1996) it can be hypothesized that most mud volcanoes on the Calabrian ridge are circular or sub-circular, but it cannot be excluded that some of them, especially those located along thrust planes, such as Thorgatten, are mud ridges.
Taranto Trench
The Taranto Trench appears as an almost symmetrical depression (fig.1), about 2500 m deep (fig.5) and about 9.5 km wide; both NE and SW slopes of the trench have the same gradient (about 18°).Its axis, which follows the outcropping of the accretionary wedge front, has an apenninic direction (NNW-SSE) in this part of the Ionian Sea.Along its axis, the topography of the Taranto Trench is characterised by two main steps of about 200 m each at 2800 m and 2700 m depth (fig.6).The deepest part of the line is characterized by sharp surface echo, with no sub-bottom reflectors, referred to echo type I-1.The flat area starting after the second steps, from 2600 m to 2400 m depth shows two main characteristics: a) reflectors, generally mantling the topography but in some cases pinching out on small mounds; b) small basins, about 1 km wide and 50 m deep, filled up by about 20 m of sediments.The upper part of the trench is characterised by a hummocky topography with some chaotic reflectors, referable to echo type IV-3, interpreted as interlayered debrites and turbidites.
As for the Punta Stilo plateau, we suggest that the Taranto Trench is filled by coarse grained turbidites; reflectors identified on CROP seismic line M5 in the Taranto Trench, which onlap the Apulian platform, suggest different turbiditic episodes.Turbidity currents, probably originating from the inner parts of the Taranto Gulf, are channelled along the Taranto Trench axis, eroding in part former deposits and filling small basins.
Apulian plateau
The Apulian plateau rises from more than 2400 m to about 200 m depth through several westward dipping deep seated faults, with a total throw of about 1500 m, also affecting the sea-bottom.Merlini et al. (2000) tentatively interpret these south-western scarps on the Apulian plateau as an example of foreland inversion and thus they hypothesise two thrust planes (fig.5 of Merlini et al., 2000).On chirp we have no evidence of such faults.
Few probable mud volcanoes were identified on the slope toward the Taranto Trench and are probably related to these deep seated structures.In this area the seismic signal has no penetration, due to the steepness of relief.The diffuse presence of irregular overlapping hyperbolae (echo type III-1) prevents identifications of the regional westward dipping monocline visible on CROP line M5.
At about 500 m depth, three antiforms can be recognised on chirp profiles (figs.7 and 8); their axis is appeninic (NNW-SSE) and parallel to the main dip slip faults of the Apulian Plateau.The first one from SW is the more evident, both from bathymetry and from seismics.The chirp survey suggests an asymmetric antiform, with its NE flank mantled by reflectors, 50 ms thick, onlap- ping the crest of the structure and probably disrupted by a normal Apenninic fault, which could be interpreted as a conduit for fluid expulsion (see 3.4.1).The south-western flank of this structure presents sub-horizontal reflectors, truncated by a sharp erosional surface at about 750 m depth; the outcrop or subcrop of old consolidated sedimentary successions can be reasonably supposed.On the top of this anticline small reverse faults were identified, suggesting a recent activity of this structure.This structure (figs.7 and 8) can be recognised also on CROP line M5, where it is underlined by two reverse faults which isolate a wedge, extruded upward.These reverse faults seem to affect both the Mesozoic Apulian Carbonate platform and the Neogene-Pleistocene sediments (fig.6 of Merlini et al., 2000).
A gravity core taken at the top of this anticline structure (GC 13, 666 m depth) recovered 261 cm of mud, stiffer toward the bottom of the core, with pteropod fragments and foram tests in the upper 13 cm (fig.9).This structure is followed to the NE by two other antiforms, less ev-ident from their morphology, but still identifiable by seismics (fig.8).
On the basis of chirp data we interpret both the three anticlines and the reverse faults as minor structures, ductile and tensile respectively, connected to the Apulian lithospheric anticline.
On the Apulian plateau an erosional surface at about 900 m depth is cut by several diapiric structures, sometimes crosscutting each other, and covered by a transparent sedimentary interval, which is thinner on structural highs and thicker in small basin (fig.10).Some strong reflectors, thicker north-westward, mantle some of the diapiric structures; they are reduced southeastward to one single reflector, which is cut by the diapiric structures.A semi transparent sedimentary interval mantles all the diapiric structures, onlapping south-eastward on the swelling of a nearby horst structure.In analogy with the other transparent mounds, these structures are interpreted as sub-cropping mud diapirs, deforming all the sedimentary sequence.Their intrusion is thus interpreted as an actual deformation structure, since it offsets the sea bottom; as-suming this hypothesis, the sediment deformation rate is higher than the sedimentation rate.
Geologic and structural interpretation of the study area
A synthetic structural-geological map and a cross section of the study area were produced (figs.11 and 12) on the basis of: a) the echo characters and shallow structures identified on chirp survey; b) core logs collected as ground truth data; c) deeper seismics (CROP line M5) and its structural interpretation.
The transition between the extensional tectonics of the Calabrian margin and the compressive tectonics of the accretionary wedge is represented by the Punta Stilo plateau (figs.11 and 12), dominated by the sedimentation of coarse grained turbidites and not affected yet by active tectonics.This plateau can be interpreted as a sort of huge talus, which collect coarse sediments, deriving from the erosion and mass movements from the steep Calabrian margin and from the main land.
The accretionary wedge, marked to the front by an eastward verging thrust, is topographically and structurally deeper than the foreland and is mainly subsiding (Doglioni et al., 1999;Merlini et al., 2000).On chirp profiles there is no sign of the compressive tectonics identified on CROP line M5 and in particular of the accretionary wedge front, but the widespread presence of creep deposits and debrites suggests a recent activity of this area.
The Apulian foreland, separated from the accretionary wedge by the narrow Taranto Trench, filled up by coarse turbidites, is folded by a 100 km wide crustal-lithospheric anticline, currently uplifting (Merlini et al., 2000).Superficial anticlinal folds and reverse faults identified on chirp profiles (figs.7, 11 and 12) are interpreted as minor tensional and ductile compressive structures, connected with the hinge zone of the Apulian lithospheric anticline.
In the study area mud volcanoes do not seem to be related to any particular geodynamic context, since they occur both on the accretionary wedge and on the margin of the Apulian foreland (see 4.4.2). This difference from the Mediterranean ridge, where deep-origin mud volcanoes occur only at the NE edge of the ridge due to the reduced thickness of Messinian evaporites and to the presence of thrusts (Camerlenghi et al., 1995), suggests that: a) the source rock is not as deep as on the Mediterranean ridge, or there are possibly different source rocks at different depths; in fact, according to well logs (Merlini, pers. comm.), clays of different ages are widespread in all this area; b) mud diapirs are related to deep-seated tectonic structures, such as thrusts and/or dip-slip faults; c) Messinian evaporites are absent or highly reduced in thickness, as suggested both by well logs on the mainland (Casero, 2004) and by seismic sections, where a Messinian unconformity can be traced on the Calabrian margin and on the accretionary wedge (Doglioni et al., 1999; Merlini et al., 2000). In the Ionian Sea the only deep well drilled (ODP Site 964 on the Pisano plateau) recovered 112 m of Plio-Pleistocene nannofossil ooze (Shipboard Scientific Party, 1996), adding no further information on the presence/absence of evaporites.
Geophysical evidence of coral mounds
The Apulian plateau is characterised by the widespread presence of transparent mounds, about 20 m high and 200-300 m wide each; they occur as isolated mounds on top of the fault steps in the deepest zones (1600-800 m; fig. 7) and become widespread from 800 m depth upward, both in groups and as single mounds separated by reflectors. In some areas the mounds are underlined by concave-downward reflectors, repeating the morphology of the mounds themselves. A particularly high (about 50 m) transparent mound was identified at a depth of about 750 m (fig. 8). Three gravity cores have been collected on these mounds (figs. 1 and 7). Core GC 11, taken on a fault step with a transparent mound at 1170 m depth, recovered only 93 cm of brown mud. Core GC 12 (800 m depth), taken on an isolated transparent mound, and core GC 16 (750 m depth), taken in the area of widespread transparent mounds, recovered respectively: 100 cm of grey mud with forams and pteropods at the top and coral fragments in the upper 40 cm; and 341 cm of mud, with forams and pteropods at the top (fig. 9).
These mounds are here interpreted as coral mounds, on the basis of their acoustic and morphological characters.In fact, due to the high reflectivity at their top, reef structures mask the acoustic response further down, appearing thus transparent.Furthermore, these transparent mounds rise from the sea bed with a relatively high angle, that is about 50°, as described also for Norwegian reefs (Hovland and Thomsen, 1997).Seismic data are supported by the recent discovery in this area of dredged living corals (Tursi and Mastrototaro, pers.comm.) and of coral fragments in core GC 12. Hovland and Thomsen (1997) proposed that the presence of cold deep water coral could be associated with microseepage from the seabed of light hydrocarbons which provide nutrients for bioconstructor organisms.Coral colonies become widespread from the apenninic dip slip fault disrupting the anticline (fig.7); this part of the Apulian plateau is intensively fracturated by conjugate dip slip faults affecting the Neogene-Pleistocene sedimentary sequence (Merlini et al., 2000) and providing a local micro-seepage through the seabed of light hydrocarbons, useful for the life of cold water corals (Hovland and Thomsen, 1997).
The areal extension and shape of coral colonies is currently unknown.On the basis of our survey, the area characterised by the presence of coral mounds should cover about 250 km 2 .In any case, this coral site on the Apulian plateau seems to be one of the more extended and richer deep water coral site of the Mediterranean Sea and is worth additional detailed investigations.According to our survey coral mounds are present in two main different scenarios (fig.13): 1) On the slope of the Apulian swell rising from the Taranto Trench, in water depth as deep as 1600 m, coral mounds occur as: a) isolated mounds, located on the slopes; in this case no deep structure can be reliably connected with biohermes and probably these bio-constructions can be seen as a sort of pioneer organisms; b) small groups of several bioconstructions; their presence is strictly related to the dip slip fault steps, which offset the slope of the Apulian swell in this area.We infer that the presence of coral reefs at this water depth is connected to fluids rising along faults and providing a food source for suspensionfeeders, such as corals (Hovland and Thomsen, 1997).
2) In the shallower part of the Apulian plateau, from about 700 to 400 m water depth, coral mounds are widespread, covering an area of at least 250 km 2 ; furthermore, coral reefs are located immediately above a nearly surfacing up-dipping and high-reflective layer, that can be interpreted as partly gas charged and an upward micro-seepage site (Hovland and Thomsen, 1997).They can occur as: a) groups of several tens of high reflective mounds, cross cutting each other, below which no reflector can be detected.Due both to concentration of reefs and to their particular height (up to 50 m) we interpret this scenario as characterised by fast growing bio-constructions; b) groups of several tens of high reflective mounds, cross cutting each other, underlined by upward convex reflectors, interpreted as dead coral reefs.This context suggests superimposed cycles of biohermes; it can thus be inferred that the presence of corals in this area is long lasting; c) small coral mounds, few meters high, separated by ponded sediments; one of this coral reefs grows on one small fault, offsetting the sea bottom, with a throw of about 3 m.We interpret this situation as isolated, relatively slow growing bioconstructions, separated by ponded sediments.
Conclusions
The peculiarity of the study area is the passage from an extensional tectonic regime (Calabrian margin) through a compressive one (Calabrian ridge) to a stable zone (Apulian foreland). The main results of our chirp survey in this area are the following: 1) The transition between the extensional tectonics of the Calabrian margin and the compressive tectonics of the accretionary wedge is represented by a wide flat plateau (Punta Stilo plateau), gently dipping south-eastward (0.2°) and dominated by the sedimentation of coarse-grained turbidites; no evidence of active tectonics can be identified on this plateau.
2) Superficial anticlinal folds and reverse faults, identified on the Apulian plateau, are interpreted as minor tensional and ductile compressive structures, connected with the hinge zone of the Apulian lithospheric anticline.
3) Some mud volcanoes have been identified in the whole study area.In analogy with what has been observed on the Mediterranean ridge, their presence suggests the activity of deep tectonic structures (both thrusts and faults) and/or a reduced thickness (or absence) of Messinian evaporites in this part of the Ionian Sea.
4) On the Apulian plateau, the widespread presence of mounds, up to 50 m high, occurring as isolated mounds in the deepest zones (1600-800 m) and in groups in the shallower ones (800-600 m), has been interpreted as deep-water coral bioconstructions, in accordance with a recent discovery of living deep-water corals in this zone. The intense fracturing of the Apulian plateau and the possibly related cold seepage of hydrocarbons could sustain the deep coral communities through the presence of chemosynthetic bacteria as primary producers.
Fig. 1 .
Fig. 1. Map of the Northern Ionian Sea, showing the chirp lines and the main core locations obtained during the Urania cruise, with the main structural tectonic elements of the Central Mediterranean region shown in the lower right corner.
Fig. 2 .
Fig. 2. Chirp profile across the Calabrian ridge (Calabrian ridge line -see fig. 1 for location), with the insets of the main structural features identified (A, B, C).A -example of a canyon; B -onlapping reflectors on the top of a horst; C -landslide scarp on the steepest flanks of horst structures and relative deposits.
Fig. 3 .
Fig. 3. Chirp profile across the Calabrian ridge and the western section of calabrian margin (Ionica 1 line -see fig. 1 for location).The zoom (A) shows the subhorizontal erosional surface of the shelf break, mantled by reflectors and probably related to last glacial lowstand at about 80-100 m b.s.l.
Fig. 4 .
Fig. 4. Chirp profile across the Calabrian margin (Calabrian Arc line -see fig. 1 for location), with the zoom of the main geophysical features that could be related to mud volcanism (A, B, C, D).A -Two mounds with asymmetric flanks mantled by sediments, characterised by slumpings with the location of the GC-19 core (see fig. 1 for location).B -Isolated symmetric mound with chaotic sediment, with the ring shaped depression filled by re-deposited sediments surrounding the volcano, that is a typical character of mud cones.C -the U-shaped valley (Thorgatten), with sub-horizontal reflectors on the flat bottom, bordered by two acoustically transparent mounds, interpreted as mud volcanoes.D -Particular in rough topography area of a transparent symmetric mound at the top of a horst (interpreted as mud volcano).
Fig. 5 .
Fig. 5. Chirp profile across the Taranto Trench and the eastern section of Apulian plateau (Apulian 1 line -see fig. 1 for location), that rises through westward dipping deep seated faults.The zoom shows the sharp surface echo, with no sub-bottom reflectors, that characterised the Taranto Trench.
Fig. 6 .
Fig. 6.Chirp profile across the axis of Taranto Trench (Apulian 6 line -see fig. 1 for location), with the zoom of the main features identified (A, B): A -a hummocky topography with some chaotic reflectors and small basins filled up by sediments; B -the deepest part of the trench with steplike topography.
Fig. 7 .
Fig. 7. Chirp profile across the Apulian plateau (Apulian 2 line -see fig. 1 for location), with the insets (A, B, C, D) of the main tectonic and geophysical features that mark the transparent mounds recognised in this area and the location of the main cores collected.A -Group of transparent mounds on the top of the fault steps; B -isolated transparent mound on a fault step; C -the asymmetric antiform, with the NE flank mantled by reflectors, onlapping the crest of the structure, and the south-western flank characterised by sub-horizontal reflectors, truncated by a sharp erosional surface; D -area with widespread transparent mounds and ponded sediments.
Fig. 8 .
Fig. 8. Chirp profile across the Apulian plateau (Apulian 9 line -see fig. 1 for location), with the zooms (A, B) of the main geophysical features that characterise the mound fields recognised in this area.See on zoom B the highest transparent mound identified on Apulian plateau.
Fig. 10 .
Fig. 10.Chirp profile across the Apulian plateau (Apulian 4 line -see fig. 1 for location), with the zooms of the main geophysical features that characterise the mound fields recognised in this area (A, B) and the diapiric intrusions (C).
Fig. 11 .
Fig. 11.Synthetic structural-geological map of the Northern Ionian Sea produced on the basis of the echo characters and shallow structures identified on chirp survey and the core logs.
Fig. 12 .
Fig. 12. Cross section of the Northern Ionian Sea with shallow tectonics structures identified by chirp survey, related to deeper ones from Doglioni et al. (1999) and Merlini et al. (2000).
Fig. 13 .
Fig. 13. Different scenarios of the geophysical evidence of transparent mounds identified on the Apulian plateau: 1A - isolated mounds, located on the slopes; 1B - small groups near the main dip-slip fault steps; 2A - widespread mounds, cross-cutting each other, below which no reflector can be detected; 2B - widespread mounds, cross-cutting each other, underlined by upward-convex reflectors; 2C - small mounds separated by ponded sediments.
A possible macronova in the late afterglow of the long–short burst GRB 060614
Long-duration (>2 s) γ-ray bursts, which are believed to originate from the death of massive stars, are expected to be accompanied by supernovae. GRB 060614, which lasted 102 s, lacks a supernova-like emission down to very stringent limits, and its physical origin is still debated. Here we report the discovery of a near-infrared bump that is significantly above the regularly decaying afterglow. This red bump is inconsistent with even the weakest known supernova. However, it can arise from a Li-Paczyński macronova, emission powered by the radioactive decay of debris following a compact binary merger. If this interpretation is correct, GRB 060614 arose from a compact binary merger rather than from the death of a massive star, and it was a site of significant production of heavy r-process elements. The large ejected mass favours a black hole-neutron star merger, but a double neutron star merger cannot be ruled out.
Long-duration (>2 s) γ-ray bursts (GRBs) are believed to originate from collapsars, which involve the death of massive stars, and are expected to be accompanied by luminous supernovae (SNe). GRB 060614 was a nearby burst with a duration of 102 s at a redshift of 0.125 (ref. 1). While it is classified as a long burst according to its duration, extensive searches did not find any SN-like emission down to limits hundreds of times fainter than SN 1998bw (refs 2-4), the archetypal hypernova that accompanied long GRBs (ref. 5). Moreover, the temporal lag and peak luminosity of GRB 060614 fell entirely within the short-duration subclass, and the properties of the host galaxy distinguish it from other long-duration GRB hosts. Thus, GRB 060614 did not fit into the standard picture in which long-duration GRBs arise from the collapse of massive stars while short ones arise from compact binary mergers. It was nicknamed the 'long-short burst' as its origin was unclear. Some speculated that it originated from a compact binary merger and thus is intrinsically a 'short' GRB (refs 1,4,6-8). Others proposed that it was formed in a new type of collapsar that produces an energetic γ-ray burst not accompanied by an SN (refs 2-4).
Two recent developments may shed new light on the origin of this object. The first is the detection of a few very weak SNe (for example, SN 2008ha; ref. 9) with peak bolometric luminosities as low as L ≈ 10^41 erg s^-1. The second is the detection of an infrared bump, again with L ≈ 10^41 erg s^-1, in the late afterglow of the short burst GRB 130603B (refs 10,11). This was interpreted as a Li-Paczyński macronova (also called a kilonova) (refs 12-19), a near-infrared/optical transient powered by the radioactive decay of heavy elements synthesized in the ejecta of a compact binary merger. Motivated by these discoveries, we re-examined the afterglow data of this peculiar burst, searching for a signal characteristic of one of these events.
The X-ray and UV/optical afterglow data of GRB 060614 were extensively examined in the literature (refs 20,21) and found to follow the fireball afterglow model very well up to t ≈ 20 days (ref. 22). The J band has been disregarded because only upper limits of ≈19-20 mag with a sizeable scatter are available at t > 2.7 days, and these are too bright to significantly constrain even supernovae as luminous as SN 1998bw (ref. 23). In this work we focus on the optical emission. We have re-analysed all the late-time (that is, t ≥ 1.7 days) Very Large Telescope (VLT) V-, R- and I-band archival data and the Hubble Space Telescope (HST) F606W and F814W archival data, including those reported in the literature (refs 3,4) and several unpublished data points. Details on the data reduction are given in the Methods.
Results
The discovery of a significant F814W-band excess. Figure 1 depicts the most complete late-time optical light curves of this burst (see Supplementary Table 1; the late VLT upper limits are not shown in Fig. 1). The VLT V-, R- and I-band fluxes decrease with time as ∝ t^-2.30±0.03 (see Fig. 1, in which the VLT V/I band data have been calibrated to the F606W/F814W filters of HST with proper k-corrections), consistent with what was found earlier (refs 3,20,21). However, the first HST F814W data point is significantly above the same extrapolated power-law decline. The significance of the deviation is ≈6σ (see the estimate in the Methods). No statistically significant excess is present in either the F606W or the R band. The F814W-band excess is made most forcibly by considering the colour evolution of the transient, defined as the difference between the magnitudes in each filter, which evolves from V−I ≈ 0.65 mag measured by the VLT (correspondingly, for HST we have F606W−F814W ≈ 0.55 mag) at about t ≈ 1.7 days to F606W−F814W ≈ 1.5 mag measured by HST at about 13.6 days after the trigger of the burst. With proper/minor extinction corrections, the optical-to-X-ray spectral energy distribution of GRB 060614 at the epoch of ≈1.9 days is nicely fitted by a single power law (refs 3,20,21), F_ν ∝ ν^-0.8. In the standard external forward shock afterglow model, the cooling frequency is expected to drop with time as ν_c ∝ t^-1/2 (ref. 22). Thus, it cannot change the optical spectrum in the time interval of 1.9-13.6 days. Hence, the remarkable colour change and the F814W-band excess of ≈1 mag suggest a new component. As in GRB 130603B, this component was observed at one epoch only. After the subtraction of the power-law decay component, the flux of the excess component decreased with time faster than t^-3.2 for t > 13.6 days. Note that an unexpected optical re-brightening was also detected in GRB 080503, another 'long-short' burst (ref. 24). However, unlike the excess component identified here, that re-brightening was achromatic from the optical to the X-ray bands and therefore likely originated from a different process.
Discussion
Shortly after the discovery of GRB 060614 it was speculated that it was powered by an 'unusual' core collapse of a massive star (refs 2,3). We turn now to explore whether the F814W-band excess can be powered by a weak supernova. Figure 2 compares the colour F606W−F814W of the excess component (we take F606W−F814W ≈ 1.5 mag as a conservative lower limit on the colour of the 'excess' component, due to the lack of a simultaneous excess in the F606W band) with that of SN 2006aj (ref. 25), SN 2008ha (that is, the extremely dim event; ref. 26) and SN 2010bh (ref. 27). The excess component has a much redder spectrum than the three supernovae. If the 'excess component' was thermal, it had a low effective temperature T_eff < 3,000 K to yield the very soft spectrum. Such an unusually low effective temperature is also needed to account for the very rapid decline of the excess component. The expansion velocity can be estimated as v ≈ 1.2 × 10^4 km s^-1 (L/10^41 erg s^-1)^1/2 (T_eff/3,000 K)^-2 (t/13.6 days)^-1. The implied ^56Ni mass is ≈10^-3 M_⊙ if this was a supernova-like event that peaked at ≈13.6 days (ref. 9). We take a standard cosmology model with H_0 = 71 km s^-1 Mpc^-1. The low luminosity as well as the low effective temperature of the transient emission are typical characteristics of a macronova, a transient arising from the radioactive β-decay of material ejected in a compact binary merger. The opacity of the macronova material is determined by the lanthanides that are produced via the r-process in the neutron-rich outflow. This opacity is very large (κ ≈ 10 cm^2 g^-1), resulting in a weak, late and red emission. The emerging flux is greatly diminished by line blanketing, with the radiation peaking in the near-infrared and being produced over a timescale of ≈1-2 weeks (refs 17,18). Simple analytic estimates, using a radioactive β-decay heating rate (refs 16,28) of 10^10 erg s^-1 g^-1 [t/(1+z)/1 day]^-1.3, suggest that in order to explain the observed F814W-band excess, the required ejecta mass and expansion velocity are M_ej ≈ 0.13 M_⊙ (L/10^41 erg s^-1)(t/13.6 day)^1.3 and v ≈ 0.1c (L/10^41 erg s^-1)^1/2 (T_eff/2,000 K)^-2 (t/13.6 days)^-1, respectively. Note that the macronova outflow is quite cold at such a late time (refs 17,18). The effective temperature is T_eff ≈ 2,000 K and the observer's F814W band is above the peak of the blackbody spectrum. The emitting radius and the corresponding expansion velocity are much larger than in a supernova at this stage. Scaled-up numerical simulations of lighter ejecta from black hole-neutron star mergers (ref. 28) suggest that M_ej ≈ 0.1 M_⊙ and a velocity ≈0.2c can account for the observed F814W-band excess. This numerical example is shown in Fig. 1 with dashed lines.
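The ejecta-mass scaling quoted above can be reproduced with a few lines of arithmetic by equating the observed luminosity to the radioactive heating rate per gram times the ejecta mass. The snippet below is a minimal order-of-magnitude check under that assumption (full thermalization of the β-decay energy); the function and variable names are ours, and the result simply recovers the ≈0.13 M_⊙ figure.

```python
M_SUN = 1.989e33   # grams

def ejecta_mass_msun(L, t_obs_days, z=0.125):
    """Ejecta mass (solar masses) needed to power luminosity L (erg/s) at
    observer time t_obs_days, assuming the beta-decay heating rate
    eps = 1e10 erg/s/g * (t_rest / 1 day)^-1.3 is fully thermalized."""
    t_rest = t_obs_days / (1.0 + z)     # rest-frame time in days
    eps = 1e10 * t_rest**-1.3           # heating rate per gram, erg/s/g
    return L / eps / M_SUN

print(ejecta_mass_msun(1e41, 13.6))     # ~0.13, matching the quoted scaling
```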
The implied ejecta mass is large compared with the mass ejection estimated numerically to take place in double neutron star mergers. However, it is within the possible range of dynamical ejecta of black hole-neutron star mergers with some extreme parameters (a large neutron star radius and a high black hole spin aligned with the orbital angular momentum) (refs 14,29-32). An accretion disk wind may contribute some additional mass as well (refs 15,33,34). However, the radioactive heating due to fission of the heavy r-process nuclei, which is quite uncertain and subdominant in current heating estimates (ref. 16), may play an important role in the energy deposition. It may increase the energy deposition rate at around 10 days by a significant factor (ref. 35). This may reduce the required ejecta mass to ≈0.03-0.05 M_⊙. This range of ejecta masses is well within the range of the dynamical ejecta of black hole-neutron star mergers, and it is even compatible with some estimates for double neutron star mergers.
We conclude that while a weak supernova cannot explain the observations, a macronova with a high ejected mass may. As for GRB 130603B, we must caution here that this interpretation is based on a single data point. However, if this interpretation is correct, it has far-reaching implications. First, the presence of macronovae in both the canonical short burst GRB 130603B and in this 'long-short' one, GRB 060614, suggests that the phenomenon is common and that the prospects of detecting these transients are promising. A more conclusive detection based on more than a single data point could be achieved in the future provided that denser HST observations are carried out. Moreover, as a black hole-neutron star merger is favoured in explaining the large ejected mass, this implies that such binary systems may exist and that their mergers are also responsible for GRBs. It also suggests that the 'long-short' burst was in fact 'short' in nature, namely, it arose from a merger and not from a collapsar. The fact that a merger generates a 100 s long burst is interesting and puzzling by itself.
Clearly such events would contribute a significant fraction of the r-process material 36 . The actual contribution relative to the contribution of 130603B-like events is difficult to estimate as it is unclear which fraction of the macronovae/kilonovae behave as each type. Because of beaming most mergers will not be observed as GRBs. However, they emit omnidirectional gravitational radiation that can be detected by the upcoming Advanced LIGO/VIRGO/KAGRA detectors. These near-infrared/optical macronovae could serve as promising electromagnetic counterparts of gravitational wave triggers in the upcoming Advanced LIGO/VIRGO/KAGRA era.
Methods
Data reduction. We retrieved the public VLT imaging data of GRB 060614 from the European Southern Observatory (ESO) Science Archive Facility (http://archive.eso.org). The raw data were reduced following standard procedures, including bias subtraction, flat fielding, bad-pixel removal and combination. Observations made with the same instrument and filter at different epochs are compared with that of the last epoch. The software package ISIS (http://www2.iap.fr/users/alard/package.html) is used to subtract images and to measure the GRB afterglow from the residual images. Photometric errors are estimated from the photon noise and the sky variance at the 1σ confidence level. The 3σ background root mean square of the residual images is taken as the limiting magnitude. Finally, standard stars were used for the photometric calibration; the results are reported in Supplementary Table 1 and are well consistent with those given by other groups (refs 3,21). We assumed that the afterglow is characterized by the same power-law spectrum with index β = 0.80 during these observations (ref. 20), from which we obtain the k-corrections between the VLT V/I and HST F606W/F814W magnitudes, namely 0.12 mag and 0.02 mag, respectively. Such corrections have been taken into account in Fig. 1.
HST archive data of GRB 060614 are available from the Mikulski Archive for Space Telescopes (MAST; http://archive.stsci.edu), including one observation with the Wide Field and Planetary Camera 2 (WFPC2) and four observations with the Advanced Camera for Surveys (ACS) in F606W and F814W bands. The reduced data provided by MAST were used in our analysis. The last observation has been taken as the reference and the other images of the same filter are subtracted in order to directly measure fluxes of the afterglow from the residual images. Empirical point spread functions (PSFs) were built with bright stars in each image. Bright compact objects in the same field were used to align and relatively calibrate these images. WFPC2 image differs from ACS image in PSF. Before image subtraction, the WFPC2 and ACS images were matched to the same resolution by convolving each with the other's PSF. The PSF-matched WFPC2 and ACS images were aligned and subtracted. Aperture photometry was carried out for the afterglow in the residual image. The aperture correction derived from the empirical PSF was applied to yield the total flux. The host galaxy was used to relatively calibrate the afterglow between images, and the ACS zeropoints were used for absolute calibration. If the signal of the afterglow is too faint to be a secure detection, an upper limit of 3s background root mean square is adopted. The results are reported in Supplementary Table 1, being well in agreement with these published in the literature 4 . The magnitudes of the host galaxy are measured in the last observation of all filters and can well be fitted by an Sc type galaxy template ( Supplementary Fig. 1), demonstrating the self-consistence of our results.
VLT light curve decline rate and significance of the excess. As found in previous studies, the late-time optical/X-ray afterglow emission of GRB 060614 can be interpreted within the fireball forward shock model (refs 20,21). Motivated by this fact, we assume that the I, R and V light curves follow the same power-law decline. In our fit there are four free parameters: three are related to the initial flux/magnitude in the three bands and the last is the decline rate needed in the further analysis. We fitted all the VLT data (I, R and V bands combined) during the first 15 days (after which there are only upper limits) to determine these four parameters as well as their errors. The best-fit decline is found to be ∝ t^-2.3±0.03, well consistent with that obtained in the optical to X-ray bands in previous studies (refs 3,20,21). The errors of the best-fit light curves follow from the propagation of uncertainties (the shaded regions in the residual plot of Fig. 2 represent the 1σ errors of the best-fit light curves). Note that in Fig. 2 the VLT V/I band emission has been calibrated to the HST F606W/F814W filters with proper k-corrections. The flux separation between the HST F814W-band data and the fitted curve at t ≈ 13.6 days is F_excess = 0.182 mJy. The flux error of the F814W-band emission at t ≈ 13.6 days is δF_obs ≈ 0.024 mJy. The flux error of the best-fit F814W-band light curve at t ≈ 13.6 days is δF_fit ≈ 0.012 mJy. The significance of the excess component is estimated as R = F_excess / sqrt(δF_obs^2 + δF_fit^2) ≈ 6. We therefore suggest that the excess component identified in this work is statistically significant at a confidence level of ≈6σ.
A Simple Stochastic Reaction Model for Heterogeneous Polymerizations
The stochastic reaction model (SRM) treats polymerization as a pure probability‐based issue, which is widely applied to simulate various polymerization processes. However, in many studies, active centers were assumed to react with the same probability, which cannot reflect the heterogeneous reaction microenvironment in heterogeneous polymerizations. Recently, we have proposed a simple SRM, in which the reaction probability of an active center is directly determined by the local reaction microenvironment. In this paper, we compared this simple SRM with other SRMs by examining living polymerizations with randomly dispersed and spatially localized initiators. The results confirmed that the reaction microenvironment plays an important role in heterogeneous polymerizations. This simple SRM provides a good choice to simulate various polymerizations.
Introduction
Heterogeneous polymerizations are polymerization processes with two or more phases during or after polymerization, as well as polymerizations at a surface or interface [1,2]. Typical heterogeneous polymerizations, such as suspension, emulsion, dispersion, and precipitation polymerizations, are widely applied for the synthesis of high-performance materials such as paints, coatings, and many others. Other systems include polymerization-induced self-assembly (PISA) [3-5], surface-initiated polymerization (SIP) [6,7], and polymerizations in living cells [8,9]. It is challenging to characterize the polymerization kinetics of such systems since the distributions of species are inhomogeneous in space and vary with time.
We are interested in the stochastic reaction model (SRM) [37,38], which treats polymerization as a pure probability-based issue. Due to the simple idea, various SRMs have been proposed and applied in CG simulations to study polymerizations in homogeneous as well as heterogeneous systems [10,12,19,25,27,[31][32][33][34][35]. However, as pointed out by Arraez et al., most SRMs are based on simplified constant reaction probabilities, which fail to capture the kinetics of heterogeneous polymerizations since the concentration variations are inevitable [29].
To better understand the dilemma of the SRM, we briefly introduce how to simulate a living polymerization with different versions of the SRM. As is known, the polymerization rate R_p of a living polymerization is R_p = k_p[M][I]_0 (Equation (1)), where k_p is the reaction rate constant, and [M] and [I]_0 are the concentrations of free monomers and active centers, respectively. A successful SRM should yield the correct polymerization kinetics.
In the SRM, usually, an active center reacts with a randomly selected free monomer [27,33] or the closest one [39,40] within a given reaction radius with a probability P_r (Figure 1a). The number of monomers consumed in one reaction step is I_0 P_r. P_r is an important characteristic for controlling the kinetics of the process. According to Equation (1), P_r should be proportional to the concentration of free monomers. In a very popular version of the SRM (Version I), P_r is simply calculated as [27] P_r = P_0 [M]/[M]_0 (Equation (2)), where [M] and [M]_0 are the instantaneous and initial concentrations of free monomers, respectively, and the initial addition probability P_0 is a constant. Equation (2) implies that the system is well-mixed, i.e., all species are homogeneously distributed in space, so that all active centers react with the same P_r in Version I.
In Version I, the reaction probability P_r decreases with decreasing monomer concentration [M] and should be recalculated at every reaction step. When the monomer conversion is low, the variation of [M] can be ignored [36], and there is no need to recalculate P_r at every reaction step. In some simulations (Version II), P_r was further simplified to a constant, independent of [M] [31,33,41]. This corresponds to a polymerization with a constant reaction rate R_p instead of a constant reaction rate constant k_p.
Version I differs from Version II, as the influence of the monomer concentration has been considered in Version I. However, in both versions, all active centers react with the same P r . The divergence of the local reaction microenvironment is not considered. To simulate heterogeneous polymerizations, a step forward is required to consider the reaction microenvironment of each active center [29].
In the DPDChem software [42], another strategy has been applied. An active center sequentially attempts to react with nearby monomers, ordered by distance, each time with a fixed probability P_0, until a bond is created or there are no more unchecked monomers (Figure 1b). The probability that the active center creates a bond is then P_r = 1 - (1 - P_0)^m, where m is the number of monomers in the reaction radius. When P_0 is small, P_r can be approximated as mP_0. In this version (Version III), each active center reacts with its own P_r, and the effect of the reaction microenvironment is well considered. DPDChem has been widely applied to simulate various homogeneous and heterogeneous polymerizations [35,43-47].
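For completeness, the small-P_0 limit quoted above follows from a binomial expansion of the bond-creation probability; the short derivation below is ours and only restates that approximation.

```latex
P_r \;=\; 1-(1-P_0)^{m}
    \;=\; mP_0 \;-\; \binom{m}{2}P_0^{2} \;+\; \cdots
    \;\approx\; mP_0 \qquad (P_0 \ll 1)
```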
In lattice Monte Carlo (MC) simulations, such as BFM and DLL [20][21][22][23][24][25][26]48,49], another version (Version IV) has been applied ( Figure 1c). For an active center, one of the neighboring sites is randomly selected. If this site is occupied by a free monomer, a reaction is tried with a fixed probability P 0 . Attention must be paid that Version IV (select a site and try to react if it is a monomer) is essentially different from Versions I and II (select a monomer and try to react). The reaction microenvironment is considered in Version IV since the probability to find a free monomer at a given site is determined by the local concentration of free monomers. On the contrary, it is not considered in Versions I and II.
Recently, we proposed a simple SRM (Version V) for heterogeneous polymerizations [38]. When an active center tries to react with a randomly selected free monomer in the reaction radius, the reaction probability P r is calculated according to the number of local free monomers m as mP 0 (Figure 1a). This idea follows the work by Akkermans et al. [10], in which the probability was suggested to be m/m max , where m max was a preset value and always larger than m. To our understanding, however, it was the average number of free monomers <m> that was applied in [10], which approached a constant value for not short chains (Equation (4) in [10]). We applied version V as well as Version I for comparison to study surface-initiated polymerization (SIP) [38]. Essentially, SIP is a heterogeneous polymerization that is easy to be ignored because free monomers are gradually distributed in the system [38]. The results suggested that the reaction microenvironment plays an important role in SIP.
To our understanding, the influence of the reaction microenvironment has not attracted enough attention in many simulations with the SRM. For example, inappropriate versions have been applied in simulations of SIP and PISA [27,32,33]. Sometimes, the description of the SRM is too brief to judge whether the heterogeneous reaction microenvironment has been considered or not. To draw attention to the reaction microenvironment and promote the application of the SRM to heterogeneous polymerizations, we compare the new SRM with Versions I, II, and III in this paper. Version IV is not examined here as it can only be applied in lattice simulations. Theoretically, it should give the same results as Version V, because the chance that a randomly selected neighbor site is occupied by a free monomer is proportional to the number of local free monomers. The algorithms of the different versions of the SRM are introduced in Section 2. The results of living polymerizations with randomly dispersed and spatially localized initiators are shown in Section 3. A brief conclusion is given in Section 4.
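As a compact summary of the off-lattice variants compared here, the sketch below collects the per-attempt reaction probabilities in one place. It is our own illustrative pseudo-implementation, not the authors' code: `m` denotes the number of free monomers currently inside the reaction radius of the chosen active center, and the Version I/II expressions follow the averaged forms used in the implementation section below (our reading), rather than Equation (2).

```python
def reaction_probability(version, P0, m, M=None, M0=None, m_max=18):
    """Per-attempt reaction probability of one active center.

    P0    -- probability for a single active-center/monomer pair
    m     -- free monomers currently inside the reaction radius (local count)
    M, M0 -- instantaneous and initial free-monomer concentrations (global)
    m_max -- number of neighbour sites (18 in the Larson-type lattice model)
    """
    if version == "I":       # global, time-dependent average: (m_max-1)*[M]*P0
        return (m_max - 1) * M * P0
    if version == "II":      # global, constant: frozen at the initial concentration
        return (m_max - 1) * M0 * P0
    if version == "III":     # sequential attempts on the m local monomers
        return 1.0 - (1.0 - P0) ** m
    if version == "V":       # local: m*P0, with P0 <= 1/m_max so that P_r <= 1
        return m * P0
    raise ValueError(f"unknown version: {version}")

# A monomer-rich pocket (m = 10) versus a depleted one (m = 2) at 50% conversion:
for v in ("I", "II", "III", "V"):
    rich = reaction_probability(v, P0=0.001, m=10, M=0.2, M0=0.4)
    poor = reaction_probability(v, P0=0.001, m=2, M=0.2, M0=0.4)
    print(f"Version {v}: rich pocket {rich:.4f}, depleted pocket {poor:.4f}")
```

Versions I and II return the same value for both pockets, whereas Versions III and V respond to the local monomer count, which is exactly the distinction examined in this paper.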
Lattice Monte Carlo Simulation
The simulation was carried out in a simple cubic lattice with a volume V = L × L × L with the periodic boundary condition in three directions. The Larson-type bond fluctuation model has been adopted [38,[50][51][52] since the corresponding theoretical polymerization kinetics can be easily obtained to guide the simulation, as shown later. In this model, each monomer (or initiator) occupies one lattice, and the permitted bond length is 1 or √ 2 .
During relaxation, a monomer randomly selects one of its 18 nearest and next-nearest neighbor sites and tries to move there. In any elementary movement, bond intersection is forbidden. Meanwhile, each lattice site can be occupied only once; thus, the excluded volume effect is well accounted for in this simulation. The simulation time is measured in units of MC steps (MCs), one MCs being defined as the interval in which every monomer has attempted to move once on average.
Implementation of Stochastic Reaction Model
While implementing SRM, one question is when to model a reaction. In the work by Genzer [27], a decision was made by a comparison of a generated random number with a probability of choosing motion over reaction. While studying the influence of diffusion with BFM, Lu and Ding applied a probability to reduce the movement of a monomer to a vacancy [24]. Here, a characteristic delay time, or reaction interval time, τ was applied to separate two successive reaction steps [33,38,53]. The effect of diffusion can be simply tuned by adjusting the value of τ.
When an active center tries to react with a free monomer in a given reaction radius R cut , the value of R cut influences the polymerization kinetics. A bigger R cut can speed up the polymerization, as there are more reaction candidates. In simulations such as MD, however, instabilities might be provoked due to the strong force after the creation of a new bond [15]. A proper reaction radius is needed to achieve the balance between the polymerization speed and the simulation stability. A larger reaction interval time is helpful for the dissipation of the new bond energy and the simulation stability. Here, R cut was set as √ 2 , the same as the longest bond length in the Larson-type bond fluctuation model. The maximum reaction candidates m max around an active center is 18. Section 1 has introduced the basic idea of different SRMs. In this simulation, the procedures of different versions are as follows: In Version III, an active center sequentially reacted with nearby monomers with a reaction probability P r,seq = P 0 , where P 0 is a constant and defined as the reaction probability between one active center and one free monomer. In Version V, an active center tried to react with a randomly selected free monomer in the reaction radius with a reaction probability P r,loc = mP 0 , where m is the number of monomers in the reaction radius of the active center. Here, P 0 is restricted to be no larger than 1/m max , i.e., P r,loc is no larger than 1; thus, the effect of the concentration of free monomers could be correctly considered.
In the literature [27,36], the reaction probability of Version I was calculated according to the instantaneous concentration of free monomers [M] and scaled by the initial concentration [M]_0 (Equation (2)). It must be pointed out that [M]_0 in Equation (2) should be the maximum concentration of monomers in a bulk polymerization; otherwise, it fails when comparing the polymerization kinetics of systems with different initial monomer concentrations. In this study, the reaction probability of Version I, P_r,av, was calculated according to the average number of free monomers around an active center as P_r,av = (m_max - 1)[M]P_0. Here, m_max - 1 is used instead of m_max because one nearby position is occupied by the previous monomer of the same chain. For Version II, the reaction probability P_r,const was a constant, calculated according to the initial concentration of free monomers as P_r,const = (m_max - 1)[M]_0 P_0. Once the reaction probability (P_r,av, P_r,const, P_0, and P_r,loc in Versions I, II, III, and V, respectively) between the reaction pair is determined, a random number is generated. If the randomly generated number is no larger than the reaction probability, the reaction is accepted, and the free monomer becomes the active center for future reactions. Information such as the chain length is then updated. Illustrations of one reaction cycle of the different versions are shown in Figure 1.
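The per-attempt probabilities of the different versions can be summarized in a short, runnable sketch. It is only an illustration of the definitions given above; the Version I and II expressions use the reconstructed forms just stated, the numerical inputs are arbitrary, and a real implementation would of course work on the lattice itself.

```python
import random

M_MAX = 18   # maximum number of candidate sites around an active center

def p_version_i(p0, conc_m):      # mean field, instantaneous concentration [M]
    return (M_MAX - 1) * conc_m * p0

def p_version_ii(p0, conc_m0):    # constant, based on the initial concentration [M]_0
    return (M_MAX - 1) * conc_m0 * p0

def p_version_v(p0, m_local):     # local microenvironment, one random neighbor
    return m_local * p0           # requires p0 <= 1/M_MAX

def react_version_iii(p0, m_local):
    """Version III: try each nearby free monomer in turn with probability p0."""
    for _ in range(m_local):
        if random.random() <= p0:
            return True
    return False

def react(probability):
    """Accept the reaction if a random number is no larger than the probability."""
    return random.random() <= probability

p0, conc_m, conc_m0, m_local = 0.001, 0.1, 0.4, 3
print("Version I  :", react(p_version_i(p0, conc_m)))
print("Version II :", react(p_version_ii(p0, conc_m0)))
print("Version III:", react_version_iii(p0, m_local))
print("Version V  :", react(p_version_v(p0, m_local)))
```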
Polymerization Kinetics
The polymerization kinetics of a living polymerization has been deduced to guide the simulation [38]. Since the time between two reaction steps is τ, the change in the concentration of free monomers during the polymerization follows from the reaction probability per active center; substituting Equation (4) and integrating gives the theoretical monomer conversion of a homogeneous living polymerization (Equation (8); see the sketch below).
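The display equations of this derivation are not shown above, so the following LaTeX sketch only indicates the form such a derivation would take, assuming that each of the I_0 active centers attempts one reaction every τ with the Version I probability P_r,av = (m_max - 1)[M]P_0. The prefactor is our reconstruction, and the notation may differ from the published Equation (8).

```latex
% Hedged reconstruction of the kinetics derivation; notation may differ
% from the published equations.
\begin{align*}
  \frac{\mathrm{d}[M]}{\mathrm{d}t} &= -\frac{I_0}{\tau V}\,P_r
    & &\text{(one attempt per active center every } \tau\text{)}\\
  \frac{\mathrm{d}[M]}{\mathrm{d}t} &= -\frac{(m_{\max}-1)P_0 I_0}{\tau V}\,[M]
    & &\text{(substituting } P_{r,\mathrm{av}}=(m_{\max}-1)[M]P_0\text{)}\\
  C(t) &= 1-\frac{[M]}{[M]_0}
        = 1-\exp\!\left(-\frac{(m_{\max}-1)P_0 I_0}{\tau V}\,t\right)
\end{align*}
```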
Homogeneous Polymerization
Firstly, different versions of the SRM were applied to examine a living polymerization system with randomly dispersed free monomers and initiators. The length of the cubic lattice was L = 60, the initial monomer concentration was [M]_0 = 0.4 monomers per lattice site, the number of initiators was I_0 = 1000, the reaction interval time τ was 10 MCs, the simulation time was 10^6 MCs, and the reaction probability between one active center and one monomer was P_0 = 0.001. In the reaction step, an initiator is randomly selected and tries to react with a nearby free monomer with a certain reaction probability. If the reaction is accepted, the initiator transfers its reactivity to the free monomer and itself becomes the first monomer of the polymer chain. Meanwhile, the newly reacted free monomer becomes an active center for future reactions. In this way, linear polymer chains are obtained. The results were averaged over 20 independent runs.
Living polymerization characteristics were obtained with all versions, and the results are almost identical (Figure 2). For example, a linear relationship between the number-average molecular weight M_n and the monomer conversion C was observed (Figure 2a). The dispersity (Đ = M_w/M_n) increased slightly with C at the start of the simulation and then decreased to reach a plateau (Figure 2b). Figure 2c shows the molecular weight distributions during the polymerization, which can be described by the Poisson distribution. We also studied the number of unreacted initiators I during the polymerization. It was predicted [33] that, when P_r is a constant, I can be calculated as I/I_0 = (1 - P_r)^(t/τ). In the early stage of the polymerization, the monomer conversion is very low (about 3% at t = 4000 MCs), so the influence of the monomer concentration on the reaction probability can be ignored. Figure 2d shows that ln(I/I_0) decays linearly, in very good agreement with the theoretical prediction.
The differences between the versions become apparent when the monomer conversion C and the reaction rate R_p are considered. As expected, the monomer conversion C of Version II increased linearly with time and quickly reached a plateau due to the constant reaction probability (Figure 3a). The results of the other versions followed the polymerization kinetics of a living polymerization as predicted by Equation (8). However, a deviation could be clearly observed for Version I in the late stage of the simulation. The reason is that the well-mixed assumption adopted in Version I no longer holds. When the concentration of free monomers [M] is very low (m_max[M] < 1), a free monomer often cannot be found in the reaction radius of an active center. Both the chance of finding a free monomer and the reaction probability P_r,av are proportional to [M]; the effect of the monomer concentration is therefore counted twice, and a slower polymerization is observed (also shown in Figure 3b). For the same reason, the polymerization rate of Version II was no longer constant at the late stage of the polymerization (Figure 3b). Versions I and II are therefore better suited to polymerization systems in which the concentration of free monomers is higher than 1/m_max [38]. There is no such limitation for Versions III and V, and their results are consistent with the theoretical predictions even when the concentration of free monomers is very low (Figure 3a,b).
Heterogeneous Polymerization
We examined a heterogeneous polymerization system with 1000 initiators spatially localized in a 10 × 10 × 10 cube at the beginning of the simulation (as illustrated by the inset in Figure 4a), which corresponds to an experiment with powdered initiators. The initiators are allowed to diffuse apart during the polymerization. We can expect the polymerization to be influenced by the localization of the initiators, since the inner and outer initiators react differently.
When the same parameters as in Figure 2 were applied (P_0 = 0.001 and τ = 10 MCs), the obtained results are similar to those of the homogeneous system (Figure S1). According to the snapshots (Figure S2), the localized initiators became dispersed quickly after several reaction steps due to the slow polymerization and the small size of the initiator region. The effect of the localization of the initiators was not obvious.
When the polymerization is very fast (P_0 = 0.01), the influence of the localization of the initiators can be observed. All versions show that the polymerization of the heterogeneous system (Figure 4a) is slower than that of the corresponding homogeneous system (Figure 4b). This is easy to explain: due to the large P_0, the initiators react with free monomers before they can diffuse apart from each other. The outer initiators have more chance to react with free monomers, while the inner ones are trapped. Because of this trapping effect, the initiators should be converted more slowly, and a broader molecular weight distribution should be observed. Such a slower conversion of initiators was indeed observed with Versions III and V (Figure 4c), whereas Versions I and II gave a conversion close to the theoretical prediction. Figure 4e shows that the molecular weight distributions of the heterogeneous polymerization system are very broad and cannot be described by the Poisson distribution. For comparison, the results of the homogeneous polymerization system still follow the theoretical predictions even when the polymerization is very fast (Figure 4d,f). This confirms that the location of the initiators influences the polymerization. Firstly, the access of free monomers to an inner initiator (active center) is limited, because the area around it is occupied by other initiators (active centers) and reacted monomers. Secondly, the delivery of free monomers might be blocked by the outer initiators and active centers due to the large P_0. As a result, inner initiators (active centers) may be in a starved state, i.e., they fail to find a free monomer within the reaction range. This is why a broad distribution was still obtained with Versions I and II. Nevertheless, the distribution revealed by Versions III and V is more reliable, since the reaction microenvironment is properly considered.
A Further Comparison between Versions III and V
In Versions III and V, the influence of the reaction microenvironment is considered, as the reaction probability of an active center is determined by its local reaction microenvironment. As shown, both versions can be applied to study both homogeneous and heterogeneous polymerizations, and the obtained results are almost identical. Strictly speaking, however, the consideration of the reaction microenvironment differs between Versions III and V: it is considered directly in Version V, where the reaction probability is P_r,loc = mP_0, and indirectly in Version III, according to Equation (3). One question is whether this difference can be ignored or not.
The difference between Versions III and V can be estimated from the reaction probability. According to Equation (3), the reaction probability of Version III, P_r,seq, can be expressed as mP_0 when P_0 and m are small. Figure 5a shows P_r,seq as a function of P_0 with m fixed at m_max. P_r,seq gradually deviates from the theoretical reaction probability P_th = mP_0 with increasing P_0. We calculated the relative difference between the reaction probabilities as ΔP/P_th = (P_th - P_r,seq)/P_th (Equation (11)).
The relative difference is 8.1% when P_0 = 0.01 and m = 18. For a given P_0, the relative difference decreases with decreasing m. As the concentration of free monomers [M] is 0.4 in this study, the relative difference is only 3.1% at the start of the simulation and decreases as the polymerization proceeds (decreasing m). Thus, no significant difference between Versions III and V can be observed in Figure 4. When P_0 is extremely large (P_0 = 0.05 and [M] = 0.4), the relative difference reaches 14.2%. Figure 5b shows that the polymerization of Version III is then slower than that of Version V. However, the difference between the molecular weight distributions is still negligible (Figure S3).
According to the communication with Berezkin, the reaction probability in a simulation should be sufficiently small, as the reaction probability in the real world is far smaller than that in simulations; otherwise, the overall kinetics becomes unrealistic because it will be diffusion controlled. Typically, the reaction probability is 0.001 or even smaller in simulations. With such a small value, polymerization can be freely studied with either Version III or Version V.
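Equation (3) itself is not reproduced above. Assuming it has the standard sequential-attempt form P_r,seq = 1 - (1 - P_0)^m, which reproduces the reported values of 8.1% and 14.2%, the relative difference of Equation (11) can be checked with a few lines of Python:

```python
def relative_difference(p0, m):
    """Relative difference between P_th = m*P0 (Version V) and the sequential
    Version III probability, assuming P_r,seq = 1 - (1 - P0)**m (our reading
    of Equation (3))."""
    p_th = m * p0
    p_seq = 1.0 - (1.0 - p0) ** m
    return (p_th - p_seq) / p_th

print(relative_difference(0.01, 18))        # ~0.081 -> 8.1 %
print(relative_difference(0.05, 0.4 * 18))  # ~0.142 -> 14.2 % (m = [M] * m_max)
```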
Conclusions
Owing to their simple underlying idea, various stochastic reaction models have been proposed to simulate different polymerization processes. They can be divided into two types according to whether the reaction microenvironment is considered. In the first type (Versions I and II), all active centers react with the same probability, ignoring the divergence of the reaction microenvironment; in the second type (Versions III, IV, and V), each active center reacts individually according to its local reaction environment, so the influence of the reaction microenvironment is taken into account.
In this paper, we applied different versions of the SRM to study a homogeneous polymerization system with randomly dispersed initiators and a heterogeneous polymerization system with spatially localized initiators. For the homogeneous polymerization system, all versions reproduced the typical characteristics of a living polymerization, such as the linear increase in M_n with the monomer conversion C and the Poisson molecular weight distribution. For the heterogeneous polymerization system with spatially localized initiators, the expected behaviors, such as the slower conversion of initiators and the broader molecular weight distribution, were observed with the second type of the SRM (Versions III and V). Although Versions I and II also produced broad MWDs when the polymerization was very fast, those results are less reliable. Similar conclusions were also obtained from a study of surface-initiated polymerization [38], where properties such as the MWD and dispersity were strongly influenced by the reaction microenvironment.
In summary, caution must be exercised when studying heterogeneous polymerization with a stochastic reaction model. Each active center should react with its own probability, which is determined by the active center's local reaction microenvironment. As far as the three such versions (III, IV, and V) are concerned, we recommend Version V due to its simplicity, its applicability to both lattice and off-lattice simulations, and its convenience for theoretical analysis.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/polym14163269/s1, Figure S1: comparison of living polymerization systems with spatially localized (left column) and randomly dispersed (right column) initiators when the polymerization is slow; Figure S2: snapshots of the heterogeneous polymerization system with spatially localized initiators at given simulation times; Figure S3: the molecular weight distributions of the homogeneous polymerization obtained with Versions III and V.
"Engineering"
] |
PROACTIVE DECISION-MAKING MECHANISM BASED ON MINING TECHNOLOGY
The main idea of this study is to connect the possibilities of mining technology with the methodology of proactive management of social and economic systems. The permanent process of complication of all spheres of social life requires improving management forms and methods. Modern methods of decision support and appropriate information technology make it possible to improve the classical approaches, one of which is proactive management. Taking into account the limits of classical methods, an appropriate mining technology should be chosen for proactive management, one that can automatically extract new non-trivial knowledge from data in the form of patterns, relationships, laws, etc. This synthetic technology combines the latest achievements of artificial intelligence, mathematics, statistics and heuristic approaches, including Data Mining, OLAP and others. Using the mining technology makes it possible: to implement data monitoring, preparation and analysis (collection and presentation of data, detection of situations); to identify problem situations (to recognize patterns of problem situations; to correlate the pattern of the current situation with patterns of problem situations; to determine the structure of the problem situation, to identify factors and relationships); to prioritize the problems, trends and challenges, their expectations and effects (to predict the situation development with managerial influence and without it); to pose the tasks (to analyze deviations in terms of activity; to define goals, criteria, operating conditions), and so on. The following models (using the methods of "nearest neighbour", rule induction, causal networks, statistical methods, associations, neural networks, decision trees, etc.) can be used: cluster allocation of situations, classification of patterns, models of situation identification, pattern recognition models, prediction models, optimization models, and causal relationship models.
Introduction
The modern management practice employs quite a wide range of methods and instruments which allow an effective management of social and economic systems (SES). However, the rate of changes in the world, their incredible complexity and their close relations with all spheres of human life necessitate a constant search for new, more effective and more modern methods and instruments of organizing and managing economic activity.
Preventing the transition of the SES into a problematic state and finding an efficient response to problematic situations depend on the effective organization of the situation recognition process and on the ability of the decision-making mechanism to characterize the problem in a short space of time. The use of the proactive management conception provides for an early diagnosis of problematic situations and the realization of appropriate measures for their prevention (Plunkett, Hale, 1982).
The purpose of proactive management is the formation of the SES's ability, using information obtained by monitoring, data acquisition, generalization and analysis of its current functioning and of changes taking place in the environment, to effectively perform current and prospective tasks whose consequences provide for further system development and stability in achieving competitive advantages.
Modern information systems, which are widely used in all areas of human activity, allow storing large data arrays about the past and current functioning of the SES and its surroundings. That is why, nowadays, the problem of access to the large volumes of accumulated information necessary to solve proactive management tasks arises first of all.
Modeling the proactive mechanism of management decision-making is a complicated interdisciplinary task. Its solution was significantly advanced by the development of applied system analysis methods, artificial intelligence and information technologies (Turban, Sharda, 2010).
Scientists' efforts in this direction led to the development of a number of methods of proactive management and mathematical models that allow describing some aspects of social and economic phenomena (Andreeva, 2005; Taylor, 2007; Podsolonko, 2007; Abdikeev, 2010).
The need for new instrumentation for modeling proactive management decision-making in the SES is conditioned by the inability to solve a number of important problems, first of all process forecasting, description of phenomena and decision-making, when the classical instruments cannot be used because of the complicated intertwining of diverse factors, including social-psychological ones. New and modernized statistical methods and new technologies and instruments of inductive information processing, combined with the power of modern computers, have provided a breakthrough in the empirical-inductive methodology and have been implemented in the form of the mining technology.
Taking into consideration the limitations of classical methods, while solving the tasks of proactive management decision-making mechanism modeling, it is appropriate to involve the mining technology which is able to automatically extract from the data new non-trivial knowledge in the form of models, dependences, laws, etc.
The aim of the study was to justify the theoretical and methodological foundations and conceptual positions of the proactive management decision-making mechanism based on the mining technology, selection of appropriate methods, tools and models.
Relation between the processes of proactive management and mining technology
Mining technology (MT) is a synthetic technology which combines the latest achievements of artificial intelligence, numerical mathematical methods, statistics and heuristic approaches. Its methods include data mining, text mining, OLAP, knowledge discovery, intelligent data analysis, etc. (Maimon, 2005). The mining technology, like any other cognition method, has a number of drawbacks: the need for a large set of input data for successful training; forming a model in latent form ("black box"); a significant percentage of false results; the use of special databases, etc. For many years, there has been a controversy among scientists about the advantages and disadvantages of intellectual analysis, but the facts of the successful use of the technology in scientific, technical, economic and social spheres are an important confirmation of the viability of this approach.
The basis of proactive management techniques is the process approach, in which the main accent is put on a certain sequence of management actions, providing a basis for the application of logic, reasoning and analysis to the problems.
Proactive management covers the following basic processes: causal analysis, decision-making, programme analysis, and situation review (Diagram 1). These processes are classified by time (past, present, future); each of them has its own orientation and contains a sequence of steps (Diagram 2) and a set of techniques that can be used separately and in sequence. All of these processes are interrelated.
The basis of all proactive management processes is the analysis of system functioning data, and the quality of such an analysis determines success in the timely detection of problematic situations, the search for ways to avoid them, and the elucidation of cause-effect relationships among events.
The modern state of development of the methods of processing and analysing information allows working with large amounts of data and making an in-depth analysis of data related to the problem. Modern analytics combines power and complexity, including statistics, profiling, pattern recognition, behavioural analysis, time series analysis, predictive modelling, visualization, analysis of cause-effect relationships, etc. Using mining instruments makes it possible to improve the methodology of proactive management by carrying out its basic processes with relevant methods of template discovery, predictive modeling and forensic analysis (Diagram 2).
Theoretical-methodological and methodical levels
The theoretical-methodological and methodical level of the concept of the proactive decision-making mechanism based on the mining technology is defined by substantiating the possibility of analysis at each process level and the relations among the processes.
Management tasks largely depend on the situation, which may be problematic due to the malfunction of socio-economic, political and other mechanisms, inadequate management structures and errors in management processes. Therefore, for the realization of proactive management decision-making, it is necessary to develop effective methods of identifying problem situations.
The important point is an early detection of problem situations long before they start acting. The diagnosis mechanism takes into account recurring problem situations, logically expected and new changes with a different frequency of occurrence, and determines the degree of threat posed by these changes, accelerating the speed of response to the changes that may adversely affect the SES activity.
The difficulty in identifying problem situations in the SES is that at the early stages the monitoring data on the deterioration of performance are fragmented. Hence, the task arises of reconstructing a coherent picture on the basis of fragmentary data and of qualitatively interpreting the obtained image of the situation from the perspective of its impact on the SES during its development. To solve this problem, it is appropriate to use MT methods, applied statistics, non-numeric statistics, the instruments of fuzzy sets, genetic algorithms, neural networks, etc.
The use of these methods enables:
• to perform monitoring, preparation and data analysis; collection and presentation of data; filtering, grouping and comprehensive data reporting; situation identification;
• to identify problem situations and their patterns; to correlate the pattern of the current situation with patterns of problem situations; to determine the structure of the problem situation; to identify factors and relationships;
• to set the priority on the solution of problems, their importance, periods of solution, trends of problem development and their expected consequences (rank patterns of the situations); to predict the development of situations without and with the managerial influence; to assess the situation according to different criteria;
• to set managerial tasks: to analyse deviations in performance indicators; to define goals, criteria, operating conditions; to identify areas of SES tracking, control points (the tree of directions).
In this case, we are talking about using the following types of models (see the sketch below):
• cluster allocation models of situations by k-means methods;
• classification of patterns using decision trees and neural networks;
• situation identification models by the "nearest neighbour" method and the rule induction method, using neural and causal networks, associations, and the limited exhaustion algorithm;
• models of the discovery (mining) of patterns by the induction of rules, using genetic algorithms.
Synthetic methods for the detection of situations in the SES are necessary; they organically include logically constructed, mathematically formalized algorithms that enable the interactive involvement of experts and the ability to implement heuristic procedures (visualization).
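As an illustration of how the cluster-allocation and situation-identification models listed above could be combined, the following sketch clusters historical situation patterns with k-means and matches the pattern of a current situation by the nearest-neighbour rule. The indicator names and data are hypothetical and serve only to show the workflow; they are not taken from any real SES.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

# Hypothetical monitoring data: each row is a situation pattern described by
# a few SES indicators (e.g., demand deviation, cost deviation, delay index).
history = np.array([
    [0.10, 0.20, 0.00],
    [0.90, 0.80, 0.70],   # known problem situation
    [0.20, 0.10, 0.10],
    [0.80, 0.90, 0.60],   # known problem situation
    [0.15, 0.25, 0.05],
])

# Cluster allocation of situations (k-means), as in the model list above.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit(history)

# Situation identification by the "nearest neighbour" method: match the
# pattern of the current situation against the stored patterns.
current = np.array([[0.85, 0.75, 0.65]])
nn = NearestNeighbors(n_neighbors=1).fit(history)
distance, index = nn.kneighbors(current)

print("cluster of current situation:", clusters.predict(current)[0])
print("closest historical pattern:", index[0][0], "at distance", distance[0][0])
```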
Situation review procedures are inextricably linked with continuous object diagnosis and with the identification and systematic classification of regular and abnormal situations, which are an integral part of the process of knowledge and experience accumulation in the SES. Development of the models and methods that promote an effective implementation of these tasks is one of the stages of control system synthesis for the development and operation of the SES.
Much attention to problem-revealing models using the mining technology is given in works related to studying the situational mechanism of management decision-making (Lepa, 2006). However, most attention is paid there to methods based on data retention. A significant potential of methods based on the distilled data should be noted.
In finding problems, two aspects, urgency and importance, should be considered. Urgent problems require an immediate response and the implementation of adequate managerial measures, while significant problems need well-planned, long-term management actions.
Problem identification should be accompanied by a list of their sources, causes and options, and by the determination of situation development vectors both including and excluding control impacts.
Identifying patterns and relationships between events and phenomena occurring in the SES is a problem which can be solved by studying situation development in the multidimensional space of the parameters that characterize it from different sides. A problem situation has long chains of cause-effect relationships. Its appearance and presence can be investigated by the scheme: the problem (result) - symptoms (indicators) - parameters - factors - causes - root causes. In order to anticipate a problem and to proactively pursue preventive measures, its root causes need to be known. On the other hand, a problem can itself be seen as a reason, so in the mechanism of proactive management implemented in the decision-making system of any SES, an essential part should be forecasting situation development in two directions, without management impact and with a managerial influence, on the basis of causal relationships among the events.
The process of causal analysis can and should be implemented at all stages of the full cycle of problem solving, but in certain situations its application is self-sufficient to ensure effective management. This is the situation where an apparent resolution follows from a clear formulation of the problem, although at first glance the problem and the nature of the reasons that have caused it may be far from obvious.
Problem situation recognition is based on the determination of symptoms: signs of phenomena in the internal and external environment which are associated with a certain influence on the system, the source of action, factors and causes. However, symptoms often do not work, because the classical methods that are used are not sensitive enough to small changes in characteristics or to their compensatory influence on each other.
The use of the mining technology methods for the process of causal analysis enables:
• to perform monitoring, preparation and data analysis; collection and presentation of data; filtering, grouping and comprehensive data reporting; localization of the identified causal relationships;
• to identify causal relationships and their patterns; to correlate the identified factors and relationships with the patterns of causal relationships; to determine the structure of a causal relationship;
• to create the causal chain hierarchy, a chain of interrelated causes and effects; to predict the situation without and with managerial influences; to construct sets of situations on the basis of relationship models.
When analyzing the activities of the SES, situations need to be modelled at the current time, which is taken into account in determining the current state of the system, and aggregated from the beginning of the reporting or planned period in order to determine first the overall and then the global situation. The final state of the system is defined as a global integral state.
Disclosure of problems allows identifying and formalizing the situation to the level of a particular model in order to make substantial management decisions based on its analysis. At this stage, a particular technology for problem solution is chosen, and the situational approach is implemented effectively through a procedure depending on the type of solution (standard, binary, multivariate or innovative).
The methodical basis for the decision-making process, as a proactive management process, comprises utility theory methods, statistical methods, operations research methods, expert methods, etc. However, it is not always possible to obtain a decision based on a model that contains only quantitative indicators. The exact value of indicators in many cases does not bear any substantial load; otherwise, the nature or type of the system's behaviour and the expected state are unknown.
The use of MT methods extends the instruments of the classical approach and makes it possible to define goals and criteria through constructing sets of targets and ranking the criteria, to use optimization models based on neural networks, genetic algorithms, decision trees and induction methods, and to search for solutions by specific methods.
Highly promising are hybrid approaches, namely the improvement of classical methods by incorporating mining technology into them. An example could be a mechanism for increasing the objectivity of expert estimations through the use of neural networks in the block of estimate correction, evolutionary modelling of expert estimations (Gnatiyenko, 2008), etc.
The informational basis for defining a set of alternatives and making management decisions to resolve the problem situation is the procedure of identifying problematic situations. Thus, in the system of training and management, decision making needs to integrate the original and systematic blocks for evaluating the effect of their implementation on the solution of problem situations. The information for such an assessment should be based on the results of monitoring the implementation of solutions. This allows a real-time monitoring of the performance dynamics and the preparation of decision-making information for additional administrative impacts, with adjustment of the way of achieving the planned values of key indicators. Analysis of a systematic evaluation of the management decision feasibility is the basis for updating the knowledge of the SES for solving problem situations within the blocks of situation recognition and decision making.
It is important not only to elucidate the problem and to assess the consequences and risks associated with the implementation of each of the alternatives; the main thing is to focus on the process of the practical implementation of the alternative that has been chosen for solving this problem. This shifts the emphasis from the plan as a set of indicators to be achieved to the programme of actions as a sequence of steps to achieve this final result. In the centre of the programme analysis there are potential problems identified as negative factors and possibilities as positive factors that can be expected during programme implementation. A detailed development of these aspects of problem solving and decision-making is one of the most powerful methods of proactive management. Too often the problem is aggravated due to the fact that the SES is not provided with "alarms" that indicate a problem, security measures are not planned in advance, reserve actions for a rapid elimination of unexpected complications are absent, etc.
Identifying the patterns and relationships between events and phenomena occurring in the SES is a complex task which can be solved through an in-depth study of situation development in the multidimensional space of parameters; this determines the need to develop models forecasting the situation without and with the managerial influence, together with scenario analysis. The scenario model of the situation development related to the implementation of the chosen alternative reveals the perspectives of the SES, usually in three versions: pessimistic, realistic and optimistic. The mining technology allows using for this purpose rule induction instruments, case-based reasoning, statistical methods, associations, neural networks and decision trees, as illustrated in the sketch below.
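A minimal sketch of such a forecast, assuming a decision tree trained on hypothetical monitoring data, is shown below. It predicts the development of one indicator with and without a managerial influence; pessimistic, realistic and optimistic scenarios could then be obtained by varying the inputs. The data, indicator and model choice are illustrative assumptions, not the author's implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical training data: [current indicator value, managerial influence (0/1)]
# -> indicator value in the next period, taken from SES monitoring history.
X = np.array([[0.9, 0], [0.9, 1], [0.7, 0], [0.7, 1], [0.5, 0], [0.5, 1]])
y = np.array([1.10, 0.80, 0.90, 0.60, 0.70, 0.45])

model = DecisionTreeRegressor(max_depth=3).fit(X, y)

current_level = 0.8
without_action = model.predict([[current_level, 0]])[0]
with_action = model.predict([[current_level, 1]])[0]
print("forecast without managerial influence:", without_action)
print("forecast with managerial influence:   ", with_action)
```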
The control of effectiveness will depend on the ability to focus on several critical areas, to identify the related threats and opportunities, their probable causes and to develop appropriate actions for a successful implementation of the programme.
Model and application levels
The model level of the proactive management decision-making mechanism concept based on mining technology involves a synthesis of situation and relationship identification models, visual electronic models, forecasting models for situations, optimization models of decision making and scenario models of the situation development.
The application level of the concept of the proactive management decision-making mechanism based on mining technology is defined by the formalization of its main methodological and model structures to the level of specific information technologies; their implementation into the SES management practice will improve decision-making accountability and efficiency. In general, the relationship of the basic elements of the theoretical-methodological, methodological, instrumental, model, organizational and practical levels of the proactive management decision-making mechanism in the SES is presented as a conceptual scheme (Diagram 3).
DIAGRAM 3. Conceptual diagram of the proactive mechanism of decision support based on mining technology
Source: author's own study.
Computer support of the proactive management decision-making mechanism should be resolved through the distribution of functions between the computer and the manager, with the computer given an auxiliary role, the role of "support", while the main role belongs to the human. To provide the most effective assistance of artificial intelligence to managers, information should be submitted in a form suitable for human perception.
Among the facilities offered today by the information technology market for the processing and visualization of data for management decisions, promising are analytical applications and the analytical framework, which are associated with the implementation of Business Intelligence (BI) (Turban, Sharda, 2010). The appearance of the BI technology gave start to a new generation of information-analytical systems which today include various tools of the mining technology (Table 1).
Of great value for BI implementation is the development of analytical applications presented as a service (Software as a Service, SaaS). The SaaS, Web 2.0 and some other technologies are included into the Cloud Computing technology. Significant prospects are opened by the development of BI technologies presented in the form of hybrid applications in which the analytics is implemented without a full review of the existing software. A powerful way to support the proactive management decision-making mechanism can be situation centres equipped with all necessary multimedia tools that provide a rapid and deep "dive" of the manager into the situation.
Importance in the support of the proactive management decision-making mechanism should also be given to group interaction, i.e., information technologies of collegial decision-making support.
Conclusions and directions for future research
The results of the present research allow formulating a series of conceptual conclusions, namely: to substantiate that, to solve the tasks of proactive management decision making, the mechanism of mining technology should be involved; to select the basic methods, tools and models for decision support; and to identify ways of implementing the mechanism through the use of modern BI technologies, the development of situation centre technology and a collegial approach to decision making. The proposed approach meets the essential qualifications required for management: performance of management functions, the presence of feedback, and adaptivity.
The problem of identifying the areas for realizing the proactive management mechanism is very important, but its solution is likely to rely on the leadership of the SES and its consultants (experts), who need to make a thorough systematic analysis of the structure and activities of the SES, associated with unique decision-making in each individual SES. Setting the task of developing a universal set, for example, of areas for tracking situations in isolation from the analysis of a specific SES, and using it as a basis for forming a set of indicators for any SES, does not make any sense.
As a prospect for further research, the realization of the application-level concept of the proactive management decision-making mechanism based on mining technology for specific types of the SES can be considered.
DIAGRAM 1. Interrelation among proactive management processes. Source: (Plunkett, Hale, 1982). DIAGRAM 2. Correspondence between the proactive management processes and mining technology processes. Source: author's own study.
The selection of intelligent analysis as a means of modeling proactive mechanisms is based on its properties, as listed above.
"Computer Science"
] |
Estimation of Impedance Features and Classification of Carcinoma Breast Cancer Using Optimization Techniques
Breast cancer is the most prevalent form of cancer and the primary cause of cancer-related mortality among women globally. Breast cancer diagnosis involves multiple variables, making it a complex process. Therefore, the accurate estimation of features for diagnosing breast cancer is of great importance. The present study used a dataset of 21 patients with carcinoma breast cancer. Polynomial regression analysis was used to non-invasively estimate six impedance features for the diagnosis of breast cancer, including the phase angle at 500 KHz (PA500), impedance distance between spectral ends (DA), area normalized by DA (A/DA), maximum of the spectrum (Max IP), the distance between impedivity (ohm) at zero frequency and the real part of the maximum frequency point (DR), and length of the spectral curve (P). The results indicated that the polynomial degrees needed to estimate the PA500, DA, A/DA, Max IP, DR, and P features based on tumor size were 2, 2, 3, 3, 2, and 2, respectively. Additionally, we utilized a nonlinear constrained optimization (NCO) analysis to calculate the eight threshold levels for the classification of the impedance features. The deduction of eight classifications for each feature may also be an effective tool for decision-making in breast cancer. These findings may help oncologists to estimate the impedance features for breast cancer diagnosis non-invasively.
Introduction
Breast cancer is the most common type of cancer and the leading cause of cancer-related death for women worldwide [1]. In the United States, breast cancer is the second leading cause of cancer death after lung cancer [2]. The most effective way to reduce the mortality rate of breast cancer is through correct and early diagnosis of cancer. Therefore, clarifying any ambiguities in the screening and diagnosis of breast cancer is of great importance. Various factors are involved in the diagnosis and development of breast cancer, including patient characteristics, the presence of proliferative breast lesions with atypia, genetic factors, and lifestyle [3]. These multivariable parameters have a prominent role in the complexities of breast cancer diagnosis. There are various diagnostic tools available to clarify these complexities for accurate decision-making about breast cancer.
Medical imaging is a common diagnostic tool for healthcare professionals. Computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), single-photon emission computed tomography (SPECT), ultrasonography (US), and X-ray mammography (XRM) are the most common imaging techniques used to diagnose breast cancer. The primary method used by physicians to diagnose breast cancer is based on the interpretation of medical images that are qualitative and visual in nature. However, image processing of these images has opened new windows for quantitative diagnosis of breast cancer [4]. There are still many contradictions and conflicts about the accuracy of the diagnosis of breast cancer based on medical images. For example, the XRM method's efficiency in breast cancer diagnosis is only 4-10% [5,6]. Molecular biotechnology examinations are an additional diagnostic tool for breast cancer, utilizing classifications at the molecular level. These examinations include a real-time fluorescence quantitative polymerase chain reaction system, acid hybridization system, protein hybridization system, needle biopsy, flow cytometer, and immunohistochemistry. They can work earlier than medical images to diagnose breast cancer [7]. However, concerning the valuable clinical data of medical images, molecular biotechnology examinations can only be an auxiliary method for breast cancer diagnosis. Circulating tumor cells (CTCs) and circulating tumor deoxyribonucleic acid (ctDNA) are the next parameters that can serve as diagnostic biomarkers for early cancer screening and establishing cancer staging [8]. Long noncoding ribonucleic acid (lncRNA) and circular RNAs (circRNAs) are also emerging biomarkers to initiate breast cancer diagnosis. These new biomarkers can be diagnostic tools for breast cancer. However, there are many ambiguities about their actual function and effectiveness for breast cancer diagnosis.
Related Work
Bioinformatics methods, such as machine learning and deep learning, have been shown to be powerful tools for accurately classifying cancer in various types of diseases and have been extensively applied in recent studies. Hrizi et al. demonstrated that the optimized machine learning model outperformed traditional diagnostic methods and has the potential to be used as an effective tool for tuberculosis diagnosis [9]. Ammar et al. showed a hybrid optimal deep learning-based model for tuberculosis disease recognition using MRI images [10]. Yao et al. used a machine learning analysis technique to predict survival in pancreatic cancer patients [11]. The model was trained using gene expression data and clinical features and achieved high accuracy in predicting patient survival. Shan presented a machine-learning approach for predicting lymph node metastasis in patients with early stage cervical cancer [12]. They used a random forest model to improve the performance of the neutrophil-to-lymphocyte ratio.
Given the importance of bioinformatic methods in the diagnosis and management of cancer, several studies have focused on utilizing these methods for breast cancer. Some studies identified crucial genes associated with breast cancer using integrated bioinformatic analysis [13][14][15]. They suggested some novel genes using bioinformatic methods to diagnose breast cancer. Wu et al. also used a machine learning algorithm to classify triple negative and non-triple negative breast cancer types [16]. Omondiagbe et al. used a new reduced feature dataset to support vector machines to classify breast cancer by linear discriminant analysis [17]. Amrane et al. compared the efficiency of Naive Bayes and k-nearest neighbor to find a more accurate classifier for breast cancer [18]. Assiri et al. proposed a novel ensemble classification method for breast cancer using various machine-learning algorithms [19]. They utilized three classifiers and examined five unweighted voting mechanisms. They found that majority-based voting outperformed the others. Islam et al. found that artificial neural networks are the most accurate machine-learning modeling method for diagnosing breast cancer [19].
In bioinformatics, deep learning has emerged as a powerful tool for the diagnosis of breast cancer in recent years. This approach involves using artificial neural networks with multiple layers to automatically learn and extract relevant features from complex data [20][21][22]. Zhou et al. have provided a comprehensive evaluation of the various methods employed in breast cancer diagnosis through histological image analysis based on different designs of convolutional neural networks (CNNs) [20]. Their findings suggest that CNNs are highly beneficial for the early identification and treatment of breast cancer, resulting in more successful therapy. Jiang et al. strove to evaluate the effectiveness of a deep learning model based on CNN in determining molecular subtypes of breast cancer using US images [22]. The results showed that the CNN model achieved an acceptable accuracy in determining breast cancer molecular subtypes, which is comparable to the accuracy of human radiologists. Allugunti et al. utilized deep learning techniques trained end-to-end to achieve high-accuracy diagnosis and screening of breast cancer [21]. Ghiasi et al. explored the use of deep learning algorithms to classify breast cancer based on uniformity of cell size, bland chromatin, mitoses, and clump thickness [23]. Their results indicate that the proposed methods offer accurate classification and diagnostic performance compared to previous methods. In addition, some studies have focused on biostatistical methods to extract effective features of breast cancer [24]. Terry et al. used concordance statistics to predict breast cancer risk [25]. Despite significant progress in computer-aided diagnosis methods, there are still many unknown points in the diagnosis of breast cancer using these methods [26][27][28]. Hence, breast cancer diagnosis using these methods is still controversial and challenging.
Aims and Objectives
Many researchers have attempted to improve bioinformatic and biostatistical methods to suggest more accurate biomarkers. This study aims to develop diagnostic tools for the management of breast cancer. The approach involves using polynomial regression analysis to estimate six impedance features for diagnosis and nonlinear constrained optimization (NCO) to determine the threshold level of each feature.
Materials and Methods
We utilized the database of a study conducted by Thirumalai et al., which includes 21 patients with breast cancer (Table 1) [29]. It should be noted that the present database belongs to the carcinoma category, which is one of the most common categories, as cancer develops from the epithelial cell lining [22]. We employed polynomial regression analysis to estimate six impedance features for the diagnosis of breast cancer without minimally invasive electrical impedance spectroscopy. In addition, using the NCO method, we attempted to classify breast cancer into eight categories. There are three primary stages: preprocessing, regression, and optimization. Figure 1 shows the workflow diagram of the system, illustrating the various components and their relationships. In the preprocessing stage, the normality of the 21 observations was tested with the Shapiro-Wilk test; the result confirmed the normal distribution of the input data. In the model selection step, we examined many types of fitting equations, such as exponential, logarithmic, polynomial, and power. The results showed that the polynomial estimation gave the best fit with regard to R^2 values (a sketch of this procedure is given below). This procedure is common in previous biological studies [30][31][32].
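A minimal sketch of the preprocessing and model selection steps just described is given below. The observations and tumor sizes are hypothetical stand-ins for the values of Table 1, and the actual study performed these analyses in SPSS rather than Python.

```python
import numpy as np
from scipy.stats import shapiro

# Hypothetical observations of one impedance feature for the 21 patients.
values = np.random.default_rng(0).normal(loc=0.15, scale=0.03, size=21)

# Normality check used in the preprocessing stage.
stat, p_value = shapiro(values)
print("Shapiro-Wilk p-value:", p_value)     # p > 0.05 -> treated as normal

# Model selection: compare candidate fits by R^2 (here, polynomial degrees).
size = np.linspace(1, 6, 21)                # hypothetical tumor sizes (cm)
for degree in (1, 2, 3):
    coeffs = np.polyfit(size, values, degree)
    fitted = np.polyval(coeffs, size)
    ss_res = np.sum((values - fitted) ** 2)
    ss_tot = np.sum((values - values.mean()) ** 2)
    print(f"degree {degree}: R^2 = {1 - ss_res / ss_tot:.3f}")
```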
Polynomial Regression Analysis
Polynomial regression is a specific form of the regression model which explains the variation of one variable based on another variable. In this regression, the relationship between the independent variable (tumor size = x) and the dependent variable (impedance feature = f(x)) is curvilinear. The general representation of the model is shown as [33]:
f(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n + ε, (1)
where n is the degree of the polynomial function, a_i (i = 0, 1, ..., n) are the coefficients of the polynomial terms, and ε is the residual error, which is the average distance of the data from the regression curve.
The determination of the model degree can be completed by examining the relationship using a scatter plot and testing different degrees of the polynomial until the best fit is achieved. It is worth mentioning that polynomial regression is sensitive to outliers; preprocessing of the data is essential to ensure that the data used in the analysis are accurate, reliable, and properly prepared for modeling. The most important step of regression analysis is model validation. The model is validated by evaluating its performance using various metrics, such as the root mean squared error (RMSE), defined as follows [33]:
RMSE = sqrt((1/m) Σ_{j=1}^{m} (f̂_j - f_j)^2), (2)
where f̂_j and f_j represent the estimated and actual values, respectively, and m denotes the number of data points. All of the calculations for the regression analysis were performed using IBM SPSS software, version 20.0, IBM Corp., Armonk, NY, USA.
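The following sketch illustrates the fitting and RMSE computation of Equations (1) and (2) for one feature. The tumor sizes and PA500 values are hypothetical examples rather than the data of Table 1, and the study itself used IBM SPSS rather than Python.

```python
import numpy as np

# Hypothetical example data: tumor size (cm) and one impedance feature (PA500).
size = np.array([1.2, 1.8, 2.5, 3.1, 3.9, 4.6, 5.3, 6.0])
pa500 = np.array([0.09, 0.11, 0.14, 0.16, 0.19, 0.21, 0.22, 0.23])

degree = 2                                  # degree reported for PA500
coeffs = np.polyfit(size, pa500, degree)    # returns a_n ... a_0
fitted = np.polyval(coeffs, size)

rmse = np.sqrt(np.mean((fitted - pa500) ** 2))   # Equation (2)
print("coefficients:", coeffs)
print("RMSE:", rmse)
```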
Nonlinear Constrained Optimization
NCO is a mathematical technique used to find the optimal values of a set of decision variables subject to a set of nonlinear constraints. Nonlinear constrained optimization can be mathematically formulated as [34]:
minimize f(x) subject to c(x) ≤ 0, (3)
where x is the tumor size, which is the decision variable; x = (x_1, x_2, ..., x_n). f(x) is the objective function, f: R^n → R. The objective function represents one of the six estimated impedance features that should be optimized. c(x) represents a vector of constraints that x must satisfy, in which c: R^n → R^m. n and m are the number of decision variables and the number of constraints.
The present study used the generalized reduced gradient (GRG) method to solve the NCO problem. GRG is commonly used for problems with continuous variables and smooth nonlinear functions [35]. This method can be applied to problems that are more general than Equation (3). An appropriate form for this problem is modeled as follows:

minimize f(x) subject to l_c ≤ c(x) ≤ u_c and l_x ≤ x ≤ u_x, (4)

where l_x and u_x are the lower and upper bounds of the decision variable (tumor size), and l_c and u_c are the lower and upper bounds of the optimized objective functions. The minimization problem can be converted into a maximization problem by multiplying the objective function by −1. The basic idea of the GRG algorithm is to iteratively solve a series of linear programming problems that approximate the original nonlinear problem while updating the values of the decision variables in a way that reduces the objective function and satisfies the constraints.
In the GRG algorithm, convergence testing is critical to ensure that the algorithm has found the optimal solution [35]. Common convergence tests used in the GRG algorithm include the objective function value, constraint satisfaction, reduced gradient norm, step size, and change in decision variables. The convergence test on the objective functions (six impedance features) is conducted with a precision of 0.0001.
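GRG itself is usually accessed through dedicated solvers (for example, the implementation behind Excel Solver) rather than common open-source Python libraries, so the sketch below illustrates the bounded problem form of Equation (4) with SciPy's general-purpose constrained minimizer as a stand-in. The polynomial coefficients, bounds, and starting point are placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import minimize

# One estimated impedance feature as a polynomial of tumor size x (coefficients are
# hypothetical stand-ins for an equation such as those in Table 2).
def feature_poly(x):
    x = float(np.atleast_1d(x)[0])
    return 0.05 + 0.004 * x**2 - 0.001 * x

# Decision-variable bounds l_x <= x <= u_x, e.g., the size class 2.5 < size <= 3.5.
bounds = [(2.5, 3.5)]

# Lower bound of the feature over the class: minimize f(x).
res_min = minimize(feature_poly, x0=[3.0], bounds=bounds, method="SLSQP", tol=1e-4)

# Upper bound of the feature over the class: maximize f(x) by minimizing -f(x).
res_max = minimize(lambda x: -feature_poly(x), x0=[3.0], bounds=bounds,
                   method="SLSQP", tol=1e-4)

print(f"threshold interval for this feature and class: [{res_min.fun:.4f}, {-res_max.fun:.4f}]")
```

Running the minimization and the sign-flipped maximization over one size class yields the lower and upper threshold values of the feature for that class, which is the role the optimization plays in building Table 3.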
Results and Discussion
Many effective parameters are involved in the diagnosis of breast cancer. Tumor size is one of the morphometric parameters that is available in medical images. Many software packages can measure tumor size easily, such as Mimics, 3D Slicer, ImageJ, OsiriX, and MIPAV. In addition, there are other effective and important indicators that are necessary for oncologists in decision-making about breast cancer. For example, the impedance distance between spectral ends (DA) and the area normalized by DA (A/DA) are two important features to classify non-fatty cancer tissues [36]. However, these features can only be measured using electrical impedance spectroscopy. We aimed to calculate six impedance features that can evaluate the capacitive characteristics of breast cancer tissues [37]. These six impedance features are highly important clinically for the diagnosis of breast cancer in the early stages [38]. They include the phase angle at 500 kHz (PA500), DA, A/DA, the maximum of the spectrum (Max IP), the distance between the impedivity (ohm) at zero frequency and the real part of the maximum frequency point (DR), and the length of the spectral curve (P). Some studies have raised evidence about the risks of electrical impedance spectroscopy for the health of patients [39]. Hence, the present study strove to use polynomial regression analysis to non-invasively estimate the values of these six impedance features without the electrical impedance spectroscopy method. We estimated these impedance features based on tumor size, which is readily available and which physicians can measure with common imaging methods. Regression analysis is a powerful statistical tool to estimate parameters necessary for decision-making. The results of the regression analysis are shown in Figure 2. The present study used polynomial regression analysis to find the relationship between these features and tumor size based on the reported data for our patients (see Table 1). Table 2 shows the estimated equations of PA500, DA, A/DA, Max IP, DR, and P. It should be noted that the R values for all estimations are greater than 0.51.
There are many classifications for breast cancer, such as cancer stages (from 0-IV) and type of tumor (benign and malignant). Estrela et al. suggested a threshold level to classify the impedance ratio (IO), one of the other impedance features [36]. One important indicator is extracted from the results of a function that is defined based on the proportions of metastases as a function of the time after treatment [40]. This indicator is a common quantitative factor that assigns eight classes for breast cancer. Koscielny et al. revealed that there is a correlation between clinical tumor volume and the percentage of metastases diagnosed at the time of initial diagnosis or during later stages of the disease [40]. Additionally, their results showed a shorter median delay between initial treatment and the appearance of metastases for larger tumors. Their findings suggested eight classes: 1 ≤ size ≤ 2.5 (class 1), 2.5 < size ≤ 3.5 (class 2), 3.5 < size ≤ 4.5 (class 3), 4.5 < size ≤ 5.5 (class 4), 5.5 < size ≤ 6.5 (class 5), 6.5 < size ≤ 7.5 (class 6), 7.5 < size ≤ 8.5 (class 7), and size > 8.5 (class 8). However, the corresponding classifications based on the six impedance features (PA500, DA, A/DA, Max IP, DR, and P) are not available. This means that oncologists cannot assign a class based on these six impedance features. This study used the NCO method to define the corresponding eight classes for the impedance features based on the results of the study by Koscielny et al. [40]. The results in Table 3 indicate the threshold levels of PA500, DA, A/DA, Max IP, DR, and P to classify breast cancer into the eight classes. It should be noted that the minimum and maximum optimization results are defined as the lower and upper bounds of the threshold level for each class. There is no comprehensive quantitative guideline in previous studies for clinical applications of the impedance features. For example, there is no quantitative guideline to evaluate the non-fatty level of breast cancer using the DA and A/DA features. The results of Table 3 can classify the non-fatty level of breast cancer quantitatively using the defined threshold levels for the DA and A/DA features. This approach is also extendable to all impedance features.
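The size-based classification quoted above can be restated as a small helper function; the thresholds are exactly those listed in the text, and treating sizes below 1 as unclassified is an assumption made here for completeness.

```python
def koscielny_class(size_cm: float) -> int:
    """Map a tumor size (cm) to one of the eight classes of Koscielny et al. [40]."""
    if size_cm < 1.0:
        raise ValueError("size below the first class boundary (assumed unclassified)")
    upper_bounds = [2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5]  # classes 1-7; class 8 is > 8.5
    for k, upper in enumerate(upper_bounds, start=1):
        if size_cm <= upper:
            return k
    return 8

print(koscielny_class(3.0))  # -> 2 (2.5 < size <= 3.5)
print(koscielny_class(9.2))  # -> 8 (size > 8.5)
```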
Breast cancer is a complex disease, and its diagnosis depends on various features. The present study focused on impedance features that are commonly used to screen for and diagnose breast cancer. However, for a comprehensive diagnosis of breast cancer, it is recommended that future studies expand our findings by considering the interactional effects of clinical, cancer-morphometric, and molecular-biotechnology features. Currently, big data has a well-recognized effect on improving the accuracy and precision of predicted results by increasing the volume, velocity, and variety of data [41]. One potential limitation of the present study is the possibility of biases being introduced due to the small size of the dataset used. To mitigate this, future studies may consider using a larger dataset to improve the accuracy of the estimated equations presented in Table 2. Another limitation of the present study is the challenge of interpreting the results generated by the polynomial regression models. To address this, future studies could explore the use of alternative regression models, such as logistic regression. Additionally, to enhance the quality of estimations and classifications, future studies may consider utilizing feature selection algorithms to identify the most important features in the dataset. By doing so, the predictive power of the models can be improved, and the insights gained may be more informative and actionable. Taken together, this study has established a basis to open a window onto the non-invasive measurement of impedance features. Future studies can explore the potential of using machine learning algorithms with a larger database, building on our insights, to further enhance estimation and classification accuracy.
Conclusions
The present study utilized a database of women with carcinoma breast cancer to estimate six impedance features for breast cancer diagnosis. Polynomial regression analysis was used to estimate these features. Additionally, NCO analysis was performed to compute the threshold values of each impedance feature, which were utilized to establish eight classifications of breast cancer for every feature. These classifications serve as effective tools for decision-making in breast cancer diagnosis. This finding may provide oncologists with valuable data to help estimate and classify the effective features for breast cancer diagnosis.
Future studies can leverage larger databases and machine learning algorithms to improve estimation and classification accuracy. | 4,786.6 | 2023-05-06T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Putative biomarkers for predicting tumor sample purity based on gene expression data
Background Tumor purity is the percent of cancer cells present in a sample of tumor tissue. The non-cancerous cells (immune cells, fibroblasts, etc.) have an important role in tumor biology. The ability to determine tumor purity is important to understand the roles of cancerous and non-cancerous cells in a tumor. Methods We applied a supervised machine learning method, XGBoost, to data from 33 TCGA tumor types to predict tumor purity using RNA-seq gene expression data. Results Across the 33 tumor types, the median correlation between observed and predicted tumor-purity ranged from 0.75 to 0.87 with small root mean square errors, suggesting that tumor purity can be accurately predicted using expression data. We further confirmed that expression levels of a ten-gene set (CSF2RB, RHOH, C1S, CCDC69, CCL22, CYTIP, POU2AF1, FGR, CCL21, and IL7R) were predictive of tumor purity regardless of tumor type. We tested whether our set of ten genes could accurately predict tumor purity of a TCGA-independent data set. We showed that expression levels from our set of ten genes were highly correlated (ρ = 0.88) with the actual observed tumor purity. Conclusions Our analyses suggested that the ten-gene set may serve as a biomarker for tumor purity prediction using gene expression data.
Background
The tumor microenvironment consists of non-cancerous stromal cells present in and around a tumor; these include immune cells, fibroblasts, and cells that comprise supporting blood vessels and others. Tumor microenvironment plays an important role in tumor initiation, progression, and metastasis (for recent reviews, see [1,2]).
Most genomic and genetic studies of cancer are carried out on tumor tissue samples that are heterogeneous in nature. The Cancer Genome Atlas (TCGA) provided comprehensive datasets for more than 10,000 samples in more than 30 tumor types [3]. Those studies provide valuable information about genomic changes in tumor samples compared to normal samples. However, teasing out cell-type-specific information from those heterogeneous samples remains a challenge.
Knowing the cell-type composition of a tumor and how those cell types interact with each other in the tumor microenvironment is pivotal for understanding tumor initiation, progression, and metastasis. While understanding the microenvironment is challenging, methods for addressing cell composition in tumor samples, such as single-cell technologies, are starting to emerge. For instance, Zheng et al. profiled the infiltrating T cells in liver cancer [4], Puram et al. [5] surveyed the tumor ecosystems in head and neck cancer, and Kararaayvaz et al. analyzed leukocyte composition in triple negative breast cancer [6]. All of these studies used single-cell RNA sequencing (scRNA-seq) techniques. It is conceivable that single-cell sequencing technologies will be widely applied to dissect the tumor microenvironment.
Computational methods directed at deconvolving cell-type-specific signals in heterogeneous tissue samples have also been developed (for a recent review, see [7]). Currently, several computational methods can estimate the proportion of tumor cells in a tumor sample (often referred to as "tumor purity"). Perhaps the most well-known algorithm is ABSOLUTE [8], which uses copy number variation in tumor samples compared to normal samples to infer tumor purity and ploidy. ABSOLUTE, which is often considered as the "gold standard" for performance comparison, provided tumor purity values for many samples from the 33 TCGA tumor types [9]. Besides copy number variation, methods that use DNA methylation data [10][11][12][13] or expression data for a set of pre-selected stromal genes [14] have also been developed to infer tumor purity. Purity estimates by those methods appear to have a reasonable concordance [15].
Similarly, tumor purity estimates have been used to assess the abundance of tumor-infiltrating immune cells in tumor samples. For example, Li et al. [16] developed a computational method to estimate the abundance of six tumor-infiltrating immune cell types (B cells, CD4 T cells, CD8 T cells, neutrophils, macrophages, and dendritic cells) in tumor samples. Iglesia et al. [17] assessed immune cell infiltration in 11 TCGA tumor types using a set of immune signature genes from [18]. Senbabaoglu et al. [19] employed 24 immune cell-type-specific gene signatures from [18] to computationally infer the immune cell infiltration levels in tumor samples and defined a T-cell-infiltration score, an overall immune-infiltration score, and an antigen-presenting-machinery score to highlight the immune response differences between kidney cancer and 18 other tumor types from TCGA.
For high-dimensional data where the number of features is much greater than the number of samples (p >> n), there may not exist a single set of features that can deliver the optimal/suboptimal performance. For those data, repeated cross-validations may be needed and aggregated prediction from an ensemble of ensembles (boosting) is usually preferred [20]. Ensemble learning generates multiple prediction models from the training data, each with a different feature subset. By using multiple learners, the generalization ability of an ensemble can be much better than any of the individual constituent learning algorithms [21][22][23]. Popular ensemble learning algorithms include bagging [24], boosting [25,26], and stacking [27]. Bagging trains a number of learners each from a different bootstrap sample and combines the predictions using a majority vote. Random Forest [28] is a popular technique in this category. Boosting iteratively adds new weak learners to correct the mistakes made by previous learners and collectively, the weak learners become a strong learner. The most common implementations of boosting are AdaBoost [29] and Gradient Boosting Machines (GBM) [30]. In stacking, one generates multiple different types of models to build intermediate predictions, which are subsequently combined by a second-level meta-learner. Generally speaking, ensemble learning consistently outperforms non-ensemble-based methods.
XGBoost (eXtreme Gradient Boosting) is an ensemble learning algorithm [31]. XGBoost extends simple CARTs (Classification And Regression Trees) by incorporating a statistical technique called boosting. Instead of building one tree, boosting improves prediction accuracy by building many trees and then aggregating them to form a single consensus prediction model [32]. XGBoost creates trees by sequentially using the residuals from the previous tree as the input for the subsequent tree. In this manner, subsequent trees improve overall prediction by modeling the errors of the previous tree. When the loss function is least squares, this sequential model building process can be expressed as a form of gradient descent that optimizes prediction by adding a new tree at each stage to best reduce the loss [33]. The addition of new trees is stopped either when a pre-specified maximum number of trees is reached or when the training errors do not improve for a pre-specified number of sequential trees. Both the approximation accuracy and execution speed of gradient boosting can be substantially improved by incorporating random sampling; this extended procedure is called "stochastic gradient boosting" [30]. Specifically, for each tree in sequence, a random subsample of the training data is drawn without replacement from the full training data set. This randomly selected subsample is then used in place of the full sample to fit the tree and compute the model update. XGBoost is an optimized distributed gradient boosting that achieves state-of-the-art prediction performances [31]. XGBoost uses second order approximation of the loss functions for faster convergence compared to traditional GBMs. XGBoost has been successfully used in mining gene expression data [34].
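To make the residual-fitting idea concrete, the following deliberately simplified boosting loop is built from plain regression trees. It only illustrates the principle described above; it is not the actual XGBoost implementation, which adds regularization, a second-order loss approximation, and column/row subsampling. The data and settings are placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                           # placeholder expression-like features
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, 200)   # placeholder continuous target

learning_rate, n_trees = 0.1, 100
prediction = np.full_like(y, y.mean())                  # start from the mean prediction
trees = []
for _ in range(n_trees):
    residual = y - prediction                           # errors made by the model so far
    tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, residual)
    prediction += learning_rate * tree.predict(X)       # each new tree corrects previous mistakes
    trees.append(tree)

print("training RMSE:", np.sqrt(np.mean((y - prediction) ** 2)))
```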
Previously, we used XGBoost for pan-cancer classification based on gene expression data [35]. In this work, we used XGBoost to select a subset of genes whose gene expression levels can predict tumor purity. Our work was prompted by the observation that expression of many immune genes was negatively correlated with tumor purity where tumors with high immune gene expression tended to have fewer cancer cells and vice versa. We applied XGBoost to 33 TCGA tumor types for which both RNA-seq gene expression and ABSOLUTE tumor purity estimates [8] were available. We carried out several analyses for all tumor types combined (pan-cancer) using all genes. We showed that XGBoost can accurately predict tumor purity values using gene expression data alone. By considering how useful or important each gene was to the model's prediction, we selected the top 10 most important genes as putative markers for tumor purity prediction. For TCGA data and an independent set of non-TCGA samples, we showed that predictions based on the expression levels of only these top ten genes are almost as accurate as predictions based on using all genes. We propose that these ten genes may serve as biomarkers for tumor purity prediction.
Data
We downloaded the processed TCGA RNA-seq gene expression data (RNA final) from the Pan-Cancer Atlas Publication website (https://gdc.cancer.gov/about-data/publications/pancanatlas) for 11,069 samples from 33 tumor types. The data were pre-processed and normalized by TCGA to remove all batch effects [9]. We log2-transformed the normalized read counts (per million reads mapped) for RNA-seq data (all values less than 1 were assigned value 1 before transformation). We filtered out genes with missing values or zero variances. The number of remaining genes was 17,170. We also removed 8 duplicate samples that were from the same patients.
The tumor purity estimates by ABSOLUTE [8] for tumor tissue samples from 33 TCGA tumor types were obtained from [9]; these purity estimates are summarized for each tumor type in Additional file 1: Figure S1. The observed purity estimates are bound between 0 and 1. However, the predicted value can be greater than 1 or smaller than 0. To prevent the predicted value from being outside of these boundaries, we applied the logit transformation to map the original purity values in the range of [0, 1] to the real line, thereby enhancing concordance with the regression for continuous outcomes implemented in XGBoost. We re-assigned the purity values of 1.00 to 0.9975, which is the value midway between the two biggest purity values. To express logit-transformed predicted purity values derived from XGBoost back in the original scale, we applied the inverse logit transformation.
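A short sketch of this purity transformation: purity values of exactly 1.00 are re-assigned to 0.9975, the logit maps [0, 1] onto the real line for regression, and the inverse logit maps predictions back to purities. The example array is a placeholder, not actual ABSOLUTE estimates.

```python
import numpy as np

def to_logit(purity):
    purity = np.asarray(purity, dtype=float).copy()
    purity[purity == 1.0] = 0.9975          # midway between the two largest observed values
    return np.log(purity / (1.0 - purity))  # logit: [0, 1] -> real line

def from_logit(z):
    return 1.0 / (1.0 + np.exp(-np.asarray(z, dtype=float)))  # inverse logit

purity = np.array([0.35, 0.80, 1.00])       # placeholder purity estimates
z = to_logit(purity)                        # values used as the regression target
print(np.round(from_logit(z), 4))           # -> [0.35, 0.8, 0.9975]
```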
For an independent test dataset, we aggregated single-cell RNA-seq (scRNA-seq) expression data (GSE118390) from patients with triple-negative breast cancer (TNBC) [6] to obtain the "bulk" RNA-seq gene expression data. The original authors carried out the control analysis and included 1189 cells and 13,280 genes after filtering. The data for cancer cell proportions in those samples (Additional file 3: Table S2) were kindly provided to us by the authors and shown in Fig. 1 of [6].
Tuning parameters
After testing a total of 1486 parameter combinations on the training data using a 10-fold cross-validation procedure, we chose the parameter set with the best cross-validation performance to use for all subsequent analyses unless stated otherwise. The cross-validation performance of each parameter combination for pan-cancer purity prediction is shown in Additional file 4: Figure S2. There was a tradeoff between number of trees and learning rate, i.e., a smaller learning rate required more trees. Therefore, we fixed the maximum number of trees to be a large number (up to 5000) with reasonable computational time and adopted an early stopping rule: stopping when training performance for the dataset did not improve in five additional trees. The best parameter set for all tumor types combined was: learning rate of 0.05, maximum tree depth of 4, minimum leaf weight of 1, 65% of genes used to grow each tree, and 85% of samples used to grow each tree.
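The reported parameter set maps roughly onto the XGBoost interface as shown below. This sketch uses the Python scikit-learn wrapper rather than the R package used in the study, the training data are random placeholders, and in older XGBoost versions the early-stopping argument is passed to fit() instead of the constructor.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1000))             # placeholder gene-expression matrix
y = rng.uniform(-3, 3, size=500)             # placeholder logit-transformed purity values

# Best pan-cancer parameter set reported above (names follow the XGBoost API).
model = xgb.XGBRegressor(
    n_estimators=5000,            # up to 5000 trees, with early stopping
    learning_rate=0.05,
    max_depth=4,
    min_child_weight=1,           # minimum leaf weight
    colsample_bytree=0.65,        # 65% of genes used to grow each tree
    subsample=0.85,               # 85% of samples used to grow each tree
    early_stopping_rounds=5,      # stop when no improvement in five additional trees
    objective="reg:squarederror",
)
model.fit(X[:400], y[:400], eval_set=[(X[400:], y[400:])], verbose=False)
pred = model.predict(X[400:])
print("validation RMSE:", np.sqrt(np.mean((pred - y[400:]) ** 2)))
```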
Performance on pan-cancer tumor purity prediction considering all genes
For each tumor type, we built 1000 models through 100 repetitions of 10-fold cross-validation. We ran each model with the same selected tuning parameter set. Each individual model consists of a sequence of trees. Although the average training performance of these individual models was nearly perfect [near perfect correlation between observed and predicted values and low root mean squared error (RMSE)], overtraining was not a major concern as both cross-validation and test performances were also high (Additional file 5: Table S3A).
To construct our final predictor for the test samples, we averaged the predicted tumor purity values from the 1000 predictions for each of those samples. The Pearson correlation coefficient and RMSE between the final predicted and observed tumor purity values for the test samples were 0.795 (Fig. 2a) and 0.129, respectively.
To see if this observed prediction performance could have been achieved by chance, we applied our procedure to putatively null data sets generated by permutation (Additional file 6: Text -Permutation test). We showed that the models by XGBoost are meaningful and the good performance is unlikely to be attributable to chance (empirical P < 0.004).
Top-ranked genes for pan-cancer tumor purity prediction
Each of the 1000 models provided an importance score for each gene from up to 5000 boosts (see Methods). We averaged the 1000 scores for each gene to obtain the average score for the gene. When ordered, the overall importance score decayed quickly (Fig. 3a). Among the 17,170 genes, 731 had non-zero importance scores in all 1000 models. We re-ranked these genes based on the median value instead of the mean of the 1000 importance scores for each gene. The ten top-ranked genes remained the same for both mean- and median-based ranking. The top ten genes (Fig. 3b) were CSF2RB, RHOH, C1S, CCDC69, CCL22, CYTIP, POU2AF1, FGR, CCL21, and IL7R (Additional file 7: Table S4).
Performance on pan-cancer tumor purity prediction using only the ten marker genes
To see if the ten marker genes could serve as "universal" markers for tumor purity prediction using RNA-seq gene expression data, we repeated the above analysis using only the ten marker genes. First, we selected tuning parameters tailored to these ten marker genes using the procedure described (see Methods: Tuning parameters). With this new set of tuning parameters, we again carried out 100 repetitions of 10-fold cross-validation with the same training samples as before. Lastly, we applied the 1000 resulting models to predict tumor purity values in the same test samples. To our surprise, the ten genes did reasonably well. The average performance of these individual models with only ten marker genes as predictors was comparable, though slightly degraded, compared to using all genes as predictors (Additional file 5: Table S3b).
For the performance of our overall predictor based on averaging predictions from 1000 models, the Pearson correlation and RMSE between the final predicted tumor purity values and observed tumor purity values for the test samples were 0.719 (Fig. 2b) and 0.146, respectively. Thus, performance using only these ten marker genes suffers only slightly compared to performance using all genes.
Performance on individual tumor type purity prediction using only the ten marker genes
Above, we showed that the ten genes performed well on pan-cancer (combined) tumor purity prediction. To see if the ten marker genes could also perform well for individual tumor types, we carried out training and cross-validation for each of the 33 tumor types separately. Specifically, for each tumor type, we carried out the same 100 repetitions of 10-fold cross-validation using only the ten marker genes. However, we did not create a separate testing set as we did for pan-cancer purity prediction because several tumor types have small sample sizes (< 100). We identified an optimal set of tuning parameters for each tumor type using the same approach (see Methods: Tuning parameters) as with all tumor types combined. Across the 33 tumor types, the predictive performance of the ten marker genes varied widely: the median Pearson correlation ranged from 0.17 to 0.92 whereas the RMSE ranged from 0.08 to 0.26 (Fig. 4).
Although the ten marker genes performed reasonably well for most tumor types, performance was poor for three tumor types (LAML, PAAD, and THYM), with high median RMSE. The poor performance for LAML is likely due to the fact that we limited the genes to the 10 markers, which may not be optimal for those tumor types. LAML (acute myeloid leukemia) is a blood-borne cancer. We speculate that the models obtained from the largely solid tumors may not be appropriate for blood cancer. For PAAD and THYM, interestingly, two other methods (ESTIMATE [14] and CONSENSUS [15]) did not provide tumor purity estimates. For the 53 PAAD test samples, we computed both RMSE and correlations between XGBoost-predicted tumor purity values and ABSOLUTE-estimated tumor purity values, and compared those with the values from InfiniumPurify [12], a DNA methylation-based tumor purity prediction method. The predicted tumor purity values from XGBoost and InfiniumPurify both deviate from those from ABSOLUTE for some of the same samples (Additional file 8: Table S5A). For THYM, like XGBoost, there was little correlation between ABSOLUTE-estimated tumor purity values and InfiniumPurify-estimated tumor purity values (Additional file 8: Table S5B).
Comparison with ESTIMATE
In this comparison, we only considered the 2359 test set samples that were common between our 3104 test samples and all samples with purity data from ESTIMATE [14]. XGBoost outperformed ESTIMATE with a smaller RMSE (0.12 using all genes and 0.14 using the ten marker genes compared to 0.25 from ESTIMATE) and higher correlations (0.82 using all genes and 0.73 using the ten marker genes compared to 0.61 from ESTIMATE). The results are summarized in Additional file 9: Table S6. Moreover, ESTIMATE used 141 stromal and 141 immune marker genes whereas XGBoost used as few as 10 with better performance.
Comparison with Random Forest
Unlike for XGBoost, we did not specifically search the hyper-parameter space for the optimal hyper-parameter set for RF. Instead, we used the best hyper-parameter set we obtained from our XGBoost analysis as the candidate for the best parameters for RF and further carried out a fine-grid search around the candidate set (Additional file 10: Table S7). We chose the hyper-parameters (1000 trees, 100% of samples per tree, and 60% of features per split) that gave rise to the lowest out-of-bag error (Table 1).
XGBoost performed better than RF in terms of RMSE and Spearman correlation (Table 1) but worse in terms of Pearson correlation, which is subject to more influence by outliers than Spearman correlation. We believe that recursively fitting on the regression residuals in XGBoost [30] contributed to the better performance. Moreover, it is worth noting that XGBoost achieved similar RMSE using only five genes/features compared to all genes by RF (Table 1). We did not carry out feature selection using RF as it is computationally prohibitive for this dataset.
Performance on the independent TNBC dataset using all genes
Using the 134 TNBC RNA-seq samples from TCGA as the training samples, we also carried out the same 100 repetitions of 10-fold cross-validation. Predictions from the resulting 1000 models were subsequently averaged to predict tumor purity values for the six independent TNBC samples (not used in training) for which the "bulk" RNA-seq data were aggregated from scRNA-seq data [6]. Average cross-validation performance of the individual models was reasonably good, although we were predicting only six test samples (Additional file 11: Table S8A). For the overall predictor, the correlation coefficient between the experimentally obtained tumor purity values and our XGBoost-predicted tumor purity values was high (ρ = 0.98, Pearson correlation) (Fig. 5a) and the RMSE was low (0.133). This good performance by XGBoost was achieved even though the model was trained on ABSOLUTE purity estimates, whereas the purity estimates in the test samples were based on the cell types of the individual cells from the scRNA-seq data sets.
Performance on the independent TNBC dataset using only the ten marker genes
To see if the ten putative marker genes could also accurately predict tumor purity of the six independent samples, we repeated our entire analysis procedure (tuning parameter optimization; 100 repetitions of 10-fold cross-validation; averaging the 1000 predictions for each test sample) using the expression data of only the ten genes in the 134 training samples from TCGA. Average cross-validation performance of the individual models was comparable, though using 10 genes as predictors was somewhat worse than using all genes (Additional file 11: Table S8b). The correlation between the final predicted and experimentally obtained tumor purity values remained high (ρ = 0.88) (Fig. 5b), though the RMSE (0.239) was nearly twice as large as it was when using all genes.
Discussion
We applied XGBoost, a machine learning algorithm, to genome-wide RNA-seq data to unbiasedly select a subset of genes whose expression values could predict tumor purity obtained from ABSOLUTE analysis using copy number variation [8]. We combined 9318 TCGA tumor samples from 33 tumor types, carried out pan-cancer tumor purity prediction, and evaluated the quality of predictions through 100 repetitions of 10-fold cross-validation (Additional file 6: Text - Testing for convergence). Our final predictor was the average of 1000 predictions for independent test samples. We used all genes as well as a selected subset of ten marker genes in our predictions. XGBoost performed well and the correlation between the predicted and observed tumor purity values was generally high with low RMSE. XGBoost provides an importance score for each gene from each model reflecting how useful or important the gene was to the model's prediction. We wondered if the top-ranked ten genes could be used as a "universal" biomarker set for tumor purity prediction. To test this idea, we carried out three separate analyses using only the expression data of the ten marker genes. First, we carried out pan-cancer (all 33 tumor types combined) tumor purity prediction through model training, cross-validation, and testing. Second, we carried out tumor purity prediction for individual tumor types through model training and cross-validation. Lastly, we trained XGBoost models on 134 TCGA triple-negative breast cancer RNA-seq tumor samples and used the resulting models to predict tumor purity of six independent samples that were not from TCGA and not used in model training. In these analyses, we showed that the correlation between the predicted and observed tumor purity values was relatively high and the RMSE was generally small, except for three tumor types (LAML, PAAD, and THYM). Therefore, we suggest that the ten-gene set could serve as a biomarker for tumor purity prediction using gene expression data.
We speculate that most of the ten marker genes are largely expressed by the tumor stroma, not the cancer cells. It is not clear if the expression of these genes in tumors merely reflects the amount of infiltrating immune cells or is indicative of some (unknown) fundamental biological processes. All genes, except for two (C1S and CCDC69), had high expression in either transformed lymphocytes, whole blood, or spleen (https://gtexportal.org), suggesting that these genes may be predominantly expressed by the infiltrating immune cells in the tumors. C1S was ubiquitously expressed in most organs except for the brain. C1S was also ubiquitously expressed in most cell lines with highest expression in transformed fibroblasts, an important component of the tumor microenvironment. However, immunostaining of tumor tissue samples showed that all malignant cells were negative (https://www.proteinatlas.org/), suggesting that the observed C1S expression in tumor samples may also largely come from the tumor stroma. Like C1S, CCDC69 was also ubiquitously expressed except in the brain. However, unlike C1S, malignant cells also showed moderate to strong cytoplasmic staining in tumor samples for nearly all TCGA tumor types. Interestingly, the expression levels of CCDC69 in TCGA tumor samples were negatively correlated with the tumor purity for the same samples for nearly all TCGA tumor types (data not shown), suggesting that the observed CCDC69 expression in those tumor samples may also largely come from the tumor stroma. Taken together, these results appear to suggest that these genes are largely expressed in the tumor stroma and that may explain why they were selected as the most important genes for tumor purity prediction.
Many methods have been proposed for tumor purity prediction, such as CONSENSUS [15] and ESTIMATE [14]. Other methods for tumor purity predictions consider methylation data [11][12][13] and copy number variation [8,36,37]. Most non-genomic (e.g., transcriptome-or methylome-based) purity prediction methods consider a subset of preselected stromal-cell-expressed genes or stromal-cell-specific methylation loci as predictors. ESTIMATE involves a set of 141 "universal" stromal genes selected using a sophisticated computational scheme [14]. In our approach, XGBoost considered all genes. Furthermore, our marker genes were not preselected, but identified by XGBoost.
Tumor purity is inversely correlated with the expression of genes that are active in the tumor stroma. We found that the expression levels of most predictive genes were negatively correlated with tumor purity. XGBoost also identified some genes as predictive whose expression levels were positively correlated with tumor purity (data not shown), suggesting that those genes were expressed primarily by cancer cells in the tumor samples. We plan to use XGBoost to systematically analyze each tumor type separately. We envision that there exist common and unique gene sets that are predictive of tumor purity among different tumor types. We believe that those common and unique genes might be reflective of the commonality and uniqueness of their respective tumor microenvironments and that identifying them might shed light on barriers to the efficacy of immunotherapy with different tumor types [38].
Our analysis required training data with both tumor purity estimates and gene expression data. To our knowledge, only TCGA has produced large datasets with both attributes. This makes independent validation of our models challenging. Single-cell experiments can estimate cell populations in a tissue sample. However, most single-cell experiments do not measure RNA-seq expression in bulk samples, and the number of tissue samples considered in a single-cell experiment is typically small (e.g., under 10) due to high sequencing cost. Nonetheless, we could infer the "bulk" RNA-seq expression of a tissue sample from the expression data of individual cells in the tissue. Using scRNA-seq data from tissue samples from six TNBC patients, we showed that XGBoost trained on TCGA RNA-seq samples can predict cancer cell proportions in these independent test samples with high correlation using only the 10 marker genes.
The approach that we outlined, which uses XGBoost to derive predictive biomarkers, will be applicable to expression data from any platform, such as microarrays (see Additional file 6: Text - Test on microarray data and Additional file 12: Figure S3), but the quality of the predictions would certainly depend on how well the data from a given platform reflected the underlying biological reality. The XGBoost algorithm should work well regardless of preprocessing or normalization steps [33]. If the data from different platforms provided comparably accurate reflections of the underlying reality, we would expect the identified biomarkers to serve well, regardless of platform. On the other hand, the exact predictive rule that we derived using RNA-seq data from TCGA will not necessarily transfer to other platforms or other scaling or normalization on the same platform. Our predictive rule relies on regression trees where the predictors in each regression are expression levels. To the extent that expression levels from different platforms are inherently on different scales or have been normalized differently, the estimated coefficients in the component regression models derived from one platform will differ from the corresponding estimated coefficients derived from another platform. Consequently, the exact predictive rule, which involves specific estimated coefficients, derived from data of one platform may not perform well on data from a different platform.
The robustness of gradient boosting machines against small perturbations has been documented [39]. We believe that the TCGA data that we used was appropriately normalized. Nonetheless, outliers can be problematic. Robust methods for dealing with data with outliers have been developed and are thoroughly reviewed [40].
Like many other methods that minimize the L2 loss function, XGBoost would not be robust to outliers/contamination in the response variable. This means that the fidelity of our predictive models depends on the quality of the tumor purity data. For three TCGA tumor types (LAML, PAAD, and THYM), XGBoost performed poorly. This poor performance was also confirmed by an independent method, InfiniumPurify [12], which uses DNA methylation data from largely the same patients for tumor purity prediction. For these three tumor types, the models may be mis-specified or the tumor purity values may be "outliers". Given this finding, we would not recommend using our models to predict tumor purity for those three tumor types.
There are other ensemble learning algorithms applicable to our problem. XGBoost has some advantages, especially its low computational time complexity and high performance [31]. The XGBoost software is optimized for large-scale machine learning problems on high performance computers. This efficiency is especially needed for our dataset, which consisted of ~20,000 genes/variables and ~10,000 samples.
Finally, we acknowledge that our ten marker genes may not be optimal for all tumor types. It is likely that tumor-type specific predictive models may perform better than models derived from pan-tumors. We were surprised that such a "universal" (although imperfect) gene set could be found, suggesting that expression levels of some immune genes in solid tumors might be indicative of the amount of immune cell infiltration in tumors. Also, our external validation of the ten marker genes as predictive biomarkers was limited by small sample size with only six samples. This is typical of current single cell studies due to high cost. As the single cell sequencing technologies improve further, analysis of a larger number of tissue samples may likely be routine. Until then, we believe purity prediction using marker genes or methylation sites remains useful.
In summary, we have demonstrated that XGBoost can identify a subset of genes whose expression levels could predict tumor purity. We propose that the expression levels of the ten genes may serve as biomarkers for tumor purity estimation.
Framework
We built an ensemble of stochastic gradient boosted tree models using the training data to predict tumor purity in the test set samples (Fig. 6 and Additional file 13: Figure S4). Specifically, we carried out 100 repetitions of 10-fold cross-validation within the original training data. Each repetition was created by randomly shuffling the order of the training set. Then, for each 10-fold cross-validation, 10% of the samples were sequentially set aside as validation samples and the remaining 90% of the samples used as training samples. This procedure created 100 × 10 = 1000 training-validation partitions of the original training samples. Based on RMSE, 1000 models appear to be adequate (Additional file 14: Figure S5). We fit the XGBoost (R package: version 0.82.1; https://cran.r-project.org/web/packages/xgboost/) models to each training subsample and used the resulting fitted models to predict the tumor purity values of the corresponding validation subsamples and again for the original test data (3104 samples). We used the average of 1000 predictions for each test sample as its final tumor purity prediction. Our reasons for using an ensemble of tree ensembles are twofold. First, we could boost the prediction performance by leveraging a model averaging approach. Second, since we sought to avoid overfitting by using only a random subset of genes to grow each tree, we could by chance rank an important predictor/gene low. To ensure that we ranked genes appropriately, we repeated the procedure with slightly different training samples many times.
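A compressed sketch of this repeated cross-validation ensemble is shown below: each repetition reshuffles the training data into 10 folds, each fold trains one XGBoost model on the 90% portion, and the final test-set prediction is the average over all models. Only a few repetitions and small placeholder data are used here to keep the example runnable; the study itself used 100 repetitions (1000 models), the tuned parameter set, and the R xgboost package.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(300, 200)), rng.uniform(-3, 3, 300)   # placeholders
X_test = rng.normal(size=(50, 200))

n_repetitions, n_folds = 3, 10      # the study used 100 repetitions x 10 folds = 1000 models
test_predictions = []
for rep in range(n_repetitions):
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=rep)   # reshuffle each repetition
    for fit_idx, _val_idx in kf.split(X_train):                    # 90% fit / 10% validation
        model = xgb.XGBRegressor(n_estimators=200, learning_rate=0.05, max_depth=4,
                                 subsample=0.85, colsample_bytree=0.65,
                                 objective="reg:squarederror")
        model.fit(X_train[fit_idx], y_train[fit_idx])
        test_predictions.append(model.predict(X_test))

final_prediction = np.mean(test_predictions, axis=0)   # bagging-style average over all models
print(final_prediction[:5])
```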
Tuning parameters
XGBoost employs eight tuning parameters (Additional file 15: Table S9) to control "bias-variance" tradeoffs. Complicated models (e.g., many boosted trees in a sequence) may fit the training data well but not the testing data, a situation called "overfitting." XGBoost provides two general ways to avoid overfitting. First, one could adjust model complexity by changing the values of the tuning parameters: maximum tree depth, minimum leaf weight, and minimum split gain. The number of trees and the tree depth determine the final tree's structure and complexity. Because each new tree in sequence tries to correct mistakes made by previous trees, shallow trees with a depth of 4-6 are often preferred [30]. The second way is to add randomness to make training more robust to noise. Randomness can be adjusted by setting the sub-sampling rate at each sequential tree and/or by using a subset of randomly selected features for splitting. The model's learning rate, another important tuning parameter, determines how much each tree contributes to the overall model. A low learning rate will increase the number of trees in a sequence and should result in better performance. The tradeoff is that it will increase computational cost. The final predictive model is a linear combination of all trees in the sequence with their contributions weighted by the learning rate.
To identify the best combination of tuning parameters, we carried out a 10-fold cross-validation on the training data (6214 samples) for each of 1486 possible tuning parameter combinations. This step took about one week to complete on a single server (Intel processors, 64 cores, 2.30 GHz CPU). We used the parameter combination with the best cross-validation performance to train the final model using the training data.
Feature importance score
XGBoost automatically provides estimates of feature importance from a trained predictive model. "Feature importance score" refers to a score indicating how useful or important the feature (in this case, a gene) was to the model's prediction. Here, the importance score is calculated for a given gene for a single tree by summing the amount that each split point involving that gene improved the performance measure, weighted by the number of observations contributing at each split [31]. Denote the importance score for gene g in an individual tree t by S_gt. Within a single model composed of a sequence of trees, the importance score for a particular gene g* in a model is the sum of all tree-specific importance scores for that gene over all trees in the model divided by the sum of all importance scores over all genes and all trees in the model, namely,

S_g* = ( Σ_{t=1}^{T} S_{g*t} ) / ( Σ_{t=1}^{T} Σ_{g=1}^{G} S_{gt} ).

We picked the genes with non-zero importance scores for all 1000 models (each fitted to a distinct training-testing partition of the original data; see below). For each model, we got a ranked list of genes based on their variable importance scores. We aggregated the 1000 ranked lists of genes by ranking genes based on their median rank across the 1000 models.
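A sketch of how such per-model rankings could be aggregated in code is given below: each fitted booster reports per-feature importance scores, features are ranked within each model, and genes are then re-ranked by their median rank across models. The data, the number of models, and the exact importance type are placeholders/assumptions, not the authors' settings.

```python
import numpy as np
import pandas as pd
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.1, 300)    # genes 0 and 3 are truly informative
gene_names = [f"gene_{i}" for i in range(X.shape[1])]

rank_table = []
for seed in range(5):                                    # small stand-in for the 1000 models
    model = xgb.XGBRegressor(n_estimators=100, max_depth=4, subsample=0.85,
                             colsample_bytree=0.65, random_state=seed,
                             objective="reg:squarederror")
    model.fit(X, y)
    # normalized per-gene importance scores from this fitted model
    scores = pd.Series(model.feature_importances_, index=gene_names)
    rank_table.append(scores.rank(ascending=False))      # rank 1 = most important

median_rank = pd.concat(rank_table, axis=1).median(axis=1).sort_values()
print(median_rank.head(10))                              # top ten putative marker genes
```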
Comparisons with ESTIMATE and Random Forests
The tumor purity values from ESTIMATE [14] were downloaded from ref. [14]. For the Random Forest analysis, we used the MATLAB built-in function (TreeBagger). We followed the same training-testing procedure as we did for XGBoost.
Performance evaluation
To evaluate performance, we compared the predicted tumor purity values with the observed tumor purity values. For each model, we computed both the RMSE and the Pearson correlation (ρ) between the predicted tumor purity values and the ABSOLUTE-estimated purity values as performance measures. To summarize the performance of individual XGBoost models, we computed the mean, standard error, and median for each measure (both correlation and RMSE) across the 1000 training-validation partitions and the testing data. To create the final predicted value for each test sample, we also averaged the 1000 predicted tumor purity values for the sample. The averaged predicted purity value can be viewed as the predicted value by bagging ensembles (1000 in our case). Prediction using bagging ensembles performs well [41], as shown in Random Forests [42]. Throughout the remaining manuscript, we referred to this predicted value by bagging ensembles as the (final) single predicted value, and we report its RMSE and correlation with ABSOLUTE tumor purity as measures of overall performance. Let ŷ denote the predicted tumor purity value, y the tumor purity value estimated by ABSOLUTE (observed), and N the sample size; then RMSE = sqrt( (1/N) Σ_{i=1}^{N} (ŷ_i − y_i)^2 ).

(Fig. 6 caption: A schematic of the XGBoost workflow. The shaded area indicates the data and its partitioning. The boxes inside the dashed lines depict training and testing procedures, where T stands for tree and GBM stands for gradient boosting machine. The two oval boxes on the right denote the outputs from XGBoost. A tree representation of the training and testing procedures is provided in Additional file 13: Figure S4.)
Obtaining "bulk" RNA-seq data from scRNA-seq data For each of the six TNBC scRNA-seq datasets, we summed the raw expression counts for each gene across all cells in the sample to obtain the raw gene-specific expression count for the "bulk" sample. This procedure resulted in six "bulk" expression profiles, one for each of the six samples. Next, we extracted the 134 TNBC RNA-seq samples from TCGA breast cancer RNA-seq samples. The two datasets were then merged using the common genes (15,076). Next, we normalized all 140 samples in the combined dataset using the median of the medians of expression values of the 134 TCGA samples. Specifically, we calculated median expression for each sample and then centered all the data on the median of those medians. Finally, we log 2 -transformed the normalized counts (log 2 (-count+ 1)). The 134 TCGA samples were used to train our model, and the resultant model was then applied to predict tumor purity values for the six independent "bulk" samples.
R code and test data
We have included the R source code, a demo dataset (TCGA triple negative breast cancer and an independent bulk RNA-seq data set from single cell sequencing), and a brief documentation on GitHub (https://github.com/yuanyuanli66/gbm.ensemble). The code allows users to build their own models using the TCGA RNA-seq data (not provided) for tumor purity prediction on their own expression data. Because of the size of the TCGA data, we did not include the RNA-seq data for all tumor types in the package. Such data can be downloaded from the Pan-Cancer Atlas Publication website (https://gdc.cancer.gov/about-data/publications/pancanatlas). | 8,715 | 2019-12-01T00:00:00.000 | [
"Biology"
] |
Hybrid black silicon solar cells textured with the interplay of copper-induced galvanic displacement
Metal-assisted chemical etching (MaCE) has been widely employed for the fabrication of regular silicon (Si) nanowire arrays. These features originate from the directional etching of Si, preferentially along <100> orientations, through the catalytic assistance of metals such as gold, silver, platinum, or palladium. In this study, a dramatic modulation of etching profiles toward pyramidal architectures was achieved by utilizing copper as the catalyst in a facile one-step etching process, which opens an exceptional route to the texturization of Si for advanced photovoltaic applications. Detailed examinations of the morphological evolution, etching kinetics, and formation mechanism were performed, validating a distinct etching model in which Si removal results from cycling reactions of copper deposition and dissolution under a quasi-stable balance. In addition, the impact of surface texturization on the photovoltaic performance of organic/inorganic hybrid solar cells was revealed through spatial characterization of voltage fluctuations upon light-mapping analysis. It was found that the pyramidal textures made by copper-induced cycling reactions exhibited sound antireflection characteristics and achieved a leading conversion efficiency of 10.7%, approximately 1.8 times and more than 1.2 times greater than that of untexturized and nanowire-based solar cells, respectively.
preferentially appeared to be one dimensional. This was because the movement of the metals, including gold (Au), silver (Ag), platinum (Pt), and palladium (Pd), utilized for initiating the catalytic etching tended to proceed directionally along <100> orientations 8,11,12, for which the back-bond strength is lowest in Si crystals. This fundamental restriction limited the diversity of etching profiles that could be obtained in a controlled manner with the MaCE technique. More critically, a great challenge in the cell construction of PV devices remained the creation of a uniform p-n heterojunction on one-dimensional nanotextures owing to their high aspect ratio. This might further increase fabrication complexity and thereby strongly hindered the practical application of such textures to the realization of high-performance solar cells.
Taken together, our strategy was to develop a single-step etching technique that enables the preparation of regular textures with pyramidal shapes rather than the conventional one-dimensional nanotextures, while the introduced surface structures still maintain sound light-trapping capability. Different from the established MaCE method, in which the etching reaction is the major motive contributing to the generation of the eventual structures, our cycling etching technique essentially consists of two consecutive processes at the Si surface: the cycling reactions start with an induced etching of Si through copper (Cu) deposition, followed by the spontaneous elimination of the deposited Cu, which otherwise acts as a barrier for the succeeding dissolution of Si. Anisotropic etching of Si could be sustained as long as these two competitive reactions stayed in dynamic balance, which was accompanied by gradual variations of the slant angle of the formed pyramidal structures. Detailed examinations of the morphological evolution, etching kinetics, and formation mechanism were performed, which validated the distinct etching model under the cycling reactions introduced at Cu/Si interfaces. In addition, a further attempt to practically realize organic/inorganic solar cells was undertaken. These were assembled by incorporating two types of pyramid-type textures with a p-type conductive polymer, featuring a p-n heterojunction with careful interfacial management. This approach holds promise for the development of high-performance nanotextured solar cells.
Fabrication of various Si textures.
A single-step procedure of Cu-induced cycling etching was conducted to fabricate pyramid-type textures with controlled profiles. Prior to the etching reaction, the monocrystalline Si substrates with (100), (110), or (111) orientation were cleaned through regular ultrasonication in acetone, isopropyl alcohol (IPA), and deionized (DI) water. After drying with N2 gas, the substrates were dipped into mixed solutions containing CuSO4 (0.01 M), HF (1.2 M), and various concentrations of H2O2 (0.14 M-0.9 M) for 15 min at 40 °C. Likewise, nanowire-based textures were fabricated using a conventional MaCE reaction in mixed solutions containing AgNO3 (0.02 M) and HF (4.5 M) at 40 °C. After rinsing with DI water several times, the residual Cu nanoparticles were removed completely by dipping the as-prepared samples in a concentrated HNO3 (65%) solution for 40 s and then rinsing with DI water several times.
Device construction. The hybrid solar cells were fabricated on n-type monocrystalline Si substrates (resistivity = 1-10 Ωcm). First, the as-prepared textured substrates with sizes of 2 cm by 2 cm were dipped in dilute hydrofluoric acid (2%) for 2 min in order to completely remove the native oxide grown on the Si surfaces. Subsequently, the substrates were rinsed with DI water and then dried with gentle N2 gas. A 200-nm-thick Al layer was then deposited on the back side of the Si substrates with an electron-gun evaporator to serve as the back electrode. After the construction of the Al electrode, the Si substrates were transferred to a home-made chamber with controlled humidity of 60% for 2 hr in order to grow an extremely thin layer of Si oxide (1~3 nm) on the surfaces of the texturized structures. In addition, a recent literature report showed that a very thin layer of Si oxide can also effectively passivate Si surfaces 13. Next, a PEDOT:PSS dispersion (Clevios PH1000) was spin-coated onto the texturized surfaces at a constant speed of 3000 rpm for 1 min and then heated at 120 °C for 15 min under ambient atmosphere. Finally, a conventional ITO glass (7Ω/square) was gently placed on the Si substrates coated with the PEDOT:PSS polymeric layers.
Characterizations. Morphologies of as-prepared Si textures were characterized using a field emission scanning electron microscopy (SEM, LEO 1530). Etching rates of Si were carefully estimated by measuring the weight loss of Si substrates under various concentrations of H 2 O 2 reactants after conducting a Cu-induced cycling etching. Cell performances were measured under a standard AM 1.5 G solar simulator equipped with J-V measurement system (Keithley 2400). Spatial voltage characterizations of fabricated hybrid solar cells with respect to the light illumination (laser diode with wavelength of 450 nm and output power of 5 mW) were performed with LBIV measurement system (LiveStrong Optoelectronics).
Results and Discussion
Morphological evolutions of the Cu-induced cycling etching can be observed in Fig. 1, where the involved concentrations of H2O2 were gradually increased through 0.14 M, 0.53 M, 0.65 M, and 0.90 M, respectively. These results manifested the strong correlation of the etching profiles with the H2O2 concentration. Specifically, at low H2O2 concentration (0.14 M), Si textures could hardly be found on the Si surfaces after the etching process, as evidenced in both the cross-sectional and top-view SEM observations in Fig. 1(a). With the addition of H2O2 up to 0.53 M, the etching profiles were dramatically altered. These features were believed to result from the overall contributions of the cycling reactions of Cu deposition and dissolution. This further clearly implied that the etching characteristics transitioned dramatically from nearly isotropic features toward anisotropic profiles depending on the H2O2 concentration. The formation of pyramid-shaped textures can be clearly observed in Fig. 1(b). With a continued increase of the H2O2 concentration to 0.65 M, texturization of Si was accelerated owing to the abundant H2O2 available for the dissolution of Cu, leading to pronounced profiles with deeper etching grooves. In this case, the shapes of the etching pores changed to inverted-pyramid structures, as imaged in Fig. 1(c). In the case of highly concentrated H2O2 (0.90 M), uneven and rough surfaces textured by the Cu-induced cycling etching were found, as shown in Fig. 1(d).
To gain more insight into the evolution of the etching profiles as the H2O2 concentration was tuned, the aspect ratio and angle of the pyramid structures were estimated, as shown in Fig. 2(a), where the pyramid angle (θ) is defined as the angle between the tapered sidewalls and the base of the etched pyramids. The structural measurements were performed on 60 sets of pyramid textures for each condition, and the average values were recorded. The evaluation of the structural profiles revealed that the aspect ratio of the etched textures peaked at around 0.7 at an H2O2 concentration of 0.65 M. Since the etching duration was kept identical in all of the tested experiments, this suggests that the formation of such inverted-pyramid features is kinetically preferred under Cu-induced cycling etching. The pyramid angle of the texturized structures also reached its largest value (58°) at a similar H2O2 concentration. Under this condition, the tapered facets of the Si textures belong to <111> orientations according to the lattice configuration of Si crystals. Moreover, the change of aspect ratio with H2O2 concentration approximately tracks the modulation of the pyramid angle, indicating that these two geometrical factors depend jointly on the H2O2 content, as evidenced in Fig. 2(a).
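As a quick plausibility check of these measured values, the ideal geometry of a {111}-faceted pyramid on a (100) Si surface can be computed from the crystal lattice alone. The short Python sketch below is illustrative and not part of the original work; it simply evaluates the textbook facet angle and the resulting aspect ratio, which lie close to the measured 58° and 0.7.

import math

# Angle between a {111} facet and the (100) base plane of silicon:
# cos(theta) = [111].[001] / (|[111]| |[001]|) = 1/sqrt(3)
theta = math.degrees(math.acos(1.0 / math.sqrt(3.0)))   # ~54.7 degrees

# Aspect ratio (height / base width) of an upright pyramid bounded by four
# {111} facets on a square base of width w: h = (w/2) * tan(theta)
aspect_ratio = math.tan(math.radians(theta)) / 2.0       # ~0.71

print(f"ideal facet angle: {theta:.1f} deg, ideal aspect ratio: {aspect_ratio:.2f}")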
In addition, the surface coverage of the textures under different H2O2 concentrations was monitored to examine the fabrication uniformity of the etching technique. This was achieved by quantitatively evaluating the relative surface area occupied by textures via image analysis (ImageJ), as presented in Fig. 2(b). Furthermore, a uniformly distributed deposition of Cu nanoclusters covering the Si surfaces during the etching reaction was evidenced, which also played a significant role in producing the uniform textures, as shown in the Supplementary Information. Combining the corresponding results of the geometrical investigations in Fig. 2(a) and (b), it can be concluded that the controlled formation of pyramidal textures is indeed dominated by the relative amounts of the involved reactants, and that the etching features can be intentionally tuned by considering the structural parameters of aspect ratio, tapered angle, and surface coverage. Furthermore, there should exist a relative amount of the H2O2-HF-CuSO4 electrolytes that governs the reliability and stability of this etching technique.
To unveil the influence of the reactive composition on the etching kinetics, the concentration ratio of the H2O2-HF-CuSO4 electrolyte system, η, was defined as 14,15

η = C_HF / (C_HF + C_H2O2) × 100%,

and the etching rates of the Si substrates over a wide span of η were investigated for the Cu-induced cycling reaction, as presented in Fig. 3. Essentially, the evolution of the etching kinetics can be broken into contributions from three separate regions. In Region 1, the dissolution of Si is dominated by the direct galvanic displacement between Cu2+ ions and Si, followed by dissolution of the Si oxide with the HF etchant, as expressed by the following reactions 16-18:

Cu2+ + 2e− → Cu (cathodic deposition),
Si + 2H2O → SiO2 + 4H+ + 4e− (anodic oxidation),
SiO2 + 6HF → H2SiF6 + 2H2O (oxide dissolution).

In this condition, Cu nanoclusters grow favorably on the Si surfaces through galvanic displacement between Cu2+ ions and Si, because the electronegativity of Cu (1.9) is higher than that of Si (1.8) 16. Nucleation of the Cu nanoclusters takes place by withdrawing electrons from the Si substrate, accompanied by the direct oxidation of the Si surface in contact with Cu2+ ions. This oxidation of Si followed by oxide dissolution with the HF etchant is essentially isotropic with no preferred orientation, leaving a Si surface with fine pores after removal of the deposited Cu, as presented in Fig. 1(a). Also, the nature of the electroless deposition is highly subject to diffusion-limited circumstances; that is, the deposition reaction tends to slow down once dense Cu layers are generated. Metal deposition is eventually terminated because the oxide-dissolving species, i.e., F− ions, are incapable of reaching the Cu/Si interface to initiate removal of the grown oxide 19. Meanwhile, with the gradual reduction of η (H2O2 concentration above 0.30 M), a competitive dissolution reaction of Cu is introduced in addition to the active growth of Cu, which can be written as

Cu + H2O2 + 2H+ → Cu2+ + 2H2O.

The competition of Cu dissolution against the reduction of Cu2+ ions is responsible for the substantial decrease of the Si dissolution rate shown in Region 1 (η < 80%) of Fig. 3. It should be noted that the overall reaction still favors the continued deposition of Cu, since at this point the H2O2 reagent remains insufficient for the complete removal of the grown Cu clusters. By decreasing η toward 72% (H2O2 concentration above 0.50 M), the competitive processes of Cu deposition and dissolution reach a quasi-stable configuration; that is, a kinetic balance of the two reactions is established. This results in a monotonic decrease of the etching rate with decreasing η, and the etching evolution moves to Region 2 in Fig. 3. Interestingly, the etching characteristic in Region 2 follows a linear correlation with the composition ratio η, supported by a sound regression value of R2 = 0.98. This implies that the etching rate in Region 2 can be well controlled by tuning the molar ratio of the reactants in terms of η. In addition, the distinct Si texture morphologies of pyramidal shape [Fig. 1(b)] and inverted-pyramid features [Fig. 1(c)] appear as the dominant structures at η of 69% and 64%, respectively. In Region 3, the substantial concentration of H2O2 hinders the continued deposition of Cu on Si, and the etching of Si almost ceases to vary with η.
These features lead to an etching rate that remains essentially unchanged as η decreases further, and only a few irregularly distributed Si textures with porous sidewalls are created, as observed morphologically in Fig. 1(d).
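To make the composition ratio concrete, the short calculation below evaluates η for the tested H2O2 concentrations at the 1.2 M HF used in the etching bath. It is an illustration only; the point is that the quoted boundary values of roughly 80%, 69%, and 64% follow directly from the molar concentrations.

C_HF = 1.2  # mol/L, HF concentration of the Cu-induced cycling etch

def eta(c_h2o2, c_hf=C_HF):
    """Composition ratio of the H2O2-HF-CuSO4 electrolyte, in percent."""
    return 100.0 * c_hf / (c_hf + c_h2o2)

for c in (0.14, 0.30, 0.53, 0.65, 0.90):  # tested H2O2 concentrations, mol/L
    print(f"C_H2O2 = {c:.2f} M  ->  eta = {eta(c):.0f} %")
# 0.30 M -> 80 %, 0.53 M -> 69 %, 0.65 M -> ~65 % (close to the quoted 64 %)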
To explore the etching orientations obtained from the Cu-induced cycling reaction, differently oriented Si substrates, namely (110) and (111) single-crystalline wafers, were employed. Cross-sectional SEM investigations of the textured surfaces created from the (110)-oriented substrates are presented in Fig. 4(a), showing well-regular etched trenches running vertically to the substrate plane. Interestingly, the etching depth was approximately consistent with the results obtained on (100)-oriented wafers [Fig. 1(b)] after similar etching durations, whereas the etched profiles were dramatically different. In addition, no obvious etched structures could be found on the surfaces of the Si (111) substrates, as shown in the Supplementary Information. This implies the presence of a preferential etching direction in the Cu-induced cycling reaction regardless of the substrate orientation. The orientation of the textures formed on the (110) substrates was further examined 20, as presented in Fig. 4(b). This was performed by characterizing the majority of the textures' facets from the top-view SEM observations. The projections of the two major directions onto the (110) plane follow two principal axes perpendicular to each other, as evidenced in the inset of Fig. 4(b). It can be concluded that these directions belong to <111> orientations, in good agreement with the profile examinations shown in Figs 1 and 2.
We next attempted to elucidate the formation mechanism of the pyramid-type Si textures uniformly covering the Si surfaces, as illustrated in Fig. 5. It has been extensively reported that the catalytic etching of Si assisted by well-defined metal structures results in the formation of one-dimensional Si nanostructures 21. Etching conditions such as the reactive species, reaction temperature, and time may be varied depending on the desired lengths or dimensions of the etched products 22,23. Still, the formed nanostructures are consistently one-dimensional in geometry regardless of which catalytic material, such as Au, Ag, Pt, or Pd, is employed. In fact, these catalysts can withstand a lengthy etching process. In such a case, the decomposition of the H2O2 oxidant takes place close to the catalyst surfaces, where the generated holes preferentially drive the dissolution of Si beneath the catalytic sites through a hole-injection process in the HF-containing environment 24. As long as these catalysts pave the primary route for Si etching, directional dissolution of Si with one-dimensional geometry is energetically predominant, since any change of etching direction inevitably consumes additional energy to some extent.
Nevertheless, the use of Cu as catalyst is believed to pave an alternative, unique way for the texturization of Si. While the molar ratios of H2O2-HF-CuSO4 lie within Region 2 [Fig. 3], the as-deposited Cu nanoclusters are rapidly dissolved in the H2O2-containing solution before continued growth of Cu layers can occur, as depicted in Fig. 5. This step is quite crucial for the intended modulation of the Si surfaces because it encourages newly arriving Cu2+ ions to reside and nucleate at texturized Si surfaces that have already experienced the electroless deposition/dissolution of Cu. According to the pioneering study on the electrochemical fabrication of porous Si reported by V. Lehmann 25, holes accumulated by an external electric field are preferentially injected into existing pits of small curvature radius during electrochemical etching. Thus, it can be understood that the hole injection supplied by Cu2+ ions is facilitated in the vicinity of textured sites rather than un-etched regions, and effective galvanic deposition of Cu can be expected because the exchange of carriers is locally confined to the texturized Si surfaces. Such redeposition of Cu is accompanied by the breaking of Si surface bonds that drives the dissolution process. Through the cycling reactions of Cu deposition and dissolution, the etching characteristics transition dramatically from nearly isotropic features toward anisotropic profiles. Accordingly, the formation of intermediate pyramidal textures can be clearly observed in Fig. 1(b), where the corresponding η is 69% (C_H2O2 = 0.53 M).
On the other hand, when η was reduced to 64% (C_H2O2 = 0.65 M), inverted-pyramid Si textures with a pronounced aspect ratio were formed owing to the abundance of H2O2 accelerating the cycling reaction of Cu deposition/dissolution. It should be noted that the rate of the Cu-induced cycling etching is more likely determined by the breaking of the surface bonds of Si through the electroless displacement of Si with Cu 26. Since no additional bias such as an electric field or heat was applied to this system, the etching of Si preferentially takes place at energy-favorable sites in order to minimize the energy barrier responsible for nucleation. Considering the crystalline configuration of Si, it has been reported that the <111> orientation constitutes the most stable bonding configuration in Si crystals according to the surface-bond model 27. This explains the successive formation of the inverted-pyramid textures, whose {111} planes left on the texture sidewalls essentially behave as an etch-stop lattice configuration. Regarding the origin of the texture preparation, it should be emphasized that the fabrication uniformity of this etching technique relies heavily on the fluid mechanics of the aqueous system, as the overall process involves multiple reactions localized in the vicinity of the Cu nanoclusters 28. In fact, no obvious textures could be created under etching conditions without magnetic stirring, as shown in the Supplementary Information. Moreover, introducing agitation of the aqueous reagents at the optimal stirring condition (stir rate = 250 rpm) readily facilitated the cycling reactions of Cu responsible for the anisotropic texturization of Si, and thereby yielded uniform textures covering the entire Si surface.
In addition to the structural examinations and mechanism study, the spectral reflectance of Si textures of various shapes was investigated, as demonstrated in Fig. 6. Here, nanowire samples and the two types of pyramidal structures were prepared with conventional Ag-based MaCE and with Cu-induced cycling etching, respectively. Compared with planar Si, whose reflectivity is above 40% across the broadband illumination from 300 nm to 800 nm, all texturized architectures showed greatly suppressed reflectivity, with average values of 3.7% for Si nanowires, 8.9% for Si pyramids, and 13.6% for Si inverted pyramids. It has been reported that illuminated light undergoes multiple scattering at nanoscale features whose dimensions are smaller than the wavelength of the incoming light 29. This qualitatively explains the most pronounced suppression of light reflection in the case of the nanowire-based textures. Aside from the light-scattering characteristics, the effective refractive index n(z) of the textures can be described according to effective medium theory as 30

n(z) = [ f(z) n_Si^q + (1 − f(z)) n_air^q ]^(1/q),

in which z represents the position along the texture thickness, f(z) the local fraction of Si, and n_Si and n_air the refractive indices of Si and air, respectively. Accordingly, the subwavelength pyramidal textures possess an effective refractive index intermediate between those of the Si substrate and the surrounding air; the pyramidal geometry introduces a gradual change of refractive index that minimizes the optical loss at the Si/air interface, thus leading to suppressed reflectivity over the broadband solar region. On the other hand, the distinct optical responses of the Si pyramids (average reflectivity = 8.9%) and inverted pyramids (average reflectivity = 13.6%) can be further elucidated through the schematic illustration of the optical paths shown in Fig. 6. Additional interfacial reflection of light occurs at the flat top surfaces of the inverted pyramidal textures, eventually limiting their antireflection capability for trapping the incoming light.
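As an illustration of this graded-index picture, the sketch below evaluates n(z) through the depth of a pyramidal texture. It is not taken from the paper: the exponent q = 2 (averaging the dielectric constant) and the quadratic Si filling profile of an upright pyramid are assumptions made purely for illustration.

import numpy as np

n_si, n_air = 3.9, 1.0   # refractive indices (the Si value is an assumed visible-range number)
q = 2.0                  # assumed exponent: q = 2 corresponds to averaging epsilon = n^2

def si_fraction_pyramid(z):
    """Si volume fraction at normalized depth z (0 = apex plane, 1 = base) for
    an upright square-based pyramid: the filled cross-section grows as z^2."""
    return np.clip(z, 0.0, 1.0) ** 2

def n_eff(z):
    f = si_fraction_pyramid(z)
    return (f * n_si**q + (1.0 - f) * n_air**q) ** (1.0 / q)

for z in np.linspace(0.0, 1.0, 6):
    print(f"depth z = {z:.1f}  ->  n_eff = {n_eff(z):.2f}")
# The index rises smoothly from ~1 (air) to ~3.9 (bulk Si), i.e. a graded
# antireflection profile instead of an abrupt Si/air step.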
To explore the photovoltaic (PV) behavior of the two types of pyramidal Si textures, p-n heterostructures were created by combining the n-type Si textures with p-type conductive polymers, as depicted in Fig. 7(a). A conventional ITO glass and a deposited Al layer serving as top and bottom electrodes, respectively, were introduced to establish the basic electrical connections; the full cell-construction procedure and the characterization of the PV performance are described in the Experimental Section. Figure 7(b) presents the measured current density (J)-voltage (V) curves of the sandwich-type hybrid solar cells. The impact of surface texturization on cell performance can be clearly revealed by comparing the different texturized structures, namely bare substrates (planar Si) as references, nanowires, pyramidal, and inverted-pyramid textures, while keeping all other fabrication procedures identical. The major PV parameters extracted from the J-V measurements are summarized in Table 1. In addition, the measured photovoltaic results of hybrid solar cells texturized with conventional alkaline-based etching can be found in the Supplementary Information. Among these devices, the planar-Si-based solar cells showed the lowest cell efficiency (6%), owing to their limited short-circuit current density (J_sc). This can be attributed to the high reflectivity of planar Si, which significantly hinders the effective absorption of photons that supply photogenerated carriers 31,32. After texturization, the J_sc values of the three texturized solar cells were evidently improved. The highest J_sc appeared in the nanowire-based hybrid solar cells, corresponding well to their lowest light reflectivity shown in Fig. 6. Nevertheless, the high aspect ratio of the nanowire architecture inevitably impeded uniform coverage of the p-type polymer over the underlying nanowire textures, which is reflected in the reduced open-circuit voltage (V_oc) caused by the poor deposition coverage of the polymeric layers 33. At the same time, the limited junction area of the polymer/nanowire heterostructure led to a low fill factor (FF) resulting from insufficient charge-transport pathways for the photoexcited carriers 33,34. These combined effects offset the large improvement of J_sc and ultimately limited the cell efficiency to 8.8%. This severe trade-off between pursuing low optical reflectivity and sustaining a large-area heterojunction with sound uniformity can be efficiently relieved by introducing pyramidal textures into the cell construction. These textures not only maintain sound antireflection characteristics but also ease the fabrication burden of creating uniform p-n heterojunctions, and therefore yielded the leading conversion efficiency of 10.7%, approximately 1.8 times and more than 1.2 times higher than those of the untexturized (planar Si) and nanowire-based solar cells, respectively. In addition, over 20 sets of devices were fabricated and measured, confirming the reliability of the conversion efficiency of the fabricated solar cells.
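The fill factor and conversion efficiency quoted in Table 1 follow directly from a measured J-V curve. The sketch below shows how these parameters are typically extracted under AM 1.5G illumination (100 mW/cm^2); the sample curve is a generic single-diode form with made-up numbers, not the authors' data.

import numpy as np

def extract_pv_parameters(v, j, p_in=100.0):
    """v in volts, j in mA/cm^2 (photocurrent positive), p_in in mW/cm^2.
    Returns (Jsc, Voc, FF, efficiency in percent)."""
    jsc = np.interp(0.0, v, j)              # current density at V = 0
    voc = np.interp(0.0, -j, v)             # voltage where J crosses zero
    p_max = np.max(v * j)                   # maximum power density, mW/cm^2
    ff = p_max / (jsc * voc)
    eff = 100.0 * p_max / p_in
    return jsc, voc, ff, eff

# Illustrative illuminated J-V curve (single-diode-like, arbitrary parameters)
v = np.linspace(0.0, 0.6, 400)
j = 30.0 - 1e-7 * (np.exp(v / 0.026) - 1.0)   # mA/cm^2
jsc, voc, ff, eff = extract_pv_parameters(v, j)
print(f"Jsc = {jsc:.1f} mA/cm^2, Voc = {voc:.2f} V, FF = {ff:.2f}, eff = {eff:.1f} %")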
Moreover, the origin of the performance improvement of the pyramid-based hybrid cells is further supported by spatial characterization with light-beam-induced voltage (LBIV) analysis 35, as compared in Fig. 7(c) and (d). By mapping the voltage fluctuations in response to the scanning light beam, one can clearly observe abrupt changes of the instantaneous voltage with respect to the surface position of the nanowire-based hybrid solar cells [Fig. 7(c)], which correlate strongly with structural non-uniformities or defective sites in the cell structure. A cross-sectional SEM image of the polymer/nanowire junction is displayed in the inset of Fig. 7(c), evidencing that the polymeric layer was predominantly deposited on top of the Si nanowires rather than covering the nanowire sidewalls. It should be noted that the hybrid solar cells incorporating inverted pyramids as textures suffered from similar problems limiting the resulting efficiency, as they showed comparable V_oc and FF, as found in Table 1. By contrast, a comparably uniform and invariant LBIV image under large-area scanning of the light beam was obtained for the pyramid-based hybrid solar cells [Fig. 7(d)], again validating the improvement in the created p-n heterostructures as well as in the resulting cell efficiency.
Conclusions
In conclusion, we have explored the role of Cu catalysts in modulating Si textures in an H2O2-HF-CuSO4 aqueous system, where the formation of both pyramidal and inverted-pyramid profiles with controlled aspect ratio and coverage uniformity can be tailored. This etching technique, driven by the cycling reactions of Cu deposition and dissolution, offers a compelling capability for texturizing Si surfaces beyond the generic unidirectional nanowire-based structures and was validated as a promising design for advanced photovoltaic applications. Our results explicitly revealed the improved conversion efficiency of the pyramid-based hybrid solar cells, achieved by reducing the defective sites that essentially arise from contact non-uniformities between the organic p-type layer and the inorganic n-type nanostructures. This approach provides a unique opportunity to understand nanoscale etching behavior that remained unclear from existing techniques and related studies, and may further extend to potential applications in various functional devices.
"Materials Science",
"Engineering",
"Physics"
] |
Deep learning for terahertz image denoising in nondestructive historical document analysis
Historical documents contain essential information about the past, including places, people, or events. Many of these valuable cultural artifacts cannot be further examined due to aging or external influences, as they are too fragile to be opened or turned over, so their rich contents remain hidden. Terahertz (THz) imaging is a nondestructive 3D imaging technique that can be used to reveal the hidden contents without damaging the documents. As noise and imaging artifacts are predominantly present in reconstructed images processed by standard THz reconstruction algorithms, this work aims to improve THz image quality with deep learning. To overcome the data scarcity problem in training a supervised deep learning model, an unsupervised deep learning network (CycleGAN) is first applied to generate paired noisy THz images from clean images (the clean images are generated by a handwriting generator). With such synthetic noisy-to-clean paired images, a supervised deep learning model using Pix2pixGAN is trained, which is effective for enhancing real noisy THz images. After Pix2pixGAN denoising, 99% of the characters written on one side of the Xuan paper can be clearly recognized, while 61% of the characters written on one side of the standard paper are sufficiently recognized. The average perceptual index of the Pix2pixGAN-processed images is 16.83, which is very close to the average perceptual index of 16.19 of the clean handwriting images. Our work has important value for THz-imaging-based nondestructive historical document analysis.
THz imaging typically has an image resolution of a few hundred microns 17, which is much lower than that of X-ray and optical imaging but still sufficient for historical document analysis. Although THz imaging is challenging for scanning thick books, it is promising for extracting information from documents consisting of a few paper layers, like letters and papyrus scrolls 15. Despite its clear advantages, THz imaging requires a trade-off between image quality and imaging speed 18. THz images typically suffer from speckle noise 19, especially in a fast imaging mode. Therefore, THz image denoising has an important value in practical applications. Various conventional algorithms have been applied to THz image enhancement, such as adaptive filtering [20][21][22] and deconvolution methods [23][24][25]. Adaptive filtering removes high-frequency noise while preserving the sharpness of edges. Deconvolution methods enhance THz image resolution and suppress noise based on accurate modelling of the point spread function 23. Compressed sensing techniques have also been widely investigated in THz image reconstruction 18,[26][27][28][29]. As compressed sensing is able to reconstruct images from relatively few measurements by exploiting sparsity, it has been demonstrated to be effective for high-speed THz imaging, like single-pixel THz imaging systems 28,29. For example, Li et al. 18 proposed to combine the ant colony algorithm with a compressive sensing technique based on local Fourier transform, which reduces noise well while preserving edge information.
Recently, deep learning has achieved impressive results in various fields, including THz imaging 30 . Deep learning has been applied to segmentation and classification tasks in THz images such as impurity detection in wheat 31,32 , breast cancer classification 33 , and heavy-metal detection in soils 34 . The low resolution problem of THz imaging can also be mitigated by deep learning based super-resolution techniques 35,36 . In rapid THz imaging, deep learning can significantly reduce algorithm complexity and increase signal-to-noise ratio [37][38][39][40][41][42] . For example, Ljubenović et al. 37 used a convolutional neural network (CNN) for THz image deblurring and their work demonstrates the efficacy of CNNs for denoising on synthetic THz data. Choi et al. 42 adopted the WaveNet from the field of speech and audio for THz image denoising in the frequency domain for 1D temporal signals. To overcome limited training data, Jiao et al. 43 proposed a Noise2Noise-based network for THz spectrum denoising using transfer learning from low-quality underwater images. However, deep learning has not been investigated in THz imaging for historical document analysis yet.
This paper aims to improve THz image quality for historical document analysis by reducing imaging noise and artifacts, which commonly exist in reconstructed images processed by standard THz reconstruction algorithms. Our work demonstrates the feasibility of THz imaging for information retrieval from sealed envelopes. It also demonstrates the efficacy of deep learning for THz image enhancement for better character recognition. To the best of our knowledge, our work is the first to apply deep learning to THz image enhancement for historical document analysis. Our experiments indicate that the deep-learning-enhanced image quality depends on the paper type and on whether the pages are written on one or both sides, which is valuable information for the community. From our point of view, this work is an important step towards real applications of THz imaging in nondestructive document analysis and will encourage more research on this topic.
Materials
The THz images used in this work were acquired at the Institute of Microwaves and Photonics (LHFT), Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany. For the measurements, the commercial radar imaging system "Quality Automotive Radome Tester" by Rohde & Schwarz was used. It is a multiple-input multiple-output (MIMO) radar consisting of 3 × 4 sparse subarrays with 1128 transmit channels and 1128 receive channels in total. The applied signal is a 64-point stepped-frequency continuous-wave signal in the range of 74 GHz to 79 GHz. More details of the scanner can be found at https://www.rohde-schwarz.com/us/product/qar.
To mimic historic letters concealed in envelopes, two types of paper were used to create the images for the dataset. One dataset was made with A4 standard paper and the other with A4 Xuan paper. The Xuan paper features great tensile strength, a smooth surface, a pure and clean texture, clean strokes, and excellent resistance to corrosion, moths, and mold. The Xuan paper is thinner than the standard paper, and hence the corresponding Xuan-paper THz images have less noise than the standard-paper THz images. In addition, the papers were written in two ways: one was written on a single side and the other on both sides. Therefore, the two-side written images contain overlapping letters. All the letters were written with the calligraphy ink Type 29770 from the Rohrer & Klingner company. For each letter, a 3-D volume was reconstructed with a size of 705 × 1025 × 97 voxels and an anisotropic voxel spacing of 0.5 × 0.5 × 0.573 mm³. To reduce the effect of paper wrinkles and tilt, maximum intensity projection along the Z-direction was used to convert the 3-D volumes to 2-D images. Two THz image examples from the standard paper and the Xuan paper are displayed in Fig. 1a,b, respectively. The THz signal is emitted and received by a Vector Network Analyzer (VNA) (Rohde & Schwarz ZVA 24) combined with frequency extenders (Rohde & Schwarz ZVA-Z325) for the range between 220 and 325 GHz 15. Two spline horn antennas and two polyethylene dielectric lenses were also used to achieve optimal focusing.
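The conversion from the reconstructed 3-D volume to a 2-D image via maximum intensity projection amounts to a single array reduction. The sketch below is illustrative; the axis ordering of the volume is an assumption, and random data stands in for a real reconstruction.

import numpy as np

# Reconstructed THz volume: 705 x 1025 voxels in-plane, 97 voxels along Z
volume = np.random.rand(705, 1025, 97).astype(np.float32)

# Maximum intensity projection along Z reduces the volume to a 2-D image and
# suppresses the effect of paper wrinkles and tilt.
mip_image = volume.max(axis=2)   # shape: (705, 1025)
print(mip_image.shape)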
Methods
As displayed in Fig. 1, the acquired THz images suffer from severe noise, which is an obstacle to the recognition of content in historic document analysis. This work aims to enhance THz images using deep learning methods. Since THz image acquisition is expensive and time-consuming, it is challenging to acquire paired clean and noisy THz images to train a supervised deep learning model. To overcome the lack of paired data, we propose to apply an unsupervised learning network, in particular CycleGAN, to generate paired images using unpaired synthetic clean images and real noisy images. The synthetic clean images are generated by a handwriting generator, and a learned CycleGAN model adds similar noise patterns to the synthetic clean images to construct clean and noisy image pairs. With such paired images, a supervised learning network, in particular Pix2pixGAN, is applied for the final THz image denoising. A handwriting generator 44 was employed to generate clean handwriting images. A black background was used, and random letters in white were drawn over it using random fonts. In total, 2000 clean images were created in this first step to train our models. The outputs of the handwriting generator are binary images of letters with different font types, saved in 8-bit PNG format. Figure 2 displays two exemplary images generated by the handwriting generator with two different fonts.
Synthesis of paired data via CycleGAN using unpaired data. Conversion between clean and noisy
images is fundamentally an image-to-image translation task. Since only unpaired, instead of paired, synthetic clean images and real noisy THz images are available, CycleGAN 45 is applied for such unpaired image-to-image translation in this work. CycleGAN consists of two generators, G_AB, which transfers an image from domain A to B, and G_BA, which transfers an image from domain B to A. In our work, domain A contains clean text images generated by the handwriting generator and domain B contains images with THz imaging noise and artifacts. Two discriminators D_A and D_B distinguish whether an image belongs to their respective domain. For the pair G_AB and D_B, the adversarial loss function is defined as

L_GAN(G_AB, D_B) = E_b[log D_B(b)] + E_a[log(1 − D_B(G_AB(a)))],

where a and b denote images from domains A and B, respectively. Similarly, the adversarial loss for G_BA and D_A is defined as L_GAN(G_BA, D_A). In addition, a cycle-consistency loss is applied to minimize the reconstruction error after an image of one domain is translated to the other domain and back,

L_cyc(G_AB, G_BA) = E_a[ ||G_BA(G_AB(a)) − a||_1 ] + E_b[ ||G_AB(G_BA(b)) − b||_1 ].

The overall objective function is

L(G_AB, G_BA, D_A, D_B) = L_GAN(G_AB, D_B) + L_GAN(G_BA, D_A) + λ_cyc L_cyc(G_AB, G_BA).

In our work, during training we kept the clean synthetic images created via our handwriting generator in one domain and the collected real THz images in the other domain, as displayed in Fig. 3a. During inference, the clean synthetic images are reused as the input test data, and CycleGAN outputs their corresponding paired noisy images, which share similar noise characteristics with the real noisy THz images. Figure 1 shows examples of THz images from one-side standard paper (a), one-side Xuan paper (b), two-side standard paper (c), and two-side Xuan paper (d). Note that during inference, the real noisy THz images can also be used as the input data, in which case CycleGAN outputs their corresponding denoised images. Such direct denoising by CycleGAN is also investigated in this work.
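A minimal sketch of how the cycle-consistency term described above is typically computed in PyTorch is given below. It is illustrative only; G_AB, G_BA, and the image tensors are placeholders, not the authors' implementation, and the weight 0.5 mirrors the value reported in the experimental setup.

import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_AB, G_BA, real_A, real_B, lambda_cyc=0.5):
    """Reconstruction error after translating A -> B -> A and B -> A -> B."""
    rec_A = G_BA(G_AB(real_A))   # clean -> noisy -> clean
    rec_B = G_AB(G_BA(real_B))   # noisy -> clean -> noisy
    return lambda_cyc * (l1(rec_A, real_A) + l1(rec_B, real_B))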
Image denoising using Pix2PixGAN. In this work, Pix2pixGAN 46 is applied to translate noisy THz images into denoised ones using paired data. Pix2pixGAN is a conditional GAN, which uses a U-Net as the generator G and a 5-layer patch-wise convolutional classifier as the discriminator D. G learns to convert noisy THz images into clean ones; D learns to distinguish the output denoised images from reference clean images. The objective of the conditional GAN is

L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_x[log(1 − D(x, G(x)))],

where x is the input, y is the target, and G tries to minimize this objective against an adversarial D that tries to maximize it, i.e., G* = arg min_G max_D L_cGAN(G, D). In addition, an ℓ1 loss function is applied to keep the generator's output close to the target with less blurring than an ℓ2 loss,

L_ℓ1(G) = E_{x,y}[ ||y − G(x)||_1 ].

The overall objective function is

G* = arg min_G max_D L_cGAN(G, D) + λ L_ℓ1(G).

As displayed in Fig. 3b, during training the synthetic noisy images from CycleGAN are used as the input and the corresponding clean images from the handwriting generator are used as the target. Only synthetic images are used for training. During inference, the real noisy THz images are used as the input, and Pix2pixGAN predicts their corresponding denoised versions.
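The generator objective combining the adversarial term with the ℓ1 reconstruction term can be sketched as follows. This is illustrative and not the authors' code; generator, discriminator, and the tensors are placeholders, and lam = 100 is the ℓ1 weight reported in the experimental setup.

import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(generator, discriminator, noisy, clean, lam=100.0):
    """Pix2pix-style generator loss: fool the patch discriminator on the
    (input, output) pair and stay close to the clean target in L1."""
    fake = generator(noisy)
    pred_fake = discriminator(torch.cat([noisy, fake], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))   # adversarial term
    rec = l1(fake, clean)                              # L1 reconstruction term
    return adv + lam * rec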
Experimental setup
Training data synthesis using CycleGAN. The synthetic dataset was created using CycleGAN. For this experiment, the code from Jun-Yan Zhu et al. 45 available on GitHub was adopted. The basic model for the discriminator is a PatchGAN with a patch size of 70 × 70, and a 9-layer ResNet is used as the generator. The dataset consisted of two domains: clean synthetic images created by the handwriting generator and the original THz images. The model was trained using an Adam optimizer with a batch size of 2 for 200 epochs with an initial learning rate of 0.0002 to generate 2000 noisy synthetic images similar to the initial THz images. The weight for the cycle-consistency loss λ_cyc was set to 0.5. For the generator, no dropout was applied. The numbers of input and output channels were both set to 1. The learning rate was kept constant for the first 100 epochs and linearly decayed to zero over the following 100 epochs. All images were resized and cropped to 256 × 256 during data preprocessing, and no data augmentation was used. The remaining parameters were kept unchanged with respect to 45. Image denoising using Pix2pixGAN. The U-Net is used as the Pix2pixGAN generator, which contains 8 down-sampling modules as well as 8 skip connections. For more details, please refer to the "unet-256" configuration in the authors' implementation 46. An Adam optimizer was used to train the model with a batch size of 5 for 200 epochs with a constant learning rate of 0.0002. The weight for the ℓ1 loss was set to 100. The model was trained with the 2000 paired noisy synthetic THz images created using CycleGAN, and the inference dataset consisted of the 34 original THz images. A validation dataset of 30 paired noisy synthetic THz images was used to monitor overfitting. The training and validation ℓ1 losses of the generator are displayed in Fig. 4, where no obvious overfitting occurs. As proposed in the original paper 46, random jitter was applied by resizing the 256 × 256 input images to 286 × 286 and then randomly cropping them back to 256 × 256. The model weights were initialized following a Gaussian distribution with zero mean and standard deviation 0.02. The remaining parameters were kept the same as in the standard version 46.
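The learning-rate schedule described above for CycleGAN (constant for the first 100 epochs, then linear decay to zero over the following 100) can be expressed with a standard PyTorch scheduler, as sketched below; the model is a placeholder.

import torch

model = torch.nn.Linear(8, 8)   # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

def lr_lambda(epoch, n_const=100, n_decay=100):
    """Factor 1.0 for the first n_const epochs, then linear decay to 0."""
    if epoch < n_const:
        return 1.0
    return max(0.0, 1.0 - (epoch - n_const) / float(n_decay))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(200):
    # ... one training epoch ...
    scheduler.step()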
Comparison algorithms. In this work, some exemplary results of other algorithms are also displayed for comparison. The bilateral filter 47 and its trainable version 48 are applied as representatives of well-known adaptive filters. In particular, trainable bilateral filter versions have been shown to provide robust denoising performance in the context of medical imaging 49. The iterative reweighted total variation (wTV) algorithm 50 is selected as a compressed sensing representative. The half instance normalization network (HINet) 51 is chosen as a general deep learning denoising representative. Self-supervised learning algorithms do not rely on labelled training data, which can avoid the data scarcity problem. In this work, three self-supervised learning algorithms are selected: Noise2Self 52, Noise2Void, and the self-supervised vision transformer (SiT) 53. SiT applies the latest transformer techniques. Noise2Self and Noise2Void are well-known self-supervised denoising algorithms. In our experiments, three trainable bilateral filter layers are trained in a self-supervised way using the Noise2Void method, following the setup of Wagner et al. 48.
Evaluation metrics. Since ground truth images are not available for the CycleGAN synthetic images and the denoised real THz images, a non-reference image quality metric called the perceptual index (PI) 54 is used to quantify these images. The perceptual index is calculated from the non-reference metrics of the natural image quality evaluator (NIQE) 55 and Ma's score 56, both of which extract image features to compute the perceptual quality. For super-resolution tasks on natural images, a lower PI value corresponds to richer fine structures and hence indicates better perceptual quality. In our application, a lower PI value corresponds in general to more high-frequency noise/artifacts. The average PI value of all the original noisy THz images is 6.85 with a standard deviation of 0.60, while that of the clean handwriting generator images is 16.19 with a standard deviation of 0.45. Therefore, larger PI scores are desired for our denoising results. In addition, a custom approach is applied to quantify the algorithms used to denoise the THz images. As this paper aims to reduce the noise of THz images and ultimately retrieve the original content, or at least its structure, characters visible to the naked eye are counted as a success; if any part of a character, or the entire character, is missing, it is not counted as a valid output. The same accuracy calculation is followed in the case of overlapping characters. Two overlapping characters count as a single structure for both-sided written images, since it is impossible to identify the characters separately in this case. The correct retrieval of the overlapping characters' structure is counted as a success. The results are differentiated by the type of paper.
The accuracy, i.e., the fraction of correctly recognized characters among all characters, is measured according to Eq. (8), and the comparative results for the Xuan paper and the standard paper are displayed in Table 2.
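A minimal sketch of this character-recognition accuracy is given below; it assumes that Eq. (8) is the fraction of successfully recognized characters (or overlapped character structures), which matches the counting rule described above.

def recognition_accuracy(num_recognized, num_total):
    """Percentage of characters judged recognizable by a human reader."""
    return 100.0 * num_recognized / num_total

# Example: 99 of 100 characters recognized on one-side Xuan paper
print(f"{recognition_accuracy(99, 100):.0f} %")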
Results
CycleGAN results. One exemplary synthetic image from CycleGAN is displayed in Fig. 5c, together with its corresponding clean input image (Fig. 5b) and a real THz image (Fig. 5a). Figure 5a,c have a similar appearance, although the two characters indicated by the arrows are hardly visible. The histograms of Fig. 5a,c are displayed in Fig. 5d, which indicates that the synthetic image also has an intensity distribution similar to that of the real THz image. The average mean intensity, average standard deviation, and average total variation (TV) values for all the real and synthetic images are listed in Table 1. For all the synthetic images, the average perceptual index is 4.52 with a standard deviation of 0.83. To show the overall appearance of the synthetic images, four additional synthetic images together with their PI values are displayed in Fig. 5e-h. Figure 5e is a typical example of the CycleGAN synthetic images, similar to Fig. 5c. Figure 5f-h have slightly different appearances: Fig. 5f contains high-intensity artifacts surrounding each character; Fig. 5g contains wrinkle-like structures in the background; Fig. 5h is very bright for both characters and artifacts.
Two exemplary CycleGAN prediction results using real noisy THz images as the input are displayed in Fig. 6. In Fig. 6b,e, although noise is reduced, many fragments of the characters are removed or random strokes are added. Hence, only a small portion of the characters is recognized. For example, in Fig. 6b only the characters "C", "D", "N", "P" and "S" are correctly restored, and in Fig. 6e only the characters "D", "G", "R", "N" and "S" are correctly restored. Figure 6 thus indicates that directly using CycleGAN for THz image denoising is insufficient. Turning to the Pix2pixGAN results, for the Xuan-paper input (Fig. 6a), the Pix2pixGAN output is entirely noise-free and all the characters in this image can be well recognized, as shown in Fig. 6c. The result for the standard-paper input, shown in Fig. 6f, is noise-free as well. Due to the relatively high level of noise in THz images of standard paper, some parts of certain characters are missing in Fig. 6f, for example, in the letters "E" and "Z". Nevertheless, other characters such as "C" and "S" are well recognized.
Two exemplary results of Pix2pixGAN on two-sided written THz images are displayed in Fig. 7. For both Xuan and standard paper, noise (artifacts) is removed, although some residual artifacts remain in the background. Compared with characters written on the back side, those on the front side are recognized much better. Nevertheless, the interpreted letter "G" in Fig. 7b is actually either "Q" or "O" in Fig. 7a, while the letter "C" in Fig. 7d is actually a mixture of two letters in the input image Fig. 7c.
The results of the comparison algorithms on the same THz image written on Xuan paper (Fig. 6a) are displayed in Fig. 8. Figure 8a demonstrates that a bilateral filter with hand-picked filter parameters can reduce the noise and image artifacts to some degree, but the resulting background appears blurry. In Fig. 8b, the noise and artifacts are reduced as well; however, some "shadow" artifacts remain. The HINet result in Fig. 8c shows the best binarization performance, although some artifacts remain. Like the bilateral filter and wTV, HINet is able to improve the image quality, but many fragments of the characters are missing. The self-supervised learning algorithms all fail to reduce noise or artifacts, as displayed in Fig. 8d-f. Therefore, they are excluded from further quantification in Table 2. The character recognition accuracies in Table 2 indicate that almost all the characters (99%) in the Pix2pixGAN results can be recognized for Xuan paper, while ≤ 50% of the characters are recognized in the results of the other algorithms. For standard paper, only 61% of the characters are recognized in the Pix2pixGAN results, but this is still higher than the accuracies of the other algorithms. The PI scores of the bilateral filtering and wTV results are smaller than the average PI (16.19) of the clean handwriting generator images, which indicates that noise and artifacts remain in these images. In contrast, the PI scores of HINet are larger than 16.19, which indicates good binarization of its results. However, the missing fragments in its processed images result in sparser image features, which may further inflate the PI scores.
Discussion
CycleGAN should ideally be able to convert clean images into noisy ones and, conversely, noisy images into clean ones. In our work, Fig. 5 demonstrates that CycleGAN is able to generate realistic noisy images from clean images produced by a handwriting generator. However, it is not able to generate satisfying denoised images directly from real noisy THz images, as shown in Fig. 6. As we observed, CycleGAN does a better job of translating clean images into noisy ones than of translating noisy images into clean ones. This could be explained using the concept of entropy: obtaining noisy images, which have higher entropy, is easier than obtaining clean images, which have lower entropy. Therefore, CycleGAN is first applied to generate the paired noisy images of the clean handwriting images, and then an additional supervised-learning network trained from such paired data is applied to obtain the final denoised images. Data scarcity is a common problem for deep learning applications. Generating synthetic data is nowadays commonly used for training deep learning models in various fields 57,58 and has been demonstrated to generalize well to real data. The results in this work demonstrate that using synthetic data for training supervised deep learning models is also effective for THz image denoising. This encourages further deep-learning-based THz applications. Figures 6 and 7 reveal which types of historical documents are suitable for content retrieval by THz imaging: (a) Fig. 6c demonstrates the efficacy of Pix2pixGAN in THz image denoising for one-sided Xuan paper; (b) Fig. 6f indicates that THz imaging with deep learning denoising has the potential to reveal most information written on a single-page standard paper; (c) Fig. 7 indicates that character recognition in THz images of documents with double-sided text is very challenging, regardless of whether Xuan or standard paper is used.
In the real THz images, not only high-frequency noise but also image artifacts with high-intensity block-like structures exist. Conventional denoising algorithms like the (trainable) bilateral filter and wTV are effective in reducing high-frequency noise. However, they are not optimal for removing structured artifacts. HINet is also a supervised learning network using the same training data as Pix2pixGAN. It learns to binarize the real THz images from synthetic training data. However, due to its limited representation power by architecture design (designed for denoising only), it is not able to restore missing fragments of the characters. The self-supervised learning networks like Noise2Self or Noise2Void consider local noise characteristics, like the J-invariance 52. Therefore, such networks are optimized to remove random noise based on local neighbourhoods, but they are not suitable for block-like structured artifacts. To develop effective self-supervised learning algorithms for such THz images, further research is required.
Some characters written on one-side-standard paper are ambiguous to recognize after Pix2pixGAN denoising, for example, the letters "E", "F" and "G" in Fig. 6f. In our experiments, only individual characters, instead of words or sentences, are written on the pages, which increases ambiguity once any character is missing fragments. Such ambiguity can potentially be reduced for words and sentences based on their surrounding context. In other words, spell correction can be performed to obtain meaningful words and sentences and hence reduce ambiguity. This is one potential advantage of real historical document analysis. To generate synthetic data for training, more sophisticated handwriting styles are available 44,59. However, real historical documents pose many other challenges, for example, blurred handwriting due to aging and imaging shadow artifacts caused by paper wrinkles. Such challenges require our future exploration. Nevertheless, this work is an important step towards real nondestructive historical document analysis using THz imaging. In this work, the CycleGAN and Pix2pixGAN models are purely data driven. Data-driven deep learning models may not generalize well to out-of-distribution test data and are sensitive to noise or perturbations 49,60. Therefore, in our CycleGAN results, some synthetic images have different appearance characteristics (e.g., Fig. 5h), which we excluded when training Pix2pixGAN. Developing physics-informed neural networks 61, which are built on known operators 62 and hence can combine the advantages of both deep learning and conventional methods, should be investigated for supervised learning in our future work. Conventional THz imaging theories have the potential to yield more robust and effective neural networks for THz image enhancement. For example, the conventional mathematical modelling of the THz point spread function and the simulation of THz imaging systems 23 can guide CycleGAN or a custom-designed network to generate more diverse and realistic THz images 63 for training Pix2pixGAN, which may enable Pix2pixGAN to generalize well to THz images acquired from various system settings.
Conclusion
This work applies deep learning to denoise THz images for nondestructive historical document analysis. To overcome the data scarcity problem when training a supervised deep learning model, an unsupervised learning network, CycleGAN, is first applied to generate paired noisy images from clean synthetic images produced by a handwriting generator. Such synthetic paired data is effective for training Pix2pixGAN for THz image denoising. Our work demonstrates that the deep learning denoising performance, as well as the resulting character recognition accuracy, depends highly on the paper type: content can be easily retrieved from one-side-written Xuan paper after Pix2pixGAN denoising; most content written on one-side standard paper can still be retrieved using Pix2pixGAN; however, content written on both sides is very challenging to retrieve due to the overlap of characters. This work is an important step towards real THz-imaging-based nondestructive historical document analysis.
Data availability
The datasets generated and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request. | 6,119.4 | 2022-12-01T00:00:00.000 | [
"Computer Science"
] |
A proximal gradient method for control problems with nonsmooth and nonconvex control cost
We investigate the convergence of an application of a proximal gradient method to control problems with nonsmooth and nonconvex control cost. Here, we focus on control cost functionals that promote sparsity, which includes functionals of $L^p$-type for $p\in [0,1)$. We prove stationarity properties of weak limit points of the method. These properties are weaker than those provided by Pontryagin's maximum principle and weaker than $L$-stationarity.
Introduction
Let Ω ⊂ R^n be Lebesgue measurable with finite measure. We consider a possibly non-smooth optimal control problem of the type

min_{u ∈ L²(Ω)} f(u) + ∫_Ω g(u(x)) dx.   (P)

The function f : L²(Ω) → R is assumed to be smooth. Here, we have in mind choosing f(u) as the reduced form of a smooth cost functional evaluated at the state y(u), thereby incorporating the state equation. We will make the assumptions on the ingredients of the control problem precise below in Section 2. Due to the properties of g, the optimization problem (P) is challenging in several ways. First of all, the resulting integral functional u ↦ ∫_Ω g(u(x)) dx is not weakly lower semicontinuous in L²(Ω), so it is impossible to prove existence of solutions of (P) by the direct method. Second, it is challenging to solve numerically, i.e., to compute local minima or stationary points. In this paper, we address this second issue. Here, we propose to use the proximal gradient method (also called the forward-backward algorithm [3]). The main idea of this method is as follows. Suppose the objective is to minimize a sum f + j of two functions f and j on a Hilbert space H, where f is smooth. Given an iterate u_k, the next iterate u_{k+1} is computed as

u_{k+1} ∈ arg min_{u ∈ H} f(u_k) + (∇f(u_k), u − u_k) + (L/2) ‖u − u_k‖² + j(u),

where L > 0 is a proximal parameter, and L^{-1} can be interpreted as a step-size. In our setting, the functional to be minimized in each step is an integral functional, whose minima can be computed by minimizing the integrand pointwise. Using the so-called prox map, defined by

prox_{γj}(z) := arg min_{u ∈ H} γ j(u) + ½ ‖u − z‖²,

where γ > 0, the next iterate of the algorithm can be written as

u_{k+1} ∈ prox_{L^{-1} j}( u_k − L^{-1} ∇f(u_k) ).

If j ≡ 0, the method reduces to the steepest descent method. If j is the indicator function of a convex set, then the method is a gradient projection method. If f and j are convex, then the convergence properties of the method are well known: under mild assumptions the iterates (u_k) converge weakly to a global minimum of f + j, see, e.g., [3, Corollary 27.9]. If f is non-convex, then weak sequential limit points of (u_k) are stationary, that is, they satisfy −∇f(u*) ∈ ∂j(u*). If in addition j is nonconvex, then much less can be proven. In finite-dimensional problems, limit points are fixed points of the iteration and satisfy so-called L-stationarity type conditions, see [5] and [4, Chapter 10] for optimization problems with l0-constraints. A feasible point u* is called L-stationary if

u* ∈ prox_{L^{-1} j}( u* − L^{-1} ∇f(u*) ).

In a recent contribution [16], the method was analyzed when applied to control problems with L0-control cost. There it was proven that weak sequential limit points of the iterates in L²(Ω) satisfy the L-stationarity type condition. An essential ingredient of the analysis in [16] was that the functional g is sparsity promoting: solutions of the proximal step are either zero or have a positive distance to zero. We will show how this property can be obtained under weak assumptions on the functional g in (P) near u = 0, see Section 3. Still, this is not enough to conclude L-stationarity of limit points. We will show that weak limit points satisfy a weaker condition in general, see Theorem 4.18. Under stronger assumptions, L-stationarity can be obtained (Theorems 4.19, 4.20). Let us emphasize that, under weak assumptions, the sequence of iterates (u_k) contains weakly converging subsequences but is not weakly convergent in general. Pointwise a.e. and strong convergence is obtained in Theorem 4.25. We apply these results to g(u) = |u|^p, p ∈ (0, 1), in Section 5.1.
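For intuition, a discretized version of this iteration is sketched below in Python. This is a minimal sketch and not the algorithm analyzed in this paper: the control is represented by its values on a grid, f is a generic quadratic, and the soft-shrinkage prox (the convex case j = ||.||_1) is used as a placeholder; for nonconvex g such as |u|^p or |u|_0 it would be replaced by the pointwise minimizer studied in Section 3.

import numpy as np

def proximal_gradient(grad_f, prox_j, u0, L, n_iter=100):
    """u_{k+1} = prox_{j/L}(u_k - grad_f(u_k)/L)."""
    u = u0.copy()
    for _ in range(n_iter):
        u = prox_j(u - grad_f(u) / L, 1.0 / L)
    return u

# Illustrative smooth part: f(u) = 0.5*||A u - b||^2, so grad_f(u) = A^T (A u - b)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
grad_f = lambda u: A.T @ (A @ u - b)
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f

# Placeholder prox: componentwise soft shrinkage for j = ||.||_1
prox_j = lambda z, gamma: np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

u_star = proximal_gradient(grad_f, prox_j, np.zeros(100), L)
print("nonzero entries:", np.count_nonzero(u_star))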
Interestingly, the proximal gradient method sketched above is related to algorithms based on proximal minimization of the Hamiltonian in control problems. These algorithms are motivated by Pontryagin's maximum principle. First results for smooth problems can be found in [15]. There, stationarity of pointwise limits of (u_k) was proven. Under weaker conditions it was proved in [6] that the residual in the optimality conditions tends to zero. These results were transferred to control problems with parabolic partial differential equations in [7].
Preliminary considerations
Throughout the paper, we will use the following assumption on the function f. Assumption A. The functional f : L²(Ω) → R is bounded from below and weakly lower semicontinuous. Moreover, f is Fréchet differentiable and ∇f : L²(Ω) → L²(Ω) is Lipschitz continuous, i.e., there is L_f > 0 such that

‖∇f(u_1) − ∇f(u_2)‖_{L²(Ω)} ≤ L_f ‖u_1 − u_2‖_{L²(Ω)}

holds for all u_1, u_2 ∈ L²(Ω).
For the moment, let g : R → R ∪ {+∞} be lower semicontinuous and bounded from below. In Section 3 below, we will give the precise assumptions on g that allow sparse controls. Let u ∈ L²(Ω) be given. Then x ↦ g(u(x)) is a measurable function, and we define

j(u) := ∫_Ω g(u(x)) dx.

Then j : L²(Ω) → R ∪ {+∞} is well-defined and lower semicontinuous, but not weakly lower semicontinuous in general. Hence standard existence proofs cannot be applied. For a discussion, we refer to [11,16]. Remark 2.1. The results are also valid for the general case that g depends on x ∈ Ω, which results in the integral functional j(u) = ∫_Ω g(x, u(x)) dx, provided g : Ω × R → R ∪ {+∞} is a normal integrand; for the definition we refer to [10, Definition VIII.1.1].
Necessary optimality conditions
The mapping u ↦ ∫_Ω g(u(x)) dx is not directionally differentiable in general, and thus there is no standard first-order optimality condition. In the following, we derive a necessary optimality condition for (P), known as the Pontryagin maximum principle, in which no derivatives of the functional are involved. We formulate the Pontryagin maximum principle (PMP) as in [16]. A control ū ∈ L²(Ω) satisfies (PMP) if and only if for almost all x ∈ Ω the inequality

∇f(ū)(x) (v − ū(x)) + g(v) − g(ū(x)) ≥ 0

holds true for all v ∈ R. The following result is shown in [16, Thm. 2.5] for the special choice g(u) := |u|_0; the next theorem extends it to the present setting: every local solution of (P) satisfies (PMP).
Proof. Let ū be a local solution to (P). We will use needle perturbations of the optimal control.
Let (v_i, t_i)_{i∈N} be a countable dense subset of epi(g). For arbitrary x ∈ Ω define u_{r,i} ∈ L²(Ω) by

u_{r,i}(y) := v_i for y ∈ B_r(x), u_{r,i}(y) := ū(y) otherwise,

for some r > 0 and i ∈ N. Let χ_r := χ_{B_r(x)}; then we have u_{r,i} = (1 − χ_r)ū + χ_r v_i and, by local optimality of ū, for r small enough

0 ≤ f(u_{r,i}) − f(ū) + ∫_{B_r(x)} ( g(v_i) − g(ū(y)) ) dy.

After dividing the above inequality by |B_r(x)| and passing r ↓ 0, we obtain by Lebesgue's differentiation theorem

∇f(ū)(x) (v_i − ū(x)) + g(v_i) − g(ū(x)) ≥ 0.

This holds for every Lebesgue point x ∈ Ω of the integrands, i.e., for all x ∈ Ω \ N_i, where N_i is a set of zero Lebesgue measure on which the above inequality is not satisfied. Since the union N := ∪_{i∈N} N_i has Lebesgue measure zero and (v_i, t_i) is dense in epi(g), it follows for all x ∈ Ω \ N that

∇f(ū)(x) (v − ū(x)) + t − g(ū(x)) ≥ 0

for all (v, t) ∈ epi(g). Choosing t = g(v) yields the claim.
Sparsity promoting proximal operators
In this section, we will investigate the minimization problems that have to be solved in order to compute the proximal gradient step in (1.1). Let g : R → R ∪ {+∞} be proper and lower semicontinuous. For s > 0 and q ∈ R, we define the function

h_{q,s}(u) := −qu + ½ u² + s g(u).
Here, we have in mind to set q := −∇f(u_k)(x). Let us investigate scalar-valued optimization problems of the form

min_{u ∈ R} h_{q,s}(u).   (3.1)

The solution set is given by the proximal map prox_{sg} : R ⇒ R of g,

prox_{sg}(q) := arg min_{u ∈ R} h_{q,s}(u).

If g is convex, then (3.1) is a convex problem and the proximal map is single-valued. If g is bounded from below and lower semicontinuous, prox_{sg}(q) is nonempty for all q but may be multi-valued for some q.
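A numerical sketch of this scalar subproblem is given below; the brute-force grid search is an illustrative stand-in for the closed-form or semi-analytic prox formulas used in practice. For a nonconvex cost such as g(u) = |u|^p it already exhibits the sparsity-promoting behavior analyzed in this section: the minimizer is either 0 or bounded away from 0.

import numpy as np

def prox_sg(q, s, g, grid=np.linspace(-5.0, 5.0, 200001)):
    """Approximate minimizer of h_{q,s}(u) = -q*u + u^2/2 + s*g(u) by grid search."""
    h = -q * grid + 0.5 * grid**2 + s * g(grid)
    return grid[np.argmin(h)]

g_lp = lambda u: np.abs(u) ** 0.5   # example: g(u) = |u|^p with p = 1/2

for q in (0.2, 0.8, 1.0, 1.5):
    print(f"q = {q:.1f}  ->  prox = {prox_sg(q, s=0.5, g=g_lp):+.3f}")
# As q grows past a threshold q_0, the minimizer jumps from 0 to a value with
# |u| >= u_0 > 0; small nonzero values never occur.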
The focus of this section is to investigate under which assumptions prox_{sg} is sparsity promoting. Here, we want to prove that there is σ > 0 such that

u ∈ prox_{sg}(q) for some q ∈ R  ⇒  u = 0 or |u| ≥ σ.
In [13], this was also investigated for some special cases of non-convex functions. We will show that the following assumption is enough to guarantee the sparsity promoting property; it contains the result from [13] as a special case. Assumption B. (B1) g : R → R ∪ {+∞} is lower semicontinuous and symmetric with g(0) = 0.
(B2) There is some u ≠ 0 with g(u) < +∞.
(B3) g satisfies one of the following properties:
(B3.a) g is twice differentiable on an interval (0, ε) for some ε > 0 and lim sup_{u↘0} g''(u) ∈ (−∞, 0),
(B3.b) g is twice differentiable on an interval (0, ε) for some ε > 0 and lim_{u↘0} g''(u) = −∞,
(B3.c) lim inf_{u→0, u≠0} g(u) > 0.
(B4) g(u) ≥ 0 for all u ∈ R.
By Assumption B, the function g is non-convex in a neighborhood of 0 and nonsmooth at 0. Some examples of functions satisfying Assumption B are g(u) = |u|^p for p ∈ [0, 1) (cf. Example 3.8 and Section 5.1) and the indicator function of the integers, g(u) = 0 for u ∈ Z and g(u) = +∞ otherwise. We are interested in the characterization of global solutions to (3.1) in terms of q. It is well known that for given s > 0 the proximal map q ⇒ prox_{sg}(q) is monotone, i.e., the inequality (u_1 − u_2)(q_1 − q_2) ≥ 0 is satisfied for all q_1, q_2 ∈ R and all u_1 ∈ prox_{sg}(q_1), u_2 ∈ prox_{sg}(q_2). In addition, the graph of prox_{sg} is a closed set. Moreover, the following results hold true. Lemma 3.2. Let u ∈ prox_{sg}(q). Then uq ≥ 0. Proof. Due to (B1), we have u ∈ prox_{sg}(q) if and only if −u ∈ prox_{sg}(−q). The claim now follows from the monotonicity of the prox-mapping. Lemma 3.3. Assume g ≥ 0. Then every u ∈ prox_{sg}(q) satisfies |u| ≤ 2|q|. Proof. Let u ∈ prox_{sg}(q). By optimality, the inequality h_{q,s}(u) = −qu + ½u² + s g(u) ≤ h_{q,s}(0) = 0 is true. Since g(u) ≥ 0, the claim follows. Lemma 3.4. Let H be a Hilbert space and let f : H → R ∪ {+∞} with f(0) ∈ R. Then 0 ∈ prox_f(q) for all q ∈ H if and only if f(u) = +∞ for all u ≠ 0. Proof. If f is of the claimed form, then clearly prox_f(q) = {0} for all q. Now, let 0 ∈ prox_f(q) for all q ∈ H. Then it holds ½‖q‖² + f(0) ≤ ½‖u − q‖² + f(u) for all u, q ∈ H. This is equivalent to f(u) ≥ f(0) + (q, u) − ½‖u‖² for all u, q ∈ H. Setting q := tu and letting t → +∞ shows f(u) = +∞ for all u ≠ 0.
then |q| ≤ q 0 is also necessary for u = 0 being a global solution to (3.1).
Proof. Let |q| ≤ q_0. Take u ≠ 0; then we have h_{q,s}(u) = −qu + (1/2)u² + s·g(u) ≥ −|q||u| + q_0|u| ≥ 0 = h_{q,s}(0). Note that the second inequality is strict if |q| < q_0. For the second claim assume u = 0 is a global solution to (3.1) and q > 0. Then 0 = h_{q,s}(0) ≤ h_{q,s}(u) for all u, and by the definition of q_0 the inequality q ≤ q_0 follows. Similarly, one can prove |q| ≤ q_0 for negative q.
Together with Assumption B, these results allow us to show the following key observation concerning the characterization of solutions to (3.1). A similar statement to the following can be found in [13, Theorem 1.1]. Theorem 3.6. Let g : R → R ∪ {+∞} satisfy Assumption B. Then there exists s_0 ≥ 0 such that for every s > s_0 there is u_0(s) > 0 such that for all q ∈ R a global minimizer u of (3.1) satisfies u = 0 or |u| ≥ u_0(s).
In case g satisfies (B3.b) or (B3.c), s 0 can be chosen to be zero. Moreover, for all s > 0 there is q 0 := q 0 (s) > 0 such that u = 0 is a global solution to (3.1) if and only if |q| ≤ q 0 . If |q| < q 0 then u = 0 is the unique global solution to (3.1).
Proof. Assume that the first claim does not hold. Then there are sequences (u_n) and (q_n) and s > 0 with u_n ∈ prox_{sg}(q_n) and u_n → 0. W.l.o.g., (u_n) is a monotonically decreasing sequence of positive numbers, and hence (q_n) is monotonically decreasing and non-negative by Lemma 3.2. Let u and q denote the limits of both sequences. Since u_n ≠ 0 is a global minimum of h_{q_n,s}, it follows h_{q_n,s}(u_n) ≤ h_{q_n,s}(0) = 0. Passing to the limit in this inequality, we obtain lim inf_{n→∞} h_{q_n,s}(u_n) ≤ 0, which implies lim inf_{u↘0} g(u) ≤ 0. With g(0) = 0 by (B1), this contradicts (B3.c). Let now (B3.a) or (B3.b) be satisfied. Then for n sufficiently large the necessary second-order optimality condition h''_{q_n,s}(u_n) = 1 + s·g''(u_n) ≥ 0 holds, and we obtain lim sup_{n→∞} g''(u_n) ≥ −1/s. This inequality is a contradiction to (B3.a) if s > −1/lim sup_{u↘0} g''(u) > 0 and to (B3.b) for all s. By (B1), it holds prox_{sg}(q) ≠ ∅ for all q. Due to (B2) and Lemma 3.4, there is q ≥ 0 such that 0 ∈ prox_{sg}(q). The claim concerning q_0 follows from Assumptions (B4), (B3) and Lemma 3.5. First, consider the case that (B3.a) or (B3.b) is satisfied, i.e., there is ε_1 > 0 such that g is strictly concave on (0, ε_1]. By reducing ε_1 if necessary, we get g(ε_1) > 0. Since g(0) = 0, it holds g(u) ≥ (g(ε_1)/ε_1)|u| for all u ∈ (0, ε_1) by concavity. Due to symmetry, this holds for all u with |u| ≤ ε_1. Since g(u) ≥ 0 for all u by (B4), it holds (1/2)u² + s·g(u) ≥ (1/2)ε_1|u| for all |u| ≥ ε_1. This proves (1/2)u² + s·g(u) ≥ min((1/2)ε_1, s·g(ε_1)/ε_1)|u| for all u. Hence, the claim follows with q_0 := min((1/2)ε_1, s·g(ε_1)/ε_1) by Lemma 3.5. Second, if (B3.c) is satisfied, then there are ε_2, τ > 0 such that g(u) ≥ τ for all u with |u| ∈ (0, ε_2), as g is lower semicontinuous. Therefore, it holds g(u) ≥ τ ≥ (τ/ε_2)|u| if |u| ∈ (0, ε_2). The claim follows as above by Lemma 3.5.
Remark 3.7.
1. In general, the constant u 0 in Theorem 3.6 depends on s and the structure of g.
2. We note that the second claim concerning q_0 in Theorem 3.6 holds for all s > 0 and does not depend on the first claim, due to Assumption (B4). One can replace g(u) ≥ 0 by the prerequisite of Lemma 3.5.
3. Assumption B also allows functions of the form g(u) = g̃(u) + δ_D(u) with some g̃ : R → R ∪ {+∞} and the indicator function δ_D of a set D ⊆ R. This means the analysis includes constrained optimization problems, e.g., standard box constraints of the form D = [−b, b]. Example 3.8. The proximal map of (3.1) with g(u) = |u|_0 is given by the hard-thresholding operator, which returns 0 for |q| < √(2s), the value q for |q| > √(2s), and both 0 and q in the boundary case |q| = √(2s). With the above considerations in mind, let us discuss the minimization problem that corresponds to the pointwise minimization of the integrand in (1.1).
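As an illustration of Example 3.8, the following Python snippet implements this closed form (returning the nonzero element of the possibly multi-valued prox at the threshold itself); it is a sketch consistent with the formula above rather than code from the paper.

```python
import numpy as np

def hard_threshold(q, s):
    # For g(u) = |u|_0 the nonzero candidate minimizer of h_{q,s} is u = q with
    # objective value -q**2/2 + s, while u = 0 gives 0; so q wins iff |q| >= sqrt(2*s).
    return np.where(np.abs(q) >= np.sqrt(2.0 * s), q, 0.0)

q = np.array([-2.0, -0.3, 0.0, 0.5, 1.9])
print(hard_threshold(q, s=1.0))  # -> [-2.  0.  0.  0.  1.9]
```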
If 1/L > s_0 (see Theorem 3.6), then all global solutions u satisfy u(x) = 0 or |u(x)| ≥ u_0(1/L) for almost all x ∈ Ω, since the pointwise problems are of the form (3.1). The claim follows by definition and from Theorem 3.6.
Analysis of the proximal gradient algorithm
In this section, we will analyze the proximal gradient algorithm.
The functional to be minimized in (4.1) can be written as an integral functional. In this representation the minimization can be carried out pointwise by using the previous results. The following statements are generalizations of [16, Lemmas 3.10, 3.11, Theorem 3.12], and the corresponding proofs can be carried over easily.
Lemma 4.2.
Let u_k ∈ U_ad be given. Then (4.2) is solvable, and u_{k+1} ∈ L²(Ω) is a global solution if and only if the pointwise inclusion (4.3) holds, i.e., u_{k+1}(x) ∈ prox_{L^{-1}g}((1/L)(L·u_k(x) − ∇f(u_k)(x))) for almost all x ∈ Ω. Proof. Let us show that we can choose a measurable function satisfying the inclusion (4.3). The set-valued mapping prox_{L^{-1}g} has closed graph and is thus outer semicontinuous. Then by [14, Corollary 14.14], the set-valued mapping x ⇒ prox_{L^{-1}g}((1/L)(L·u_k(x) − ∇f(u_k)(x))) is measurable, and [14, Corollary 14.6] implies the existence of a measurable function u such that u(x) ∈ prox_{L^{-1}g}((1/L)(L·u_k(x) − ∇f(u_k)(x))) for almost all x ∈ Ω. Due to the growth condition of Lemma 3.3, we have u ∈ L²(Ω), and hence u solves (4.2). If u_{k+1} solves (4.2), then (4.3) follows by a standard argument. We introduce the following notation. For a sequence (u_k) ⊂ L²(Ω) define χ_k := χ_{{x ∈ Ω : u_k(x) ≠ 0}}. Let us now investigate convergence properties of Algorithm 4.1. The following lemma will be helpful for what follows.
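Because the subproblem (4.2) decouples into the pointwise problems (4.3), one proximal gradient step amounts to applying a scalar prox to the values u_k(x) − ∇f(u_k)(x)/L. The following Python sketch shows the resulting loop for a discretized control; the names grad_f and prox_scalar, the weight passed to the prox, and the stopping test are user-supplied assumptions and only illustrate the structure of Algorithm 4.1.

```python
import numpy as np

def proximal_gradient(u0, grad_f, prox_scalar, L, max_iter=500, tol=1e-12):
    """Iterate u_{k+1}(x) in prox_{g/L}(u_k(x) - grad_f(u_k)(x)/L), entrywise.

    u0          : initial control as an array of nodal/cell values
    grad_f      : callable returning the gradient array of the smooth part f
    prox_scalar : vectorized scalar prox of g, called as prox_scalar(q, s)
    """
    u = u0.copy()
    for _ in range(max_iter):
        q = u - grad_f(u) / L                 # pointwise prox argument
        u_new = prox_scalar(q, 1.0 / L)       # prox of (1/L) * g
        if np.max(np.abs(u_new - u)) < tol:   # simple illustrative stopping test
            return u_new
        u = u_new
    return u
```

With the hard-thresholding map from the previous sketch used as prox_scalar, this loop reproduces an iterative hard-thresholding scheme of the type analyzed in [16].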
Theorem 4.4.
For L > L_f let (u_k) be a sequence of iterates generated by Algorithm 4.1. Then the following statements hold: (i) the sequence (f(u_k) + j(u_k)) is monotonically decreasing and converging.
(iv) Let s_0 be as in Theorem 3.6. Assume 1/L > s_0. Then the sequence of characteristic functions (χ_k) is converging in L¹(Ω) and pointwise a.e. to some characteristic function χ.
Using the optimality of u_{k+1}, we find that the inequality f(u_{k+1}) + j(u_{k+1}) ≤ f(u_k) + j(u_k) holds. Hence, (f(u_k) + j(u_k)) is decreasing. Convergence follows because f and j are bounded from below.
(ii) Weak coercivity of the functional implies that (u_k) is bounded. Furthermore, since 1/L > s_0, the nonzero values of the iterates are bounded below by u_0, so that the pointwise estimate |χ_{k+1} − χ_k| ≤ u_0^{-2}|u_{k+1} − u_k|² holds and the series Σ_k ∫_Ω |χ_{k+1} − χ_k| dx converges. Hence, (χ_k) is a Cauchy sequence in L¹(Ω), and therefore also converging in L¹(Ω), i.e., χ_k → χ for some characteristic function χ. Pointwise a.e. convergence of (χ_k) can be proven by Fatou's Lemma.
As a consequence, we get the following result. Proof. By the Lemma of Fatou, we have ∫_Ω Σ_{k=0}^∞ |u_{k+1}(x) − u_k(x)|² dx < ∞. This implies Σ_{k=0}^∞ |u_{k+1}(x) − u_k(x)|² < ∞ for almost all x ∈ Ω, and the claim follows.
Stationarity conditions for weak limit points from inclusions
Under a weak coercivity assumption, Theorem 4.4 implies that Algorithm 4.1 generates a sequence (u_k) with a weak limit point u* ∈ L²(Ω). Due to the lack of weak lower semicontinuity of the term u ↦ ∫_Ω g(u) dx, however, we cannot conclude anything about the value of the objective functional in a weak limit point. Unfortunately, we are not able to argue as it was done in [16, Thm. 3.14] for the special choice g(u) := |u|_0. Nevertheless, by using results of set-valued analysis we will show that a weak limit point of a sequence (u_k) of iterates satisfies a certain inclusion in almost every point x ∈ Ω, which can be interpreted as a pointwise stationarity condition for weak limit points. By definition, the iterates satisfy the inclusion (4.3) for almost all x ∈ Ω. However, this inclusion seems to be useless for a convergence analysis, as the function u_{k+1} on the left of the inclusion as well as the arguments L·u_k − ∇f(u_k) only have weakly converging subsequences at best. The idea is to construct a set-valued mapping G : R ⇒ R such that a solution u_{k+1} of (4.2) satisfies the inclusion u_{k+1}(x) ∈ G(z_k(x)) in almost every point x ∈ Ω for some z_k ∈ L²(Ω), where (z_k) converges strongly or pointwise almost everywhere. Here, we will use z_k := −(∇f(u_k) + L(u_{k+1} − u_k)). By Theorem 4.4, we have u_{k+1} − u_k → 0 in L²(Ω) and pointwise almost everywhere. With the additional assumption that subsequences of (∇f(u_k)) are converging pointwise almost everywhere, the argument of the set-valued mapping is converging pointwise almost everywhere. In the context of optimal control problems, such an assumption is not a severe restriction. So there is a chance to pass to the limit in the inclusion (4.5).
Lemma 4.7. Let u_{k+1} be a solution of (4.2). Then u_{k+1}(x) ∈ G(z_k(x)) holds for almost all x ∈ Ω, where the set-valued mapping G : R ⇒ R is given by the solution mapping of the associated pointwise problem. Unfortunately, the set-valued map G is not monotone in general. If g were convex, then the optimality condition of (4.2) would be z_k(x) ∈ ∂g(u_{k+1}(x)) for almost all x ∈ Ω; hence one could choose G = ∂(g*), where g* denotes the convex conjugate of g. For the rest of this section, we will always suppose that g satisfies Assumption B. As a first direct consequence of the definition of G we get the following.
A convergence result for inclusions
Let us recall a few helpful notions and results from set-valued analysis that can be found in the literature, see e.g., [2,14].
A set-valued mapping S is called locally bounded at a point x if there is a neighborhood U of x such that the image S(U) is bounded.
A set-valued mapping S is outer semicontinuous if and only if it has a closed graph.
The following convergence analysis relies on [2, Thm. 7.2.1]. We want to extend this result to set-valued maps into R^n that are not locally bounded. Let us define a set-valued map conv∞ F that serves as a generalization of x ↦ conv(F(x)) for the locally unbounded situation. By definition, it holds gph F ⊂ gph conv∞ F. In addition, we have conv(F(x)) ⊂ (conv∞ F)(x). If F is locally bounded in x, then (conv∞ F)(x) = conv(F(x)), which can be proven using Carathéodory's theorem. In general, dom conv∞ F is strictly larger than dom F. Let (Ω, A, µ) be a measure space and F : R^m ⇒ R^n be a set-valued map. Let sequences of measurable functions (x_n), (y_n) be given such that
1. x_n converges almost everywhere to some function x : Ω → R^m,
2. y_n converges weakly to a function y in L¹(µ, R^n),
3. y_n(t) ∈ F(x_n(t)) for almost all t ∈ Ω.
Then y(t) ∈ (conv∞ F)(x(t)) for almost all t ∈ Ω.
Stationarity conditions for weak limit points
Recall that for iterates (u_k) of Algorithm 4.1 and the corresponding sequence (z_k) we have by construction u_{k+1}(x) ∈ G(z_k(x)) for almost all x ∈ Ω. Then by Theorem 4.14, we could expect the inclusion u*(x) ∈ (conv∞ G)(−∇f(u*)(x)) pointwise almost everywhere to hold in the subsequential limit. However, the convexification of G results in a set-valued map that is very large. In order to obtain a smaller inclusion in the limit, we will employ the result of Corollary 4.9: the graph of G can be split into three clearly separated components. In the sequel, we will show that we can pass to the limit with each component separately, which leads to a smaller set-valued map in the limit. This observation motivates the following splitting of the map G: G⁺ : R ⇒ R with u ∈ G⁺(z) :⟺ u ∈ G(z) and u > 0, and analogously G⁻ (for u < 0) and G⁰ (for u = 0). The mappings G⁺, G⁻ and G⁰ are depicted in Figure 2 for a special choice of g. Obviously, we have by construction gph G = gph G⁺ ∪ gph G⁻ ∪ gph G⁰ (4.6). Proof. G being outer semicontinuous is equivalent to the closedness of its graph. Let (u_n), (q_n) be sequences such that u_n → u, q_n → q and u_n ∈ G(q_n). By definition, the corresponding optimality inequality holds for all v ∈ R. Passing to the limit in this inequality, using the lower semicontinuity of g, shows that the same inequality holds for u and q, i.e., u ∈ G(q), which is the claim for G. For G⁺, G⁻, G⁰ the claim follows as their graphs are intersections of closed sets with gph G, which follows from Corollary 4.9 (for suitably chosen L in the case of G⁺, G⁻).
In the sequel we want to apply Theorem 4.14 to each of the set-valued maps in (4.6) separately. Let us first show the next helpful result.
holds for almost all x ∈ Ω.
Let us remark that the assumption of pointwise convergence of (∇f (u k )) is not a severe restriction. If ∇f : L 2 (Ω) → L 2 (Ω) is completely continuous, then this assumption is fulfilled. For many control problems, this property of ∇f is guaranteed to hold.
Interestingly, we can get rid of the convexification operator conv∞ if we assume that the whole sequence (∇f(u_k)) converges pointwise almost everywhere. Theorem 4.19. Let (u_k) be a sequence of iterates generated by Algorithm 4.1 with weak limit point u* ∈ L²(Ω). Assume ∇f(u_k) → ∇f(u*) pointwise almost everywhere. Then u*(x) ∈ G(−∇f(u*)(x)) holds for almost all x ∈ Ω.
Let ε ∈ (0, ε̄). Set I := {x : |z̄ − z(x)| < ε}, and define the sets I_K accordingly. The sequence (I_K) is monotonically increasing, and since z_k(x) → z(x) for almost all x ∈ Ω, these sets exhaust I up to a set of measure zero. Let x ∈ I. Then there is K such that x ∈ I_K. This implies u_k(x) ∈ B_ε(ũ) for all k > K. Here, the pointwise convergence of the whole sequence (z_k) is needed. The sum Σ_{k=K+1}^∞ (χ⁺_{k+1}χ⁻_k + χ⁻_{k+1}χ⁺_k)(x) counts the number of switches between values larger than ũ₊ and smaller than ũ₋ from u_k(x) to u_{k+1}(x). Since this sum is finite for almost all x ∈ Ω, there is only a finite number of such switches. Then there is K' > K such that either u_k(x) ≥ ũ₊ for all k > K' or u_k(x) ≤ ũ₋ for all k > K'. Set S⁺_K and S⁻_K accordingly. The sequences (S⁺_K) and (S⁻_K) are increasing, and the desired estimate holds for almost all x ∈ I, which implies the claimed inclusion for almost all x ∈ Ω. Since we can cover the complement of gph G by countably many such sets, the claim follows.
Pointwise convergence of iterates
So far we were able to show that weak limit points of iterates (u k ) satisfy a certain inclusion in a pointwise sense. However, the resulting set in the limit might still be large or even unbounded in general. Assuming that G is (locally) single-valued on its components G + , G − , G 0 , we can show local pointwise convergence of a subsequence of iterates (u kn ) to a weak limit point u * ∈ L 2 (Ω).
In the next result this is illustrated for the map G + , however it can be shown for the components G − , G 0 similarly. To this end, we set in the following χ + k := χ {x∈Ω: u k (x)>0} with χ + k → χ + in L 1 (Ω) and pointwise almost everywhere by Lemma 4.17.
Theorem 4.20. Let z̄ ∈ dom(G⁺). Assume that G⁺ : R → R is single-valued and locally bounded on B_ε̄(z̄) ∩ dom(G⁺) for some ε̄ > 0. Let u_{k_n} ⇀ u* in L²(Ω) and assume ∇f(u_{k_n})(x) → ∇f(u*)(x) pointwise almost everywhere. For ε ∈ (0, ε̄] define the set I_ε as above. Then u_{k_n}(x) → u*(x) holds for almost all x ∈ I_ε. Furthermore, we have the corresponding fixed-point property of the proximal operator at u*. Proof. Let u_{k_n+1} ⇀ u* in L²(Ω). By the assumption and Corollary 4.9 it holds z_{k_n}(x) → z(x) := −∇f(u*)(x) pointwise almost everywhere. In addition, u_{k_n+1} ⇀ u* in L²(Ω) holds. Let ε ∈ (0, ε̄) be given. Take x ∈ I_ε such that z_{k_n}(x) → z(x). Then there is K > 0 such that |z_{k_n}(x) − z̄| < ε̄ for all k_n > K. Since x ∈ supp(χ⁺) and χ⁺_k → χ⁺ in L¹(Ω) and pointwise almost everywhere, there is K' > 0 such that x ∈ supp(χ⁺_k) for all k > K'. Hence, for k_n sufficiently large we have u_{k_n+1}(x) ∈ G⁺(z_{k_n}(x)). Since G⁺ is single-valued, locally bounded and outer semicontinuous on B_ε̄(z̄) ∩ dom(G⁺), it is continuous, see also [14, Cor. 5.20]. This implies G⁺(z_{k_n}(x)) → G⁺(z(x)). The continuity property mentioned above implies (conv∞ G⁺)(z(x)) = G⁺(z(x)). Then by Theorem 4.18, G⁺(z(x)) = {u*(x)}, and the convergence u_{k_n}(x) → u*(x) follows. The fixed-point property is a consequence of the closedness of the graph of the proximal operator. As x ∈ I_ε was chosen arbitrarily, and I = ∪_{ε∈(0,ε̄)} I_ε, the claim is proven.
The above result requires local boundedness of the set-valued map G, which is not satisfied in general. For some interesting choices of g, e.g. g(u) := |u| p , it can be proven, see Section 5. Let us give an example of a locally unbounded map G below.
Strong convergence of iterates
Many optimal control problems of type (P) include a smooth cost functional of the form u ↦ (α/2)‖u‖²_{L²(Ω)}, α > 0. For the rest of the sequel, we will treat this term explicitly in the convergence analysis to obtain almost everywhere and strong convergence of a subsequence. Therefore let g : R → R ∪ {+∞} satisfy Assumption B and consider a sequence of iterates computed by the modified proximal step (4.7). The solution to (4.7) is again given pointwise for almost every x ∈ Ω. It follows that all the analysis that was done in this section still applies in this case, and all results can be transferred except for a possible change of notation. Furthermore, we adapt the set-valued map G : R ⇒ R from Lemma 4.7 accordingly. For simplicity we assume dom(g) = [−b, b] with b ∈ (0, ∞], i.e., the subproblem (4.7) is equivalent to a box-constrained optimization problem with the constraint |u(x)| ≤ b for almost every x ∈ Ω. To obtain strong convergence of iterates in L¹(Ω) and an L-stationarity condition almost everywhere, we need to put stronger and more restrictive assumptions on g, as the next theorem shows. To this end, let us introduce the following extension of Assumption B.
First, we have the following necessary optimality condition for (4.7) due to Assumption (B5).
Corollary 4.22. Let u_{k+1} be a solution to (4.7) and let g satisfy in addition (B5). Then u_{k+1} satisfies the corresponding pointwise first-order inequality on the set I_{k+1} := {x ∈ Ω : u_{k+1}(x) ≠ 0}.
Proof. Solving (4.7) pointwise is equivalent to solving the constrained problem min_{u : |u| ≤ b} of the associated scalar objective in every Lebesgue point x. For x ∈ I_{k+1} it holds u_{k+1}(x) ≠ 0, and therefore the above problem is differentiable at u_{k+1}(x). The claimed inequality is the corresponding necessary optimality condition.
Let us for the rest of the sequel assume that g satisfies (B5) and (B6) in addition to Assumption B. This enables us to give more information about the set-valued map G, as the next result shows. That is, elements in G are (possibly unique) solutions of an associated variational inequality. Lemma 4.23. Let u ∈ G(z) with |u| ≥ u_0(1/(L+α)). Then u satisfies the variational inequality (4.9). If in addition u_0 ≥ u_I with u_I := u_I(1/(L+α)) as in (B6), then we have u ∈ G(z) if and only if u satisfies (4.9).
Proof. Let us discuss the case u ≥ u_0 only. If u ∈ G(z) for some z ∈ R, then by definition u is a global minimizer of the corresponding scalar problem; hence, by the first-order necessary optimality condition, it satisfies (4.9), which is the claim. Assume u_I ≤ u_0 holds, and let u > 0 satisfy (4.9); then u satisfies in particular the optimality condition of the restricted problem and also of its convex counterpart. By convexity, u is the unique solution of the latter, and since by assumption (z + Lu)/(L + α) ≥ q_0(1/(L + α)), it follows from Theorem 3.6 that there is a global solution larger than u_0 to the unconstrained problem, which together implies u ∈ G(z).
Proof. We set s := 1/α and u_I := u_I(s) as in (B6). Note that by the assumptions the following holds for α > 0 and |u| ≥ u_0 ≥ u_I, where we define, corresponding to (4.10), the mapping prox^{u_I}_{sg}. Due to assumption (B6) and Lemma 4.23, u_{k+1}(x) is the only element in G(z_k(x)) \ {0} for almost all x ∈ I_{k+1}, and it holds u_{k+1}(x) = prox^{u_I}_{sg}(z_k(x)/α). It is easy to see that prox^{u_I}_{sg} is single-valued for |z| > 0. Since it is in addition outer semicontinuous and locally bounded for |z| ≥ z_I, it is also continuous on {z : |z| ≥ z_I}, see also [14, Corollary 5.20]. Let u ∈ prox^{u_I}_{sg}(z). By optimality of u we have −zu + (1/2)u² + s·g(u) ≤ −z·sign(u)·u_I + (1/2)u_I² + s·g(u_I).
Dividing by |u| > 0 and having in mind that u_I/|u| ≤ 1, the growth estimate |prox^{u_I}_{sg}(z)| ≤ 2|z| + c for all |z| ≥ z_I with some c > 0 independent of z follows.
Let l : R → R denote a continuous function, and define the superposition operator G by G(z)(x) := l(z(x)) for z : Ω → R. Then by a well-known result, see e.g. [1, Theorem 3.1], the superposition operator G is continuous from L²(Ω) to L²(Ω), and the claim follows. Now we are able to prove strong convergence of a subsequence of (u_k), similar to [16, Thm. 3.17].
Theorem 4.25. Suppose complete continuity of ∇f and let (u_k) ⊂ L²(Ω) be a sequence generated by the iteration (4.7) with weak limit point u*. Under the same assumptions as in Lemma 4.24, u* is a strong sequential limit point of (u_k) in L¹(Ω).
Proof. By Lemma 4.24 there exists a continuous mapping G : L²(Ω) → L²(Ω) such that u_{k+1} = χ_{k+1}·G(z_k/α). Let u_{k_n} ⇀ u* in L²(Ω). Again, by Theorem 4.4 and complete continuity of ∇f, we obtain strong convergence of the sequence z_{k_n} := −(∇f(u_{k_n}) + L(u_{k_n+1} − u_{k_n})) → −∇f(u*) =: z* in L²(Ω), as well as χ_k → χ in L^p(Ω) for all p < ∞ and u_{k_n+1} ⇀ u*. Then the convergence in L¹(Ω) follows by Hölder's inequality. Since strong and weak limit points coincide, it follows u_{k_n} → u* in L¹(Ω). With the assumptions in Theorem 4.25 we can find an almost everywhere converging subsequence of iterates, i.e., u_{k_n}(x) → u*(x) for almost every x ∈ Ω. By the closedness of the mapping prox_{sg}, we get a corresponding pointwise inclusion for u*, i.e., u* is L-stationary for the problem in almost every point. If L = 0 in (4.11), then we obtain by Lemma 4.2 the corresponding inclusion without the L-term. Hence, in this case u* satisfies the Pontryagin maximum principle.
The proximal gradient method with variable stepsize
The convergence results of this section require the knowledge of the Lipschitz modulus L f of ∇f . This can be overcome by line-search with respect to the parameter L subject to a suitable decrease condition, which is a widely applied technique.
is satisfied.
The convergence results as in Theorem 4.4 can be carried over: Theorem 4.4 then holds without the assumption L > L_f. The assumption 1/L > s_0 has to be replaced by (lim sup L_k)^{-1} > s_0. This is satisfied if s_0 = 0, which is true by Theorem 3.6 if one of (B3.b), (B3.c) is valid.

Applications of the proximal gradient method

Optimal control with L^p control cost, p ∈ (0, 1)

In [16], the discussed proximal method was analyzed and applied to optimal control problems with L^0 control cost, i.e., g(u) := (α/2)u² + |u|_0. In this section, we discuss the problem with g(u) := (α/2)u² + β|u|^p + δ_{[−b,b]}(u), where p ∈ (0, 1) and b ∈ (0, ∞], and consider the corresponding minimization over u ∈ L²(Ω), denoted by (5.1). To find a solution to (5.1) with Algorithm 4.1, the subproblem, interpreted in terms of (4.7) with g̃ := |u|^p + δ_{[−b,b]}, has to be solved in every iteration. According to Lemma 4.2, u_{k+1} is a solution of this subproblem if and only if the corresponding pointwise inclusion holds. Due to Theorem 3.6 it holds u_{k+1}(x) = 0 or |u_{k+1}(x)| ≥ u_0 for all k. The particular choice of g allows us to compute the constant u_0 explicitly by solving min_{u ≠ 0} ((1/2)u² + s·g(u))/|u|, as a consequence of Lemma 3.5. We recall the definition of the set-valued map G : R ⇒ R, which reads in this case u ∈ G(z) = G_{L,α,s}(z) if and only if u is a global minimizer of the corresponding scalar subproblem. Note that g satisfies assumptions (B5) and (B6) due to its structure. This allows us to give an equivalent but more precise characterization of G, as Lemma 4.23 applies to u_{k+1}(x) on I_{k+1}.
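For the L^p term the scalar prox has no elementary closed form, but it can be evaluated by comparing u = 0 with the best nonzero candidate. The following Python sketch does this for a term of the form s·β|u|^p with a box constraint by bisection on the convex part of the objective; up to a rescaling of q and s this covers the pointwise subproblems arising here. The routine, its tolerances, and the parameter values in the example call are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def prox_lp_box(q, s, beta, p, b):
    """Approximate argmin over |u| <= b of  -q*u + 0.5*u**2 + s*beta*|u|**p  (0 < p < 1).

    Candidates: u = 0 and the best point of the smooth branch on the convex
    region [u_infl, b]; the better of the two is returned.
    """
    def h(u):
        return -abs(q) * u + 0.5 * u**2 + s * beta * u**p

    sign, q_abs = np.sign(q), abs(q)
    # inflection point of u -> 0.5*u**2 + s*beta*u**p on u > 0
    u_infl = (s * beta * p * (1.0 - p)) ** (1.0 / (2.0 - p))
    lo, hi = min(u_infl, b), b
    phi = lambda u: u + s * beta * p * u ** (p - 1.0) - q_abs  # derivative of h on u > 0
    if phi(hi) <= 0.0:       # objective still decreasing at the box bound
        u_pos = hi
    elif phi(lo) >= 0.0:     # no interior stationary point in the convex region
        u_pos = lo
    else:                    # bisection for the stationary point
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if phi(mid) <= 0.0 else (lo, mid)
        u_pos = 0.5 * (lo + hi)
    return sign * u_pos if h(u_pos) < h(0.0) else 0.0

# illustrative call (parameter values are made up):
print(prox_lp_box(q=2.0, s=1.0, beta=0.5, p=0.5, b=4.0))
```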
A visualization of G is given in Figure 2 below. With a suitable choice of parameters, we can apply Theorem 4.25 to the L^p problem to obtain a strongly convergent subsequence.
Corollary 5.2.
Let α > 0 and (u k ) a sequence of iterates. Furthermore, assume L ≤ ( 2 p − 1)α. Then the assumptions of Theorem 4.25 are satisfied. If in addition ∇f is completely continuous from L 2 (Ω) to L 2 (Ω), then every weak sequential limit point u * ∈ L 2 (Ω) is a strong sequential limit point in L 1 (Ω).
A short calculation yields that the assumptions on the parameters imply the required relation between L, α and p. Here, u_I is the positive point of inflection of the integrand in (5.1), and it holds that the scalar objective is convex for all q ∈ R on [u_I, ∞) and (−∞, −u_I], respectively, which corresponds to Assumption (B6). The claim now follows by Lemma 4.24 and Theorem 4.25.
Optimal control with discrete-valued controls
Let us investigate the optimization problem with the optimal control taking discrete values. That is, we choose g(u) as the indicator function of the integers, i.e., g(u) := δ_Z(u), and consider the corresponding problem over u ∈ L²(Ω). Note that this choice satisfies Assumption (B3.c). Applying Algorithm 4.1, the subproblem to solve in each iteration can be solved pointwise and explicitly. The analysis carried out in Section 4 is applicable; however, the special choice of g comes along with the following desirable result.
Thus, (u k ) is a Cauchy sequence in L 1 (Ω) and therefore convergent in L 1 (Ω) and it holds u k → u * .
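For g = δ_Z the scalar prox is simply the projection of the prox argument onto the integers, i.e., rounding (independently of the step size s). The following minimal Python sketch shows the resulting pointwise step; the box constraint |u| ≤ b used later in Example 3 is omitted here for brevity, and the helper names are illustrative.

```python
import numpy as np

def prox_integer(q):
    # prox of s * delta_Z: minimize 0.5*(u - q)**2 over integers u,
    # i.e. round q to the nearest integer (np.round resolves .5-ties to even).
    return np.round(q)

def prox_gradient_step_integer(u_k, grad_f_uk, L):
    # one step of Algorithm 4.1 for g = delta_Z, acting on arrays of control values
    return prox_integer(u_k - grad_f_uk / L)

print(prox_integer(np.array([-1.2, 0.4, 2.6])))  # -> [-1.  0.  3.]
```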
Numerical experiments
In this section we finally apply the proximal gradient method to optimal control problems of type (P) and carry out numerical experiments for cost functionals with different g. In the following, let f_l denote the reduced tracking-type functional, where S_l is the weak solution operator of the linear Poisson equation, and assume that d satisfies ∂d(x, y)/∂y + ∂²d(x, y)/∂y² ≤ C_M for almost all x ∈ Ω and |y| ≤ M.
Then the equation is uniquely solvable; we refer to, e.g., [8, 9]. Furthermore, we choose Ω := (0, 1)² to be the underlying domain in all following examples.
To solve the partial differential equation, the domain is divided into a regular triangular mesh and the PDE (6.1), (6.2) is discretized with piecewise linear finite elements. The controls are discretized with piecewise constant functions on the triangles. The finite-element matrices were created with FEniCS [12]. If not mentioned otherwise, the meshsize is approximately h = √2/160 ≈ 0.00884. In each iteration a suitable constant L_k > 0 needs to be determined that satisfies the decrease condition (4.12). Note that L_k^{-1} can be seen as a stepsize. In [16] several stepsize selection strategies are proposed. In our tests, we use a simple Armijo-like backtracking line search method (BT). That is, having an initial L_0 > 0 and a widening factor θ ∈ (0, 1), determine L_k as the smallest accepted number of the form L_0·θ^{-i}, i = 0, 1, .... This method ensures a decrease in the objective values along the iterates, but it turns out to be very slow for large L_0, as the corresponding stepsize L_k^{-1} gets small. The same choice of these parameters is used in all our tests. The stopping criterion is as follows: if |f(u_{k+1}) + g(u_{k+1}) − (f(u_k) + g(u_k))| ≤ 10^{-12}, the iteration is stopped.
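The backtracking rule can be sketched in Python as follows. Since the exact decrease condition (4.12) is not reproduced in the text above, the sketch uses a standard quadratic upper-bound test as a stand-in, and the default values of L0 and theta as well as the callable names are placeholders, not the values used in the experiments.

```python
import numpy as np

def backtracking_L(u_k, f, grad_f, prox_step, L0=0.0001, theta=0.5, max_tries=60):
    """Pick the smallest L of the form L0 * theta**(-i) accepted by a decrease test.

    prox_step(u, L) must return the proximal gradient update for the given L;
    the acceptance test below is a standard quadratic upper-bound condition and
    is used here only as a stand-in for the paper's condition (4.12).
    """
    fu, gu = f(u_k), grad_f(u_k)
    L = L0
    u_new = u_k
    for _ in range(max_tries):
        u_new = prox_step(u_k, L)
        d = u_new - u_k
        if f(u_new) <= fu + np.vdot(gu, d) + 0.5 * L * np.vdot(d, d):
            return L, u_new
        L = L / theta   # next candidate L0 * theta**(-i)
    return L, u_new
```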
Example 1
Let g(u) := |u|^p + δ_{[−b,b]}(u) for p ∈ (0, 1) and consider the corresponding minimization over u ∈ L²(Ω). Setting U_ad := {u ∈ L²(Ω) : |u(x)| ≤ b a.e. on Ω}, the problem is equivalent to a minimization over U_ad. The first example is taken from [16], where the proximal gradient algorithm was investigated for (sparse) optimal control problems with L⁰(Ω) control cost. Since ∫_Ω |u|^p dx → ∫_Ω |u|_0 dx as p ↘ 0, we expect similar solutions. We choose the same problem data as in [11, 16]. That is, if not mentioned otherwise, y_d(x, y) = 10x sin(5x) cos(7y) and α = 0.01, β = 0.01, b = 4. A computed solution for p = 0.8 is shown in Figure 3. Convergence for decreasing p-values. In the following we consider solutions for different values of p. We use the same data and discretization as above. We set L_0 = 0.0001. Table 1 reports the results for decreasing values of p; it also shows the result of applying the iterative hard-thresholding algorithm IHT-LS from [16] to the problem with p = 0, which is in agreement with our expectation. In the implementation we used a meshsize of h = √2/500 ≈ 0.0028.
Discretization. Next, we solved the problem on different levels of discretization to investigate their influence. As can be seen in Table 2, the behaviour of the algorithm is robust with respect to the mesh size. Convergence in the case L > (2/p − 1)α. So far, in every experiment the assumption on the parameters was naturally satisfied, such that strong convergence of iterates can be proven according to Corollary 5.2. The numerical results confirmed the theory. We will now investigate the case where the assumption is not satisfied, i.e., we choose parameters such that L > (2/p − 1)α. In the following we present the result for the problem parameters α = 0.001, p = 0.9, L_0 = 0.005.
Furthermore, we set b = 6. In our computations the algorithm took very long to reach the stopping criterion |J(u_{k+1}) − J(u_k)| ≤ 10^{-12}, as can be seen in Table 3. This might be due to the parameter choice and the step-size strategy. For smaller mesh-sizes more iterations are needed. Table 3: Performance for a bad choice of parameters across different mesh-sizes. Recall that the problem in the analysis that comes with this choice of parameters is that the map G in Lemma 4.7 is not necessarily single-valued anymore on the set of points where an iterate does not vanish, see also Figure 2. Let u_I := u_I(β/α) > 0 denote the constant from Assumption (B6) and define the set Ω_{m,k} := {x ∈ Ω : 0 < |u_k(x)| < u_I}.
Then Ω_{m,k} is the set of points for which the crucial assumption in Lemma 4.24 that implies single-valuedness of G \ {0} is not satisfied. In our numerical experiments, however, we made the observation that the measure of the set Ω_{m,k} is decreasing as k → ∞, see Figure 4. Across different mesh-sizes h, the measure decreases and tends to zero along the iterations.

Example 2

Next, we consider a semilinear problem with g(u) = |u|^p, p ∈ (0, 1). This example can be found in [9] for semilinear control problems with L¹-cost. Here, f_sl is given by the standard tracking-type functional u ↦ ‖y_u − y_d‖²_{L²(Ω)}, where y_u is the solution of the semilinear elliptic state equation −Δy + y³ = u in Ω, y = 0 on ∂Ω.
The data is given by α = 0.002, β = 0.03, b = 12 and y_d = 4 sin(2πx_1) sin(πx_2) e^{x_1}. We use the parameter L_0 = 0.001. We made similar observations as in the linear case concerning the influence of discretization and different values of p. Also the behavior of the algorithm in case of a bad choice of parameters is as before (see Example 1).
Example 3
In this last test, we consider an optimal control problem with discrete-valued controls. That is, we choose g(u) := δ_Z(u), where δ_M denotes the indicator function of a set M, i.e., δ_M(u) := 0 if u ∈ M and +∞ otherwise. Here, the subproblem in Algorithm 4.1 can be solved pointwise and explicitly. We adapt again the setting from Example 1. In Figure 6, a solution plot of the optimal control is displayed. We used exactly the same problem data as before in Example 1, but set b = 2 and L_0 = 0.001. Again, we find the algorithm is robust with respect to the discretization.
"Mathematics"
] |
Success in the workplace: From the voice of (dis)abled to the voice of enabled
The intention of this article is twofold; first to encourage a shift in seeing ‘the disabled’ not as people with disabilities but rather as people with unique abilities. Secondly, to explore ways of facilitating gainful employment for these uniquely abled people. The term disability is examined against a backdrop of definitions including the definition postulated by the International Classification of Functioning. In this article, the life experiences of a purposive sample of people with (dis)abilities who have been successful in the world of work are explored. A narrative approach gives voice to their experiences. Quotes from the participants’ responses are used to illustrate the common themes that emerged relating to their experiences. These themes are resonated against a backdrop of relevant literature. If disabled people are enabled to recognize and use their unique abilities, as well as develop various self-determination skills, imagine the endless possibilities which could arise for them and society in general.
Introduction
The intention of this article is twofold: firstly, to encourage a shift in attitude towards people with disabilities, regarding them not as disabled, but rather as people with unique abilities. Secondly, to reveal and discuss some of the common themes that people with disabilities have used to describe their experiences in the world of work.
The International Classification of Functioning, Disability and Health (ICF) classification system uses, to the extent possible, neutral language to name its components and categories. The use of neutral language is helpful and also challenging. For the purpose of this article, I choose to engage in positive and preferred language, and to refer to people with (dis)abilities because the focus is on individual ability. The shift from focussing on the individual disability as pathological to focussing on the individual within the context of the environment and social community has important implications for developing policies pertaining to (dis)ability and has been richly debated (Buntinx 2013;Brueggemann 2013;Eide & Ingstad 2013;Oliver 2009;Shogren 2013;Watermeyer et al. 2006;Wehmeyer 2013b;Wilson & Lewiecki-Wilson 2001). Recently, (dis)ability has been examined through the lens of strengths-based Positive Psychology. Wehmeyer (2013b) states the following: The historical view of disability as pathological has run its course, although it remains far too prevalent. The success of people with disabilities in all aspects of life, aided by civil protections and equal opportunities, has made pathology-based understandings of disabilities irrelevant or inaccurate. It is well past time to begin to consider disabilities using a strengths-based focus. (p. 5) The strengths-based approach of Positive Psychology provides a platform for discussion in this article.
Defining disability
Defining disability is controversial and difficult. The ICF views disability as a complex phenomenon and provides a multidimensional framework incorporating medical and rehabilitative interventions with environmental and social interventions in a more optimistic way. The advantage of this framework is that it incorporates all aspects of a person's life, including medical (body function and structure), social (ability to participate), environmental factors (the person within the context of his or her physical world) and personal factors (race, gender, age and education). Although the ICF-model attempts to be culturally neutral, the question is whether such neutrality is possible. Different kinds of impairments are understood differently, and have different consequences in different cultures (Brueggemann 2013;DWCPD 2013;Eide & Ingstad 2013;Watermeyer et al. 2006).
According to Bach (2013), 80% of people with (dis)abilities live in developing countries. Seeing that data collection is difficult, not much research has been possible. Eide and Ingstad (2013) report that there are substantial gaps in services for people with disabilities, and that disability is associated with a lower level of living. The fact remains there is a link between poverty and disability and that disability affects millions of families in developing countries (DSD et al. 2012;Eide & Ingstad 2011;Eide & Ingstad 2013;Filmer 2008;Priestley 2006;Watermeyer et al. 2006;WHO 2011). The construct of (dis)ability can be interpreted as a form of social inequality resulting from oppressive social structures rather than from individual difference or biology (Balcaza et al. 2009;Brueggemann, 2013;Eide & Ingstad, 2013;Priestley 2006).
The South African context
South Africa is described as a rainbow nation, which means that it is a home to a plethora of different cultures that influence the interpretation of (dis)ability. The perception of 'disability' is exacerbated in the South African context by historical and political structures. The history of South Africa is woven with tales of people in authority, dictating to others what they need, and what is good for them (Hansen & Sait 2011). The policy of segregation from the past, and the present Employment Equity Act have given rise to two approaches to (dis)ability in the South African context: the broad definition (disability as discrimination) and the narrow definition (disability as impairment) (Hansen & Sait 2011;Van Deventer 2011). These two approaches cloud perception and interfere with proper reporting, thus blurring quality data on disability in the South African context. Some people are unable to recognise and acknowledge that they have a disability. Others fear stigmatisation and consequently fail to report. This view is echoed by Van Deventer (2011), who suggests that statistics vary between 2% and 12%, and that there is disparity between all the organisations that present statistics regarding people with (dis)abilities who are employed. Despite commitments from the National Skills Development Strategy (NSDS) to increase opportunities for training and skills development for people with (dis)abilities, South Africa is still far from achieving its goal in this regard (Department of Higher Education and Training [DHET] 2010; Soudien & Baxen 2006). Despite the set target to employ a minimum of 3% of people with (dis)abilities, the figure for employed people with (dis)abilities dropped from almost 1% in 2009 to 0.5% in 2011 (Van Deventer 2011).
South Africa adopted a policy of inclusion in education and integrated learners with special needs into 'mainstream schooling'. This move has not always benefitted learners with special needs because of the diversity of difficulties, exacerbated by the fact that not all educators are trained to recognise and deal with (dis)abilities (Dalton, Mckenzie & Kahonde 2012). Data from 22 of the 23 public universities shows that 5807 students with disabilities were enrolled in higher education institutions in 2011, accounting for only 1% of total enrolment (DHET 2014). This low level of participation could be attributed to the limited opportunities for education and training. The World Health Organization Report (2011) states that: In South Africa it is thought that school attendance and completion are influenced by the belief of school administrators that disabled students do not have a future in higher education. (p. 216) However, it would appear that cognisance has been taken of this disturbing fact. Blade Nzimande, Minister of Higher Education and Training, speaking at the launch of a white paper on post-school education and training in January 2014, stated that: Despite attempts to integrate disability into the broader policy arena, currently there is no national policy on disability to guide education and training institutions in post-school domain. The management of disability in post-school education remains fragmented and separate to that of existing transformation and diversity programmes at the institutional level. Individual institutions determine unique ways in which to address disability, and resourcing is allocated within each institution according to their programme. Levels of commitment toward people with disability vary considerably between institutions, as do the resources allocated to addressing disability issues. TVET (Technical Vocational Education and Training) colleges in particular lack the capacity, or even the policies, to cater for students and staff with disabilities. (DHET 2014:8) This statement highlights the difficulties experienced within education and training facilities in South Africa. Furthermore, Nzimande stated that the Department of Higher Education and Training's disability funding was underutilised in 2010 and 2011 at levels of only 47% and 55% of available funding respectively (DHET 2014:8).
Motivation
There have been several calls for research to promote the rights and participation of disabled people in our society (AfriNEAD 2009;DHET 2014;DWCPD 2013;Eide & Ingstad 2013;WHO 2011). Research is urgently needed to move disability up the economic development agenda (World Bank 2000). The slogan Disabled People South Africa (DPSA) adopted is 'Nothing about us, without us'. Participation of people with (dis)abilities is regarded as an important aspect of the new paradigm on (dis)ability, and has important implications for the way in which research is done. One cannot overemphasise the need for collaboration between people who have (dis)abilities, professionals who assist people with (dis)abilities, civil society and state institutions. More knowledge and a new understanding are gained by engaging in conversation with people who have (dis)abilities. Oliver (2009) states that: If disabled people left it to others to write about disability, we would inevitably end up with inaccurate and distorted accounts of our experiences and inappropriate service provisions and professional practices based upon these inaccuracies and distortions. (p. 9) There is a danger that research may become oppressive. Therefore, the aim of this study is to engage in what is referred to as 'emancipatory research' using indigenous knowledge (Barnes & Mercer 1997;Barnes, Oliver & Barton 2002;Moore, Beazley & Maelzer 1998;Oliver 2009). The reality is that people with (dis)abilities are their own best advocates (Barnes & Mercer 1997;Barnes et al. 2002;Eide & Ingstad 2013;Filmer 2008;Oliver 2009;Watermeyer et al. 2006). I work as a psychologist assisting people with (dis)abilities to position themselves in the world of work. In response to a need to improve my way of working, and in an attempt to stimulate further conversation around effective policies and practices, I have undertaken this study. I could be classified as (dis)abled. However, I have offered resistance to this theoretical classification and have endeavoured to overcome difficulties that prevent me from doing the work I wish to do. In fact, perhaps the need to overcome these difficulties may inspire and assist the work I do.
Research methodology
This is a qualitative descriptive study using a narrative approach. This approach was chosen because it allows the experiences of people with (dis)abilities in the world of work to be described, thereby giving greater meaning to practice. Narrative inquiry has a distinguished history and is increasingly used in studies that describe social experience (Clandinin & Connelly 2000;Josselson & Lieblich 1995;Lieblich, Mashliach & Zibler 1998;Riessman 2008;Sandelowski 2000).
Narrative inquiry successfully captures personal and human dimensions that cannot be quantified into dry facts and numerical data (Clandinin & Connelly 2000). As a researcher, I seek credibility based on accountability, trustworthiness and dependability. A process of reflexivity was used, which makes the researcher aware of her own experiences, perceptions and interpretations and how these may influence the way she hears what the participants are telling. Ethical considerations governing this study include the following: • Participation in this project was entirely voluntary. • There was neither cost nor benefit for the participants.
• The participants were entitled to read the draft of this paper and make comments. • Confidentiality was and is respected at all times. Oliver (2009:5) states that 'the link between personal experience and what people write cannot be ignored and should not be denied'. A purposive sample was selected (N = 25 with 14 men and 11 women from diverse cultures). All the participants have received education and training in marketable skills. They are all gainfully employed and live in urban areas. The average age of the participants is 37.4 years; however, two participants did not reveal their age.
Describing the participants
Various categories of (dis)ability are represented, and often more than one (dis)ability is stated for each person. It is important to note that the primary (dis)ability is used to compile the frequencies. Eleven participants were born with the condition and 14 participants acquired their condition through illness, injury or traumatic life events:
• physical difficulties (N = 9) -four of these people have difficulty with mobility
• emotional difficulties (N = 5) -specifically depression and anxiety
• learning difficulties (N = 6) -three with Attention Deficit Hyperactivity Disorder (ADHD), two with Attention Deficit Disorder (ADD), and one person falling within the autism spectrum
• sensory impairment (N = 3) -two people with visual impairment and one with hearing impairment
• chronic illness (N = 2) -one person with epilepsy and one with rheumatoid arthritis.
Data collection
A biographical questionnaire was completed. Then, prompted by moderately structured open-ended questions, participants were invited to describe their experiences in the world of work. Some participants preferred to tell their experiences orally, others preferred to write down their responses.
Making meaning of the data
Qualitative thematic content analysis was used to identify the themes that emerged from the responses. Sometimes the themes overlapped and hence were modified in the course of analysis as it became necessary to accommodate new data and new insights (Lieblich et al. 1998;Riessman 2008;Sandelowski 2000). The themes used to describe the experiences of the participants were named, confirmed by counting and then summarised using descriptive statistics.
Located in a hermeneutic circle of re-interpretation, narratives with common story elements can be reasonably expected to change from telling to telling, making the idea of empirically validating them for consistency or stability completely alien to the concept of narrative truth (Clandinin & Connelly 2000;Josselson & Lieblich 1995;Lieblich et al. 1998;Riessman 2008;Sandelowski 2000).
Whilst acknowledging that no description is free of interpretation, it is also important to remember that although a particular theme was not mentioned, it does not mean that it was not experienced by the person. Therefore, it is difficult to offer absolute frequencies. Themes that emerged from each question are summarised below, illustrated with some direct quotations. A discussion reflecting these responses against the background of literature follows thereafter. The reader is invited to reflect his or her experiences against the experiences of the participants.
Descriptive summary of data
First question: How did you become employed or self-employed? Please describe the process More than one way of entering the world of work was mentioned by all the participants and frequencies are difficult to determine. There is nothing unusual about the approaches mentioned, however there were some interesting responses which are illustrated. The following themes emerged: • placement through agencies that specialise in placing people with (dis)abilities • finding employment through social networking • joining the family business • becoming self-employed.
Placement through agency
It was interesting to note that some of the participants experienced their interviews as hostile. It would appear that the prospective employer, and in one case the placement agency, was not sensitive to the unique needs of the participant, as illustrated by the following quotations: L: 'I applied for a job through a recruitment agency that got work for disable people. The job was to stand all day and greet people. Unfortunately that didn't last long as I am not able to stand all day because of rheumatoid arthritis.' Ca: 'Interview was difficult. All would go well in the interview until I mentioned I am hard-of-hearing. That word alone scared them.'
Social Networking
L: 'I was fortunate enough that a distant family member owned her own little electrical wholesale company and she offered me a job as a "girl Friday" and receptionist. I was very lucky as they gave me in-house training. I eventually worked my way up to being a debtors/creditors clerk. After that, it was a little easier getting jobs as I had extensive experience.'
Family business
R: 'After school I joined my father in his building business In 2008 we bought a farm where we farm with cattle and we still build occasionally.'
Self-employed
Three participants (12%) are self-employed. These participants had worked for a period of time within their field of interest before moving into a position of being self-employed: B: 'Whilst studying at university, I was offered a part time position at a media company during the university holiday. I enjoyed the work very much -more than the studying -as it was involved in a field I always wanted to be in. I was kept on as a freelancer for several years, performing multiple tasks in various different disciplines. After a few years working as a freelancer on a permanent contract, I decided to start my own company and have been working successfully on my own for the last six years.' Second question: What helped you to become and remain successfully employed?
The major themes that emerged from this question reflect many of the constructs described in the paradigm of Positive Psychology (Wehmeyer 2013a). The main themes were: • choosing or creating an enabling environment • self-determination and good work ethic • support structures (personal, organisational, environmental and spiritual) • self-knowledge.
Enabling environment
All the participants mentioned that they either chose or created an enabling environment:
Medication and support from professionals
All the participants received professional support in one form or another as they had all been diagnosed and categorised with a condition; some received medication. For others, it was counselling that assisted them. The following quotations illustrate the specific kind of assistance received from professionals: Lv: 'Ongoing support from a psychologist and medication.'
Self-determination and good work ethic
Self-determination was a theme mentioned by 76% of the participants and 76% of the participants (not always the same participants) also mentioned good work ethic:
Support structures
Sixty percent of the participants mentioned that they actively sought social support at work, which was helpful for them:
Support from family
Forty-four percent of the participants mentioned that social support from family and friends was helpful: Kh: 'My family especially my mother, husband and children played a major role in supporting me.'
Self-knowledge
Forty-four percent of participants commented that self-knowledge in terms of their own strengths and limitations assisted them to regulate and adapt their own behaviour so that they were able to work optimally.
One participant who struggles with ADHD runs his own business, with the assistance of a chartered accountant who takes care of all the financial running of the company. Another participant, who also struggles with ADD, runs his business without assistance and admits that it is disorganised and not as productive as it could be. The comparison of these two stories highlights the need for people to understand and accept their limitations, to have the courage to seek assistance, and then to focus on their strengths and interests: Only one participant commented on her access to technological assistance (JAWS speech reader) and how this technology assisted her once she had been trained to use it. Eide and Ingstad (2013) state that nearly half of those who need assistive technology devices do not have access to one. The question can be posed whether this means that technological assistance is scarce or whether the participants simply did not mention it: Jw: 'The only area that I needed extra assistance with was with entering marks onto the computer (because I hadn't yet been trained on the JAWS speech reader programme).'
Third question: What were the obstacles you had to overcome?
It would appear that the obstacles mentioned were influenced by the nature of the (dis)ability of the person: physical, sensory, environmental or social. Themes evident in these responses were: • special needs are not always met • negative perception of self • stigma and discriminatory practices • difficult experiences in education and training.
Special needs not always met
All of the participants who have mobility difficulties reported hostile environments and difficulties with access and transport. Furthermore, the participants with visual difficulties also commented on transport difficulties. The participant with a hearing impairment also expressed difficulty in her work environment: J: 'Accessibility was a big concern for me, even though companies say they are accessible, they really are not. You need to be in the situation to understand the obstacles.
Stigma and discriminatory practices
Twenty-eight percent of participants commented that the attitude of others (those with whom they work, and society in general) was difficult:
Negative self-perception
Sixty-four percent of the participants described their negative self-perception as an obstacle: H: 'I was afraid I wouldn't be good enough and that people wouldn't respect me. I had to put aside my fear of failure. My mood constantly affects those around me. It is not fair for me to expect others to support and help me "pick myself up". It is a consistent on-going struggle to remain positiveand it is hard work.' Kh: 'One has a fear of the unknown and what will people accept me and this paralyses you further. It is pretty tough to compete with abled people in the workplace because you constantly want to prove that disability is not an obstacle to deliver quality service.'
Difficult experiences in education and training
Two people with ADD (16%) commented about their learning experiences being difficult. Their teachers had not understood their specific needs and they had not reached their full academic potential. They were only diagnosed with the difficulty in adulthood. They had subsequently received treatment, which had been of benefit to them. These examples illustrate that education structures are not always supportive and enabling: M: 'I have only recently found out that I have ADD which explains a lot why business has not run as successfully as it should. The distraction of the ADD in my head has caused me to be disorganised and also why I didn't get a better qualification. I wish that I had known about this condition long ago.' B: 'Suffering from ADHD could be a debilitating problem in an unstimulating environment. As such, formal education could be regarded as problematic, as there is very little in the way of multiple thought paths.'
Fourth question: Please feel free to add anything more you may feel is of interest with regard to you becoming gainfully employed despite the fact that you are differently abled
The following themes emerged:
• lack of awareness and the need to educate others about (dis)ability
• defying the disablist attitude
• (dis)ability has proved to be a strength.
Lack of awareness and the need to educate others about (dis)ability
Fifty-two percent of participants commented that they needed to educate those with whom they worked and others in their social network about their special needs and how they can best be helped:
Defying the disablist attitude
Forty-eight percent of the participants commented on how their own change in attitude assisted them. They described how channelling their (dis)ability into a strength, and offering resistance to their own disablist attitude, enabled them towards success: Kh: [After the accident] 'My doctor declared me as incapacitated and unable to go back to work again as he alleged that I was 100% disabled. I refused to believe him and I did not submit the "medically unfit" certificate to my employer.' P: 'My philosophy was that I never saw myself as disabled and this resulted in the people I worked with very quickly becoming 'blind' to my disability.' Ca: 'Being hard of hearing wasn't going to prevent me from becoming the person I wanted to be, a teacher.'
(Dis)ability as a strength
Twelve percent of the participants consider their disability to be a strength: B: 'In my field, and specifically running my own business, one needs to be able to concentrate on a variety of things simultaneously, and my "disability" actually serves me well in terms of when one task is getting boring or running smoothly, my mind moves over to another, and I can jump from one task to another without interrupting a train of thought. In my industry, we're permanently moving, changing environments and facing different problems with each new setup. There is nothing mundane about it, and there is no routine. The lack of routine could adversely affect people who require stability in order to function, but it's the very thing that makes it easier for me to tackle, as there is always something new to stimulate the mind.' G: 'The energy -be it physical or mental that you once were made to believe was a disability will help you to think on your feet, be seen as someone who thinks "outside the box" and people will tell you, you are different. You'll know they mean it in a good way.' R: 'The point I am trying to make is that many autistic spectrum people usually have great genius in certain areas.
In fact, the imbalance caused by the condition and the genius seem too often go together. It seems as if the imbalance in abilities may lead them to brilliance in others.'
Limitations of study
I acknowledge that this study has limitations. Firstly, all the participants have had, or are receiving, the benefit of appropriate education and training, and live in urban areas. Thus, they may be considered to be the privileged few. Secondly, this study was undertaken out of personal interest to develop better strategies to inform my practice of work with people who have special needs. Therefore, it is a microview of (dis)ability within the workplace. Finally, it is important to note that whilst acknowledging that HIV and/or AIDS is a very important dimension of (dis)ability, especially in the South African context, I have set this aspect aside in an attempt to narrow the scope of this study. I believe that the impact of HIV and/or AIDS on success in the workplace is worthy of an independent study.
Policy
Global awareness of disability is increasing. The United Nations Convention on the Rights of Persons with Disabilities (CRPD) specifically refers to the importance of international development in addressing the rights of people with disabilities and promotes their unrestricted integration in society. Despite the fact that South Africa has world class policies of good practice and has ratified the Convention on Rights of Persons with Disabilities (United Nations 2008; World Bank 2014), the plan of action to implement these policies is sometimes inadequate. The reality is that rights do not automatically enable people to live better lives.
One of the primary objectives of the Disability Policy Guideline (Department of Public Works 2010) was to encourage a tangible shift from policy to practice. The experiences described by the participants in this study illustrate that policy has not effectively influenced practice. Perhaps state institutions do not yet have the capacity and skills needed to action these policies (Dalton et al. 2012;Eide & Ingstad 2013;DHET 2014;DSD, DWCPD & UNICEF 2012;Van Deventer 2011). Hence, as members of society it is necessary for us all to work in our communities to raise awareness and provide appropriate support structures for people with (dis)abilities. Comprehensive psychoeducational programmes offered to all stakeholders would be of benefit in creating effective practice.
Support structures
People do not exist in isolation; each individual is a member of a family and social community. There is abundant literature focussing on people with (dis)abilities and their families, how they interact with their environment and society, as well as their need for support (Buntinx 2013; Charlton 1998; Ingstad & Whyte 1995, 2007; Moore et al. 1998; Rocco 2011; Stone 2005; Watermeyer et al. 2006). The responses from the participants in this study confirm the need for support structures, and illustrate how people with (dis)abilities who receive support from family, friends and colleagues are enabled and thus become successful.
Despite the recommendation made in 'Article 8 of CRPD', which addresses Awareness Training, there is still evidence in the responses of a lack of appropriate training in society about disabilities and practical support. Buntinx (2013:13) defines support systems as 'resources and strategies that aim to promote the development, education, interests and personal well-being of a person and enhance individual functioning'. The responses from the participants in this study illustrate that (dis)ability is a community endeavour, which requires a multidimensional and multidisciplinary approach. Awareness Training, using specific psychoeducation programmes, could be provided for each person with a (dis)ability, as well as their families and prospective employers, who are their primary support system.
There is also evidence that the role of professionals who work with people with (dis)abilities is most useful when they identify unique strengths and develop individualised strategies to enhance the functioning of each person. This echoes Bach's (2007) view that the role of the professional is not to determine if, but how, people with (dis)abilities can live meaningfully and productively in a community. Buntinx (2013:15) suggests a four-phase approach which may be useful:
• assessment of individual strengths
• assessment of the person's subjective expectations and objective needs
• linking personal goals to a range of related resources and action strategies
• evaluation of support outcomes.
Education and training
Taking into account 'Article 24 of CRPD', which addresses education, and despite commitments from the National Skills Development Strategy (NSDS) to increase opportunities for training and skills development for people with (dis)abilities, South Africa is still far from achieving its goals in this regard (AfriNEAD 2009; DHET 2014; DWCPD 2013; WHO 2011). The DHET acknowledges the continued difficulty in providing sufficient capacity to accommodate and serve students with (dis)abilities, despite the fact that they have committed to making funding available. Clearly, more than funding is required to ameliorate the difficulties experienced by people with (dis)abilities in education and training.
Blade Nzimande continues to call for an adequate policy framework. He states that:

'A strategic policy framework is necessary to guide the improvement of access to and success in post-school education and training (including in private institutions) for people with disabilities. The framework will create an enabling and empowering environment across the system. The framework will set norms and standards for the integration of students and staff with disabilities in all aspects of university or college life, including academic life, culture, sport and accommodation.' (DHET 2014:8)

General psycho-education programmes would stimulate more knowledge about the needs of people with (dis)abilities and bring about a change in attitude. Oliver (2009), amongst others, speaks about the 'disablist' attitude, which he describes as particularly disempowering. One of the six principles of Critical Disability Theory is that 'Ableism is invisible' (Rocco 2011:7). Therefore, it is imperative that people with (dis)abilities demonstrate their 'ableism' in order to be recognised. I wish to argue that people may not be able to demonstrate their 'ableism' if they struggle with self-esteem and are not recognised and encouraged to reach their potential.
Change in attitude for the individual and society
All people entering the world of work benefit from having self-knowledge and being able to identify their natural talents, accept their limitations, and acquire market related skills (Marsay 2008). Several responses from participants in this study vividly illustrate how their (dis)ability can in fact be used as a strength. Indeed, many of the success stories told by these participants pivot on their ability to offer resistance to a 'disabilist' attitude. I wish to argue that many people who have (dis)abilities can be very competent members of the workforce if they are enabled to identify and develop their unique talent. Assisting people to establish positive self-regard, to see their intrinsic self-worth and to know their strengths and limitations is a priority. Wehmeyer and Little (2013:119) explain that people who are able to use accurate knowledge of themselves, value themselves and who know their strengths and weaknesses are able to capitalise on their knowledge. Eide and Ingstad (2013) state that women with disabilities are worse off than men. Wehmeyer and Little (2013:125) describe findings of research studies which indicate that males show a higher degree of self-determination than females in certain cultures and societies. Could it be that in Africa, gender inequality may exacerbate the outcomes for people with (dis)abilities, especially women? Is there a link between a positive attitude of self, regard from others, and the ability to be self-determined? Eide and Ingstad (2013) discuss several issues which make it difficult for people with disabilities to live well. However, they note that many individuals with (dis)abilities still manage. I wish to argue, based on evidence discussed in literature and supported by the themes exposed in this study, that self-determination is crucial to the success of people with (dis)abilities.
Self-determination
According to Wehmeyer and Little (2013), self-determination actions are identified by four essential characteristics:
• a person acts autonomously
• behaviour is self-regulated
• the person initiates and responds to the event(s) in a psychologically empowered manner
• the person acts in a self-realising manner (p. 119).
Many of these constructs seem to be part of the fabric of the experiences told by the participants in this study, thereby highlighting these actions of self-determination. Wehmeyer and Little (2013) advocate that self-determination can be learned. They suggest that further research around appropriate interventions to develop and investigate self-determination would be useful in moving forward.
Self-determination is an essential part of success in the workplace for people who have (dis)abilities and is a product of both the person and the environment. According to Wehmeyer and Little (2013:121), 'Self-determination is affected by environmental variables as well as by the knowledge, skills and beliefs expressed by the individual'. Hence, it is necessary to empower people who have (dis)abilities with essential self-determination skills and assist them to seek out and create environments that offer the opportunity to actualise their potential, using specific psychoeducation programmes.
Enabling environment
The responses from the participants in this study illustrate how people with (dis)abilities may not always be treated with sufficient knowledge, understanding or respect for their unique needs. Furthermore, it would appear that work environments continue to present accessibility difficulties. 'Article 9 of CRPD' addresses accessibility. Eide and Ingstad (2011:139) refer to 'structural violence' which includes not only buildings that are not easily accessible for those with disabilities, but also structures like the natural terrain that is inaccessible. These structures are not violent themselves, but become adversarial when nothing is done to overcome them as barriers. Rocco (2011:6) suggests that the environment becomes disabling when spaces are created without regard to the needs of people with (dis)abilities. Therefore, once ubiquitous accessibility needs have been met, employers and employees need to communicate and collaborate to address specific (dis)abilities. It is necessary for employers to understand the specific needs of each individual, rather than making assumptions. In addition, it is necessary for people with (dis)abilities to be enabled to communicate their needs with confidence. Furthermore, it is essential for people with (dis)abilities to make their unique abilities visible to others.
It is interesting to note that whilst the majority of participants who applied for employment through a specialist agency were successful, some placements were unsuccessful. The participant who suffers from rheumatoid arthritis describes how she was required to work in a position she was physically unable to perform. On the other hand, it is encouraging to note that another participant, who struggles with paraplegia, was able to qualify as a medical doctor due to understanding and accommodation of her condition. Thus, it may be useful for employers to adopt an all-encompassing biopsychosocial approach to disability, to pay keen attention to the ergonomics of the workplace and the surrounding area, as well as to make provision for special needs surrounding transport for people with (dis)abilities.
Clearly, there is a need for both general and specific psychoeducation: for prospective employers, with regard to how they can adapt the environment to suit the unique needs of a person with (dis)abilities, and for the person with (dis)abilities himself or herself. The Supports Intensity Scale (Thompson et al. 2004) may be a useful tool to ascertain the specific needs of each person and can form the basis for an Individualised Support Plan (ISP) (Buntinx 2013).
Conclusion
'Article 27 of CRPD' states that people are entitled to participate fully in the world of work. The stories of success described by the participants in this study highlight the following areas to be considered:
• Effective education and training is necessary to equip people who have (dis)abilities with appropriate marketable skills.
• Self-determination skills are essential to success and can be learned. Therefore, these skills need to be developed as part of specific psycho-educational intervention plans.
• Biopsychosocial support structures, including attention to creating enabling work environments, are essential for people with (dis)abilities to live, learn, work and play.
• It is necessary to find ways to co-construct an 'ableist' attitude in society. I wish to argue that more employment opportunities for people with (dis)abilities would stimulate hope for those who struggle with (dis)abilities and would work towards co-constructing an 'ableist' attitude in society.
• Both general and specific psycho-education programmes for employers, families and people with (dis)abilities, focused on how to address and support the needs of people with (dis)abilities, are among the most important approaches for ameliorating environmental and social obstacles.
It would appear that, despite good intentions and altruistic policies, much remains to be done to put good practice into action. This study describes the experiences of people who have had access to education and who live in urban areas. If they describe inadequate practice, one must ask how much worse conditions are in rural areas, where people have limited access to education and health care services.
If we believe that the 'cure' to the problem of disability lies in exploring and making use of the strengths of the (dis)abled person as well as the restructuring of society's attitude towards (dis)ability, then all people need to be trained according to their abilities, and it is the responsibility of the entire community to work towards providing enabling, individualised support structures.
"Philosophy"
] |
Depositing a Titanium Coating on the Lithium Neutron Production Target by Magnetron Sputtering Technology
Lithium (Li) is one of the commonly used target materials for compact accelerator-based neutron sources (CANS), generating neutrons by the 7Li(p,n)7Be reaction. To avoid the neutron yield decline caused by the lithium target reacting with the air, a titanium (Ti) coating was deposited on the lithium target by magnetron sputtering technology. The color change processes of coated and bare lithium samples in the air were observed and compared to infer the chemical state of lithium qualitatively. The surface topography, thickness, and element distribution of the coating were characterized by SEM, EDS and XPS. The compositions of the samples were inferred from their XRD patterns. It was found that a Ti coating with a thickness of about 200 nanometers could effectively isolate lithium from air and stabilize its chemical state in the atmosphere for at least nine hours. Monte Carlo simulations were performed to estimate the effects of the Ti coating on the incident protons and the neutron yield; these effects turned out to be negligible. This research indicates that depositing a thin titanium coating on the lithium target is a feasible and effective way to protect it from compound formation during short exposures to air. Such a target can be installed and replaced on an accelerator beam line directly in the air.
Introduction
Compact accelerator-based neutron sources (CANS) are widely used in neutron imaging [1][2][3], boron neutron capture therapy [4][5][6], and other fields. Lithium (Li) is one of the commonly used target materials for CANS to produce neutrons by the 7Li(p,n)7Be reaction [7][8][9]. However, like other alkali metals, lithium reacts easily with moisture and oxygen. It forms lithium hydroxide (LiOH and LiOH·H2O) and other compounds immediately upon exposure to the air [10]. The formation of these compounds causes a decline in neutron yield. On the other hand, lithium has a low melting point of 180.6 °C. The reaction generates a lot of heat, which may cause lithium evaporation [11][12][13]. To overcome the high chemical activity of lithium, X.-L. Zhou et al. studied the substitution of pure lithium with lithium compounds, such as LiF, Li2O, LiH, LiOH, and Li3N, and found that the neutron yields decreased by 30% or more [14]. To tackle lithium evaporation, S. Ishiyama et al. synthesized lithium nitride on the surface of the lithium target by in situ nitridation techniques, exploiting its thermal stability of up to 1086 K [15,16]. However, they did not characterize the chemical state of the lithium after the nitrided target was exposed to the air, and the anti-evaporation effect still needs to be tested by proton irradiation experiments. To solve the problems mentioned above, Yoshiaki Kiyanagi et al. attached a thin metal plate to the target by hot isostatic pressing to make a sealed lithium target [17,18]. The plate, with a thickness of a few microns, could keep lithium away from the air and confine Li and Be-7 within the target. They performed lithium sealing tests and observed no damage on the Ti foil after proton beam irradiation. However, for a given proton energy, the metal plate causes an inevitable energy loss for incident protons, which leads to a decline in the neutron yield.
To protect lithium from the air, we considered depositing a thin, anticorrosion metal coating on the lithium target. Such a lithium target would avoid compound formation while decreasing the neutron yield as little as possible. The coating could also prevent lithium evaporation. Physical vapor deposition (PVD) is a simple, low-cost, and green technology that has been widely used to coat metals and alloys with protective films [19,20]. PVD includes magnetron sputtering, vacuum evaporation, ion plating, etc. Magnetron sputtering was selected in this study for its advantages of high speed, outstanding adhesion, easy control of film thickness, and good film-forming properties [21,22]. Besides, its low deposition temperature prevents lithium melting during the coating process. As for the coating material, we compared aluminum [23], chromium [24], and titanium [25][26][27][28]. Titanium (Ti) was finally selected in this research for its high mechanical strength, good thermal stability, and excellent corrosion resistance.
In this research, a titanium coating about 200 nanometers thick was deposited on the lithium target by magnetron sputtering technology. The color change processes of coated and bare lithium samples in the air were observed and compared to infer the chemical state of lithium qualitatively. The chemical compositions of the coating were analyzed by X-ray photoelectron spectroscopy (XPS). The surface topography and thickness of the coating were characterized by scanning electron microscopy (SEM). The Ti element distribution on the surface of the coated samples was scanned by energy dispersive spectroscopy (EDS). The lithium compounds were inferred from their X-ray diffraction (XRD) patterns. Monte Carlo simulations were made to estimate the effects of the Ti coating on incident protons and the neutron yield.

Figure 1 is a schematic diagram of a lithium sample (1 cm × 1 cm × 190 µm thick) used in this work. It consisted of a lithium layer (Tianqi Lithium, Inc., Chongqing, China) with a thickness of 90 µm and a tantalum substrate (Junuo Metal Co., Ltd., Baoji, China) with a thickness of 100 µm, which were combined by a rolling process. Five identical lithium samples and one silicon sample were used in this work, numbered 1-6. Their components, treatment, and characterization are listed in Table 1. Sample (1) remained an uncoated control. Samples (2)-(6) were all subjected to the same coating treatment. The substrate of sample (6) was cut from p-type (100) Si wafers. It was ultrasonically cleaned in acetone, absolute ethyl alcohol, and deionized water, respectively, for 20 min before coating deposition. Compared with the Li/Ta substrate, the silicon substrate is brittle, which makes it much easier to obtain a cross section for the coating thickness measurement. Therefore, we used sample (6) as a reference to assess the average thickness of the Ti coating on samples (2)-(5).
Magnetron Sputtering
A DC magnetron sputtering system (Chuangshiweina Technology Co., Ltd., Beijing, China) with a titanium target (φ75 mm × 5 mm) was used to synthesize the Ti coating on samples (2)-(6). The purity of the target was over 99.99%. The Ti coating was deposited on the lithium surface of samples (2)-(5). To keep the control group consistent, sample (1) was also placed in the coating chamber but left uncoated. The deposition was carried out at room temperature (25 °C), and the maximum temperature of the substrate during the sputtering process was 50 °C, much lower than the melting point of lithium (180.6 °C). The DC power, bias, argon flow, working pressure, and base pressure were set to about 100 W (320 mA, 310 V), −70 V, 30 sccm, 0.5 Pa, and 6 × 10−4 Pa, respectively. The distance between the target and the substrate was about 10 cm. The deposition time in this experiment was 55 min. The film thickness was approximately proportional to the sputtering time under constant pressure and sputtering current, so the thickness could be easily controlled by adjusting the deposition time [26].
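The linear thickness-versus-time relationship described above can be used to plan a deposition run. The sketch below illustrates that relationship using only the calibration point reported in this work (about 200 nm in 55 min); the function names and the example target thickness are illustrative, not part of the original study.

```python
# Minimal sketch of the linear thickness-vs-time model described above.
# The 200 nm / 55 min calibration point comes from this work; the helper
# names and the example prediction are illustrative only.

CALIBRATION_THICKNESS_NM = 200.0   # thickness measured on the Si reference sample
CALIBRATION_TIME_MIN = 55.0        # deposition time used in this work

def deposition_rate_nm_per_min() -> float:
    """Deposition rate, assuming thickness grows linearly with sputtering time."""
    return CALIBRATION_THICKNESS_NM / CALIBRATION_TIME_MIN

def time_for_thickness(target_nm: float) -> float:
    """Sputtering time (min) needed to reach a desired coating thickness."""
    return target_nm / deposition_rate_nm_per_min()

if __name__ == "__main__":
    rate = deposition_rate_nm_per_min()          # ~3.6 nm/min
    print(f"Estimated rate: {rate:.2f} nm/min")
    print(f"Time for a 100 nm coating: {time_for_thickness(100):.1f} min")
```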
Air Exposure Process
According to references [10,16,29], lithium tarnishes rapidly in moist air, forming a black layer of lithium hydroxide, lithium nitride, and lithium carbonate, and then slowly turns white. We could therefore infer qualitatively from its color change whether the lithium had reacted with air. After being coated, sample (2) and sample (3) were exposed to the air at a relative humidity of 50% and a temperature of 25 °C. Sample (1) was subjected to the same exposure process at the same time. Sample (2) was scratched with tweezers at different positions every 3 h to remove the Ti coating and thus expose the lithium under the film. The color changes of the samples and the scratches were observed and recorded. X-ray diffraction was performed on sample (3) at regular intervals during its exposure to determine whether the titanium-coated lithium had reacted with air and what the reaction products were.
Characterization and Analysis
The chemical composition of the surface of sample (4) was studied immediately after coating by an ESCALAB 250 XPS (Thermo, Waltham, MA, USA) with Al Kα X-rays. The binding energy was calibrated to the C 1s peak at 284.8 eV. The surface morphologies of samples (1) and (5) and the cross-sectional morphology of the silicon sample (6) were captured by a 7800F Schottky field-emission SEM (JEOL, Tokyo, Japan). The titanium distribution on the surface of sample (5) was observed by an Oxford EDS (Oxfordshire, UK). To verify the composition change of samples (1) and (3) after exposure to the air, X-ray powder diffraction was performed on them with a PANalytical X'Pert Pro diffractometer (Almelo, The Netherlands) using Cu Kα radiation. Composition information could be inferred from the XRD patterns, since the possible products of the reaction between lithium and air are known [10,16,29]. The reason we used XRD instead of XPS is that the latter can only be performed in a high-vacuum environment; we found that lithium samples that had reacted with air outgassed considerably under low pressure, which made it difficult and time-consuming to reach vacuum conditions. By contrast, the XRD test could be performed in the atmosphere.
Monte Carlo Simulation
Theoretically, the titanium film in front of the lithium will cause an energy loss for the protons, which leads to a decrease of the neutron yield. In order to explore these impacts, taking 2.5 MeV incident protons as an example, Monte Carlo simulations were made with the Stopping and Range of Ions in Matter (SRIM) [30] code and the Monte Carlo N-Particle (MCNP) [31] code version 6. The calculation model consisted of a beam of 2.5 MeV protons and a two-layer target comprising a titanium layer and a lithium layer. The proton beam was perpendicular to the target surface. The lithium density was set as 0.534 g/cm3. The titanium thickness ranged from 0 to 1000 nm. The thickness of lithium was kept constant at 90 µm, which is sufficient to reduce the mean energy of the proton beam from 2.5 MeV to about 1.88 MeV, the 7Li(p,n)7Be reaction threshold [32]. The relationship among proton energy loss, neutron yield, and titanium thickness was investigated.

Color Change Results

Figure 2 shows photographs of sample (1) and sample (2). Sample (1) in the air changed from metallic silver to homogeneous dark gray in a few minutes, and was dappled with black and white one hour later. After three hours, it gradually turned off-white, then slowly turned pure white, and stopped changing afterwards. This color change was due to the reaction of lithium with air [10]. Samples (2) and (3) changed from metallic silver to uniform orange, brown, and gray, successively, within a few minutes, and then stopped changing. This change was probably caused by the reaction of the titanium coating with air. Three hours later, sample (2) was scratched with tweezers, and it was found that the coating had outstanding adhesion to the lithium substrate. The scratch was silver at first, then quickly became dark gray, and gradually turned off-white. This process was very similar to that of the uncoated lithium sample (1) in the air. It can be inferred that the coated lithium of sample (2) had not deteriorated after being exposed to the air for 3 h. The same scratch test was performed on sample (2) every 3 h, and we found that after 9 h of exposure the color change of the scratch was still consistent with the above observations. Figure 2. Photographs of sample (1) and sample (2): (a) sample (1) after being exposed to the air for a minute; (b) sample (2) after being exposed to the air for a minute.
XPS Results
After being coated, sample (4) was immediately transferred into the XPS chamber in situ. A wide scan was performed to survey the chemical elements of the sample surface and check whether the lithium was evenly covered with the Ti coating. The scanned area was located in the middle of the sample. The XPS spectra are shown in Figure 3, with the 20-80 eV region shown as an inset. Ti peaks were observed as expected. The Li 1s peak is located around 54 eV [13,33], and no obvious peak was found in this region, which means that a titanium coating was successfully deposited on the lithium surface and the lithium was completely covered. In view of the inelastic scattering mean free path (IMFP) of the photoelectrons [34,35], the Ti coating was considerably thicker than a few nanometers. In addition, an oxygen peak also appeared. This was because the titanium target was inevitably oxidized slightly when it was assembled in the deposition chamber; it may also have been caused by trace air in the chamber.
SEM and EDS Results
The surface morphologies of samples (1), (5), and (6) are shown in Figure 4a-c, respectively. Sample (1)'s surface was rough and pitted with fissures of varying sizes. Sample (5)'s surface was undulating, but smoother than that of (1), with fewer fissures. The surface of sample (6) was very flat, with no fissures. The difference in flatness between sample (5) and sample (6) was caused by their different substrates: the surface of the silicon substrate was mirror-like and flatter than that of the lithium substrates. A grainy structure was observable on the surfaces of sample (5) and sample (6), but the grain size of (5) was larger than that of (6). This difference was also caused by the different surface roughness of their substrates, which affected the nucleation and microstructural growth of the Ti coating [36,37]. Nonetheless, the surface morphology of sample (5) was obviously different from that of sample (1) but similar to that of sample (6). EDS mapping was performed on sample (5) to examine the Ti element distribution; the scanned area was the same as that shown in Figure 4b. Ti was found to be uniformly distributed on the surface of sample (5), as shown in Figure 5a. To estimate the Ti coating thickness, an SEM cross section of sample (6) was photographed, as shown in Figure 5b; the thickness was about 200 nanometers. From the XPS, SEM, and EDS results, it was concluded that the titanium coating formed well on the lithium sample surface, and that the surfaces of samples (2)-(6) were successfully covered with a uniform titanium film about 200 nm thick.
XRD Results
An X-ray diffraction analysis was performed on sample (1) after it was exposed to the air for 3 h. The patterns are shown in Figure 6, with obvious diffraction peaks of lithium hydroxide and weak diffraction peaks of tantalum, indicating that sample (1), after three hours of exposure in the air, had reacted with air to form LiOH, which is why its color turned from metallic silver to off-white. The scratch test in Section 3.1 suggested that the lithium of sample (2), after nine hours of exposure, had not reacted with air. To verify this, XRD was performed on sample (3) at this time. The pattern is shown in Figure 6, from which it can be seen that, except for strong diffraction peaks of lithium, no LiOH peak appeared. That means the lithium was well protected during the first 9 h of exposure in the air. Some weak peaks were also found, corresponding to titanium or tantalum. To tell which, XRD was also performed on sample (6), and no Ti peak was found; the weak peaks were therefore Ta peaks, and the Ti coating we prepared was amorphous. After being exposed to the air for 17, 26, and 46 h, sample (3) was characterized by XRD three more times, with the results shown in Figure 6. The patterns show that, as the exposure time increased, the lithium peaks weakened and LiOH peaks appeared and grew gradually. After 46 h of exposure, the XRD pattern of sample (3) was quite similar to that of sample (1); both showed strong lithium hydroxide peaks and no lithium peak, meaning the lithium had completely reacted. In summary, the LiOH peaks of sample (1) were already obvious after three hours and no lithium peaks remained, whereas after nine hours only lithium and tantalum peaks could be found in the diffraction pattern of sample (3); thereafter, LiOH peaks began to appear and increased gradually with the exposure time. These results indicate that sample (3) stayed stable for the first nine hours and gradually deteriorated afterwards when exposed to the air.
Monte Carlo Simulation Results
Taking a 2.5 MeV incident proton as an example, the relationship between the thickness of the titanium coating and the energy loss of the proton was calculated with the SRIM code, as shown in Figure 7a. Within the studied range, the proton energy loss increased linearly with the thickness of the titanium coating. For a 200-nm Ti coating, the corresponding energy loss of a 2.5 MeV incident proton was about 7 keV, accounting for about 3‰ of the total. So, theoretically, the energy loss caused by a titanium film of a few hundred nanometers is very small. The thinner the coating, the smaller the effect on the protons; however, a coating might be too thin to isolate the lithium target from the air. Conversely, provided the lithium is effectively protected, an over-thick coating is unnecessary, as it would cause a larger energy loss. The neutron yield of the lithium target with a Ti film coated on one side was calculated by MCNP6, with the result shown in Figure 7b. As the thickness of the titanium film increased, the neutron yield gradually decreased. The neutron yield of a lithium target with a 200-nm Ti coating was 99.86% of that of pure lithium, so the effect of a thin Ti coating on the neutron yield is extremely limited. To sum up, when 2.5 MeV protons bombard a 200-nm Ti-coated lithium target, the effects of the Ti coating on the protons and the neutron yield are negligible.
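For readers who want a quick sanity check of the reported numbers, the following back-of-the-envelope sketch estimates the thin-film energy loss assuming a constant mass stopping power for 2.5 MeV protons in titanium. The stopping-power value is an assumed figure of the order found in standard tables, not taken from this paper, and the sketch is no substitute for the SRIM/MCNP calculations actually used; it merely reproduces the order of magnitude of the reported ~7 keV (~3‰) loss for a 200-nm film.

```python
# Back-of-the-envelope check of the thin-film energy loss, NOT a substitute
# for the SRIM/MCNP calculations used in the paper. The mass stopping power
# below is an assumed value of the order reported in standard tables for
# ~2.5 MeV protons in titanium; treat it as illustrative.

TI_DENSITY_G_CM3 = 4.5                 # titanium density
MASS_STOPPING_POWER_MEV_CM2_G = 73.0   # assumed, ~2.5 MeV protons in Ti
PROTON_ENERGY_MEV = 2.5

def energy_loss_kev(thickness_nm: float) -> float:
    """Energy lost by a proton crossing a Ti film, constant-dE/dx (thin-film) limit."""
    thickness_cm = thickness_nm * 1e-7
    loss_mev = MASS_STOPPING_POWER_MEV_CM2_G * TI_DENSITY_G_CM3 * thickness_cm
    return loss_mev * 1e3

if __name__ == "__main__":
    for t_nm in (100, 200, 500, 1000):
        loss = energy_loss_kev(t_nm)
        fraction = loss / (PROTON_ENERGY_MEV * 1e3)
        print(f"{t_nm:5d} nm Ti: dE ~ {loss:5.1f} keV ({fraction * 1e3:.1f} per mille)")
```

For 200 nm this gives roughly 6-7 keV, consistent with the SRIM result quoted above.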
Conclusions
In this paper, research on a thin anticorrosion coating for the lithium target of a CANS was carried out. This is the first time that a coating has been deposited on the lithium target by magnetron sputtering technology. A bare lithium sample and five titanium-coated ones were studied. The corrosion of bare lithium in the air was assessed qualitatively from its color change. The surface chemical elements, morphology, and compositions of the samples were characterized by XPS, EDS, SEM, and XRD measurements to evaluate the protective effect of the Ti coating on the Li target. The influence of the Ti coating on the incident proton beam and the neutron yield was estimated by Monte Carlo simulation. Based on the above work, the following results were obtained: (1) The corrosion of bare lithium in the air happened quickly, and the corrosion product after 3 h of exposure was mainly LiOH.
(2) A thin Ti anticorrosion coating can be deposited on the Li target surface by magnetron sputtering technology.
(3) A 200-nm Ti coating can effectively isolate Li from the air and stabilize its chemical state for at least 9 h, at a relative humidity of 50% and a temperature of 25 °C.
(4) Taking 2.5 MeV incident protons as an example, the simulation showed that the energy loss rate of a 200-nm Ti film for protons was about 3‰, and the reduction of the neutron yield was less than 2‰.
From these results, it can be concluded that depositing a thin titanium coating on the lithium target by magnetron sputtering technology is feasible. Such a thin Ti coating is effective in preventing lithium from reacting with air over a short period. The influence of a 200-nm Ti coating on the incident protons and the neutron yield can be ignored. The lithium target coated with titanium is more convenient to store and transport than a bare lithium target, and it can be installed and replaced on an accelerator beam line directly in the air, instead of requiring a vacuum or an ultra-low-humidity environment. Furthermore, the titanium coating may also be able to prevent lithium evaporation and seal in the radionuclide Be-7 produced by the 7Li(p,n)7Be reaction, which needs to be verified in further experiments.
"Materials Science",
"Engineering",
"Physics"
] |
Lithium treatment reverses irradiation-induced changes in rodent neural progenitors
Cranial radiotherapy in children has detrimental effects on cognition, mood, and social competence in young cancer survivors. Treatments harnessing hippocampal neurogenesis are currently of great relevance in this context, and we previously showed that voluntary running introduced long after irradiation rescued hippocampal neurogenesis in young mice (Naylor et al. 2008a). Lithium, a well-known mood stabilizer, has neuroprotective, pro-neurogenic, and anti-tumor effects, and in the current study we introduced lithium treatment 4 weeks after irradiation, analogous to the voluntary running study. Female mice received a single 4 Gy whole-brain irradiation dose at postnatal day (PND) 21 and were randomized to 0.24% Li2CO3 chow or normal chow from PND 49 to 77. Hippocampal neurogenesis was assessed at PND 77, 91 and 105. We found that lithium treatment had a pro-proliferative effect on neural progenitors and promoted neuronal integration upon its discontinuation. Gene expression profiling and DNA methylation analysis identified two novel factors related to the observed effects: Tppp, associated with proliferation, and GAD2/65, associated with neuronal signaling. Our results show that lithium treatment reverses irradiation-induced impairment of hippocampal neurogenesis even when introduced long after the injury. We propose that lithium treatment should be intermittent in order to first make neural progenitors proliferate and then, upon discontinuation, allow them to differentiate. Our findings suggest that pharmacological treatment of cognitive so-called late effects in childhood cancer survivors is possible.
INTRODUCTION
Dramatic improvements in childhood cancer survival rates have been made in the last decades (Gatta et al. 2014), owing to great strides in treatment.
The treatments encompass a combination of surgery, chemotherapy and radiotherapy, with the recent addition of immunotherapy. However, the growing population of survivors often has to face therapy-related morbidity (Spiegler et al. 2004). Radiotherapy is known to cause debilitating cognitive alterations (Georg Kuhn and Blomgren 2011) leading to impaired processing speed, attention and working memory, further impinging on emotional and psychological well-being and ultimately leading to anxiety and posttraumatic stress-like symptoms (PTSS) (Marusak et al. 2017). Declines in IQ and academic achievement have been observed during longitudinal follow-up, and this impacts quality of life, school attendance and overall daily activities (Janelsins et al. 2011).
Different mechanisms have been implicated in the cognitive changes observed in patients treated with radiotherapy (Davis et al. 2013). Increased levels of cytokines seem to mediate some of these changes, as do direct and indirect DNA damage, endocrine dysfunction, activation of microglia and astrocytes, hypomyelination and decreased neurogenesis (Janelsins et al. 2011) (di Fagagna et al. 2003) (Monje et al. 2002). In particular, neurogenic regions, which harbor proliferating cells, display a higher sensitivity to irradiation, as seen in rodent models and in humans (Fukuda et al. 2004) (Limoli et al. 2004). Irradiation, even a single moderate dose, was shown to cause apoptosis and a progressive decline in neurogenesis in young rats and mice, resulting in severe cognitive decline (Fukuda et al. 2004) (Boström et al. 2013). Adult neurogenesis persists throughout life, mainly in the subgranular zone (SGZ) of the hippocampus and the subventricular zone (SVZ) of the lateral ventricles (Altman and Das 1965). These regions harbor neural stem and progenitor cells (NSPCs) that divide continuously and give rise to newborn neurons. This is believed to contribute substantially to hippocampal plasticity, especially learning, memory and mood regulation (Shors et al. 2002) (Deng, Aimone, and Gage 2010) (Yun et al. 2016).
Lithium, commonly used in the treatment of bipolar disorder, has been shown to exert neuroprotective and regenerative effects in a variety of neurological insults (Shorter 2009). In preclinical studies lithium protected the neonatal brain against the neurodegenerative effects of hypoxia-ischemia (HI) (Xie et al. 2014) and rescued cognitive loss in adult as well as in young mice after cranial irradiation (Yazlovitskaya et al. 2006) (Huo et al. 2012;Zhou et al. 2017). The neuroprotective effects of lithium after cranial radiation are attributable to enhanced hippocampal neurogenesis and decreased apoptosis in young rats and mice (Yazlovitskaya et al. 2006) (Huo et al. 2012). Lithium also restored synaptic plasticity in a Down syndrome mouse model (Contestabile et al. 2013) and ongoing trials aim at introducing lithium as a treatment of a broad range of brain related disorders ("Neuroprotective Effects of Lithium in Patients With Small Cell Lung Cancer Undergoing Radiation Therapy to the Brain -Full Text View -ClinicalTrials. gov" n.d.).
Despite the surge of studies conducted on lithium, the exact mechanisms of action are still only partly elucidated. Lithium exerts its action through modulation of intracellular second messengers with subsequent alteration of complex and interconnected intracellular enzyme cascades (Brown and Tracy 2013). One target is the protein kinase Gsk3β (O'Brien and Klein 2009). Direct (Wang et al. 2015) (Jope 2003) and indirect (Zhang et al. 2003) inhibition of Gsk3β by lithium and related improvements of impaired cognition likely involve a variety of different mechanisms, such as supporting long-term potentiation and diminishing long-term depression, promotion of neurogenesis and ultimately reduction of inflammation and apoptosis (Yazlovitskaya et al. 2006) (Huo et al. 2012) (King et al. 2014) (Voytovych, Kriváneková, and Ziemann 2012).
Encouraging results also support the use of lithium in combination with cancer treatment to improve the therapeutic effect, for example as a radiosensitizer (Zhukova et al. 2014) (Ronchi et al. 2010) (Zinke et al. 2015) (Korur et al. 2009). Nevertheless, a post radiotherapy lithium treatment may still be preferable to safely exclude the likelihood of protecting tumor cells and thus increasing the risk of relapses. The study herein investigates the effects of lithium treatment on NSPC proliferation, dendritic orientation and survival after brain irradiation and whether using a therapeutically relevant dose can rescue neurogenesis even long after irradiation of the juvenile brain. Ultimately, we provide evidence for a possible molecular mechanism involving novel proteins targeted by lithium and further elucidate the effects of irradiation on fate commitment of NSPCs and how lithium can harness this process.
RESULTS
Assessment of neurogenesis (proliferation, dendritic orientation and survival of the NSPCs) was performed at three different time points: immediately after (PND 77), two weeks after (PND 91) and four weeks after (PND 105) termination of lithium exposure (Figure 1A, in vivo study design).
Lithium Increases the Density of Proliferating Cells in the DG but Decreases the Cell Body Area and the Dendritic Complexity of DCX+ Cells
To analyze the effect of lithium on proliferation of the NSPCs, we measured BrdU at PND 77 ( Figure 1B). Proliferation in the GCL was decreased significantly by irradiation in both the control and the lithium-treated mice ( Figure 1C). Lithium increased the density of proliferating cells in both sham and irradiated brains, around 83% in the irradiated group and 33% in the sham group. We further determined the effects of lithium on the density of doublecortin-positive (DCX + ) cells in the GCL at PND 77 ( Figure 1D). Irradiation provoked a 60% decrease of the density of DCX + cells in the GCL, but lithium had no effect on DCX + cell density, neither in sham, nor in irradiated brains ( Figure 1E).
In addition, we conducted morphometric analysis of DCX+ immature neurons at PND 77 (Figure 1F). We found that lithium treatment decreased the cell body area in both sham and irradiated animals (Figure 1G), whereas radiation drastically decreased the dendritic complexity. Surprisingly, lithium treatment per se reduced dendritic complexity in sham, but not in irradiated, animals at PND 77 (Figure 1H). This was confirmed by Sholl analysis, showing that lithium treatment reduced the number of intersections at the distal part of the dendrites (50 and 60 µm from the soma) at PND 77 in sham, but not in irradiated, animals (Figure 1I).
Orientation and Dendritic Complexity of DCX + Cells at PND 91
To assess if the increase in proliferating cells at PND 77 resulted in increased survival and differentiation into immature neurons, we analyzed the density of DCX + cells in the GCL at PND 91, two weeks after lithium discontinuation (Figure 2A). DCX + cell density was decreased by irradiation, as expected, but this decrease was reversed by lithium treatment.
The DCX+ cell density also increased in non-irradiated brains. The increase was 156% and 24% in the irradiated and sham lithium-treated groups, respectively (Figure 2B). To further address the effects of radiation and lithium treatment on the integration of newly born neurons, we performed DCX/BrdU double immunostaining (Figure 2C), enabling us to determine whether the orientation of the main dendritic process of neurons born during the last 5 days of lithium treatment was parallel or radial to the GCL. We have earlier shown that irradiation causes the main dendritic process to shift from a radial to a parallel orientation (Naylor et al. 2008b). Our results confirmed that irradiation increased the percentage of parallel processes from 22% in sham to 41% in irradiated animals, whereas the proportion of radial processes decreased from 64% in sham to 48% in irradiated animals (Figure 2D). Remarkably, we found that lithium treatment reduced the percentage of parallel main processes by half, from 41% to 21%, in the irradiated brains, to the same level as in the sham controls (22%) (Figure 2D). However, lithium did not alter the proportion of radial main processes, neither in the sham nor in the irradiated brains. We also found that lithium treatment increased the proportion of BrdU+ cells that were not labelled for DCX in the irradiated group (25%) compared to the sham group (11%). However, the morphometric analysis of the DCX+ cells at PND 91 (Figure 2E) revealed that the cell body area of the irradiated groups was not different from that of the sham groups, and lithium did not affect cell body area either (Figure 2F).
The dendritic complexity of the DCX+ cells in the irradiated group was drastically decreased compared to the sham group (7%), but lithium treatment restored this completely to the sham level ( Figure 2G). This was further confirmed by Sholl analysis, where we found that the number of intersections at 50 and 60 µm from the soma was normalized to sham level in the irradiated group treated with lithium at PND 91 ( Figure 2H).
Lithium Alters Tppp and GAD2 Expression in Irradiated NSPCs in Vitro - association with changes in DNA methylation
In order to investigate the molecular mechanisms possibly involved in the lithium effects after irradiation, we designed an in vitro study (Figure 3A) in which NSPCs isolated from the brains of embryonic day 15.5 rat embryos were expanded to passage 3 (P3) and then exposed to 2.5 Gy irradiation and lithium treatment. Initially, we performed RNA sequencing and compared the reactome across the different conditions. Principal component analysis (PCA) revealed that the sham groups clustered together for PC2 (Figure 3B) whereas the irradiated groups clustered for PC3 (Figure 3C), indicating that NSPCs responded in a specific manner to the individual treatments. A deeper analysis of the reactome profile with respect to each PC revealed that PC2 identified changes in gene expression related to a variety of brain developmental and synaptic transmission processes (Figure 3D), whereas PC3 identified differential expression of genes involved primarily in the cell cycle and axonal guidance (Figure 3E). To identify individual candidate genes involved in the specific responses, we next looked for the highest fold changes in gene expression between sham and sham treated with lithium (Figure 3F) and between irradiated and irradiated treated with lithium (Figure 3G). This analysis highlighted two genes: Tppp, which encodes the tubulin polymerization-promoting protein, and glutamate decarboxylase 2 (GAD2), which encodes the GAD65 protein (Erlander et al. 1991). The mRNA levels were confirmed by RT-qPCR, and lithium accounted for most of the variability in the sham and irradiated groups in the expression levels of Tppp (Figure 3H) as well as GAD65 (Figure 3I).
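As an illustration of the PCA step described above, the sketch below shows one common way to run PCA on an RNA-seq count matrix with 16 samples (4 per condition). The file name, sample ordering, and the CPM/log2 preprocessing are assumptions made for the example; the authors' actual pipeline is described in the Methods.

```python
# Minimal, illustrative sketch of PCA on an RNA-seq count matrix.
# File name, sample labels and preprocessing choices are assumptions;
# the paper's actual pipeline is described in the Methods section.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# counts.csv: rows = genes, columns = 16 samples (4 per condition); hypothetical file.
counts = pd.read_csv("counts.csv", index_col=0)

# Simple library-size normalization (counts per million) and log transform.
cpm = counts / counts.sum(axis=0) * 1e6
log_expr = np.log2(cpm + 1)

# PCA expects samples as rows, features (genes) as columns.
pca = PCA(n_components=4)
scores = pca.fit_transform(log_expr.T.values)

conditions = ["Sham", "Sham+Li", "IR", "IR+Li"]  # 4 replicates each, assumed order
labels = [c for c in conditions for _ in range(4)]
for label, (pc1, pc2, pc3, *_rest) in zip(labels, scores):
    print(f"{label:8s} PC2={pc2:7.2f} PC3={pc3:7.2f}")
print("Explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```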
To investigate possible mechanisms underlying the changes in gene expression, we next examined the effects on DNA methylation of the regulatory regions of Tppp and the GAD2 gene, the latter encoding GAD65, using a MeDIP-based approach (Brebi-Mieville et al. 2012) and analyzing the 5-methylcytosine (5mC) levels of the regulatory regions of these two genes in fetal neural stem cells in vitro. DNA methylation of regulatory regions is associated with transcriptional repression, and decreased levels of methylation are thus mostly associated with increased gene expression. These experiments revealed a clear correlation between the effects of irradiation and lithium treatment on gene expression and the corresponding DNA methylation levels. At the Tppp gene, 5mC levels were significantly decreased after lithium treatment of irradiated cells compared to the sham control (Figure 3J). The 5mC levels at the GAD2 regulatory region also showed a significant decrease in the irradiated group treated with lithium compared to the sham control and irradiated groups (Figure 3K). These results suggest that lithium treatment after irradiation influences epigenetic mechanisms, yielding decreased DNA methylation levels.
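MeDIP enrichment at a regulatory region is often quantified by qPCR and expressed as percent of input. The sketch below shows that standard calculation; the Ct values and the 10% input fraction are hypothetical, and the paper itself reports relative 5mC levels rather than these exact quantities.

```python
# Illustrative percent-of-input calculation for MeDIP-qPCR, a common way to
# express 5mC enrichment at a regulatory region. Ct values and the input
# dilution factor below are hypothetical.
import math

def percent_of_input(ct_ip: float, ct_input: float, input_fraction: float = 0.10) -> float:
    """Percent of input DNA recovered in the MeDIP fraction.

    ct_input is first adjusted to represent 100% of the starting material,
    assuming the input aliquot was `input_fraction` of the IP material.
    """
    ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adjusted - ct_ip)

if __name__ == "__main__":
    # Hypothetical Ct values for a regulatory region such as Tppp.
    print(f"Sham:  {percent_of_input(ct_ip=27.5, ct_input=24.0):.2f} % of input")
    print(f"IR+Li: {percent_of_input(ct_ip=29.0, ct_input=24.0):.2f} % of input")
```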
Lithium Upregulates Tppp and GAD65 levels in the Irradiated Juvenile Brain in Vivo
To be able to validate the DNA methylation and RNA sequencing data in our in vivo model,
The Pro-Neurogenic Effect of Lithium Is Observed 4 Weeks after Its Discontinuation
To assess the survival and differentiation of the NSPCs at a later stage of the neurogenic process, we conducted triple immunostaining for BrdU, NeuN and S100β in the DG at PND 105 (Figure 5A). We found that irradiation drastically reduced the proportion of NeuN/BrdU+ cells (from 76% to 40%) and increased the proportion of S100β/BrdU+ cells (from 10% to 21%) (Figure 5B). Lithium after irradiation restored the percentage of neuronal commitment from 40% to 66%, approaching sham levels. We did not observe lithium-induced changes in astrocytic commitment in either sham or irradiated animals. In addition, irradiation significantly increased the percentage of surviving (BrdU+) cells that were of neither neuronal nor astrocytic lineage. To further assess whether lithium had a long-lasting effect on the positive modulation of DCX+ cells, we analyzed the density of DCX+ cells in the GCL at PND 105 (Figure 5C). The density was still significantly decreased after irradiation, consistent with previous studies (Boström et al. 2013) (Figure 5D). Additionally, there was no significant difference in the density of DCX+ cells between the control and the lithium-treated mice, neither in the irradiated nor in the sham groups (Figure 5D). Together, this indicates that lithium treatment promotes proliferation of NSPCs and that discontinuation of the treatment promotes a wave of neuronal differentiation, while at the same time preventing the wave of irradiation-induced astrocytic differentiation.
Lithium is effective even when introduced long after the injury
Cranial radiation therapy is a major cause of long-term complications in pediatric patients (Gatta et al. 2014). These complications include late-occurring cognitive impairments, and a negative impact on social competence (Armstrong et al. 2010;Georg Kuhn and Blomgren 2011). It has been demonstrated in animal models that preserving or promoting neurogenesis helps attenuate the cognitive deficits observed in irradiated mice and rats (Naylor et al. 2008a;Zhou et al. 2017). Progress has been made in identifying mechanisms underlying the neuroprotective effects of lithium in rodent models of brain injury, including anti-apoptotic effects (H. Li et al. 2011) (Huo et al. 2012) (Omata et al. 2008;Q. Li et al. 2010). Regenerative effects were demonstrated when lithium was shown to reduce brain tissue loss by 39% after hypoxia-ischemia (HI) in the immature brain even when it was administered 5 days after the injury (Xie et al. 2014) in a model where neuronal cell death peaks 1-2 days after HI, but the mechanisms have not been characterized. Delayed administration of lithium towards preservation of neurogenesis following whole-brain cranial irradiation has never been shown. In this study, we demonstrate that delayed, continuous lithium treatment regulates critical aspects of neurogenesis such as proliferation, differentiation, and survival of neural progenitor cells in the dentate gyrus of the irradiated mouse hippocampus.
Following irradiation-induced hippocampal injury, the neurogenesis cascade within the GCL undergoes several long-lasting changes: an increase in NSPC apoptosis, a continuous decrease in NSPC proliferation, a decreased propensity for differentiation of those NSPCs, with gliogenesis favored over neurogenesis, and the onset of inflammation (Fukuda et al. 2004) (Huo et al. 2012) (Blomstrand et al. 2014). Hippocampal sections from irradiated mice, obtained on PND 77 after 4 weeks of continuous lithium treatment, revealed that lithium promoted the proliferation of NSPCs, as indicated by the increased density of BrdU+ cells in the GCL. In a separate study, we found that lithium increased the proliferation of mouse hippocampus-derived NSPCs in vitro and drove them much faster through the G1 phase of the cell cycle compared to control NSPCs (Zanni et al. 2015). However, in response to pro-neurogenic stimuli, such as physical exercise, enriched environment, and antidepressants, most proliferating cells in the dentate gyrus are amplifying neural progenitors (Kronenberg et al. 2003) (Encinas, Vaahtokari, and Enikolopov 2006) (Encinas et al. 2011) (Hodge et al. 2008). Hence, it is likely that lithium increases the rate of symmetric divisions of the amplifying neural progenitor population, which would be therapeutically beneficial since those cells represent a renewable source of neuronal precursors.
Why would delayed onset of lithium treatment be beneficial? One reason is that lithium treatment could be used for all cancer survivors who have already been treated and suffer from late effects. Another reason is the possible risk of protecting or stimulating remaining tumor cells. Radiotherapy remains an important component of their treatment despite its numerous side effects (Pollack and Jakacki 2011). Some of the pro-proliferative effects of lithium are based on pathways, which have been well described (Brown and Tracy 2013) but are still not fully understood. Some effects of lithium on brain tumor cells or leukemia cells have been described. While lithium might act as an anti-tumor agent in certain types of medulloblastoma (Zhukova et al. 2014;Ronchi et al. 2010), glioblastoma (Korur et al. 2009), glioma (Cockle et al. 2015) or leukemia ("Lithium Carbonate and Tretinoin in Treating Patients With Relapsed or Refractory Acute Myeloid Leukemia -Full Text View -ClinicalTrials.gov" n.d.), its action on other tumor cell types remains uncertain. Therefore, it may be beneficial to wait until the anti-cancer therapy is finished. To show as we did, that lithium can rescue neurogenesis long after irradiation, is thus an important step before validating its use in children.
Intermittent treatment is important
We showed for the first time in vivo that discontinuation of lithium in the irradiated brain could restore neuronal fate progression, thereby counteracting the irradiation-induced astrocytic differentiation of NSPCs as previously described (Dranovsky et al. 2011;Schneider et al. 2013), further suggesting a role for lithium in restoring neurogenesis after cranial radiotherapy. Continuous lithium treatment with stable serum concentrations promoted NSPC proliferation but prevented neuronal differentiation. Discontinuation allowed differentiation and integration to occur over the course of 4 weeks. Hence, we surmise that a sequential therapeutic regimen, for example one month with lithium and one month without it, would be more effective in the treatment of cognitive late effects. Another advantage would be that the side effects frequently reported in patients (Singer 1981) treated with lithium could be better tolerated. This is a novel concept that deserves to be tested in patients.
Identification of mechanisms
Studying both rat neural stem cells in vitro and mouse hippocampal neurogenesis in vivo enabled us to identify molecular mechanisms underlying the positive effects of intermittent lithium treatment. Two indices related to the functional integration of DCX+ cells into the GCL are the orientation of integrating DCX+ cells and the maturity of their dendritic processes (Seki et al. 2007; Plümpe et al. 2006). Type-3 neuroblasts possess an elongated cell body, flanked by processes that lie tangential to the SGZ, suggestive of an early stage of maturation, while newborn neurons have a process radial to the SGZ that is indicative of functional integration into the granule cell layer (Eisch et al. 2008). By phenotyping the orientation of dendritic processes in cells double-labeled with BrdU and DCX, we found, as previously reported (Chakraborti et al. 2012; Naylor et al. 2008b), that irradiation decreased radial migration yet increased the number of parallel dendritic processes in a different subset of neurons within that population of cells. In irradiated mice treated with lithium, the percentage of cells with parallel dendrites was reduced but the radial migratory pattern was unaffected. Arguably, lithium protects against irradiation damage by limiting the number of cells with parallel dendritic processes so that immature neurons are not maintained in that stage. Overall, these data suggest that lithium discontinuation is pivotal for the late critical period of newborn cell survival as well as their structural and synaptic integration. Here, irradiation persistently reduced dendritic complexity (number, length and area of branches) and spine density, similar to a previous study in which these changes were attributed to the overexpression of the synaptic plasticity-regulating postsynaptic density protein (PSD-95) in immature neurons in the GCL (Parihar and Limoli 2013). PSD-95 is believed to be a key regulator of dendritic morphology, and when overexpressed it adversely affects dendritic morphology and complexity (Charych et al. 2006). We extended these findings and showed that continuous lithium treatment followed by a period without lithium acts positively on dendritic maturation and complexity, a morphometric parameter that arguably has important functional implications for memory function and anti-stress mechanisms (Besnard et al. 2018). We hypothesized that this is attributable to key regulatory proteins involved in cytoskeletal rearrangements (Tppp) and synaptic transmission (GAD65). Tppp is known to increase the stability of the microtubule network, thereby playing a crucial role in cell differentiation (Oláh, Bertrand, and Ovádi 2017). The upregulation of Tppp and GAD65 was accompanied by a decrease in methylation of the regulatory regions of these two genes in the irradiated group that received lithium, supporting the hypothesis that the positive effects of lithium after irradiation are likely attributable to epigenetic regulation of genes important for cell fate and maturation.
More importantly, these data underline the novelty of the present study, showing that intermittent lithium treatment acts on previously undescribed targets in the irradiated brain. Lithium has the potential to become the first pharmacological treatment of cognitive late effects in childhood cancer survivors.
Irradiation Procedure
Mice were anaesthetized using isoflurane at 4% for induction followed by 1.5-2% throughout the procedure. Mice were placed on a custom-made Styrofoam frame in the prone position (head toward the gantry), and the frame was placed inside an X-ray system (Precision X-RAD 320, North Branford, CT, USA) set up in-house for in-vivo targeted radiotherapy research, operated at 320 kV and 12.5 mA with a dose rate of 0.75 Gy/min. The whole brain was irradiated with a radiation field of 2 x 2 cm. A single dose of 4 Gy was delivered to each animal on postnatal day (PND) 21. The source-to-skin distance was approximately 50 cm.
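Assuming continuous delivery at the stated dose rate, the implied beam-on time per animal follows directly from the dose and dose rate:

```latex
t_{\text{exposure}} = \frac{D}{\dot{D}} = \frac{4\ \text{Gy}}{0.75\ \text{Gy/min}} \approx 5.3\ \text{min}
```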
The sham-irradiated controls were anesthetized but not irradiated.
Lithium in vivo administration
Female littermates (5-6 animals in each cage) were randomly assigned to lithium chow ( Netherlands). This regimen was determined in our previous study (Zanni et al. 2017) and was sufficient to yield a lithium serum concentration of 0.7-0.9 mM in mice (Zanni et al. 2017), which is equivalent to the commonly used 0.6-1.2 mM therapeutic range in humans.
The lithium chow was maintained four weeks, from PND 49 to PND 77 ( Figure 1A in vivo study design).
Immunohistochemistry
Animals were injected intraperitoneally with 50 mg/kg BrdU ( Only cells with an entire, clearly visible cell body were counted.
Dendrite Reconstruction and Morphometric Analyses
Imaging of dendritic arbors at PND 77 and PND 91 was performed using a confocal microscope by acquiring images with 1 µm intervals using a 20X objective lens (X20 / 0.
Protein quantification
Hippocampal tissue was processed for protein quantification as previously described (Osman et al. 2016). The Wes
Embryonic cortical NSPC culture and LiCl exposure procedures
Primary cultures of NSPCs were established as previously described (Tamm et al. 2006;Ilkhanizadeh, Teixeira, and Hermanson 2007). Cells were obtained from embryonic cortices. Thereafter, the cells were gently mixed in N-2 medium and plated at 1:4 density. To investigate the effects of LiCl (Sigma Aldrich, St. Louis, USA), we exposed passage 3 (P3) NSPCs to LiCl (3 mM) from 12 h before irradiation, as previously described (Zanni et al. 2015). A 60Co photon irradiation source was used to expose the NSPCs at a set distance of 80 cm and an absorbed dose of 2.5 Gy. P3 cells were harvested 24 hours after irradiation for gene expression analysis (Figure 3A, in vitro study design).
RNA, cDNA and RT-qPCR
For real-time qPCR, total RNA from cultured NSPCs was extracted using the RNeasy Mini Kit (Qiagen) and stored at -80 °C until further use. Integrity and concentration of the extracted RNA were measured using Qubit (Thermo Fisher Scientific). cDNA was synthesized from the extracted RNA using the High Capacity cDNA Reverse Transcription Kit (Thermo Fisher) according to the manufacturer's protocols. Quantitative real-time PCR was performed with Platinum SYBR Green qPCR Supermix-UDG (Thermo Fisher Scientific) together with site-specific primers. Expression levels were normalized to the levels of the housekeeping gene TATA-box binding protein (TBP).
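One commonly used quantification approach consistent with normalization to a housekeeping gene is the 2^-ΔΔCt method; the following is given as an assumed illustration rather than the documented procedure of this study:

```latex
\Delta C_t = C_t^{\text{target}} - C_t^{\text{TBP}}, \qquad
\Delta\Delta C_t = \Delta C_t^{\text{treated}} - \Delta C_t^{\text{control}}, \qquad
\text{relative expression} = 2^{-\Delta\Delta C_t}
```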
RNA sequencing
The Illumina TruSeq Stranded mRNA sample preparation kit with 96 dual indexes (Illumina, CA, USA) was used to prepare RNA libraries for sequencing, with 4 biological samples per condition (Sham, Sham+Li, Irradiated, Irradiated+Li), for a total of 16 samples.
The protocols were automated using an Agilent NGS workstation (Agilent, CA, USA) with purification steps as described previously (Borgström, Lundin, and Lundeberg 2011;Lundin et al. 2010). Reads overlapping fragments in the exon regions were counted with featureCounts (subread/1.5.1) using default parameters, i.e. fragments overlapping more than one feature and multi-mapping reads were not counted.
Differential expression analyses were performed under R/3.3.3 using the EdgeR/3.16.5 package. Low-count reads were filtered by keeping reads with at least 1 read per million in at least 2 samples. Counts were normalized for RNA composition by finding a set of scaling factors for the library sizes that minimize the log-fold changes between samples for most genes, using the trimmed mean of M values (TMM) between each pair of samples. The design matrix was defined based on the experimental design, gene-wise GLMs were fitted, and likelihood ratio tests were run for the selected group comparisons.
RNA-seq data have been deposited in the ArrayExpress database at EMBL-EBI (www.ebi.ac.uk/arrayexpress) under accession number E-MTAB-7238.
MeDIP-qPCR
MeDIP-qPCR was performed using the MagMeDIP kit (Diagenode) according to the manufacturer's instructions. Briefly, rat NSPCs were lysed and the DNA was extracted using phenol:chloroform:isoamyl alcohol (25:24:1) (Sigma-Aldrich), purified using Purelink Genomic DNA kits (Invitrogen, now Thermo Fisher), fragmented using a Bioruptor (Diagenode) and immunoprecipitated with an anti-5'-methylcytosine antibody (Diagenode), following the MagMeDIP kit settings. DNA concentration was measured using the Qubit dsDNA HS Assay Kit (Thermo Fisher). Immunoprecipitated DNA was quantified using RT-qPCR, as described above, and the temperature profile used was: 95 °C for 7 min, 40 cycles of 95 °C for 15 s and 60 °C for 1 minute, followed by 1 minute at 95 °C. Tppp and Gad2 promoter primers (Qiagen) and methylated and unmethylated DNA control primers (Diagenode) were used as internal controls (Fig. S1). The efficiency of methyl-DNA immunoprecipitation was expressed relative to the percentage of input DNA using the following equation:
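A commonly used form of the percent-input calculation for MeDIP-qPCR is given here as an assumption rather than as the kit's exact formula, where IF denotes the input fraction (e.g. 0.1 for a 10% input) and the C_t values are qPCR cycle thresholds:

```latex
\%\,\text{recovery} = 2^{\left[\left(C_t^{\text{input}} - \log_2\frac{1}{IF}\right) - C_t^{\text{IP}}\right]} \times 100\%
```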
Statistical Analysis
Statistical analysis was performed using GraphPad Prism® (La Jolla, CA, USA). Statistical differences in immunohistochemistry, protein quantitation and gene expression analyses were calculated using a 2-way ANOVA followed by a Bonferroni post-hoc test for multiple comparisons correction. For the methylation analysis, a Kruskal-Wallis test for multiple comparisons, followed by Dunn's multiple comparisons test, was performed. For the dendritic analysis, linear mixed models with a random intercept for each animal were used to account for within-individual dependencies, using R version 3.3.3 (The R Foundation for Statistical Computing, Vienna, Austria). As data were not normally distributed, a natural logarithmic transformation of the data was applied. Error bars represent SEM. ****p<0.0001; **p<0.01; *p<0.05.
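As an illustration only (not the authors' analysis script), a minimal Python sketch of a 2-way ANOVA with irradiation and lithium as factors is shown below; the data frame, group labels and measurement values are hypothetical placeholders:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical tidy data: one row per animal with group labels and one measurement
df = pd.DataFrame({
    "irradiation": ["sham", "sham", "irradiated", "irradiated"] * 4,
    "lithium":     ["no", "yes", "no", "yes"] * 4,
    "dcx_density": [120, 118, 35, 38, 115, 121, 33, 40,
                    119, 117, 36, 37, 122, 116, 34, 39],
})

# Fit a linear model with main effects and their interaction,
# then compute the two-way ANOVA table (Type II sums of squares).
model = ols("dcx_density ~ C(irradiation) * C(lithium)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```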
Figure 5. Lithium counteracts the irradiation-induced changes in NSPCs fate progression in the DG at PND105. (A)
Representative confocal images of NeuN (red), BrdU (green) and S100β (blue) immunoreactivity in the GCL at PND105 depicting colocalization of BrdU and NeuN in mature neurons (upper panel) and colocalization of BrdU and S100β in an astrocyte (lower panel). Scale bar=20 µm. 63x magnification. (B) Pie chart of the percentages of BrdU cells double labeled for NeuN or S100β. The percentages of NeuN+/BrdU+ cells were significantly increased by lithium in the irradiated group (**p=0.0031) but not in sham (p>0.9999). 2-way ANOVA: irradiation (F 1,14 =22.5, ***p=0.0003), LiCl (F 1,14 =7.02, *p=0.0191). The percentages of S100β+/BrdU+ were significantly increased after irradiation but not altered by lithium treatment. 2-way ANOVA: irradiation (F 1,14 =4.89, *p=0.0442), LiCl (F 1,14 =0.831, p=0.3774). The remaining percentages of BrdU+ cells were significantly increased in the irradiated group (*p=0.0137) but not altered in the other groups. 2-way ANOVA: irradiation (F 1,14 =13.1, **p=0.0028), LiCl (F 1,14 =5.12, *p=0.0401). (C) DCX immunoreactivity in the GCL at PND105 in different treatment groups. Scale bar=100 µm. 20x magnification. (D) Interleaved dot plot graph of the quantification of DCX + cells in the GCL showing the effect of irradiation (****p<0.0001) on the density of DCX positive cells in the GCL. 2-way ANOVA: irradiation (F 1,16 =372.2, ****p<0.0001), LiCl (F 1,16 =0.02213, p=0.8836). (E) Schematic drawing of the hippocampal network on the left. Input signals from the entorhinal cortex (EC) are carried through two connectional routes made of the axons of the medial (light green) and lateral (blue) perforant pathways, MPP and LPP respectively. These axons establish stable synapses with the dendrites of the mature granule cells neurons (grey) and weak ones with the immature doublecortin (DCX) cells (red) in the granule cell layer (GCL). At the boundary of the GCL and the hilus is the subgranular zone (SGZ), where quiescent neural stem cells (qNSC) give rise to amplifying neural progenitors (ANP) allowing the continuous neuronal re-population of the dentate gyrus (DG). The qNSC and ANPs are multipotent stem cells, capable of giving rise to astrocytes, oligodendrocytes and neurons. The input signal from the DG is relayed to the proximal Cornus Ammonis region (CA3) through the axons of the mature granule cells that form the mossy fibre projection. The signal transduction continues to the CA1 region through the Schaffer collaterals fibers and to further cortical areas. Parvalbumin (PV) interneurons in the hilus are important in modulating, through feed-back and feed-forward inhibition, the input signals and NSC proliferation and integration through the release of the neurotransmitter gamma-aminobutyric acid (GABA). On the right above a schematic representation of the effects of irradiation on DG. The number of astrocytes is increased while the number of ANPs is decreased. The neuronal differentiation process is decreased in favor of an astrocytic fate progression. On the right below a schematic representation of the effects of lithium on the irradiated DG. Lithium acts by increasing ANPs cell number and promoting neuronal fate progression as compared to astrocytic differentiation. Error bars indicate SEMs. ***p<0.001; **p<0.01; *p<0.05. | 7,186.2 | 2019-03-16T00:00:00.000 | [
"Psychology",
"Biology"
] |
Comparative Study of Molecular Interaction in Binary Liquid Mixture At 303.15 K
In this paper, experimental measurements of the viscosity, density and refractive index of pure liquids and of binary mixtures of m-cresol with the alcohols methanol, ethanol, propanol and butanol were carried out at a temperature of 303.15 K. Analysis of these data provides information about various parameters such as excess molar volume, viscosity deviation, internal pressure, excess free volume, compressibility, excess ultrasonic velocity, intermolecular free length, etc. The results are discussed in terms of the intermolecular interactions between the components of the mixtures under study.
INTRODUCTION:
Ultrasonic velocity measurements have been employed extensively to detect and assess weak and strong molecular interactions in binary and ternary mixtures, because mixed solvents find practical applications in many chemical and industrial processes.[1] The main aim of this research is a "Comparative Study of Molecular Interaction in Binary Liquid Mixture at 303.15 K". There are various physical methods, such as ultrasonic velocity, viscosity, density and refractive index measurements, to identify the strength of intermolecular interactions in binary solutions.[2] The ultrasonic method plays an important role in understanding the physico-chemical behaviour of liquids. The velocities give information about the bonding between molecules and the formation of complexes at various temperatures through molecular interaction.[3] Density is defined as mass per unit volume. A fundamental feature of matter, density indicates the amount of mass contained in a specific volume; it is a measure of how closely packed the particles inside a material are. Density is vital in many facets of science and daily life: it is crucial in disciplines including materials science, chemistry, and engineering, it influences the behaviour of fluids in fluid mechanics, and it helps determine whether an object will float or sink in a fluid (buoyancy). The Greek scientist Archimedes discovered the principle of buoyancy on which density-based flotation rests. Viscosity arises from the internal friction between molecules as they move past each other. In fluids, molecules are in constant motion, and their interactions determine the fluid's viscosity. This internal friction converts mechanical energy into heat, which is why fluids with higher viscosity tend to generate more heat when they flow.
There are two types of viscosity: dynamic viscosity and kinematic viscosity. Dynamic viscosity (also known as absolute viscosity) measures the resistance to flow under an applied force. It is denoted by the symbol "η" (eta) and is typically expressed in units such as pascal-seconds (Pa·s) or poise (P). Kinematic viscosity is calculated by dividing the dynamic viscosity by the density of the fluid. It describes the fluid's resistance to flowing in the absence of outside forces. Kinematic viscosity is represented by the symbol "ν" (nu) and is commonly stated in units such as centistokes (cSt) or square metres per second (m²/s). The refractive index measures the amount by which a light beam bends when moving from one medium to another. It can be expressed as n = c/v, the ratio of the velocity of light in empty space to that of light in the material. The refractive index is dimensionless and is commonly denoted by the symbol n. It is a basic characteristic of materials that is influenced by atomic structure, density, and composition. Refractive indices vary among materials and are influenced by temperature and wavelength, among other things. For instance, diamonds bend light more than air or water because of their high refractive index.
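To make the definitions above concrete, the following Python sketch (illustrative only; the component values are placeholders rather than measured data from this work) computes the kinematic viscosity of a mixture together with two of the deviation/excess parameters listed in the abstract, using the standard mole-fraction mixing-rule definitions:

```python
# Illustrative calculation of mixture parameters (placeholder values, not measured data).

def kinematic_viscosity(eta_pa_s, rho_kg_m3):
    """Kinematic viscosity nu = eta / rho (m^2/s)."""
    return eta_pa_s / rho_kg_m3

def excess_molar_volume(x1, M1, rho1, M2, rho2, rho_mix):
    """V^E = V_mix - (x1*V1 + x2*V2), with molar volumes V_i = M_i / rho_i (m^3/mol)."""
    x2 = 1.0 - x1
    v_mix = (x1 * M1 + x2 * M2) / rho_mix
    v_ideal = x1 * M1 / rho1 + x2 * M2 / rho2
    return v_mix - v_ideal

def viscosity_deviation(x1, eta1, eta2, eta_mix):
    """Delta eta = eta_mix - (x1*eta1 + x2*eta2)."""
    return eta_mix - (x1 * eta1 + (1.0 - x1) * eta2)

# Placeholder inputs for an equimolar m-cresol (1) + methanol (2) mixture at 303.15 K
x1 = 0.5
M1, M2 = 108.14e-3, 32.04e-3                    # molar masses, kg/mol
rho1, rho2, rho_mix = 1027.0, 782.0, 930.0       # densities, kg/m^3 (placeholders)
eta1, eta2, eta_mix = 9.0e-3, 0.51e-3, 3.0e-3    # dynamic viscosities, Pa·s (placeholders)

print("nu_mix =", kinematic_viscosity(eta_mix, rho_mix), "m^2/s")
print("V^E    =", excess_molar_volume(x1, M1, rho1, M2, rho2, rho_mix), "m^3/mol")
print("d_eta  =", viscosity_deviation(x1, eta1, eta2, eta_mix), "Pa·s")
```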
REVIEW ARTICLE: R Kumar et al.
Found that there is an interaction between the two components: in the acetone-CCl4 system the interaction parameter values turn out to be negative, while in the acetone-benzene system there are strong dipole-induced dipole interactions. Another study found that the density, viscosity and ultrasonic velocity of binary mixtures of t-butyl alcohol, n-butyl alcohol and iso-pentyl alcohol with O-nitrotoluene were measured at 303.15 K and 313.15 K; data analysis of different parameters such as excess molar volume, viscosity deviation and deviation in isentropic compressibility was carried out, and the observed parameters and their changes were correlated to each other. S. Bahadur Alisha et al. Found that the density and ultrasonic velocity of binary liquid mixtures of trimethyl amine with carbitols (methyl carbitol, ethyl carbitol and butyl carbitol) were measured at 308.15 K; various parameters such as isentropic compressibility, intermolecular free length and acoustic impedance were calculated from the observed data, and the results were discussed in terms of the intermolecular interactions between the components in the liquid mixtures under study. Sk. Fakruddin et al. Found that measurements of ultrasonic velocity, viscosity and density were made in binary liquid mixtures of quinoline, a heterocyclic aromatic compound readily soluble in organic solvents, with cresol at different temperatures over the entire range of composition; different parameters such as adiabatic compressibility, excess intermolecular free length and excess ultrasonic velocity were evaluated from these values.
CONCLUSION:
It has been found that various physical methods and parameters were measured and evaluated in order to identify the strength and study on intermolecular interactions in the binary liquid mixtures/ solutions.
M. Durga Bhavani et al. Found that some parameters like density, speed of sound and viscosity were measured in binary mixtures containing O-anisidine with O-cresol. K Narendra et al. Found that the density, viscosity and ultrasonic velocity of binary liquid mixtures of anisaldehyde with o-cresol, m-cresol and p-cresol over the entire composition range were measured at 303.15 K, 308.15 K, 313.15 K and 318.15 K. D. Chinnarao et al. Found that the density, ultrasonic velocity and dynamic viscosity of binary mixtures of ethyl oleate with ethyl methyl ketone were measured at temperatures ranging from 303.15 K to 318.15 K at atmospheric pressure. N. Santhi et al. Found that theoretical values of ultrasonic velocity were evaluated using Nomoto's relation, the Ideal Mixture relation, Free Length Theory and Collision Factor Theory; further, the densities of binary mixtures of dimethyl sulphoxide with phenol, o-cresol, p-cresol and p-chlorophenol at 318.15 K were measured. B. Nagarjun et al. Found that the speed of sound and density of binary mixtures of ethyl benzoate with N,N-dimethylformamide, N,N-dimethyl acetamide, and N,N-dimethyl aniline were measured as a function of mole fraction at temperatures 303.15 K, 308.15 K, 313.15 K and 318.15 K and atmospheric pressure. R. D. Pawar et al. | 1,384.6 | 2024-05-14T00:00:00.000 | [
"Chemistry"
] |
An analysis of available solutions for commercial vessels to comply with IMO strategy on low sulphur
ABSTRACT The International Maritime Organization (IMO) strategy aims to reduce ship emissions from 2020 onwards and to reach zero emissions by 2050; ship-owners and ship operators are therefore looking for economic solutions to meet the new emission requirements. The available solutions include using Liquefied Natural Gas (LNG) or sulphur-free fuel, switching from high sulphur fuel oil (HSFO) to marine gas oil or distillate fuels, using very-low-sulphur fuel oil or compliant fuel blends, and using a scrubber/exhaust gas cleaning system on the ship, which allows operation on regular HSFO. The cost of changing and using fuel may be too high, acting as a barrier for Vietnamese ship-owners. Therefore, the advantages and disadvantages of the available options need to be analysed. In this article, the pros and cons of the available options were analysed using thematic PLEETS analysis, which considers political, legal, economic, environmental, technological and sociological aspects. The results show that using compliant blended fuel is the priority solution for most Vietnamese commercial ship-owners, while using LNG fuel will be a solution for newly built ships. It is expected that this research will contribute to improving the efficiency of commercial ships in satisfying the IMO's strategy.
Introduction
The maritime industry plays a vital role in the development of the global economy, given the volumes and distances carried. With the world commercial fleet reaching 94,171 vessels with a combined tonnage of 1.92 billion DWT, maritime shipping has been the mode of transport carrying over 80% of global trade by volume (UNCTAD 2018). However, maritime activities also emit an increasingly large amount of greenhouse gases and other toxic emissions (Li et al. 2020), causing negative impacts on the environment and global climate (Russo et al. 2018;Zhen et al. 2018). According to the Third IMO Greenhouse Gas Emissions Study, international shipping emitted 796 million tonnes of CO2 in 2012, accounting for 2.2% of total global greenhouse gas emissions, and this could increase by 50% to 250% by 2050 due to growing global seaborne trade. In addition, maritime activity causes 12% of sulphur oxide (SOX) emissions and about 13% of nitrogen oxide (NOX) emissions globally.
In response to the negative impacts caused by marine emissions, various measures to decrease ship emissions have been proposed over the past decades by national and international organizations (Chang et al. 2018;Wan et al. 2019;Gritsenko and Regulating 2017). The IMO has made many efforts to develop a legal and technical framework for controlling and reducing emissions from ships (IMO 2020; Perera and Mo 2016), with the following achievements: MEPC adopted the regulations on energy efficiency for ships in chapter 4 of MARPOL Annex VI, which entered into force on 1 January 2013 and apply to all ships of 400 GT and above, considering NOx and SOX emissions; MEPC 72 adopted resolution MEPC.304(72) on the Initial IMO Strategy on reduction of GHG emissions from ships on 13 April 2018, with the scope of reducing total annual GHG emissions by at least 50% by 2050 compared to 2008; and IMO adopted ship and port emission toolkits, guides intended for national parties that can be used to evaluate the present status, develop national regulation and build a national strategy for the reduction of shipping emissions.
According to IMO 2020, the sulphur content limit of 3.5% is reduced to a 0.5% cap. Ship-owners have to manage on-board marine fuel quality in view of the implementation of the maximum 0.50% sulphur limit in 2020. There are several options (Li et al. 2020) for fuel suppliers and users regarding compliant fuel, such as marine gas oil (MGO), low sulphur fuel oil (LSFO) and LNG, or approved equivalent methods like high sulphur fuel oil (HSFO) combined with scrubbers on the ship. Ship fuel consumption can represent more than 50% of ship voyage cost. Given increasingly stringent regulations for the shipping industry, the incentives of ship-owners/ship operators have become not only environmental but also economic and efficient ship operation. This article will compare three options: MGO (S < 0.50%), LNG and HSFO combined with a scrubber. The results of this study would help ship-owners/ship operators in future decision making.
Vietnamese shipping fleet comprises 1,863 ships totalling about 8.176 million DWTs as of December 2018, reaching about 1.98 per cent of world total DWT. In terms of ship tonnage, the shipping fleet of Vietnam ranked number four within 10 ASEAN countries, behind Singapore, Indonesia and Malaysia.
A widely used indicator providing insight into the capability of a seaport is the volume handled by the port, including all cargo types such as containers, liquid cargo and dry cargo, which can serve as a leading economic indicator. Table 1 provides Vietnamese seaport cargo throughput, which increased by 328% to 14,733,000 TEUs in 2018, compared with 4,489,165 TEUs in 2007, arising from the development of the Vietnamese seaport system (44 seaports with 219 terminals). Current maritime business patterns in Vietnam focus on two international gateway seaports, Lach Huyen (Hai Phong City) and Cai Mep-Thi Vai (Ba Ria - Vung Tau province), which receive ships of 80,000-100,000 DWT or 4,000-8,000 TEUs. Table 1 shows that ship calls at Vietnamese ports increased continuously from 98,901 vessels in 2012 to 122,527 vessels in 2018.
Obviously, Vietnamese ship-owners are grappling with proper decision-making for SOx regulation-compliant options because of the technical status of the old and outdated Vietnamese fleet. Therefore, an analysis of the advantages and disadvantages of the available options for Vietnamese ship-owners is necessary in this context.
Literature review
Marine fuel demand is determined not only by the volume of transport demand, fleet composition and operational efficiency, which together determine total energy demand, but also by the share of fuel consumed in Emission Control Areas (ECAs) and the shares of MGO, LNG and HSFO combined with scrubbers on ships. According to the Assessment of Fuel Oil Availability (Faber 2016), marine energy demand will increase by 8% between 2012 and 2020. The volume of marine petroleum fuels will increase by 5.5% (308 million tonnes per year) in the base case, while LNG will increase by 50% (12 million tonnes per year). The volume of HSFO (sulphur > 0.50%) will fall from 228 to 36 million tonnes in the base case. In addition, the demand for fuel oil with a sulphur content of 0.50% m/m or less will be 233 million tonnes, while the demand for MGO will be 39 million tonnes, most of which will have a sulphur content of 0.10% m/m or less. Global fuel demand has been growing from about 4 billion tonnes in 2012 to approximately 4.5 billion tonnes in 2020, an increase of 13%. The demand for marine fuel will increase by 5% in the base case, increase by 21% in the high case and decrease by 8% in the low case. In this period, air quality regulations become more stringent, such as the IMO 2020 global sulphur cap. There are various studies about the IMO 2020 global sulphur cap (Li et al. 2020;Wan et al. 2019;Zhen et al. 2018;Perera and Mo 2016;Wackett 2019). Most vessels may go for LSFO after 2020, due to the uncertainty of fuel prices and regulation (Halff, Younes, and Boersma 2019). Under the influence of the IMO strategy, the numbers of scrubber-installed and LNG-fuelled ships may increase gradually, with an estimated 5,200 vessels installing scrubbers, accounting for about 5% of the world's fleet (Atkinson and Lejeune 2019). For the IMO 2020 deadline, a fuel-switching strategy will play a vital role in vessels' compliance with the new regulations. Basically, there are four options available for Vietnamese ship-owners, as below.
Using LNG or sulphur-free fuel (NH3, LH2)
This option is the best solution for the shipping industry's environmental problems. LNG is expected to serve as a marine fuel of the future for complying with the strategy of reducing sulphur emissions from ships because, with a sulphur content of less than 0.1%, it is the cleanest form of fuel. LNG is a technically proven measure, and bunkering infrastructure is developing rapidly at all major ports around the world (Lee, Yoo, and Huh 2020). Besides, LNG is a cheaper fuel, at almost half the price of crude oil and one third the price of diesel oil. While marine petroleum will remain the major fuel option for existing ships, LNG can be interesting for newly built ships. New orders for LNG ship building due to IMO 2020 are predicted, in an optimistic scenario, to reach about 2,000 ships by 2025 (Lloyd's Register). However, LNG requires specially designed dedicated tanks, and the space required can be up to four times that of other fuels, reducing the cargo capacity of the ship. The cost of building an LNG ship is much higher, and the ship more complex, than other cargo ships. The LNG supply chain and bunkering are limited, so LNG operation is not possible in many ports. The use of LNG and its emissions should be investigated and evaluated, considering the methane pollution issue.
Switching from high sulphur fuel oil to marine gas oil or distillate fuels
When switching to distillate fuels, there will be a significant increase in the cost of fuel, and an upgrade to the fuel treatment plant may also be required due to the significantly lower viscosity of the distillate fuels. In order to avoid contamination and non-compliance problems, all fuel tanks previously used for HSFO must be carefully cleaned before bunkering MGO or distillate fuels.
Using very-low-sulphur fuel oil or compliant fuel blends (0.50% sulphur)
LSFO-compliant blended fuels are anticipated to be available in the market through various products. New blended fuels may experience stability and compatibility problems, low viscosity, low flash point, etc., which will make fuel handling very important because of potential safety and operational issues. Quality control of new blended fuels when bunkering is very important because the specific fuel received must satisfy the requirements of IMO 2020. The ISO (2019) Fuel Standards working group already published a Publicly Available Specification (PAS) in 2019, entitled "Considerations for fuel suppliers and users regarding marine fuel quality in view of the implementation of maximum 0.50% S in 2020". It provides guidance for both fuel suppliers and users such as ship owners. Besides, the IMO has also published a "Guidance on Best Practice for Fuel Oil Purchasers/Users for assuring the quality of fuel oil used on board ships".
Using scrubbers/EGCS on ship, which allows operation on regular HSFO
Obviously, HSFO will still be a significant option after 1 January 2020. However, in order to comply with the requirements of IMO 2020, an exhaust gas cleaning system (EGCS) or SOX scrubber must be installed. It should be noted that the installation of a scrubber can be complex, even though no changes are needed to main engine operation or to the fuel treatment system; retrofits are especially challenging. The capital expenditure (CAPEX) and operating expenditure (OPEX) of an EGCS or scrubber are still high, relating to increased power consumption, chemical consumables, and residue storage and handling for hybrid or closed-loop scrubbers. In Vietnam, the national plan "The development plan of Vietnamese shipping industry for 2020 and towards 2030", acknowledged by Decree No. 1517/QD-TTg of the Prime Minister on 26 August 2014, is well known; it considers six target categories: ship types and sizes; shipping fleets; seaport system; shipbuilding industry; supporting services and maritime transport logistics; and human resources in maritime transport. There are a few general studies related to emission reduction strategies in Vietnam, such as the status of ship pollution and proposed mitigation solutions (Du and Pham 2014) and mitigation of ship emissions (My Van 2015), but academic research related to the potential effects of the IMO 2020 sulphur cap on the Vietnamese maritime economy is lacking. This research should be considered a pioneer study in discussing the potential changes in Vietnamese maritime patterns.
Research methodology
The PESTLE analysis method is a technique for "Scanning the Business Environment" that was developed by Aguilar (1976), who discusses four factors: Economic, Technical, Political, and Social. After that, several authors, such as Porter (1985), (Fahey and Narayanan 1986;Morrison and Mecca 1989;Jurevicius 2013;Rastogi and Trivedi 2016), and (Abdul Rahman et al. 2016), improved this method, producing various classifications such as PEST, STEEPLE, STEP and SPEPE. The PESTLE method is a useful tool to identify important factors that impact maritime business industries (Helmold 2019;Pulaj and Kume 2013;Syazwan Ab Talib et al. 2014;Vintilă et al. 2017). This method gives a panoramic view of the specific issue being examined in a plan. Research in many sectors has been published using the PESTLE method, such as the automobile industry (Li, Mao, and Qi 2009;Tan et al. 2012). The implications of the opening of the Northern Sea Route for Malaysia were studied using PESTLE/PESTLES methods (Abdul Rahman et al. 2016, 2014). However, these studies do not consider important elements such as the novel seaport system, maritime security and safety, supporting services, maritime transport logistics and human resources in maritime transport. Therefore, in order to enhance the PESTLE method in this article, a new arrangement of elements and various sub-elements will be considered. For instance, a new sub-element in the "Environment" element is "National strategy to control air and water pollution". The "Political" element considers Stability of Government, Tax policies, Social policies, Entry mode regulation, National plan of Shipping industry, and National energy Security policies. These elements and sub-elements have been identified, analysed, and reported as patterns with data by using thematic analysis. Thematic analysis allows for flexibility and a detailed and complex description of the data. It may develop a deeper appreciation for the situation or group under study (Braun and Clarke 2006).
In this study, in order to comprehensively analyse the options responding to the IMO Strategy in the Vietnamese maritime industry, the standard six elements were analysed using the thematic Political, Legal, Economic, Environmental, Technological and Sociological (PLEETS) method (see Figure 1), finding the answers to seven key questions posed to experts from the main agencies, including the Vietnam Maritime Administration, Vietnam Register, Ministry of Transportation, Vietnam Oil and Gas Group, port operators, Port Authority, shipping companies, logistics companies and freight forwarders.
Political -What political aspects are likely to impact the maritime activities?
Legal -What legislation will impact the maritime activities?
Economic -What economic elements will impact the maritime activities?
Technological -What technological trend may impact the maritime industry?
Sociological -What socio-cultural aspects are likely to impact the business?
Identifying the issue
The IMO's lower sulphur limit for marine fuels has both good and bad effects for shipping industries, countries and other groups. The advantages of IMO 2020 are strong public health benefits and encouragement of the development of green power generation. However, it also brings adverse effects: a large majority of the commercial fleet in the world will switch from high sulphur fuel oil (HSFO) to marine gas oil (MGO) or distillates, resulting in high pressure on refiners to process more crude oil to maximize distillate output, increasing distillate prices. It is expected that the cost of compliant fuel for IMO 2020 will differ greatly from the cost of HSFO, leading ship-owners to install scrubbers on board, use alternative fuels like blended fuels, or switch to LNG. Therefore, the implications of each option for IMO 2020 on changing maritime activities will be discussed using thematic analysis combined with PLEETS analysis through different aspects (Figure 1).
Possible implications
Firstly, there are changes in the geographical aspect after implementation of the Kra Canal, whereby new maritime routes can be developed, for instance: Northern region (Hai Phong port) - Hon Khoai - Kra - Middle East/Europe; Central region (Van Phong port) - Hon Khoai - Kra - Middle East/Europe; Southern region (Ba Ria - Vung Tau) - Hon Khoai - Kra - Middle East/Europe. Vietnam has a priority strategy for developing international gateway ports for vessels of larger than 100,000 DWT in Hai Phong, Ba Ria - Vung Tau and Khanh Hoa, developing comprehensive navigation facilities in all channel systems. Interestingly, the Hon Khoai Project was approved by Vietnam's Prime Minister: it would build a deep-water seaport, named Hon Khoai Port, on the southeast of Hon Khoai Island, 15 km off the coast of Ca Mau province. According to the opinions of various experts, Hon Khoai Port will be developed to become the largest seaport in Vietnam, forming the main link and opening a new gateway connection port for global goods and services in Vietnam, especially coal, petroleum and containers. The "Investment report on Hon Khoai general seaport project", prepared by VIP, estimated the total investment at US$ 5 billion, of which US$ 3.5 billion is for the "super port" and US$ 1.5 billion for logistics. It is expected that Hon Khoai Port will have the capacity to handle 800 million tons of cargo each year.
Figure 1. Thematic PLEETS analysis template
Secondly, after development of the Kra Canal, given the geographical advantage of Hon Khoai Port, it is predicted that Hon Khoai Port will be one of the target destinations gaining significant benefits from the increase in foreign ship calls, acting as a global logistics hub for the handling of import and export cargo of the Mekong Delta as well as transhipment cargo.
Currently, Vietnam is enhancing on waterway between Ho Chi Minh City and Ca Mau to facilitate the operations of barges of more than 2,000 tons, upgrading Xa No Canal and the Dai Ngai-Bac Lieu-Gia Rai sea route, modernizing large river ports for containerized goods including Binh Long, An Phuoc, Long Binh and Cai Lay Ports, upgrading project to dredge and expand Cho Gao Canal to 80 meters, etc.
The Hon Khoai Port becomes the new shipping hub as a special trade zone; it attracts investment of various production companies and providing opportunities to business as well as development of regional economy.
Pros and cons implications of option for the IMO 2020
Obviously, four options above also have pros and cons when considering political, legal, economic, environmental, technological and sociological aspect. In this section shows the results of option using LNG and option using blended fuel oil as potential trend of Vietnamese ship-owners.
According to the plan for development of the gas industry of Vietnam by 2025 with a vision to 2035 (Prime Minister Decision No 60/QD-TTg 2017), Vietnam will research, find markets and accelerate the construction of port infrastructure facilities to be ready to receive and import LNG, reaching 1-4 billion cubic meters per year in the period 2021-2025 and 6-10 billion cubic meters per year in the period 2026-2035. Using LNG or sulphur-free fuel for the Vietnamese commercial fleet will have both positive and negative implications. Through a data collection process and direct interviews with experts, this information is shown in Table 2 below. It is noted that using LNG as an alternative fuel oil will support the development of new shipbuilding, which is encouraging for Vietnamese shipping. Table 2. Pros and cons of using LNG in Vietnam.
Factors
Positive implication Negative implications Political • Vietnam will has more economic cooperation with other countries to import LNG and to reduce GHG emission (IGF Code) • It will enhance the safety, environment and sustainable development.
• It is the movement of foreign investment to develop the LNG ports or LNG vessel, LNG Bunkering chain. • Education and training for crew, expert in LNG supply and users chain Legal • Development of legal framework of LNG for shipping, seaport and logistic services activities.
• Creating a stable, conducive legal system that supports the development and operation of shipping, such as administrative reform. Economic • LNG will be one of the important marine fuels; therefore more suppliers and users will call at Vietnamese ports. • LNG is a much cheaper form of fuel, almost half the cost of crude oil and one third the cost of diesel oil. • More ship calls and cargo handled by the ports lead to increasing income for logistics and supply chains. • Major ports can potentially be developed to supply bunker LNG for vessels. • Overall, there is an increase in the maritime economy that makes an important contribution to the Vietnamese economy.
• The building cost of an LNG ship is much higher, and the ship more complex, than other cargo ships • The space required for LNG fuel is four times the space required to store other fuels, decreasing the cargo capacity of the ship • Huge finance is needed to invest in the LNG supply chain/infrastructure • The cost of search and rescue activities in Vietnamese waters increases. Environmental • LNG is the cleanest form of fuel, with a sulphur content of less than 0.1%, reducing GHG emissions • Enhancing the development of green ports and supporting the national strategy to reduce GHG emissions.
• Methane emission from using LNG needs to be investigated because it can cause environmental pollution.
Technology
• It is enhancement of developing LNG building technology, designed LNG storage tanks • LNG bunkering infrastructure for ships is improving quite rapidly in the world such as Singapore, Japan, Korea, China, etc.
• LNG engines, LNG storage and LNG shipbuilding require complex designs • Costs for LNG systems are, and will continue to be, higher than the expenditures associated with using a scrubber system with HSFO.
Sociological
• It improves public health • It creates jobs and income for local community around ports area because of the developing LNG logistics and supply chains.
• HSFO supply income is expected to show a downward trend.
In terms of Vietnamese ship-owners, the option of using very-low-sulphur fuel oil or compliant fuel blends (0.50% sulphur) is the priority choice; its positive and negative implications are shown in Table 3.
Vietnam needs to find a way to benefit if the IMO 2020 scenario is going to be realised. First and foremost, the Vietnamese Government and related Ministries need to confirm the availability of compliant fuel, encouraging LNG/LSFO for Vietnamese shipping and Vietnamese ports from 1 January 2020. Secondly, accuracy in the quantity of fuel delivered must be ensured through an adopted technical reference for bunker mass flow metering and specification of quality management of the bunker supply chain. Thirdly, according to Regulation 17 of MARPOL Annex VI, Vietnam, as a party to MARPOL, is required to provide reception facilities for the collection of scrubber residue. Finally, a policy to encourage all economic sectors to invest in the Vietnamese maritime industry, including the development of new compliant fuel supply and logistics services, should be studied and implemented; this is a key factor in the shipping industry.
Conclusion
The IMO strategy to reduce ship emissions by 2020 and to zero by 2050 is warmly welcomed by most people because of the real benefits that it brings to human health. Ship-owners, ship operators, maritime authorities and the oil industry have to find solutions to meet the new emission requirements. In this study, an analysis of the advantages and disadvantages of the available options for compliance with IMO 2020 across political, legal, economic, environmental and sociological aspects showed that encouraging LNG/LSFO for Vietnamese shipping is an imperative need. The country's inherent geographical strengths need to be taken advantage of to propose plans for maritime business that boost and reshape Vietnam's maritime economy, for instance: developing policies to enhance the control and supervision of the compliant fuel (LNG/LSFO) supply chain and logistics services; promulgating legal policy to encourage all economic sectors to invest in new ships using LNG fuel and LNG barge suppliers; and building a long-term plan to develop main bunker fuel supply or energy sources for vessels.
Disclosure statement
No potential conflict of interest was reported by the authors.
Factors
Positive implication Negative implications Political • It will help Vietnam commercial fleet satisfy with IMO 2020 • It is the potential investment to change to the blended fuel bunkering chain. Legal • Development of general industry guidance on potential safety and operational issues related to the supply and use of blended fuel for shipping, seaport and logistic services activities.
• Creating a stable legal, conducive system that supports the developing and operation of the shipping like administrative reform. Economic • Blended fuel is the number one choice for Vietnamese shipowners because the lowest conversion cost from using highsulphur fuel to blended fuel is minimal therefore more supply and users will call at the Vietnamese ports. • More ship call and cargo handled by the ports lead to increasing income of logistics and supply chains. • Major sea-ports will be potentially changed to supply bunker blended fuel for vessels. • Development of cleansing service will be developed around the sea-ports of Vietnam.
• Finance and time are needed to clean the fuel oil tanks.
• Cost for search and rescue activities at Vietnam waters increases.
• The effectiveness of vessel operation may decrease because of the characteristics of blended fuels, such as the stability of the blended fuel oil, acid number, flash point, etc.
Environmental • Blended fuel oil is one of fuel with sulphur content less than 0.5%, reducing GHG emission • Enhancing development of green port, and supporting national strategy to reduce GHG emission.
• The sulphur content may exceed 0.5% if tank cleaning and washing are not thorough, with the result that the vessel does not satisfy IMO 2020.
Technology
• It is enhancement of developing new blended fuel oil in Vietnam.
• The effectiveness of new blended fuel oils requires various assessments by scientists. Sociological • It improves public health in Vietnam.
• It enhances the scientists and industry to research and develop new blended fuel oils and its supply chains.
• HSFO supply income is expected to show a downward trend. | 5,856.2 | 2020-04-02T00:00:00.000 | [
"Engineering"
] |
Predicting quantum emitter fluctuations with time-series forecasting models
2D materials have important fundamental properties allowing for their use in many potential applications, including quantum computing. Various van der Waals materials, including tungsten disulfide (WS2), have been employed to showcase attractive device applications such as light emitting diodes, lasers and optical modulators. To maximize the utility and value of integrated quantum photonics, the wavelength, polarization and intensity of the photons from a quantum emitter (QE) must be stable. However, random variation of emission energy, caused by inhomogeneity in the local environment, is a major challenge for all solid-state single photon emitters. In this work, we assess the random nature of the quantum fluctuations, and we present time-series forecasting deep learning models to analyse and predict QE fluctuations for the first time. Our trained models can roughly follow the actual trend of the data and, under certain data processing conditions, can predict peaks and dips of the fluctuations. The ability to anticipate these fluctuations will allow physicists to harness quantum fluctuation characteristics to develop novel scientific advances in quantum computing that will greatly benefit quantum technologies.
Quantum phenomena such as superposition and entanglement offer many opportunities to revolutionize secure communication 1, computation 2, simulation 3, and sensing 4 technologies. Realization of these opportunities necessitates the development of new arsenals of materials, devices, and control routines to robustly generate quantum states (qubits) and perform quantum operations (gates). In particular, the field of quantum photonics focuses on using single photons of light as qubits to both transmit quantum information and perform quantum processing operations. Single photons are resilient to decoherence effects, making them ideal quantum information carriers. Thus, classical integrated photonic technologies offer an exciting foundation on which to build and deploy wafer-scale quantum photonic systems 5. Generation of single-photon states is a fundamental requirement for quantum photonics, where the ideal solution has a small footprint to facilitate incorporation in integrated photonic circuitry. Additionally, single-photon states will enable the production of photons at GHz rates, electrical triggers for on-demand photon generation, and indistinguishable photons with identical polarization states and wavelengths 6.
Generation of single photons of light with solid-state materials is an attractive solution for quantum light sources for integrated photonics 7 .Unlike solutions that rely on nonlinear optical processes, solid state quantum emitters (QEs) can be triggered on-demand and have the potential for miniaturization for incorporation into integrated photonics.In 2015, solid-state QEs were discovered in the two-dimensional (2D) semiconductor single-layer WSe 2 (1L-WSe 2 ) [8][9][10][11][12] .While this new class of quantum light sources has many exciting properties, the wavelength (and presumably the polarization) of the photons emitted significantly fluctuates on the timescales of seconds (i.e.spectrally diffuses/drifts/wanders), significantly diminishing their performance in terms of photon indistinguishability 13 .From a materials/devices perspective, increased stability may be achievable by increasing the strain in the system 14 , tuning the charge density in the material 15 , and/or modifying the surrounding dielectric environment 16,17 .Additionally, because the QEs in 2D materials are responsive to external stimuli 18 , the emitters have the potential to be actively monitored and stabilized with a feedback loop during operation.Just recently, a revolutionary concept of a smart quantum camera that leverages artificial intelligence to discern statistical fluctuations of unknown mixtures of light sources has been introduced 19 .The incorporation of a universal quantum model and artificial neural networks not only surpasses existing superresolution limitations but also presents new opportunities in microscopy, remote sensing, and astronomy.Equally noteworthy is the exploration of shaping light beams with varied photon statistics, that offers a remarkable degree of control over spatially varying photon fluctuations 20 .
Here, we present an initial exploration of the ability of a neural network-based machine learning algorithm to predict the fluctuation of QEs in 2D materials based on the immediate history of their emission. Such a predictive algorithm could be used to improve photon indistinguishability in an advanced QE device where the emission wavelength is monitored and external stimuli are applied to prevent predicted fluctuations. Our results show that a neural network that is rudimentarily trained on the intensity fluctuations of discrete wavelength bins may be able to forecast future fluctuations. The work sets the stage for more sophisticated training strategies that take into account both emitter intensity and peak emission wavelengths.
Fluctuations of QEs in 1L-WS 2
Figure 1 shows representative deleterious fluctuations of QEs from randomly occurring nanobubbles 21 in single-layer WS2 that is deposited on a gold surface with gold-assisted exfoliation techniques 22,23. The QEs are identified as narrowband emission lines that are superimposed on a weaker background of broader, excitonic states. The time evolution of the QEs is measured by repeatedly acquiring individual photoluminescence spectra every 2 seconds over a total duration of several minutes. The time series dataset is constructed by appending individual spectra acquired in nearly continuous succession. The temporal delay between each acquisition is less than 500 ms. See Methods for additional details of the experimental apparatus. Inspection of the time evolution reveals that the center wavelengths and the emission intensities of individual states change on the timescale of seconds. These fluctuations are likely caused by the local environment surrounding the emitter, including nearby metallic and dielectric structures. QEs are identified in four wavelength bands that are centered at 606 nm, 613 nm, 621 nm, and 629 nm. Large-area encapsulation of the devices with thin layers of hBN provides a potential means of mitigating these fluctuations. However, it is not an absolute solution to eliminate them completely. To further address stability, QEs that are dynamically tunable can be used alongside a feedback system that intermittently samples QLED emission and applies corrective stimuli. Machine learning time-series forecasting models have the potential to predict QE fluctuations and provide physicists with essential information to apply corrective stimuli to stabilize their emission wavelength and intensity in real time.
Time-series forecasting
Time-series forecasting is an important applied machine learning technique that includes developing models to analyse (describe and predict) sequences of data collected during time intervals. These models are widely used in different areas such as finance 24, sales forecasting 25, climate time series 26, pore-water pressure 27, and medicine 28, to name a few. In time-series forecasting models, future values of a target y i,t for a given entity i at time t are predicted. The entity represents a logical grouping of temporal information. The most straightforward models, which predict one step ahead, can be represented as below:

ŷ i,t+1 = f (y i,t−k:t , x i,t−k:t , s i )

In this equation, ŷ i,t+1 represents the model forecast, y i,t−k:t represents observations of the target over the previous k samples, x i,t−k:t represents observations of the input over the previous k samples, s i is static metadata linked to the entity, and f (•) is the prediction model function which is learned by the model 29.
Recurrent neural networks
Deep learning algorithms have been developed and frequently used to extract information from many types of data during the last several years 30,31. Recurrent neural networks (RNNs) are types of deep learning networks that can be used to analyse temporal information due to the feedback loops in their architecture, which serve as their memory. They take information from prior inputs and update the current input and output 30, whereas in traditional neural networks, inputs and outputs are independent of each other. Figure 2 illustrates the difference between a traditional feed-forward neural network and a recurrent neural network.
To be able to process sequence data, RNNs have repeating modules arranged in chains (Fig. 3), where they can share parameters across different time steps, with the intention of employing these modules as a memory to store crucial data from earlier processing steps. The capacity of recurrent neural networks makes them suitable for many applications such as natural language processing and language modeling 32,33, speech recognition 34,35 and emotion recognition in videos 36.
Over time, various kinds of RNNs have been developed and applied to temporal forecasting problems with strong results [37][38][39][40]. Recurrent connections can enhance neural network performance by taking advantage of their capacity for comprehending sequential dependencies. However, the techniques used for training RNNs might significantly restrict the memory generated by the recurrent connections. Older variants of RNNs suffer from exploding or vanishing gradients during the training phase, causing the network to fail to learn long-term sequential dependencies in the data 41.
Explanation of LSTM networks
Long short-term memory (LSTM) is the most popular method and model to address the insufficient memory of RNNs and to tackle the problem of exploding and vanishing gradients. LSTM was introduced for the first time in 1997 to address the problem of long-lasting learning to store information over extended time intervals by recurrent backpropagation 42. In LSTM, instead of sigmoid or tanh activation functions, there are memory cells with inputs and outputs which are controlled by gates 42,43. In other words, LSTM is a modified version of the RNN made of repeating modules called memory cells. These cells consist of three gates: an update or input gate, a forget gate, and an output gate. These gates work together to handle learning long-term dependencies. The input gate and output gate decide what information to update and to pass on to the next cell, respectively, while the forget gate decides the least relevant information to throw away. This structure is shown in Fig. 4.
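A minimal sketch of an LSTM-based one-step-ahead forecaster of the kind described above is shown below; the Keras/TensorFlow framework, layer sizes and training settings are assumptions for illustration, not the exact architecture used in this work:

```python
import numpy as np
import tensorflow as tf

window = 5        # 5 previous time steps (10 s at 2 s per spectrum), as in the preprocessing section
n_features = 4    # e.g. intensities of the four wavelength bins

# Small LSTM mapping a (window, n_features) history to a one-step-ahead prediction.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(32),             # memory cells with input/forget/output gates
    tf.keras.layers.Dense(n_features),    # predicted intensities at the next time step
])
model.compile(optimizer="adam", loss="mse")

# Toy data shaped like the windowed dataset: (samples, window, n_features) -> (samples, n_features)
X = np.random.rand(64, window, n_features).astype("float32")
y = np.random.rand(64, n_features).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)

next_step = model.predict(X[:1], verbose=0)   # forecast 2 s ahead for one history window
print(next_step.shape)  # (1, n_features)
```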
LSTM applications
LSTM neural networks can be applied to various tasks including prediction, pattern classification, recognition and analysis. Since LSTM is able to process sequential data, it is an effective tool in many different domains such as statistics, linguistics, medicine, transportation, computer science and more 45. For instance, in 46 LSTM-based models are used in natural language processing (NLP) for sequence tagging. In another work, Xue et al. 47 applied LSTM to pedestrian trajectory prediction. They proposed a network based on LSTM which is able to consider both the influence of the social neighbourhood and scene layouts in pedestrian trajectory prediction. Precipitation nowcasting is a crucial and challenging weather forecasting problem, in which the goal is to predict the future rainfall intensity in a local region over a relatively short period of time. Shi et al. 48 used LSTM networks to build an end-to-end trainable model for the precipitation nowcasting problem. These instances are just a few of the numerous applications of time-series forecasting models that are under research and applied in forecasting problems. The strong results achieved in recent work motivated us to apply time-series forecasting models to the problem of QE fluctuations in intensity and energy.
Correlation and autocorrelation
Various physical phenomena can cause variables within a dataset to be related or to fluctuate in relation to each other. Finding and quantifying how dependent the variables in a data set are on one another or on themselves is crucial in many machine learning algorithms, including time-series forecasting problems, to ensure the possibility of time-series predictions. Correlation is a statistical measure that indicates this relationship. Correlation can be either positive, negative or zero. A positive correlation indicates that variables change in the same direction. A negative correlation indicates that the variables change in opposite directions, meaning that while one increases, the other decreases. A zero correlation indicates that the variables have no relationship with each other. Therefore, if a signal has zero autocorrelation after a certain number of time lags, then it is theoretically impossible to predict the future states of that signal using only past measurements of that signal. Nonzero cross-correlation and autocorrelation indicate that it should be possible to perform meaningful predictions on the time-series data. The correlation coefficient is a unit-free statistical measure in the range of -1 to +1 that reports the correlation between variables. One of the most popular coefficients is Pearson's coefficient, also known as the product moment correlation coefficient. It is the ratio between the covariance of two variables and the product of their standard deviations. Given a pair of variables (X, Y), the coefficient, represented by ρ, is defined as below:

ρ(X,Y) = cov(X,Y) / (σ X σ Y)

In this formulation, cov is the covariance, σ X is the standard deviation of X and σ Y is the standard deviation of Y. For a sample and given paired data of X and Y, Pearson's coefficient is commonly represented by r and can be expressed in terms of the sample means as below:

r = Σ (x i − x̄)(y i − ȳ) / √( Σ (x i − x̄)² Σ (y i − ȳ)² )

where n is the sample size, x i and y i are the sample points indexed with i, and x̄ and ȳ are the sample means.
In time-series problems, analyzing the correlation between a series and lagged versions of itself can provide predictive information. The correlation coefficient between a time series and its lagged versions over time intervals is called autocorrelation. In other words, autocorrelation determines the degree of similarity between a series and its lagged counterparts. Autocorrelation analysis is a popular tool for determining the degree of randomness in a dataset. This randomness is assessed by measuring and analysing the autocorrelations of the data values at various time lags. If the nature of the data over time is random, such autocorrelations ought to be close to zero for all time-lag separations. If the autocorrelation at one or more lags is significantly non-zero, then the data are not random 49 .
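The sketch below illustrates the randomness check described above on synthetic data (a noisy periodic series versus pure noise); the series themselves are toy stand-ins, not measurements from the experiment.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation of a series for lags 0..max_lag."""
    x = np.asarray(x, float)
    xc = x - x.mean()
    var = np.sum(xc**2)
    return np.array([np.sum(xc[:len(x) - k] * xc[k:]) / var for k in range(max_lag + 1)])

rng = np.random.default_rng(1)
t = np.arange(200)
periodic = np.sin(2 * np.pi * t / 20) + 0.3 * rng.normal(size=t.size)
noise = rng.normal(size=t.size)
print(autocorrelation(periodic, 5).round(2))  # stays well away from zero
print(autocorrelation(noise, 5).round(2))     # near zero for all non-zero lags
```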
Methodology Data acquisition
Figure 5a shows a schematic of the optical microscope used to measure the luminescence of single 1L-WS 2 nanobubbles. The optical setup includes a standard confocal photoluminescence microscope built around a closed-cycle liquid helium cryostat with optical access (Montana Instruments) [50][51][52] . The nanobubble samples consist of 1L-WS 2 that is transferred on top of a smooth gold surface, as shown schematically in Fig. 5b and by the optical micrograph in Fig. 5c. Nanobubbles form at the 1L-WS 2 /gold interface due to the coalescence of contamination that was trapped in the interface during fabrication 21 . The sample is cooled to 4 K to slow the ionization of the QE states due to heat, and a laser beam (wavelength = 532 nm, continuous wave) is focused onto an area (1 × 1 µm 2 ) that contains narrowband emitters with energies less than the exciton state of 1L-WS 2 . The power of the excitation laser is adjusted to be below the saturation threshold of the emitters, which is common practice for achieving high emitter purity and for spectroscopic characterization of quantum emitters in 2D semiconductors 53,54 .
The photoluminescence from the sample is collected and spectrally isolated from the reflected laser light with a series of thin-film interference filters (532 nm long-pass filters). The fluorescence is then spectrally dispersed with a Czerny-Turner optical spectrometer (Princeton Instruments) and measured with a cooled scientific CCD camera (Princeton Instruments). A continuous series of 90 spectra was acquired from an emissive region of the sample with integration times of 2 s, spanning a duration of about 180 s.
Preprocessing
In order to learn historical values and patterns to forecast the future, a window of time must be fed to the algorithm so it can make predictions based on the previous values. For this, we transformed the data into input and output pairs in which the observations at 5 previous time steps (a window size of 10 s) are used as inputs to make predictions for one step into the future (2 s). To later train and evaluate the forecast models, we divided this dataset into three subsets: a training set, a validation set and a test set. The training set consists of the first 80% of the time sequence, the validation set consists of the next 10%, and the test set consists of the last 10% of the time sequence. This ensures that the time-series properties of the data are consistent within a particular subset of data, and that the test data has truly never been seen by the model before. The performance of machine learning models depends on the quality of the data. There are various preprocessing techniques which convert the data into useful information for the models. Among these, data normalization entails scaling or transforming the data to ensure that each feature contributes equally and to avoid bias towards a specific range of values in the feature data. As part of the preprocessing in our algorithm, we first checked the dataset for negative photon counts and zeroed them out, as they are produced by noise and/or calibration error. Then we normalized the data based on the statistical mean and standard deviation of the training set. Each value of the data (including those in the validation and test sets) is converted as X_new = (X − M) / σ, where X_new is the normalized value, M is the statistical mean of the training set and σ is the standard deviation of the training set.
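A minimal sketch of this preprocessing pipeline is shown below. The windowing, chronological 80/10/10 split, zeroing of negative counts and training-set z-scoring follow the description above; the array shape and random values are toy stand-ins for the measured spectra.

```python
import numpy as np

def make_windows(series, window=5, horizon=1):
    """Turn a (time, features) array into (inputs, targets) pairs:
    `window` past steps as input, the value `horizon` steps ahead as target."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

data = np.random.default_rng(2).normal(size=(90, 8))   # toy: 90 spectra x 8 wavelengths
data = np.clip(data, 0, None)                          # zero out negative counts

# chronological 80/10/10 split so the test data is never seen during training
n = len(data)
train, val, test = data[:int(0.8 * n)], data[int(0.8 * n):int(0.9 * n)], data[int(0.9 * n):]

# normalise with training-set statistics only
M, sigma = train.mean(axis=0), train.std(axis=0) + 1e-12
train_n, val_n, test_n = (train - M) / sigma, (val - M) / sigma, (test - M) / sigma

X_train, y_train = make_windows(train_n, window=5, horizon=1)
print(X_train.shape, y_train.shape)   # (n_samples, 5, 8), (n_samples, 8)
```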
Experiment and model
To forecast the photon intensities, we designed two different scenarios: 1. Prediction considering all measured wavelengths. 2. Prediction considering 4 different bands including the most correlated wavelengths.
For the prediction models, we developed a multivariate LSTM model and a shifted forecast that takes the current measurement as the prediction for the next step. Shifting the data is the simplest method of forecasting: it assumes the future value of a variable will be similar to its most recent past value. This method is often referred to as the naive forecast. It provides a quick and straightforward baseline for comparison with more sophisticated forecasting methods. As there is more than one variable in the dataset, a multivariate time-series forecast can be used. This consists of multiple variables, each of which depends to some extent on the other variables in addition to its own historical values. For the first scenario, we developed a model with inputs of 5 time steps (10 s) over all the wavelengths regardless of their correlations, and then made predictions for the values one step into the future. For the second scenario, we divided the wavelengths into the 4 bands that include the most correlated wavelengths (see Fig. 1). We then trained the model on each band individually, considering the same time steps as the first experiment, and predicted one step into the future.
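The naive baseline is simple enough to state in a few lines; the sketch below shows the idea on a toy series, with the RMSE computed against the shifted predictions.

```python
import numpy as np

def shifted_forecast(series):
    """Naive baseline: the prediction for step t+1 is the value observed at step t."""
    return series[:-1]          # predictions aligned with targets series[1:]

series = np.array([10., 12., 11., 15., 14., 16.])
preds = shifted_forecast(series)
targets = series[1:]
rmse = np.sqrt(np.mean((targets - preds) ** 2))
print(preds, rmse)
```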
In our research we used various open libraries in Python, such as TensorFlow, Matplotlib, Pandas and NumPy, for preprocessing, training and visualisation purposes. Our LSTM model consists of one LSTM layer followed by two fully connected layers to predict the future values. We employed a sequence length of 10 s as input, and predicted 2 s into the future. It took about 15 minutes to train the algorithm on all the measured wavelengths (first scenario) using an Intel(R) Core(TM) i9-8950HK CPU @ 2.90 GHz, 2904 MHz, 6 cores, 12 logical processors. All other models took less time to train. The LSTM models were trained using the Adam optimizer with a learning rate of 0.0001 and mean squared error as the loss function. Furthermore, during training we employed callbacks to prevent overfitting of the network and to save the best-performing model based on the error obtained on the validation set.
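A Keras sketch of such a model is given below. Only the overall structure (one LSTM layer, two dense layers), the Adam optimizer with learning rate 0.0001, the MSE loss and a validation-based callback are taken from the text; the layer widths, patience, batch size and epoch count are illustrative assumptions, and the input data are random placeholders.

```python
import numpy as np
import tensorflow as tf

n_steps, n_features = 5, 8        # 5 time steps (10 s) of toy 8-wavelength data
X = np.random.rand(200, n_steps, n_features).astype("float32")
y = np.random.rand(200, n_features).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_steps, n_features)),
    tf.keras.layers.LSTM(64),                       # one LSTM layer
    tf.keras.layers.Dense(32, activation="relu"),   # two fully connected layers
    tf.keras.layers.Dense(n_features),              # one-step-ahead prediction
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")

callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                     restore_best_weights=True),
]
model.fit(X, y, validation_split=0.1, epochs=5, batch_size=16,
          callbacks=callbacks, verbose=0)
```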
Evaluation
To evaluate the models, we used root mean square error (RMSE) as our cost function, which is commonly used in forecasting problems [55][56][57] . Considering n as the number of samples, y_i the actual value and ŷ_i the predicted value, RMSE is defined as RMSE = √( (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² ). For each of the experiments and models, we report the RMSE. We also present plots of the actual intensity values and predicted values, as well as the differences between the two.
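For completeness, a direct implementation of this metric on toy values:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between actual and predicted values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

print(rmse([3.0, 5.0, 2.5], [2.5, 5.0, 4.0]))   # ~0.913
```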
Correlation
To divide the data into bands with the highest correlations, we measured the correlation matrix and picked the wavelengths with the highest Pearson correlation coefficients. This resulted in 4 bands with different numbers of wavelengths, as described in Table 1. The correlation coefficient matrices are shown in Fig. 6. Before starting to train the time-series forecasting models, we assessed the randomness of the data to ensure the problem is predictable. We measured the autocorrelation of the dataset for 30 lags (60 s) and plotted the centre of each band for illustration purposes in Fig. 6. The blue shaded area is the confidence interval and represents the significance threshold: autocorrelations inside the blue area are statistically insignificant, and anything outside of it is statistically significant. It is visible that, for each of the wavelengths, there are several autocorrelations that are significantly larger than zero; therefore, the time-series dataset is not completely random.
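The band-selection idea can be sketched with pandas as below. The wavelength column names, the correlation threshold of 0.8 and the synthetic data are illustrative assumptions; only the principle of grouping wavelengths by their Pearson correlation is taken from the text.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
common = rng.normal(size=200)
df = pd.DataFrame({
    "w606.5": common + 0.1 * rng.normal(size=200),   # strongly co-fluctuating pair
    "w607.0": common + 0.1 * rng.normal(size=200),
    "w620.4": rng.normal(size=200),                   # unrelated wavelength
})

corr = df.corr(method="pearson")                      # correlation matrix
threshold = 0.8
band = [c for c in corr.columns if corr.loc["w606.5", c] >= threshold]
print(corr.round(2))
print("band around w606.5:", band)
```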
Prediction
After confirming the non-random nature of the data and developing the prediction models, we applied the trained models to the test sets and measured the RMSE. The measurement results are reported in Table 1. For each of the bands, the RMSE is calculated twice for the LSTM prediction: once considering the model trained on the full image (all the wavelengths) and once considering the model trained only on the subset of wavelengths within the band. For almost all the experiments, the LSTM model performs better than the shifted forecast, significantly so for band 1 and band 2. The RMSE values measured for the LSTM forecast show that higher correlations help the models to learn the historical data better, and therefore lead to improvements in prediction results. In Fig. 7, we display the prediction images using both the LSTM forecast and the shifted forecast. For comparison of the two models, we also show the differences between the actual image and the predicted image.
Figure 8 shows the forecast results for two wavelengths belonging to band 3. For wavelength 620.52 nm, the LSTM model was able to follow the actual trend fairly well. For example, it was able to predict several peaks and troughs at the times 9-12 s and 17-18 s. For the wavelength of 619.17 nm, the LSTM model was able to detect the highest peak, even though the predicted value is far from the actual data in terms of intensity. In both plots, there are many peaks and troughs that were not predicted by the LSTM model. Also note that the shifted forecast, by definition, identifies all peaks and troughs, but at a delayed time. The delay in QE prediction makes a simple shifted forecast infeasible to incorporate into a feedback control system. As shown in Table 1, when the algorithm is applied to a narrow band (Band 1, 4.18 nm wide) the RMSE improves from 225.74 to 76.26. However, when applied to the wider bands (Band 2 = 5.27 nm; Band 3 = 5.93 nm; Band 4 = 12.41 nm) and to the full image (137.03 nm), the performance of the LSTM method and the shifted forecasting method is comparable. Based on these results, as well as the correlation matrices presented in Fig. 6, it seems that the information required for effectively predicting quantum fluctuations lies in a fairly small wavelength band. Future work will investigate this hypothesis by creating and analyzing datasets of narrowband quantum fluctuations.
Discussion
To the best of our knowledge, the development of high-performance models that can forecast the fluctuations of quantum emitters (and other nanoscale light sources) has not been studied prior to this work. Being able to forecast these fluctuations would provide deep insight into their origins and enable scientists and engineers to pursue predictive stabilization of nanoscale solid-state light sources. In this work, we developed LSTM-based models for the prediction of QE fluctuations over time for emitters in 1L-WS 2 on gold. These emitters show dramatic fluctuations, providing an extreme test of the ability of neural network models to identify trends in the fluctuations. Despite the challenge of the task, our trained model was able to predict the general trend of a quantum emitter better than a simple shifted forecast, as measured by improved RMSE. To some extent, the model was able to predict the peaks and troughs of the fluctuations, though this behavior was inconsistent. We also showed that the fluctuations are non-random in nature, as measured by correlation and autocorrelation. Building on this initial step, our future work will focus on increasing the size and duration of the datasets on which the neural network is trained and on explicitly training the neural network on wavelength and intensity fluctuations (as opposed to just the wavelength bands used here). We also plan to characterize the neural network performance as a function of emitter brightness by increasing the size of our dataset, which we think would be an interesting way to further explore this context. Additionally, future work will investigate the relationship between model performance and the wavelength bandwidth. These next steps will also lead to the development of multi-step, multivariate time-series forecasting, which could predict multiple steps into the future.
Figure 1 .
Figure 1. Fluctuations in intensity and energy for a nanobubble QE in single-layer WS 2 on an Au surface. The figure on the left shows the number of photons emitted by the device as a function of time and photon wavelength. The figure on the right shows the same data, but also indicates four different wavelength bands where quantum emission occurs.
Figure 2 .
Figure 2. Recurrent neural network and feed-forward neural network architectures. The figure on the left shows a recurrent neural network with recurrent connections on the hidden states, through which backpropagation through time is performed, whereas the figure on the right shows the feed-forward neural network architecture, which has no recurrent connections.
Figure 3 .
Figure3.Unfolded sequential architecture in an RNN.X t is the input at time step t, Y t is the output at time step t and H t is the hidden state at time t.The repeating modules are designed to act as a collective memory, sharing parameters across various time steps to store important data from earlier processing stages.
Figure 4 .
Figure 4. Memory cells structure in LSTM44 .These cells are comprised of three gates: an update or input gate, a forget gate, and an output gate.Together, these gates facilitate the learning of long-term dependencies.The input gate and output gate determine what information to update and transmit to the next cell, respectively.On the other hand, the forget gate decides which information is least relevant and should be discarded.
Figure 5 .
Figure 5. The experimental setup used to acquire the time-series datasets. (a) A schematic of the optical microscope used to measure the photoluminescence at a sample temperature of 4 K. A 532 nm continuous-wave diode laser is used to excite the sample, and a 532 nm long-pass dichroic mirror and fluorescence filter are used to reject the laser light before focusing the photoluminescence into a grating spectrometer. (b) A schematic of the cross-section of the sample, which consists of 1L-WS 2 nanobubbles on a gold substrate. (c) An optical micrograph of the sample. The regions that contain the 1L-WS 2 with nanobubbles are outlined by the dashed lines.
Figure 6 .
Figure 6. Correlation matrix and autocorrelation plot for the 4 bands and their centres for 30 lags. (a) The image on the left represents the correlation matrix for the wavelengths within band 1 and the image on the right represents the autocorrelation for lambda = 606.49 nm. (b) The image on the left represents the correlation matrix for the wavelengths within band 2 and the image on the right represents the autocorrelation for lambda = 613.77 nm. (c) The image on the left represents the correlation matrix for the wavelengths within band 3 and the image on the right represents the autocorrelation for lambda = 620.38 nm. (d) The image on the left represents the correlation matrix for the wavelengths within band 4 and the image on the right represents the autocorrelation for lambda = 630.37 nm.
Figure 7 .
Figure 7. Prediction results using the LSTM and shifted forecasts. Columns represent the actual image, the LSTM forecast, the shifted forecast, the difference between the actual image and the LSTM forecast, and the difference between the actual image and the shifted forecast, respectively. Row (a) corresponds to the full image, where all the wavelengths within the dataset are considered. Rows (b-e) correspond to band 1, band 2, band 3, and band 4, respectively.
Figure 8 .
Figure 8. Single-step prediction for wavelengths 620.52 nm and 619.17 nm using the LSTM forecast and the shifted forecast. For 620.52 nm, the LSTM and shifted forecast RMSE are 420.37 and 346.7, respectively. For 619.17 nm, the LSTM and shifted forecast RMSE are 648.80 and 654.57, respectively.
Table 1 .
Root mean square error using LSTM and shifted forecast. | 6,442.8 | 2024-03-22T00:00:00.000 | [
"Physics",
"Materials Science"
] |
The Relationship Between Type 2 Diabetes Mellitus and Related Thyroid Diseases
Diabetes and thyroid diseases are caused by endocrine dysfunction and the two have been demonstrated to impact each other mutually. Variation in thyroid hormone levels, even within the normal range, can trigger the onset of type 2 diabetes mellitus (T2DM), particularly in people with prediabetes. However, the available evidence is contradictory. The purpose of this review is to understand the pathological relationship between thyroid-related disorders and T2DM. T2DM in thyroid dysfunction is thought to be caused by altered gene expression of a group of genes, as well as physiological abnormalities that result in decreased glucose uptake and disposal in muscles, increased splanchnic glucose absorption, and increased hepatic glucose output. Additionally, both hyperthyroidism and hypothyroidism can cause insulin resistance. Insulin resistance can develop in subclinical hypothyroidism as a result of a reduced rate of insulin-stimulated glucose transfer caused by a translocation of the glucose transporter type 2 (GLUT 2) gene. On the other hand, a novel missense variation (Thr92Ala) can cause insulin resistance. Furthermore, insulin resistance and hyperinsulinemia resulting from diabetes can culminate in goitrous transformation of the thyroid gland. Thyroid-related diseases and T2DM are closely linked. Type 2 diabetes can be exacerbated by thyroid disorders, and diabetes can worsen thyroid dysfunction. Insulin resistance has been found to play a crucial role in both T2DM and thyroid dysfunction. Therefore, failure to recognize inadequate thyroid hormone levels in diabetes and insulin resistance in both conditions can lead to poor management of patients.
Introduction And Background
Thyroid dysfunction and diabetes mellitus are the most frequently occurring endocrinopathies with a large impact on cardiovascular health. Diabetes is a global pandemic. Globally, the prevalence of diabetes has increased as a result of the rise in obesity and lifestyle changes. In 2017, the global prevalence of diabetes mellitus was 425 million. Currently, the worldwide prevalence of diabetes is rising and is expected to reach 366 million by 2030, impacting 44% of all age groups [1]. On the other hand, in the United States and Europe, the prevalence of thyroid dysfunction is 6.6% in adults; it is increasing with age and is more common among women compared to males.
Thyroid disorders are also significantly more prevalent in type 2 diabetes mellitus (T2DM) patients, ranging between 9.9% and 48%. Furthermore, studies have recorded a prevalence of thyroid disorders of 13.4% in the diabetic population, with a higher prevalence among females with type 2 diabetes (31.4%) than among males with T2DM (6.9%) [2]. Evidence also suggests that a strong underlying relationship exists between thyroid diseases and diabetes mellitus. Researchers have shown that thyroid hormone plays a role in controlling glucose metabolism and pancreatic function, while diabetes can alter thyroid function. For instance, the TSH response to thyrotropin-releasing hormone has been found to be reduced in diabetes, with accompanying decreased T3 levels and hypothyroidism [3]. It has been proposed that, in diabetes, reduced T3 levels reflect a decreased conversion of T4 to T3, based on research showing that hyperglycemia triggers a reversible decline in hepatic thyroxine concentration and deiodinase activity. Other studies have revealed that increased T3 levels, even for a short period, can cause insulin resistance, thereby contributing to T2DM.
Numerous investigations have also shown that an array of intricately associated hormonal, genetic, and biochemical abnormalities underlies this pathophysiological relationship. For example, 5′ adenosine monophosphate-activated protein kinase (AMPK) is a major target linking thyroid hormone feedback and the regulation of insulin sensitivity to energy expenditure and appetite [4]. Additionally, hypothyroidism (for example, Hashimoto's thyroiditis) and hyperthyroidism (for instance, Graves' disease) have been linked to diabetes mellitus. According to a comprehensive study, thyroid dysfunction occurs at a rate of 11% in diabetic patients. Moreover, autoimmunity has been identified as the primary etiology of diabetes mellitus linked with thyroid disorders. Furthermore, certain genetic variations have also been found to be associated with thyroid disorders and T2DM, for instance, mutations in GLUT4.
However, the link between T2DM and related thyroid disorders remains highly debatable, and human studies have shown conflicting results. Therefore, the purpose of this review is to understand the pathological relationship between thyroid-related disorders and T2DM.
Review The Link between thyroid disorders and T2DM
Thyroid hormones exert a direct influence on insulin secretion. Hypothyroidism results in a decrease in insulin production by beta cells, whereas hyperthyroidism leads to an increase in beta-cell responsiveness to catecholamines or glucose due to increased beta-cell mass. Additionally, thyrotoxicosis results in an increase in insulin clearance [5]. All of these changes occur as a result of alterations in thyroid hormone levels, which increase the risk of developing T2DM and can lead to diabetic complications or worsen diabetic symptoms.
Hyperthyroidism and T2DM
Increased glucose production from the liver is a key factor in the development of peripheral insulin resistance, glucose intolerance, and hyperinsulinemia [6]. In thyrotoxicosis, glucose intolerance is triggered by an increase in hepatic glucose output and an increase in glycogenolysis [7]. This process contributes to the progression of subclinical diabetes and the exacerbation of hyperglycemia in type 2 diabetes. Studies have also reported that T2DM and hyperthyroidism share some pathological features. For example, T2DM is characterized by changes in beta-cell mass, decreased insulin secretion, elevated intestinal glucose absorption, an upsurge in glucagon secretion, an increase in insulin breakdown, insulin resistance, and increased levels of catecholamines. These factors are also an important part of hyperthyroidism [8]. Among the aforementioned factors, insulin resistance has been identified as the most significant link between thyroid malfunction and T2DM. Hepatic insulin resistance is driven by excessive production of glucose rather than by fasting hyperinsulinemia. Additionally, increased hepatic glucose output has been found to be a critical determinant of elevated fasting plasma glucose (FPG) concentrations in T2DM patients [9]. During insulin resistance, muscle glucose is increased although uptake efficiency is decreased. Reduced glucose uptake into muscles and increased hepatic glucose output result in a deterioration of glucose metabolism. It is worth noting that insulin resistance can occur in both hyperthyroidism and hypothyroidism. According to recent discoveries, insulin resistance also impairs lipid metabolism [10]. Thus, insulin resistance appears to be a possible connection between thyroid dysfunction and T2DM.
Similarly, another study showed that beta-cell dysfunction and insulin resistance are both negatively correlated with TSH, which may be explained by the insulin-antagonistic properties of thyroid hormones combined with a rise in TSH. Higher blood T3 and T4 levels often result in decreased TSH levels via a negative feedback process. Through this feedback, as TSH levels rise, thyroid hormone levels drop and the insulin-antagonistic effects are diminished; when TSH levels decrease, thyroid hormone levels increase and the insulin-antagonistic effects are elevated. However, the mechanism by which hyperthyroidism leads to insulin resistance is not known, although it is a commonly observed phenomenon in diabetic patients with hyperthyroidism.
Genetic Variations, Hyperthyroidism, and T2DM
Several genes, namely mitochondrial uncoupling protein, GLUT4, GLUT1, PPAR gamma coactivator-1 alpha (PGC-1 alpha), and phosphoglycerate kinase (PGK) [11], regulate the connection between thyroid hormone and skeletal muscle. Among the several discovered genes, UCP-3 and GLUT-4 have been extensively researched. In skeletal muscle, T3 mediates GLUT-4, which has been shown to increase basal and insulin-induced glucose transport [12]. Mitochondrial uncoupling protein 3 (UCP 3) is a newly identified gene that has been linked to reduced fatty acid oxidation and glucose metabolism [13]. Additionally, research has reported that this gene plays a significant role in the downregulation of 5′ adenosine monophosphate-activated protein kinase and Akt/PKB signaling [14]. Furthermore, the potential role of T2 has been investigated, and it has been established that it is connected with sarcolemmal GLUT-4. Similarly, glycolytic enzymes and phosphofructokinase have been linked to GLUT-4 activity mediated by T2 [14]. Numerous genes have also been identified as being involved in peripheral glucose metabolism.
For example, T3 activates an array of genes involved in glucose metabolism by attaching to the thyroid hormone receptors. These receptors comprise TRβ1, TRβ2, TRβ3, and TRα1, the four primary T3-binding isoforms [15]. TRα1 is believed to modulate the metabolic actions of thyroid hormones. TRβ2 and TRβ1 are associated with the maintenance of the hypothalamic-pituitary-thyroid axis and of normal thyroid function [16].
Similarly, 3,5,3′-triiodothyronine (T3) originates from T4. It is generated by the type 1 or type 2 iodothyronine deiodinases (D1, D2) via removal of an iodine atom from the phenolic ring. On the other hand, type 3 deiodinase (D3) inactivates thyroid hormone by removing an iodine atom from the tyrosyl ring. Deiodinases are involved in the regulation of T3 bioavailability and therefore the insulin response. Thyroid hormones influence the expression of deiodinases in various tissues, thereby regulating T3 bioavailability and hence insulin responsiveness. T3 elevations are linked to a new missense variation (Thr92Ala). This variation is linked to insulin resistance. Additionally, it is related to a reduction in insulin-stimulated glucose clearance and glucose turnover in skeletal muscle and adipose tissue. In a meta-analysis, it was determined that intracellular triiodothyronine (T3) is associated with abnormalities in insulin sensitivity [16]. Research revealed that expression of GLUT 2 was enhanced in hyperthyroidism compared to the euthyroid phase [17]. Moreover, disturbances in lipid metabolism further establish a relationship between thyroid hormone and insulin resistance [17]. Furthermore, thyrotoxicosis results in an increase in lipid peroxidation, while hypothyroidism results in a decrease in glucose oxidation. Reduced LDL cholesterol and triglyceride levels result from increased LDL clearance. Thyroid hormone stimulates catecholamine activity, resulting in lipolysis of adipocytes and an increase in circulating fatty acids. The increased fatty acid supply counteracts the thyroid hormone-mediated enhancement of the hepatic long-chain fatty acid oxidative pathway, which is involved in gluconeogenesis. All of these genes associated with thyroid hormones play an important part in the pathogenesis of type 2 diabetes.
Hypothyroidism and T2DM
Hypothyroidism is characterized by decreased glucose absorption from the GI tract, prolonged peripheral glucose accumulation, reduced gluconeogenesis, decreased hepatic glucose production, and decreased glucose disposal [18]. Hypothyroidism can affect glucose metabolism in type 2 diabetes in different ways. For example, subclinical hypothyroidism can result in insulin resistance due to a decreased rate of insulin-stimulated glucose transfer induced by a translocation of the GLUT 2 gene. Additionally, according to one study, the physiological need for insulin is decreased in hypothyroidism because of reduced insulin clearance by the kidneys. Moreover, anorectic circumstances may also contribute to lower insulin production in hypothyroidism.
Furthermore, insulin resistance has been linked to hypothyroidism in a number of preclinical and in vitro investigations [19], where it was discovered that peripheral muscles become less sensitive to insulin under hypothyroid conditions. A plausible role for such disease has been suggested by dysregulated leptin metabolism [20]. Additionally, numerous authors have established a direct link between insulin resistance and hypothyroidism [21]. However, some researchers have observed inconsistent findings, highlighting the need for more research in this area.
Thyroid diseases and T2DM
The link between T2DM and thyroid cancer incidence is debatable. Large prospective cohort research discovered an increased risk of differentiated thyroid cancer among type 2 diabetic women [22]. Another large prospective study and a pooled analysis of numerous prospective trials [23] revealed no evidence of a significant relationship between thyroid cancer and diabetes. Additionally, a prior review of the literature revealed that any link between thyroid cancer and T2DM was most likely weak [24]. However, Korean research found that patients with early T2DM had a low incidence of thyroid cancer, with the effect continuing up to six years after the T2DM was detected [25]. Moreover, according to retrospective research published in December 2018, Chinese women with T2DM had a considerably increased risk of thyroid cancer [26].
Furthermore, evidence indicates that subclinical hypothyroidism or hyperthyroidism raises blood pressure and cholesterol levels, impairs insulin secretion, and compromises both micro-and macrovascular function, increasing the risk of peripheral neuropathy, peripheral artery disease, and diabetic nephropathy. On the other hand, another study proposed that SCH can protect against cardiovascular death in T2DM. Additionally, a previous review explored the relationship between diabetic complications and subclinical hypothyroidism. This meta-analysis discovered that T2DM patients having subclinical hypothyroidism were at increased risk of developing diabetic complications including peripheral neuropathy, nephropathy, and retinopathy. From the above findings, it is safe to assume that thyroid diseases can increase the risk of diabetic complications or can worsen diabetic symptoms. However, future research on the relationship between thyroid cancer and diabetes mellitus is highly recommended.
Effect of type 2 diabetes on thyroid diseases
In type 2 diabetes, older age, obesity, female sex, hospitalization, and thyroid peroxidase antibody (TPO-Ab) positivity all increase the risk of developing hypothyroidism. Diabetes impairs thyroid function by changing thyroid-stimulating hormone (TSH) levels and by disturbing the conversion of thyroxine (T4) to triiodothyronine (T3) in peripheral tissues [27]. In euthyroid diabetic patients, the nocturnal TSH peak can be absent or diminished and the TSH response to thyrotropin-releasing hormone (TRH) can be compromised. However, long-term hyperglycemia can have a cumulative effect on thyroid dysfunction. Therefore, while interpreting thyroid function tests, it is critical to keep in mind that, as in other acute systemic illnesses, diabetic ketoacidosis can result in a drop in T3 and T4 levels while TSH levels stay normal. Moreover, hyperinsulinemia and insulin resistance promote thyroid tissue proliferation, increase the prevalence of nodular thyroid disease, and result in goiter [28]. Additionally, diabetic individuals with Graves' orbitopathy are at greater risk of developing dysthyroid optic neuropathy than nondiabetics. Numerous studies have also shown that the association between diabetes and thyroid function can be bidirectional. For example, early type 2 diabetes or prediabetes can increase thyroid tissue hyperplasia, resulting in enlargement of the thyroid gland and the development of nodules. On the other hand, thyroid dysfunction affects glucose metabolism in diabetes. Furthermore, it is well established that the prevalence of subclinical hypothyroidism increases with age. Females and males have distinct predispositions to thyroid dysfunction, and obesity has been demonstrated to be strongly associated with hypothyroidism [29]. A review of 36 studies concluded that T2DM females over the age of 60 have a greater prevalence of subclinical hypothyroidism. Furthermore, a cross-sectional observational study in India of 1,508 T2DM patients found a significantly elevated risk of hypothyroidism in older type 2 diabetic patients (more than 65 years), with an OR of 4.2, and a clear difference between females and males (OR 4.82 vs 2.60), as well as between obese and normal-weight patients (OR 2.56 vs. 3.11) [30]. This implies that BMI status, gender, age, and sex hormones can also play a role in thyroid dysfunction and T2DM.
Conclusions
There is much evidence that thyroid disease and T2DM are closely related. T2DM is characterized by changes in beta-cell mass, decreased insulin secretion, elevated intestinal glucose absorption, an upsurge in glucagon secretion, an increase in insulin breakdown, insulin resistance, and increased levels of catecholamines. These factors are also an important part of hyperthyroidism.
Additionally, the existing evidence demonstrates that insulin resistance plays a critical role in the connection between thyroid dysfunction and T2DM. Thyroid dysfunction and T2DM have a bidirectional relationship. Thyroid disorders such as thyrotoxicosis and hypothyroidism can cause insulin resistance. Insulin resistance can develop in subclinical hypothyroidism as a result of a reduced rate of insulin-stimulated glucose transfer caused by a translocation of the glucose transporter type 2 gene (GLUT 2). On the other hand, higher levels of T3 activate a number of genes involved in glucose metabolism and insulin resistance. Additionally, insulin resistance and hyperinsulinemia enhance thyroid tissue growth, which can cause nodular thyroid disease and goiter. Furthermore, the literature suggests that subclinical hypothyroidism or hyperthyroidism raises blood pressure and cholesterol levels, impairs insulin secretion, and compromises both micro- and macrovascular function, increasing the risk of peripheral neuropathy, peripheral artery disease, and diabetic nephropathy. All these findings suggest that a strong relationship exists between thyroid diseases and T2DM, and that early screening and recognition of the risk factors can minimize the risk of these two conditions and their medical complications.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 3,792.6 | 2021-12-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Search for supersymmetry in events containing a same-flavour opposite-sign dilepton pair, jets, and large missing transverse momentum in √s = 8 TeV pp collisions with the ATLAS detector
Two searches for supersymmetric particles in final states containing a same-flavour opposite-sign lepton pair, jets and large missing transverse momentum are presented. The proton–proton collision data used in these searches were collected at a centre-of-mass energy √s = 8 TeV by the ATLAS detector at the Large Hadron Collider and correspond to an integrated luminosity of 20.3 fb −1 . Two leptonic production mechanisms are considered: decays of squarks and gluinos with Z bosons in the final state, resulting in a peak in the dilepton invariant mass distribution around the Z-boson mass; and decays of neutralinos (e.g. χ̃ 0 2 → ℓ + ℓ − χ̃ 0 1 ), resulting in a kinematic endpoint in the dilepton invariant mass distribution. For the former, an excess of events above the expected Standard Model background is observed, with a significance of three standard deviations. In the latter case, the data are well described by the expected Standard Model background. The results from each channel are interpreted in the context of several supersymmetric models involving the production of squarks and gluinos.
Introduction
Supersymmetry (SUSY) [1][2][3][4][5][6][7][8][9] is an extension to the Standard Model (SM) that introduces supersymmetric particles (sparticles), which differ by half a unit of spin from their SM partners. The squarks (q̃) and sleptons (ℓ̃) are the scalar partners of the quarks and leptons, and the gluinos (g̃) are the fermionic partners of the gluons. The charginos (χ̃ ± i with i = 1, 2) and neutralinos (χ̃ 0 i with i = 1, 2, 3, 4) are the mass eigenstates (ordered from the lightest to the heaviest) formed from the linear superpositions of the SUSY partners of the Higgs and electroweak gauge bosons. SUSY models in which the gluino, higgsino and top squark masses are not much higher than the TeV scale can provide a solution to the SM hierarchy problem [10][11][12][13][14][15]. If strongly interacting sparticles have masses not higher than the TeV scale, they should be produced with observable rates at the Large Hadron Collider (LHC). In the minimal supersymmetric extension of the SM, such particles decay into jets, possibly leptons, and the lightest sparticle (LSP). If the LSP is stable due to R-parity conservation [15][16][17][18][19] and only weakly interacting, it escapes detection, leading to missing transverse momentum (p miss T , with magnitude E miss T ) in the final state. In this scenario, the LSP is a dark-matter candidate [20,21].
Leptons may be produced in the cascade decays of squarks and gluinos via several mechanisms. Here two scenarios that always produce leptons (electrons or muons) in same-flavour opposite-sign (SFOS) pairs are considered: the leptonic decay of a Z boson, Z → ℓ + ℓ − , and the decay χ̃ 0 2 → ℓ + ℓ − χ̃ 0 1 , which includes contributions from χ̃ 0 2 → ℓ̃ ±(*) ℓ ∓ → ℓ + ℓ − χ̃ 0 1 and χ̃ 0 2 → Z * χ̃ 0 1 → ℓ + ℓ − χ̃ 0 1 . In models with generalised gauge-mediated (GGM) supersymmetry breaking with a gravitino LSP (G̃), Z bosons may be produced via the decay χ̃ 0 1 → Z G̃. Z bosons may also result from the decay χ̃ 0 2 → Z χ̃ 0 1 , although the GGM interpretation with the decay χ̃ 0 1 → Z G̃ is the focus of the Z boson final-state channels studied here. The χ̃ 0 2 particle may itself be produced in the decays of the squarks or gluinos, e.g. q̃ → q χ̃ 0 2 and g̃ → qq χ̃ 0 2 . These two SFOS lepton production modes are distinguished by their distributions of dilepton invariant mass (m ℓℓ ). The decay Z → ℓ + ℓ − leads to a peak in the m ℓℓ distribution around the Z boson mass, while the decay χ̃ 0 2 → ℓ + ℓ − χ̃ 0 1 leads to a rising distribution in m ℓℓ that terminates at a kinematic endpoint ("edge") [22], because events with larger m ℓℓ values would violate energy conservation in the decay of the χ̃ 0 2 particle. In this paper, two searches are performed that separately target these two signatures. A search for events with a SFOS lepton pair consistent with originating from the decay of a Z boson (on-Z search) targets SUSY models with Z boson production. A search for events with a SFOS lepton pair inconsistent with Z boson decay (off-Z search) targets the decay χ̃ 0 2 → ℓ + ℓ − χ̃ 0 1 . Previous searches for physics beyond the Standard Model (BSM) in the Z + jets + E miss T final state have been performed by the CMS Collaboration [23,24]. Searches for a dilepton mass edge have also been performed by the CMS Collaboration [24,25]. In the CMS analysis performed with √s = 8 TeV data reported in Ref. [24], an excess of events above the SM background with a significance of 2.6 standard deviations was observed.
In this paper, the analysis is performed on the full 2012 ATLAS [26] dataset at a centre-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 20.3 fb −1 .
The ATLAS detector
ATLAS is a multi-purpose detector consisting of a tracking system, electromagnetic and hadronic calorimeters and a muon system. The tracking system comprises an inner detector (ID) immersed in a 2 T axial field supplied by the central solenoid magnet surrounding it. This sub-detector provides position and momentum measurements of charged particles over the pseudorapidity 1 range |η| < 2.5. The electromagnetic calorimetry is provided by liquid argon (LAr) sampling calorimeters using lead absorbers, covering the central region (|η| < 3.2). Hadronic calorimeters in the barrel region (|η| < 1.7) use scintillator tiles with steel absorbers, while the pseudorapidity range 1.5 < |η| < 4.9 is covered using LAr technology with copper or tungsten absorbers. The muon spectrometer (MS) has coverage up to |η| < 2.7 and is built around the three superconducting toroid magnet systems. The MS uses various technologies to provide muon tracking and identification as well as dedicated muon triggering for the range |η| < 2.4.
The trigger system [27] comprises three levels. The first of these (L1) is a hardware-based trigger that uses only a subset of calorimeter and muon system information. Following this, both the second level (L2) and event filter (EF) triggers, constituting the software-based high-level trigger, include fully reconstructed event information to identify objects. At L2, only the regions of interest in η-φ identified at L1 are scrutinised, whereas complete event information from all detector sub-systems is available at the EF. 1 ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis along the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle θ as η = − ln tan(θ/2). The opening angle R in η-φ space is defined as R = √((Δη) 2 + (Δφ) 2 ).
Data and Monte Carlo samples
The data used in this analysis were collected by ATLAS during 2012. Following requirements based on beam and detector conditions and data quality, the complete dataset corresponds to an integrated luminosity of 20.3 fb −1 , with an associated uncertainty of 2.8 %. The uncertainty is derived following the same methodology as that detailed in Ref. [28].
Dedicated high-transverse-momentum ( p T ) single-lepton triggers are used in conjunction with the lowerp T dilepton triggers to increase the trigger efficiency at high lepton p T . The required leading-lepton p T threshold is 25 GeV, whereas the sub-leading lepton threshold can be as low as 10 GeV, depending on the lepton p T threshold of the trigger responsible for accepting the event. To provide an estimate of the efficiency for the lepton selections used in these analyses, trigger efficiencies are calculated using tt Monte Carlo (MC) simulated event samples for leptons with p T > 14GeV. For events where both leptons are in the barrel (endcaps), the total efficiency of the trigger configuration for a two-lepton selection is approximately 96, 88 and 80 % (91, 92 and 82 %) for ee, eμ and μμ events, respectively. Although the searches in this paper probe only same-flavour final states for evidence of SUSY, the eμ channel is used to select control samples in data for background estimation purposes.
Simulated event samples are used to validate the analysis techniques and aid in the estimation of SM backgrounds, as well as to provide predictions for BSM signal processes. The SM background samples [29][30][31][32][33][34][35][36][37][38][39][40] used are listed in Table 1, as are the parton distribution function (PDF) set, underlyingevent tune and cross-section calculation order in α s used to normalise the event yields for these samples. Samples generated with MadGraph5 1.3.28 [41] are interfaced with Pythia 6.426 [42] to simulate the parton shower. All samples generated using Powheg [43][44][45] use Pythia to simulate the parton shower, with the exception of the diboson samples, which use Pythia8 [46]. Sherpa [47] simulated samples use Sherpa's own internal parton shower and fragmentation methods, as well as the Sherpa default underlyingevent tune [47]. The standard ATLAS underlying-event tune, AUET2 [48], is used for all other samples with the exception of the Powheg+Pythia samples, which use the Peru-gia2011C [49] tune.
The signal models considered include simplified models and a GGM supersymmetry-breaking model. In the simplified models, squarks and gluinos are directly pair-produced, and these subsequently decay to the LSP via two sets of intermediate particles. The squarks and gluinos decay with equal probability to the next-to-lightest neutralino or the lightest chargino, where the neutralino and chargino are mass-degenerate and have masses taken to be the average of the squark or gluino mass and the LSP mass. The intermediate chargino or neutralino then decays via sleptons (or sneutrinos) to two leptons of the same flavour and the lightest neutralino, which is assumed to be the LSP in these models. Here, the sleptons and sneutrinos are mass-degenerate and have masses taken to be the average of the chargino or neutralino and LSP masses. An example of one such process, pp → g̃g̃ → (qq χ̃ 0 2 )(qq χ̃ ± 1 ), χ̃ 0 2 → ℓ + ℓ − χ̃ 0 1 , χ̃ ± 1 → ℓ ± ν χ̃ 0 1 , is illustrated on the left in Fig. 1, where ℓ = e, μ, τ with equal branching fractions for each lepton flavour. The dilepton mass distribution for leptons produced from the χ̃ 0 2 in these models is a rising distribution that terminates at a kinematic endpoint, whose value is given by m ℓℓ max ≈ m(χ̃ 0 2 ) − m(χ̃ 0 1 ) = 1/2 (m(g̃/q̃) − m(χ̃ 0 1 )). Therefore, signal models with small values of Δm = m(g̃/q̃) − m(χ̃ 0 1 ) produce events with small dilepton masses; those with large Δm produce events with large dilepton mass. For the model involving squark pair production, the left-handed partners of the u, d, c and s quarks have the same mass. The right-handed squarks and the partners of the b and t quarks are decoupled. For the gluino-pair model, an effective three-body decay for g̃ → qq χ̃ 0 1 is used, with equal branching fractions for q = u, d, c, s. Exclusion limits on these models are set based on the squark or gluino mass and the LSP mass, with all sparticles not directly involved in the considered decay chains effectively being decoupled.
Table 1 Simulated background event samples used in this analysis with the corresponding generator, cross-section order in α s used to normalise the event yield, underlying-event tune and PDF set
Fig. 1 Decay topologies for example signal processes. A simplified model involving gluino pair production, with the gluinos following two-step decays via sleptons to neutralino LSPs, is shown on the left. The diagram on the right shows a GGM decay mode, where gluinos decay via neutralinos to gravitino LSPs
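The endpoint relation quoted above follows directly from the assumed mass spectrum, so it can be evaluated with a one-line function. The sketch below does exactly that; the two mass points are purely illustrative and are not taken from the paper's signal grid.

```python
def mll_edge(m_parent, m_lsp):
    """Kinematic endpoint of the dilepton mass spectrum in the simplified models,
    using the mass relations quoted in the text: the intermediate neutralino/chargino
    mass is the average of the squark/gluino and LSP masses, so
    m_ll_max ~= m(chi2) - m(chi1) = 0.5 * (m(gluino or squark) - m(LSP))."""
    return 0.5 * (m_parent - m_lsp)

# illustrative mass points in GeV (assumptions for the example only)
for m_gluino, m_lsp in [(700.0, 100.0), (1000.0, 500.0)]:
    print(m_gluino, m_lsp, "->", mll_edge(m_gluino, m_lsp), "GeV")
```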
In the general gauge mediation models, the gravitino is the LSP and the next-to-lightest SUSY particle (NLSP) is a higgsino-like neutralino. The higgsino mass parameter, μ, and the gluino mass are free parameters. The U(1) and SU(2) gaugino mass parameters, M 1 and M 2 , are fixed to be 1 TeV, and the masses of all other sparticles are set at ∼1.5 TeV. In addition, μ is set to be positive to make χ̃ 0 1 → Z G̃ the dominant NLSP decay. The branching fraction for χ̃ 0 1 → Z G̃ varies with tan β, the ratio of the vacuum expectation values of the two Higgs doublets, and so two different values of tan β are used. At tan β = 1.5, the branching fraction for χ̃ 0 1 → Z G̃ is large (about 97 %) [50], whereas setting tan β = 30 results in a considerable contribution (up to 40 %) from χ̃ 0 1 → h G̃. In these models, h is the lightest CP-even SUSY Higgs boson, with m h = 126 GeV and SM-like branching fractions. The dominant SUSY-particle production mode in these scenarios is the strong production of gluino pairs, which subsequently decay to the LSP via several intermediate particles. An example decay mode is shown in the diagram on the right in Fig. 1. The gravitino mass is set to be sufficiently small such that the NLSP decays are prompt. The decay length cτ NLSP (where τ NLSP is the lifetime of the NLSP) can vary depending on μ, and is longest at μ = 120 GeV, where it is 2 mm, decreasing to cτ NLSP < 0.1 mm for μ ≥ 150 GeV. The finite NLSP lifetime is taken into account in the MC signal acceptance and efficiency determination.
All simplified models are produced using MadGraph5 1.3.33 with the CTEQ6L1 PDF set, interfaced with Pythia 6.426. The scale parameter for MLM matching [51] is set at a quarter of the mass of the lightest strongly produced sparticle in the matrix element. The SUSY mass spectra, gluino branching fractions and the gluino decay width for the GGM scenarios are calculated using Suspect 2.41 [52] and Sdecay 1.3 [53]. The GGM signal samples are generated using Pythia 6.423 with the MRST2007 LO * [54] PDF set. The underlying event is modelled using the AUET2 tune for all signal samples. Signals are normalised to cross sections calculated at next-to-leading order (NLO) in α s , including the resummation of soft gluon emission at next-to-leadinglogarithmic accuracy (NLO + NLL) [55][56][57][58][59].
A full ATLAS detector simulation [60] using GEANT4 [61] is performed for most of the SM background MC samples. The signal and remaining SM MC samples use a fast simulation [62], which employs a combination of a parameterisation of the response of the ATLAS electromagnetic and hadronic calorimeters and GEANT4. To simulate the effect of multiple pp interactions occurring during the same (intime) or a nearby (out-of-time) bunch-crossing, called pileup, minimum-bias interactions are generated and overlaid on top of the hard-scattering process. These are produced using Pythia8 with the A2 tune [63]. MC-to-data corrections are made to simulated samples to account for small differences in lepton identification and reconstruction efficiencies, and the efficiency and misidentification rate associated with the algorithm used to distinguish jets containing b-hadrons.
Physics object identification and selection
Electron candidates are reconstructed using energy clusters in the electromagnetic calorimeter matched to ID tracks. Electrons used in this analysis are assigned either "baseline" or "signal" status. Baseline electrons are required to have transverse energy E T > 10 GeV, satisfy the "medium" criteria described in Ref. [64] and reside within |η| < 2.47 and not in the range 1.37 < |η| < 1.52. Signal electrons are further required to be consistent with the primary vertex and isolated with respect to other objects in the event, with a p T -dependent isolation requirement. The primary vertex is defined as the reconstructed vertex with the highest Σ p T 2 , where the summation includes all particle tracks with p T > 400 MeV associated with a given reconstructed vertex. Signal electrons with E T < 25 GeV must additionally satisfy the more stringent shower shape, track quality and matching requirements of the "tight" selection criteria in Ref. [64]. For electrons with E T < 25 GeV (≥25 GeV), the sum of the transverse momenta of all charged-particle tracks with p T > 400 MeV associated with the primary vertex, excluding the electron track, within R = 0.3 (0.2) surrounding the electron must be less than 16 % (10 %) of the electron p T . Electrons with E T < 25 GeV must reside within a distance |z 0 sin θ| < 0.4 mm of the primary vertex along the direction of the beamline 2 . The significance of the transverse-plane distance of closest approach of the electron to the primary vertex must be |d 0 /σ d 0 | < 5. For electrons with E T ≥ 25 GeV, |z 0 | is required to be < 2 mm and |d 0 | < 1 mm.
Baseline muons are reconstructed from either ID tracks matched to a muon segment in the muon spectrometer or combined tracks formed both from the ID and muon spectrometer [65]. They are required to be of good quality, as described in Ref. [66], and to satisfy p T > 10 GeV and |η| < 2.4. Signal muons are further required to be isolated, with the scalar sum of the p T of charged particle tracks associated with the primary vertex, excluding the muon track, within a cone of size R < 0.3 surrounding the muon being less than 12 % of the muon p T for muons with p T < 25 GeV. For muons with p T ≥ 25 GeV, the scalar sum of the p T of charged-particle tracks associated with the primary vertex, excluding the muon track, within R < 0.2 surrounding the muon must be less than 1.8 GeV. Signal muons with p T < 25 GeV must also have |z 0 sin θ | ≤ 1 mm and |d 0 /σ d 0 | < 3. For the leptons selected by this analysis, the d 0 requirement is typically several times less restrictive than the |d 0 /σ d 0 | requirement.
Jets are reconstructed from topological clusters in the calorimeter using the anti-k t algorithm [67] with a distance parameter of 0.4. Each cluster is categorised as being electromagnetic or hadronic in origin according to its shape [68], so as to account for the differing calorimeter response for electrons/photons and hadrons. A cluster-level correction is then applied to electromagnetic and hadronic energy deposits using correction factors derived from both MC simulation and data. Jets are corrected for expected pile-up contributions [69] and further calibrated to account for the calorimeter response with respect to the true jet energy [70,71]. A small residual correction is applied to the jets in data to account for differences between response in data and MC simulation. Baseline jets are selected with p T > 20 GeV. Events in which these jets do not pass specific jet quality requirements are rejected so as to remove events affected by detector noise and non-collision backgrounds [72,73]. Signal jets are required to satisfy p T > 35 GeV and |η| < 2.5. To reduce the impact of jets from pileup to a negligible level, jets with p T < 50 GeV within |η| < 2.4 are further required to have a jet vertex fraction |JVF| > 0.25. Here the JVF is the p Tweighted fraction of tracks matched to the jet that are associated with the primary vertex [74], with jets without any associated tracks being assigned JVF = −1.
The MV1 neural network algorithm [75] identifies jets containing b-hadrons using the impact parameters of asso-ciated tracks and any reconstructed secondary vertices. For this analysis, the working point corresponding to a 60 % efficiency for tagging b-jets in simulated tt events is used, resulting in a charm quark rejection factor of approximately 8 and a light quark/gluon jet rejection factor of about 600. To ensure that each physics object is counted only once, an overlap removal procedure is applied. If any two baseline electrons reside within R = 0.05 of one another, the electron with lower E T is discarded. Following this, any baseline jets within R = 0.2 of a baseline electron are removed. After this, any baseline electron or muon residing within R = 0.4 of a remaining baseline jet is discarded. Finally, to remove electrons originating from muon bremsstrahlung, any baseline electron within R = 0.01 of any remaining baseline muon is removed from the event.
The E miss T is defined as the magnitude of the vector sum of the transverse momenta of all photons, electrons, muons, baseline jets and an additional "soft term" [76]. The soft term includes clusters of energy in the calorimeter not associated with any calibrated object, which are corrected for material effects and the non-compensating nature of the calorimeter. Reconstructed photons used in the E miss T calculation are required to satisfy the "tight" requirements of Ref. [77].
Event selection
Events selected for this analysis must have at least five tracks with p_T > 400 MeV associated with the primary vertex. Any event containing a baseline muon with |z_0 sin θ| > 0.2 mm or |d_0| > 1 mm is rejected, to remove cosmic-ray events. To reject events with fake E_T^miss, those containing poorly measured muon candidates, characterised by large uncertainties on the measured momentum, are also removed. If the invariant mass of the two leading leptons in the event is less than 15 GeV, the event is vetoed to suppress low-mass particle decays and Drell-Yan production.
Events are required to contain at least two signal leptons (electrons or muons). If more than two signal leptons are present, the two with the largest values of p_T are selected. These leptons must pass one of the leptonic triggers; in the case of the dilepton triggers, the two leading leptons must be matched, within ΔR < 0.15, to the online trigger objects that triggered the event. For events selected by a single-lepton trigger, one of the two leading leptons must be matched to the online trigger object in the same way. The leading lepton in the event must have p_T > 25 GeV and the sub-leading lepton is required to have p_T > 10-14 GeV, depending on the p_T threshold of the trigger selecting the event. For the off-Z analysis, the sub-leading lepton p_T threshold is increased to 20 GeV. This is done to improve the accuracy of the method for estimating flavour-symmetric backgrounds, discussed in Sect. 6.2, in events with small dilepton invariant mass. For the same reason, the m_ll threshold is also raised to 20 GeV in this search channel. The two leading leptons must be oppositely charged, with the signal selection requiring that these be same-flavour (SF) lepton pairs. The different-flavour (DF) channel is also exploited to estimate certain backgrounds, such as that due to tt production. All events are further required to contain at least two signal jets, since this is the minimum expected jet multiplicity for the signal models considered in this analysis.
Three types of region are used in the analysis. Control regions (CRs) are used to constrain the SM backgrounds. These backgrounds, estimated in the CRs, are first extrapolated to the validation regions (VRs) as a cross check and then to the signal regions (SRs), where an excess over the expected background is searched for.
GGM scenarios are the target of the on-Z search, where the gravitino G̃ from χ̃_1^0 → (Z/h) + G̃ decays is expected to result in E_T^miss. The Z boson mass window used for this search is 81 < m_ll < 101 GeV. To isolate GGM signals with high gluino mass and high jet activity, the on-Z SR, SR-Z, is defined using requirements on E_T^miss and H_T, where H_T includes all signal jets and the two leading leptons. Since b-jets are often, but not always, expected in GGM decay chains, no requirement is placed on the b-tagged jet multiplicity. Dedicated CRs are defined in order to estimate the contribution of various SM backgrounds to the SR. These regions are constructed with selection criteria similar to those of the SR, differing either in the m_ll or E_T^miss ranges, or in the lepton flavour requirements. A comprehensive discussion of the various methods used to perform these estimates follows in Sect. 6. For the SR and CRs, detailed in Table 2, a further requirement on the azimuthal opening angle between each of the leading two jets and the E_T^miss (Δφ(jet_{1,2}, E_T^miss)) is introduced to reject events with jet mismeasurements contributing to large fake E_T^miss. This requirement is applied in the SR and the two CRs used in the on-Z search, all of which have high E_T^miss and H_T thresholds, at 225 and 600 GeV, respectively. Additional VRs are defined at lower E_T^miss and H_T to cross-check the SM background estimation methods. These are also summarised in Table 2. The SR selection results in an acceptance times efficiency of 2-4 %, including leptonic Z branching fractions, for GGM signal models with μ > 400 GeV.
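The SR-Z requirements quoted above (E_T^miss > 225 GeV, H_T > 600 GeV, 81 < m_ll < 101 GeV, at least two signal jets, a same-flavour opposite-sign lepton pair, and Δφ(jet_{1,2}, E_T^miss) > 0.4) can be collected into a single selection function. The sketch below is illustrative only; the event interface (attribute names, object containers, four-vector helper) is hypothetical.

```python
import math

def abs_delta_phi(phi1, phi2):
    """|Δφ| wrapped into [0, π]."""
    return abs((phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi)

def passes_sr_z(event):
    """Illustrative SR-Z selection using the thresholds quoted in the text."""
    leps = event.signal_leptons        # assumed sorted by decreasing p_T
    jets = event.signal_jets
    if len(leps) < 2 or len(jets) < 2:
        return False
    l1, l2 = leps[0], leps[1]
    if l1.flavour != l2.flavour or l1.charge * l2.charge >= 0:
        return False                   # same-flavour, opposite-sign pair
    if not (81.0 < (l1.p4 + l2.p4).mass < 101.0):
        return False                   # Z mass window, GeV
    ht = sum(j.pt for j in jets) + l1.pt + l2.pt
    if event.met < 225.0 or ht < 600.0:
        return False
    # reject events where the E_T^miss is aligned with one of the leading jets
    if min(abs_delta_phi(j.phi, event.met_phi) for j in jets[:2]) < 0.4:
        return False
    return True
```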
In the off-Z analysis, a search is performed in the Z boson sidebands. The Z boson mass window vetoed here is larger than that selected in the on-Z analysis (m_ll ∉ [80, 110] GeV) to maximise Z boson rejection. An asymmetric window is chosen to improve the suppression of boosted Z → μμ events with muons whose momenta are overestimated, leading to large E_T^miss. In this search, four SRs are defined by requirements on the jet multiplicity, the b-tagged jet multiplicity, and E_T^miss. The SR requirements are optimised for the simplified models of pair production of squarks (requiring at least two jets) and gluinos (requiring at least four jets) discussed in Sect. 3. Two SRs with a b-veto provide the best sensitivity in the simplified models considered here, since the signal b-jet content is lower than that of the dominant tt background. Orthogonal SRs with a requirement of at least one b-tagged jet target other signal models not explicitly considered here, such as those with bottom squarks that are lighter than the other squark flavours. For these four SRs, the requirement E_T^miss > 200 GeV is imposed. In addition, one signal region with requirements similar to those used in the CMS search [24] is defined (SR-loose). These SRs and their respective CRs, which have the same jet and E_T^miss requirements but select different m_ll ranges or lepton flavour combinations, are defined in Table 3.
Table 2 Overview of all signal, control and validation regions used in the on-Z search. More details are given in the text. The E_T^miss significance and the soft-term fraction f_ST needed in the seed regions for the jet smearing method are defined in Sect. 6.1. The flavour combination of the dilepton pair is denoted as either "SF" for same-flavour or "DF" for different-flavour.
Table 3 Overview of all signal, control and validation regions used in the off-Z analysis. For SR-loose, events with two jets (at least three jets) are required to satisfy E_T^miss > 150 (100) GeV. Further details are the same as in Table 2.
The most sensitive off-Z SR for the squark-pair (gluino-pair) model is SR-2j-bveto (SR-4j-bveto). Because the value of the m_ll kinematic endpoint depends on unknown model parameters, the analysis is performed over multiple m_ll ranges for these two SRs. The dilepton mass windows considered for the SR-2j-bveto and SR-4j-bveto regions are presented in Sect. 9. For the combined ee+μμ channels, the typical signal acceptance times efficiency values for the squark-pair (gluino-pair) model in the SR-2j-bveto (SR-4j-bveto) region are 0.1-10 % (0.1-8 %) over the full dilepton mass range.
The on-Z and off-Z searches are optimised for different signal models and as such are defined with orthogonal SRs. Given the different signatures probed, there are cases where the CR of one search may overlap with the SR of the other. Data events that fall in the off-Z SRs can comprise up to 60 % of the top CR for the on-Z analysis (CRT, defined in Table 2). Data events in SR-Z comprise up to 36 % of the events in the CRs with 80 < m < 110 GeV that are used to normalise the Z + jets background in the off-Z analysis, but the potential impact on the background prediction is small because the Z + jets contribution is a small fraction of the total background. For the following analysis, each search assumes only signal contamination from the specific signal model they are probing.
Background estimation
The dominant background processes in the signal regions, and those that are expected to be most difficult to model using MC simulation, are estimated using data-driven techniques. With SRs defined at large E miss T , any contribution from Z /γ * + jets will be a consequence of artificially high E miss T in the event due to, for example, jet mismeasurements. This background must be carefully estimated, particularly in the on-Z search, since the peaking Z /γ * + jets background can mimic the signal. This background is expected to constitute, in general, less than 10 % of the total background in the off-Z SRs and have a negligible contribution to SR-Z.
In both the off-Z and on-Z signal regions, the dominant backgrounds come from so-called "flavour-symmetric" processes, where the dileptonic branching fractions to ee, μμ and eμ have a 1:1:2 ratio such that the same-flavour contributions can be estimated using information from the differentflavour contribution. This group of backgrounds is dominated by tt and also includes W W , single top (W t) and Z → τ τ production, and makes up ∼60 % (∼ 90 %) of the predicted background in the on-Z (off-Z ) SRs.
Diboson backgrounds with real Z boson production, while small in the off-Z regions, contribute up to 25 % of the total background in the on-Z regions. These backgrounds are estimated using MC simulation, as are "rare top" backgrounds, including tt + W (W )/Z (i.e. tt + W , tt + Z and tt + W W ) and t + Z processes. All backgrounds that are estimated from MC simulation are subject to carefully assessed theoretical and experimental uncertainties.
Other processes, including those that might be present due to misreconstructed jets entering as leptons, can contribute up to 10 % (6 %) in the on-Z (off-Z) SRs. The background estimation techniques followed in the on-Z and off-Z searches are similar, with a few well-motivated exceptions.
6.1 Estimation of the Z /γ * + jets background
Z /γ * + jets background in the off-Z search
In the off-Z signal regions, the background from Z/γ* + jets is due to off-shell Z bosons and photons, or to on-shell Z bosons with lepton momenta that are mismeasured. The region with dilepton mass in the range 80 < m_ll < 110 GeV is not considered as a search region. To estimate the contribution from Z/γ* + jets outside of this range, dilepton mass shape templates are derived from Z/γ* + jets MC events. These shape templates are normalised to data in control regions with the same selection as the corresponding signal regions, but with the requirement on m_ll inverted to 80 < m_ll < 110 GeV, to select a sample enriched in Z/γ* + jets events. These CRs are defined in Table 3.
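In practice this amounts to scaling an MC-derived m_ll template by a data/MC normalisation factor measured in the on-Z control window and reading off the prediction in the sidebands. A minimal sketch of that bookkeeping is shown below; the histogram contents and yields are hypothetical placeholders.

```python
def off_z_zjets_estimate(mll_template_mc, data_cr_yield, mc_cr_yield,
                         z_window=(80.0, 110.0)):
    """Normalise a Z/γ*+jets m_ll shape template to data in the on-Z CR.

    mll_template_mc: list of (mll_low, mll_high, mc_yield) bins from simulation.
    Returns the scaled prediction for bins outside the on-Z window.
    """
    scale = data_cr_yield / mc_cr_yield          # data/MC normalisation in 80-110 GeV
    lo, hi = z_window
    return [(a, b, scale * y) for (a, b, y) in mll_template_mc
            if b <= lo or a >= hi]               # keep only the sidebands

# toy usage with made-up numbers
template = [(20, 80, 3.2), (80, 110, 250.0), (110, 300, 4.1)]
prediction = off_z_zjets_estimate(template, data_cr_yield=270.0, mc_cr_yield=250.0)
```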
Z /γ * + jets background in the on-Z search
The assessment of the peaking background due to Z /γ * + jets in the on-Z signal regions requires careful consideration. The events that populate the signal regions result from mismeasurements of physics objects where, for example, one of the final-state jets has its energy underestimated, resulting in an overestimate of the total E miss T in the event. Due to the difficulties of modelling instrumental E miss T in simulation, MC events are not relied upon alone for the estimation of the Z /γ * + jets background. A data-driven technique is used as the nominal method for estimating this background. This technique confirms the expectation from MC simulation that the Z + jets background is negligible in the SR.
The primary method used to model the Z/γ* + jets background in SR-Z is the so-called "jet smearing" method, which is described in detail in Ref. [78]. This involves defining a region with Z/γ* + jets events containing well-measured jets (at low E_T^miss), known as the "seed" region. The jets in these events are then smeared using functions that describe the detector's jet p_T response and φ resolution as a function of jet p_T, creating a set of pseudo-data events. The jet-smearing method provides an estimate for the contribution from events containing both fake E_T^miss, from object mismeasurements, and real E_T^miss, from neutrinos in heavy-flavour quark decays, by using different response functions for light-flavour and b-tagged jets. The response function is measured by comparing generator-level jet p_T to reconstructed jet p_T in Pythia8 dijet MC events, generated using the CT10 NLO PDF set. This function is then tuned to data, based on a dijet balance analysis in which the p_T asymmetry is used to constrain the width of the Gaussian core. The non-Gaussian tails of the response function are corrected based on ≥3-jet events in data, selected such that the E_T^miss in each event points either towards, or in the opposite direction to, one of the jets. This ensures that one of the jets is clearly associated with the E_T^miss, and the jet response can then be described in terms of the E_T^miss and the reconstructed jet p_T. This procedure results in a good estimate of the overall jet response.
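The core of the method is simple: draw a new p_T for each jet from the measured response function, propagate the change to the event's E_T^miss, and repeat many times to build pseudo-data. The sketch below illustrates the idea with a Gaussian-core response; the response width used here is a placeholder, not the measured ATLAS response function, and the φ smearing is omitted for brevity.

```python
import math
import random

def smear_event(jets, met_x, met_y, response_sigma=0.1):
    """Smear jet p_T and propagate the change to E_T^miss.

    jets: list of (pt, phi) tuples. response_sigma is a toy Gaussian width;
    the real method uses p_T-dependent, flavour-dependent response functions
    with non-Gaussian tails.
    """
    smeared = []
    for pt, phi in jets:
        r = random.gauss(1.0, response_sigma)     # multiplicative response
        new_pt = max(r * pt, 0.0)
        # remove the original jet from E_T^miss and add back the smeared one
        met_x -= (new_pt - pt) * math.cos(phi)
        met_y -= (new_pt - pt) * math.sin(phi)
        smeared.append((new_pt, phi))
    return smeared, math.hypot(met_x, met_y)

# generate pseudo-data E_T^miss values from one seed event (toy numbers)
seed_jets = [(310.0, 0.3), (220.0, 2.9), (45.0, -1.2)]
pseudo_met = [smear_event(seed_jets, 5.0, -3.0)[1] for _ in range(10000)]
```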
In order to calculate the E_T^miss distribution of the pseudo-data, the E_T^miss is recalculated using the new (smeared) jet p_T and φ. The distribution of pseudo-data events is then normalised to data in the low-E_T^miss region (10 < E_T^miss < 50 GeV) of a validation region, denoted VRZ, after the requirement of Δφ(jet_{1,2}, E_T^miss) > 0.4. This region is defined in Table 2 and is designed to be representative of the signal region but at lower E_T^miss, where the contamination for relevant GGM signal models is expected to be less than 1 %.
The seed region must contain events with topologies similar to those expected in the signal region. To ensure that this is the case, the H_T and jet multiplicity requirements applied to the seed region remain the same as in the signal region, while the E_T^miss threshold of 225 GeV is removed, as shown in Table 2. Although the seed events should have little to no E_T^miss, enforcing a direct upper limit on E_T^miss can introduce a bias in the jet p_T distribution in the seed region compared with the signal region. To avoid this, a requirement on the E_T^miss significance, defined as

E_T^miss significance = E_T^miss / sqrt(E_T^jet + E_T^soft),    (1)

is used in the seed region. Here E_T^jet and E_T^soft are the summed E_T from the baseline jets and from the low-energy calorimeter deposits not associated with final-state physics objects, respectively. Placing a requirement on this variable does not produce a shape difference between the jet p_T distributions in the seed and signal regions, while effectively selecting well-balanced Z/γ* + jets events in the seed region. This requirement is also found to result in no event overlap between the seed region and SR-Z.
In the seed region an additional requirement is placed on the soft-term fraction, f_ST, defined as the fraction of the total E_T^miss in an event originating from calorimeter energy deposits not associated with a calibrated lepton or jet, to select events with small f_ST. This is useful because events with large values of fake E_T^miss tend to have low soft-term fractions (f_ST < 0.6). The requirements on the E_T^miss significance and f_ST are initially optimised by applying the jet smearing method to Z/γ* + jets MC events and testing the agreement in the E_T^miss spectrum between direct and smeared MC events in the VRZ. This closure test is performed using the response function derived from MC simulation.
The Z/γ* + jets background predominantly comes from events where a single jet is grossly mismeasured, since the mismeasurement of additional jets is unlikely and can lead to smearing that reduces the total E_T^miss. The requirement on the opening angle in φ between either of the leading two jets and the E_T^miss, Δφ(jet_{1,2}, E_T^miss) > 0.4, strongly suppresses this background. The estimate of the Z/γ* + jets background is performed both with and without this requirement, in order to aid in the interpretation of the results in the SR, as described in Sect. 8. The optimisation of the E_T^miss significance and f_ST requirements is performed separately with and without this requirement, although the optimal values are not found to differ significantly.
The jet smearing method using the data-corrected jet response function is validated in VRZ by comparing the smeared pseudo-data to data; Table 2 provides a summary of the kinematic requirements imposed on the seed and Z validation regions. Extrapolating the jet smearing estimate to the signal regions yields the results detailed in Table 4. The data-driven estimate is compatible with the MC expectation that the Z + jets background contributes significantly less than one event in SR-Z.
Estimation of the flavour-symmetric backgrounds
The dominant background in the signal regions is tt production resulting in two leptons in the final state, with smaller contributions from the production of dibosons (WW), single top quarks (Wt) and Z bosons that decay to τ leptons. For these processes the so-called "flavour-symmetry" method can be used to estimate, in a data-driven way, the contribution in the same-flavour channels using the measured contribution in the different-flavour channel.
Flavour-symmetric background in the on-Z search
The flavour-symmetry method uses a control region, CReμ in the case of the on-Z search, which is defined to be identical to the signal region, but in the different-flavour eμ channel. In CReμ, the expected contamination due to GGM signal processes of interest is <3 %.
The number of data events observed (N_eμ^data) in this control region is corrected by subtracting the expected contribution from backgrounds that are not flavour symmetric. The background with the largest impact on this correction is that due to fake leptons, with the estimate provided by the matrix method, described in Sect. 6.3, being used in the subtraction. All other contributions, which include WZ, ZZ, tZ and tt + W(W)/Z processes, are taken directly from MC simulation. This corrected number, N_eμ^{data,corr}, is related to the expected number in the same-flavour channels, N_{ee/μμ}^est, by the following relations:

N_ee^est = (1/2) k_ee α N_eμ^{data,corr},   N_μμ^est = (1/2) k_μμ α N_eμ^{data,corr},    (2)

where k_ee and k_μμ are electron and muon selection efficiency factors and α accounts for the different trigger efficiencies for same-flavour and different-flavour dilepton combinations. The selection efficiency factors are calculated using the ratio of dielectron and dimuon events in VRZ according to:

k_ee = sqrt(N_ee^data(VRZ) / N_μμ^data(VRZ)),   k_μμ = sqrt(N_μμ^data(VRZ) / N_ee^data(VRZ)),   α = sqrt(ε_trig^ee ε_trig^μμ) / ε_trig^eμ,    (3)

where ε_trig^ee, ε_trig^μμ and ε_trig^eμ are the efficiencies of the dielectron, dimuon and electron-muon trigger configurations, respectively, and N_{ee(μμ)}^data(VRZ) is the number of ee (μμ) data events in VRZ. These selection efficiency factors are calculated separately for the cases where both leptons fall within the barrel, both fall within the endcap regions, and for barrel-endcap combinations. This is motivated by the fact that the trigger efficiencies differ in the central and more forward regions of the detector. This estimate is found to be consistent with that resulting from the use of single global k factors, which provides a simpler but less precise estimate. In each case the k factors are close to 1.0, and the N_ee^est or N_μμ^est estimates obtained using k factors from each configuration are consistent with one another to within 0.2σ.
Figure: E_T^miss distribution in VRZ (events / 10 GeV). Here the Z/γ* + jets background (solid blue) is modelled using p_T- and φ-smeared pseudo-data events. The hatched uncertainty band includes the statistical uncertainty on the simulated event samples and the systematic uncertainty on the jet-smearing estimate due to the jet response function and the seed selection. The backgrounds due to WZ, ZZ or rare top processes, as well as from lepton fakes, are included under "Other Backgrounds".
Table 4 Number of Z/γ* + jets background events estimated in the on-Z signal region (SR-Z) using the jet smearing method. This is compared with the prediction from the Sherpa MC simulation. The quoted uncertainties include those due to statistical and systematic effects (see Sect. 7).
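The arithmetic of the method is compact enough to show explicitly. The sketch below applies the relations above to hypothetical yields; the numbers and helper names are placeholders, and the barrel/endcap splitting used in the analysis is omitted for brevity.

```python
import math

def flavour_symmetric_estimate(n_emu_data, n_emu_nonfs, k, alpha):
    """Predicted same-flavour yield from the eμ control region.

    n_emu_data: observed eμ events; n_emu_nonfs: non-flavour-symmetric
    contribution to be subtracted (fake leptons, WZ/ZZ, tZ, tt+V from MC);
    k: k_ee or k_μμ selection-efficiency factor; alpha: trigger-efficiency factor.
    """
    n_corr = n_emu_data - n_emu_nonfs
    return 0.5 * k * alpha * n_corr

# k factors from dielectron/dimuon yields in VRZ (toy numbers)
n_ee_vrz, n_mumu_vrz = 5200.0, 6100.0
k_ee = math.sqrt(n_ee_vrz / n_mumu_vrz)
k_mumu = math.sqrt(n_mumu_vrz / n_ee_vrz)
alpha = math.sqrt(0.97 * 0.89) / 0.93   # toy trigger efficiencies

n_ee_est = flavour_symmetric_estimate(n_emu_data=18, n_emu_nonfs=2.5, k=k_ee, alpha=alpha)
n_mumu_est = flavour_symmetric_estimate(n_emu_data=18, n_emu_nonfs=2.5, k=k_mumu, alpha=alpha)
```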
The flavour-symmetric background estimate was chosen as the nominal method prior to examining the data yields in the signal region, since it relies less heavily on simulation and provides the most precise estimate. This data-driven estimate is cross-checked using a fit to data in the Z boson mass sidebands of the top control region CRT. All other backgrounds estimated using the flavour-symmetry method are taken directly from MC simulation for this cross-check. Here, Z/γ* + jets MC events are used to model the small residual Z/γ* + jets background in the control region, while the jet smearing method provides the estimate in the signal region. The normalisation of the tt sample obtained from the fit is 0.52 ± 0.12 times the nominal MC normalisation, where the uncertainty includes all experimental and theoretical sources of uncertainty as discussed in Sect. 7. This result is compatible with observations from other ATLAS analyses, which indicate that MC simulation tends to overestimate data in regions dominated by tt events accompanied by much jet activity [79,80]. MC simulation has also been seen to overestimate contributions from tt processes in regions with high E_T^miss [81]. In selections with high E_T^miss but lower H_T, such as those used in the off-Z analysis, this downwards scaling is less dramatic. The results of the cross-check using the Z boson mass sidebands are shown in Table 5, with the sideband fit yielding a prediction slightly higher than, but consistent with, the flavour-symmetry estimate. This test is repeated varying the MC simulation sample used to model the tt background. The nominal Powheg+Pythia tt MC sample is replaced with a sample generated using Alpgen, and the fit is performed again. The same test is performed using a Powheg tt MC sample that uses Herwig, rather than Pythia, for the parton shower. In all cases the estimates are found to be consistent within 1σ. This cross-check using tt MC events is further validated in regions with intermediate E_T^miss (150 < E_T^miss < 225 GeV) and slightly looser H_T requirements (H_T > 500 GeV), as illustrated in Fig. 3. Here the extrapolation in m_ll between the sideband region (VRT) and the on-Z region (VRTZ) shows results consistent within approximately 1σ between data and the fitted prediction.
Table 5 The number of events for the flavour-symmetric background estimate in the on-Z signal region (SR-Z) using the data-driven method based on data in CReμ. This is compared with the prediction for the sum of the flavour-symmetric backgrounds (WW, tW, tt and Z → ττ) from a sideband fit to data in CRT. In each case the combined statistical and systematic uncertainties are indicated.
The flavour-symmetry method is also tested in these VRs. An overview of the nominal background predictions, using the flavour-symmetry method, in CRT and these VRs is shown in Fig. 4. This summary includes CRT, VRT, VRTZ and two variations of VRT and VRTZ. The first variation, denoted VRT/VRTZ (high H_T), shows VRT/VRTZ with an increased H_T threshold (H_T > 600 GeV), which provides a sample of events very close to the SR. The second variation, denoted VRT/VRTZ (high E_T^miss), shows VRT/VRTZ with the same E_T^miss cut as SR-Z, but with the requirement 400 < H_T < 600 GeV added to provide a sample of events very close to the SR. In all cases the data are consistent with the prediction. GGM signal processes near the boundary of the expected excluded region are expected to contribute little to the normalisation regions, with contamination at the level of up to 4 % in CRT and 3 % in VRT. The corresponding contamination in VRTZ is expected to be ∼10 % across most of the relevant parameter space, increasing to a maximum value of ∼50 % in the region near m(g̃) = 700 GeV, μ = 200 GeV.
Flavour-symmetric background in the off-Z search
The background estimation method of Eq. (2) is extended to allow a prediction of the background dilepton mass shape, which is used explicitly to discriminate signal from background in the off-Z search. In addition to the k and α correction factors, a third correction factor S(i) is introduced (where i indicates the dilepton mass bin):

N_{ee/μμ}^est(i) = (1/2) k_{ee/μμ} α S(i) N_eμ^{data,corr}(i).    (4)

These shape correction factors account for the different reconstructed dilepton mass shapes in the ee, μμ, and eμ channels, which result from two effects. First, the offline selection efficiencies for electrons and muons depend differently on the lepton p_T and η. For electrons, the offline selection efficiency increases slowly with p_T, while it has very little p_T dependence for muons. For signal models near the edge of the sensitivity of this analysis, the contamination from signal events in VR-offZ is less than 3 %.
Figure: the lower panels show the ratio of the data to the expected background. The error bars indicate the statistical uncertainty in the data, while the shaded band indicates the total background uncertainty. The last bin contains the overflow.
Fake-lepton contribution
Events from W → ℓν + jets, semileptonic tt and single-top (s- and t-channel) production contribute to the background in the dilepton channels through "fake" leptons. These include leptons from b-hadron decays, misidentified hadrons and converted photons, and are estimated from data using a matrix method, which is described in detail in Ref. [82]. This method involves creating a control sample using baseline leptons, thereby loosening the lepton isolation and identification requirements and increasing the probability of selecting a fake lepton. For each control or signal region, the relevant requirements are applied to this control sample, and the numbers of events with leptons that pass or fail the subsequent signal-lepton requirements are counted. Denoting the number of events passing the signal-lepton requirements by N_pass and the number failing by N_fail, the number of events containing a fake lepton for a single-lepton selection is given by

N_pass^fake = (N_fail − (1/ε_real − 1) N_pass) / (1/ε_fake − 1/ε_real),    (5)

where ε_fake is the efficiency with which fake leptons passing the baseline lepton selection also pass the signal-lepton requirements and ε_real is the relative identification efficiency (from baseline to signal lepton selection) for real leptons. This principle is expanded to a dilepton sample using a four-by-four matrix to account for the various possible real-fake combinations for the two leading leptons in the event.
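For the single-lepton case the relation above can be applied directly; the short sketch below does this and is meant only as an illustration of the arithmetic, with made-up efficiencies and yields (the analysis itself uses p_T- and η-binned efficiencies and the four-by-four dilepton matrix).

```python
def fake_lepton_yield(n_pass, n_fail, eff_real, eff_fake):
    """Number of events with a fake lepton passing the signal selection
    (single-lepton matrix method)."""
    return (n_fail - (1.0 / eff_real - 1.0) * n_pass) / (1.0 / eff_fake - 1.0 / eff_real)

# toy example: 200 events pass, 150 fail the signal-lepton requirements
n_fake_pass = fake_lepton_yield(n_pass=200.0, n_fail=150.0, eff_real=0.95, eff_fake=0.20)
```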
The efficiency for fake leptons is estimated in control regions enriched in multi-jet events. Events are selected if they contain at least one baseline lepton, one signal jet with p_T > 60 GeV and low E_T^miss (< 30 GeV). The background due to processes containing prompt leptons, estimated from MC samples, is subtracted from the total data contribution in this region. From the resulting data sample, the fraction of events in which the baseline leptons pass the signal-lepton requirements gives the fake efficiency. This calculation is performed separately for events with b-tagged jets and those without, to take into account the various sources from which fake leptons originate. The real-lepton efficiency is estimated using Z → ℓ+ℓ− events in a data sample enriched in leptonically decaying Z bosons. Both the real-lepton and fake-lepton efficiencies are further binned as functions of p_T and η.
Estimation of other backgrounds
The remaining background processes, including diboson events with a Z boson decaying to leptons and the tt + W (W )/Z and t + Z backgrounds, are estimated from MC simulation. In these cases the most accurate theoretical cross sections available are used, as summarised in Table 1. Care is taken to ensure that the flavour-symmetric component of these backgrounds (for events where the two leptons do not originate from the same Z decay) is not double-counted.
Systematic uncertainties
Systematic uncertainties have an impact on the predicted signal-region yields from the dominant backgrounds, on the fake-lepton estimate, and on the yields from backgrounds predicted using simulation alone. The expected signal yields are also affected by systematic uncertainties. All sources of systematic uncertainty considered are discussed in the following subsections.
Experimental uncertainties
The experimental uncertainties arise from the modelling of both the signal processes and the backgrounds estimated using MC simulation. Uncertainties associated with the jet energy scale (JES) are assessed using both simulation and in-situ measurements [70,71]. The JES uncertainty is influenced by the event topology, flavour composition, jet p_T and η, as well as by pile-up. The jet energy resolution (JER) is also affected by pile-up, and is estimated using in-situ measurements [83]. An uncertainty associated with the JVF requirement for selected jets is also applied by varying the JVF threshold up and down. Small uncertainties on the lepton energy scales and momentum resolutions are measured in Z → ℓ+ℓ−, J/ψ → ℓ+ℓ− and W → ℓ±ν event samples [64]. These are propagated to the E_T^miss uncertainty, along with the uncertainties due to the JES and JER. An additional uncertainty on the energy scale of topological clusters in the calorimeters not associated with reconstructed objects (the E_T^miss soft term) is also applied to the E_T^miss calculation. The trigger efficiency is assigned a 5 % uncertainty following studies comparing the efficiency in simulation to that measured in Z → ℓ+ℓ− events in data.
The data-driven background estimates are subject to uncertainties associated with the methods employed and the limited number of events used in their estimation. The Z /γ * + jets background estimate has an uncertainty to account for differences between pseudo-data and MC events, the choice of seed region definition, the statistical precision of the seed region, and the jet response functions used to create the pseudo-data. Uncertainties in the flavour-symmetric background estimate include those related to the electron and muon selection efficiency factors k ee and k μμ , the trigger efficiency factor α, and, for the off-Z search only, the dilepton mass shape S(i) reweighting factors. Uncertainties attributed to the subtraction of the non-flavour-symmetric backgrounds, and those due to limited statistical precision in the eμ control regions, are also included. Finally, an uncertainty derived from the difference in real-lepton efficiency observed in tt and Z → + − events is assigned to the fake-background prediction. An additional uncertainty due to the number of events in the control samples used to derive the real efficiencies and fake rates is assigned to this background, as well as a 20 % uncertainty on the MC background subtraction in the control samples.
Theoretical uncertainties on background processes
For all backgrounds estimated from MC simulation, the following theoretical uncertainties are considered. The uncertainties due to the choice of factorisation and renormalisation scales are calculated by varying the nominal values by a factor of two. Uncertainties on the PDFs are evaluated following the prescription recommended by PDF4LHC [87]. Total cross-section uncertainties of 22 % [37] and 50 % are applied to the tt + W/Z and tt + WW sub-processes, respectively. For the tt + W and tt + Z sub-processes, an additional uncertainty is evaluated by comparing samples generated with different numbers of partons, to account for the impact of the finite number of partons generated in the nominal samples. For the WZ and ZZ diboson samples, a parton-shower uncertainty is estimated by comparing samples showered with Pythia and Herwig+Jimmy [88,89], and cross-section uncertainties of 5 and 7 %, respectively, are applied. These cross-section uncertainties are estimated from variations of the value of the strong coupling constant, the PDF and the generator scales. For the small contribution from t + Z, a 50 % uncertainty is assigned. Finally, a statistical uncertainty derived from the finite size of the MC samples used in the background estimation is included.
Dominant uncertainties on the background estimates
The dominant uncertainties in each signal region, along with their values relative to the total background expectation, are summarised in Table 6. In all signal regions the largest uncertainty is that associated with the flavour-symmetric background. The statistical uncertainty on the flavour-symmetric background due to the finite data yields in the eμ CRs is 24 % in the on-Z SR. This statistical uncertainty is also the dominant uncertainty for all SRs of the off-Z analysis except for SR-loose, for which the systematic uncertainty on the flavour-symmetric background prediction dominates. In SR-Z the combined MC generator and parton shower modelling uncertainty on the W Z background (7 %), as well as the uncertainty due to the fake-lepton background (14 %), are also important.
Theoretical uncertainties on signal processes
Signal cross sections are calculated to next-to-leading order in the strong coupling constant, adding the resummation of soft-gluon emission at NLO+NLL accuracy [55-59]. The nominal cross section and its uncertainty are taken from an envelope of cross-section predictions using different PDF sets and factorisation and renormalisation scales, as described in Ref. [90]. For the simplified models, the uncertainty on the modelling of initial-state radiation is important in the case of small mass differences in the cascade decays. MadGraph+Pythia samples are used to assess this uncertainty, with the factorisation and renormalisation scales, the MadGraph parameter used for jet matching, the MadGraph parameter used to set the QCD radiation scale, and the Pythia parameter controlling the QCD scale for final-state radiation each being varied up and down by a factor of two. The resulting uncertainty on the signal acceptance is up to ∼25 % in regions with small mass differences within the decay chains.
Results
For the on-Z search, the resulting background estimates in the signal regions, along with the observed event yields, are displayed in Table 7. The dominant backgrounds are those due to flavour-symmetric processes and to WZ and ZZ diboson production. In the electron and muon channels combined, 10.6 ± 3.2 events are expected and 29 are observed. For each of these regions, a local probability for the background estimate to produce a fluctuation greater than or equal to the excess observed in the data is calculated using pseudo-experiments. When expressed in terms of the number of standard deviations, this value is referred to as the local significance, or simply the significance. These significances are quantified in the last column of Table 11 and correspond to a 1.7σ deviation in the muon channel and a 3.0σ deviation in the electron channel, with the combined significance, calculated from the sum of the background predictions and observed yields in the muon and electron channels, being 3.0σ. The uncertainties on the background predictions in the ee and μμ channels are correlated, as they are dominated by the statistical uncertainty of the eμ data sample that is used to derive the flavour-symmetric background in both channels. Since this sample is common to both channels, the relative statistical uncertainty on the flavour-symmetric background estimate does not decrease when combining the ee and μμ channels. No excess was reported in the CMS analysis of the Z + jets + E_T^miss final state based on √s = 8 TeV data [24]; however, the kinematic requirements used in that search differ from those used in this paper.
Dilepton invariant mass and E_T^miss distributions in the electron and muon on-Z SR are shown in Fig. 6, with the H_T and jet multiplicity distributions shown in Fig. 7. For the SR selection, a requirement is imposed to reject events with Δφ(jet_{1,2}, E_T^miss) < 0.4, to further suppress the background from Z/γ* + jets processes with mismeasured jets.
In Fig. 8, the distribution of events in the on-Z SR as a function of Δφ(jet_{1,2}, E_T^miss) (before this requirement is applied) is shown. In these figures the shapes of the flavour-symmetric and Z/γ* + jets backgrounds are derived using MC simulation, and the normalisation is taken from the data-driven estimates.
For the off-Z search, the dilepton mass distributions in the five SRs are presented in Figs. 9 and 10, and summarised in Fig. 11. The expected backgrounds and observed yields in the below-Z and above-Z regions for SR-2j-bveto, SR-4j-bveto, and SR-loose are presented in Tables 8, 9, and 10, respectively. Corresponding results for SR-2j-btag and SR-4j-btag are presented in the appendix. The data are consistent with the expected SM backgrounds in all regions. In the SR-loose region with 20 < m_ll < 70 GeV, similar to the region in which the CMS Collaboration observed a 2.6σ excess, 1133 events are observed, compared to an expectation of 1190 ± 40 ± 70 events.
Interpretation of results
In this section, exclusion limits are shown for the SUSY models described in Sect. 3. The asymptotic CL_S prescription [91], implemented in the HistFitter program [92], is used to determine upper limits at 95 % confidence level (CL). All signal and background uncertainties are taken into account using a Gaussian model of nuisance-parameter integration. All uncertainties except that on the signal cross section are included in the limit-setting configuration. The impact of varying the signal cross sections by their uncertainties is indicated separately in the numbers quoted in the text. For the on-Z analysis, the data exceed the background expectation in the ee (μμ) channel with a significance of 3.0 (1.7) standard deviations. For exclusion limits in specific models, the ee and μμ channels (Table 7) are considered simultaneously. The signal contamination in CReμ is found to be at the ∼1 % level, and is therefore neglected in this procedure. The expected and observed exclusion contours, in the plane of μ versus m(g̃) for the GGM model, are shown in Fig. 12. The ±1σ_exp and ±2σ_exp experimental uncertainty bands indicate the impact on the expected limit of all uncertainties considered on the background processes. The ±1σ_theory^SUSY lines around the observed limit illustrate the change in the observed limit as the nominal signal cross section is scaled up and down by the theoretical cross-section uncertainty. Given the observed excess of events with respect to the SM prediction, the observed limits are weaker than expected. In the case of the tan β = 1.5 exclusion contour, the on-Z analysis is able to exclude gluino masses up to 850 GeV for μ > 450 GeV, whereas gluino masses of up to 820 GeV are excluded for the tan β = 30 model for μ > 600 GeV. The lower exclusion reach for the tan β = 30 models is due to the fact that the branching fraction for χ̃_1^0 → Z G̃ is significantly smaller at tan β = 30 than at tan β = 1.5.
For the off-Z search, the limits for the squark-pair (gluino-pair) model are based on the results of SR-2j-bveto (SR-4j-bveto). The yields in the combined ee+μμ channels are used. Signal contamination in the eμ control region used for the flavour-symmetry method is taken into account by subtracting the expected increase in the background prediction from the signal yields. For each point in the signal model parameter space, limits on the signal strength are calculated using a "sliding window" approach. The binning in SR-2j-bveto (SR-4j-bveto) defines 45 (21) possible dilepton mass windows to use for the squark-pair (gluino-pair) model interpretation, of which the ten (nine) windows with the best expected sensitivity are selected. For each point in the signal model parameter space, the dilepton mass window with the best expected limit on the signal strength is selected. The excluded regions in the squark-LSP and gluino-LSP mass planes are shown in Fig. 13. The analysis probes squarks with masses up to 780 GeV, and gluinos with masses up to 1170 GeV.
Table 8 Results in the off-Z search region SR-2j-bveto, in the below-Z range (20 < m_ll < 80 GeV, top) and above-Z range (m_ll > 110 GeV, bottom). The flavour-symmetric, Z/γ* + jets and fake-lepton background components are all derived using the data-driven estimates described in the text. All other backgrounds are taken from MC simulation. The first uncertainty is statistical and the second is systematic.
                                  SR-2j-bveto ee   SR-2j-bveto μμ   same-flavour combined
Below-Z (20 < m_ll < 80 GeV)
Observed events                   30               24               54
Expected background events        26 ± 4 ± 3       24 ± 4 ± 3       50 ± 8 ± 5
Flavour-symmetric backgrounds     24 ± 4 ± 3       22 ± 4 ± 3       46 ± 8 ± 4
Z/γ* + jets                       0.6 ± 0.3 ± 0.7  1.6 ± 0. ...     ...
Above-Z (m_ll > 110 GeV)
Expected background events        35 ± 5 ± 4       38 ± 4 ± 8       73 ± 9 ± 9
Flavour-symmetric backgrounds     33 ± 4 ± 4       30 ± 4 ± 3       63 ± 8 ± 5
The signal regions in these analyses are also used to place upper limits on the allowed number of BSM events (N_BSM) in each region. The observed (S_obs^95) and expected (S_exp^95) 95 % CL upper limits are also derived using the CL_S procedure. These upper limits on N_BSM can be interpreted as upper limits on the visible BSM cross section (σ_obs^95) by normalising N_BSM by the total integrated luminosity. Here σ_obs^95 is defined as the product of the signal production cross section, acceptance and reconstruction efficiency. The results are obtained using asymptotic formulae [93] in the case of the off-Z numbers. For SR-Z, with a considerably smaller sample size, pseudo-experiments are used. These numbers are presented in Table 11 for the on-Z search. Model-independent upper limits on the visible BSM cross section in the below-Z and above-Z ranges of the five signal regions in the off-Z search are presented in Tables 12 and 13, respectively. Limits for the most sensitive dilepton mass windows of SR-2j-bveto and SR-4j-bveto, used for the squark- and gluino-pair model interpretations, are presented in Tables 14 and 15. These tables also present the confidence level observed for the background-only hypothesis, CL_B, and the one-sided discovery p-value, p(s = 0), which is the probability that the event yield obtained in a single hypothetical background-only experiment (signal, s = 0) is greater than that observed in this dataset. The p(s = 0) value is truncated at 0.5.
Table 9 Results in the off-Z search region SR-4j-bveto, in the below-Z range (20 < m_ll < 80 GeV, top) and above-Z range (m_ll > 110 GeV, bottom). Details are the same as in Table 8.
                                  SR-4j-bveto ee   SR-4j-bveto μμ   same-flavour combined
Below-Z (20 < m_ll < 80 GeV)
Observed events                   1                5                6
Expected background events        4.7 ± 1.6 ± 1.1  3.6 ± 1.5 ± 1.0  8.2 ± 3.1 ± 1.4
Flavour-symmetric backgrounds     4.1 ± 1.6 ± 1.1  3.5 ± 1.5 ± 1.0  7.7 ± 3.1 ± 1.3
Above-Z (m_ll > 110 GeV)
Observed events                   2                9                11
Expected background events        5.7 ± 1.6 ± 1.2  4.5 ± 1.3 ± 1.7  10 ± 3 ± 2
Flavour-symmetric backgrounds     5.5 ± 1.6 ± 1.2  4.3 ± 1.3 ± 1.0  9.8 ± 2.9 ± 1.4
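The conversion from an upper limit on the number of BSM events to a visible cross-section limit is a single division by the integrated luminosity (20.3 fb^-1 for this dataset). The snippet below illustrates the conversion with a hypothetical S_obs^95 value.

```python
def visible_xsec_limit(s95_obs_events, int_lumi_fb=20.3):
    """Upper limit on σ × A × ε in fb, from an event-count limit and
    the integrated luminosity in fb^-1."""
    return s95_obs_events / int_lumi_fb

# e.g. a hypothetical limit of 12 events corresponds to about 0.59 fb
sigma_vis_fb = visible_xsec_limit(12.0)
```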
Fig. 13 Excluded region in the (top) squark-LSP mass plane using the SR-2j-bveto results and (bottom) gluino-LSP mass plane using the SR-4j-bveto results. The observed, expected, and ±1σ expected exclusion contours are indicated. The observed limits obtained upon varying the signal cross section by ±1σ are also indicated. The region to the left of the diagonal dashed line has the squark mass less than the LSP mass and is hence not considered. Three signal benchmark points are shown, with their SUSY particle masses indicated in parentheses.
Table 13 Summary of model-independent upper limits for the five signal regions, in the above-Z (m_ll > 110 GeV) dilepton mass range, in the combined ee + μμ and individual ee and μμ channels. Details are the same as in Table 12.
Table 14 Summary of model-independent upper limits for SR-2j-bveto, in the combined ee + μμ and individual ee and μμ channels, for the ten dilepton mass windows used for the squark-pair interpretation. Details are the same as in Table 12.
Summary
This paper presents results of two searches for supersymmetric particles in events with two SFOS leptons, jets, and E_T^miss, using 20.3 fb^-1 of 8 TeV pp collisions recorded by the ATLAS detector at the LHC. The first search targets events with a lepton pair with invariant mass consistent with that of the Z boson and hence probes models in which the lepton pair is produced from the decay Z → ℓℓ. In this search 6.4 ± 2.2 (4.2 ± 1.6) events from SM processes are expected in the μμ (ee) SR-Z, as predicted using almost exclusively data-driven methods. The background estimates for the major and most difficult-to-model backgrounds are cross-checked using MC simulation normalised in data control regions, providing further confidence in the SR prediction. Following this assessment of the expected background contribution to the SR, the number of events observed in data is higher than anticipated, with 13 observed in the SR-Z μμ channel and 16 in the SR-Z ee channel. The corresponding significances are 1.7 standard deviations in the muon channel and 3.0 standard deviations in the electron channel. These results are interpreted in a supersymmetric model of general gauge mediation, and probe gluino masses up to 900 GeV. The second search targets events with a lepton pair with invariant mass inconsistent with Z boson decay, and probes models with the decay chain χ̃_2^0 → ℓ+ℓ− χ̃_1^0. In this case the data are found to be consistent with the expected SM backgrounds. No evidence for an excess is observed in the region similar to that in which CMS reported a 2.6σ excess [24]. The results are interpreted in simplified models with squark- and gluino-pair production, and probe squark (gluino) masses up to about 780 (1170) GeV.
This section provides additional results of the off-Z search. The expected backgrounds and observed yields in the below-Z and above-Z regions for VR-offZ, SR-2j-btag, and SR-4j-btag are presented in Tables 16, 17, and 18, respectively.
Table 16 Results in the off-Z validation region (VR-offZ), in the below-Z range (20 < m_ll < 70 GeV, top) and above-Z range (m_ll > 110 GeV, bottom). The flavour-symmetric, Z/γ* + jets and fake-lepton background components are all derived using the data-driven estimates described in the text. All other backgrounds are taken from MC simulation. The first uncertainty is statistical and the second is systematic.
Fake leptons                      21 ± 4 ± 2       7.9 ± 3.1 ± 2.9  29 ± 5 ± 4
| 15,953 | 2015-07-01T00:00:00.000 | ["Physics", "Materials Science"] |
Automated NNLL + NLO resummation for jet-veto cross sections
In electroweak-boson production processes with a jet veto, higher-order corrections are enhanced by logarithms of the veto scale over the invariant mass of the boson system. In this paper, we resum these Sudakov logarithms at next-to-next-to-leading logarithmic accuracy and match our predictions to next-to-leading-order (NLO) fixed-order results. We perform the calculation in an automated way, for arbitrary electroweak final states and in the presence of kinematic cuts on the leptons produced in the decays of the electroweak bosons. The resummation is based on a factorization theorem for the cross sections into hard functions, which encode the virtual corrections to the boson production process, and beam functions, which describe the low-p_T emissions collinear to the beams. The one-loop hard functions for arbitrary processes are calculated using the MadGraph5_aMC@NLO framework, while the beam functions are process independent. We perform the resummation for a variety of processes, in particular for W+W− pair production followed by leptonic decays of the W bosons.
Introduction
In many experimental measurements a veto on hard jets is imposed to suppress backgrounds. Such a veto is particularly useful to suppress top-quark backgrounds to processes involving W bosons, since the W bosons from the decay of the top quarks come in association with b-jets, which are rejected by the jet veto. For example, a jet veto is crucial to measure Higgs production with subsequent decay H → W+W−. It is imposed by rejecting events which involve jets with transverse momentum above a scale p_T^veto, which is typically chosen to be p_T^veto ≈ 20−30 GeV. Since the veto scale is much lower than the invariant mass Q of the electroweak final state, perturbative corrections to the cross section are enhanced by Sudakov logarithms of the ratio p_T^veto/Q. There has been a lot of theoretical progress over the past two years concerning the resummation of jet-veto logarithms in Higgs-boson production. Using the CAESAR formalism [1], these logarithms were first computed at next-to-leading logarithmic (NLL) order in [2], and this treatment was later extended to NNLL [3]. In between these papers, an all-order factorization formula derived in Soft-Collinear Effective Theory (SCET) [4-7] was proposed [8], and a resummed result which includes almost all of the ingredients required for N3LL accuracy was presented [9]. A third group of authors performed an independent analysis in SCET [10] and also combined the results for different jet multiplicities [11-13].
The jet veto is not only necessary in H → W+W− but also in the measurement of the diboson cross section itself. The fact that LHC measurements [14-17] yield values of the W+W− cross section that are higher than theoretical predictions has triggered discussions as to whether this excess could be due to New Physics [18-20]. To be sure whether there indeed is an excess, it is important to have reliable theoretical predictions not only for the total cross section, for which the next-to-next-to-leading order (NNLO) result has been obtained recently [21], but also for the cross section in the presence of experimental cuts, most importantly in the presence of a jet veto.1 Several recent papers have addressed this issue and have come to somewhat different conclusions. In [23], the Sudakov logarithms associated with the jet veto were resummed at NNLL accuracy. It was claimed that resummation effects increase the cross section and bring the Standard-Model prediction in agreement with the experimental measurements. On the other hand, based on a study of transverse-momentum resummation, the authors of [24] concluded that resummation effects are small for the relevant values of p_T^veto. Most recently, the effect of using a matched parton shower to predict the fiducial cross section, as it is done in the experimental analyses, was analyzed in [25]. These authors concluded that resummation effects are small and that a fixed-order computation of the fiducial rate would lead to theoretical predictions in agreement with the measurements, but that the matched parton shower overestimates the Sudakov suppression of the rate and leads to systematically lower theoretical predictions when extrapolating back to the total rate.
In the present paper, we present an automated method to perform resummations for arbitrary vector-boson production processes involving jet vetoes. Instead of computing resummed cross sections analytically, on a case-by-case basis, we obtain them in an automated way using the MadGraph5_aMC@NLO framework [26]. Our method yields results which are accurate at NNLL and are matched to NLO fixed-order results. Such an automated procedure is obviously much more efficient and less error-prone than computing the ingredients by hand or extracting them from the literature. Most importantly, our approach allows us to also include the decay of the vector bosons, along with cuts on the leptons in the final state.
We have implemented two different methods to perform the resummation. The first one is based on reweighting tree-level events generated by MadGraph. It yields jet-veto cross sections accurate at NNLL order. The event weight includes universal resummation factors as well as the process-specific one-loop virtual corrections, which are computed using MadGraph5_aMC@NLO. In the second method, we modify the NLO fixed-order computation in such a way that the end result is accurate at both NNLL and NLO. In this second method not only the hard function, which encodes the virtual corrections, but also the beam functions, which describe the emissions at small transverse momentum, are computed by MadGraph5_aMC@NLO.
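The first (reweighting) method can be pictured as a per-event weight built from the Born kinematics of the boson system. The sketch below shows only the bookkeeping; the hard, evolution and beam functions are toy stand-ins for the NNLL ingredients described in Sect. 2, not an actual interface of MadGraph5_aMC@NLO, and the kinematic relation for the momentum fractions follows the approximations discussed there.

```python
import math

# Toy stand-ins for the NNLL ingredients; they only reproduce the call structure.
def hard_function(Q, mu_h):            # process-specific one-loop virtual correction (toy)
    return 1.0 + 0.1 * math.log(mu_h / Q + 1.0)

def evolution_factor(Q, pt_veto):      # Sudakov evolution plus collinear anomaly (toy)
    return math.exp(-0.05 * math.log(Q / pt_veto) ** 2)

def beam_function(xi, pt_veto):        # low-p_T collinear emissions (toy)
    return max(1.0 - xi, 0.0)

def nnll_event_weight(weight_tree, Q, y, s, pt_veto):
    """Schematic per-event NNLL weight built from the boson-system kinematics."""
    xi1 = math.sqrt(Q * Q / s) * math.exp(+y)
    xi2 = math.sqrt(Q * Q / s) * math.exp(-y)
    w = hard_function(Q, mu_h=Q) * evolution_factor(Q, pt_veto)
    w *= beam_function(xi1, pt_veto) * beam_function(xi2, pt_veto)
    return weight_tree * w

# example: one tree-level event with Q = 222 GeV, y = 0.5 at sqrt(s) = 8 TeV
w = nnll_event_weight(weight_tree=1.0, Q=222.0, y=0.5, s=8000.0**2, pt_veto=25.0)
```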
Our paper is organized as follows.We start in Section 2 by reviewing the resummation formula for cross sections in the presence of a jet veto.We also discuss non-perturbative corrections and point out that they could be sizable, similar in magnitude as the recently calculated NNLO corrections.We then explain in Section 3 how the automated resummation can be implemented in the Madgraph5_aMC@NLO framework.In Section 4 we use our method to compute cross sections for different boson-production processes and discuss in detail the scale and scheme choices and the resulting theoretical uncertainties.We compare our resummed predictions to fixed-order results for the cross sections, to the results obtained from a matched parton shower, and to the NNLL results of [23].We also match our resummed result to fixed-order NLO predictions.The relevant matching corrections turn out to be very small, which indicates that the bulk of the NLO result is already captured by the factorization formula evaluated with NNLL accuracy.This remains true after imposing cuts on the leptonic final state in the decays of the electroweak bosons.We compare predictions for the final states Z, W + W − and W + W − W + and consider ratios of cross sections, which have small uncertainties if they are properly defined.We then discuss the implications of our results on the value of the W + W − cross section and conclude in Section 5.
Factorization Theorem for Jet-Veto Cross Sections
We focus on electroweak-boson production processes with a veto on jets with transverse momentum above a cut p_T^veto. The large logarithms which arise in the presence of the jet veto have the form α_s^n ln^m(p_T^veto/Q) with m ≤ 2n, where Q denotes the invariant mass of the boson system. Our goal is the resummation of these logarithms to all orders in perturbation theory and at leading power in the small ratio p_T^veto/Q. For concreteness, we will discuss the resummation for W+W− pair production in the following, but the formalism applies to any number of massive vector bosons and Higgs bosons or other massive color-singlet particles in the final state. The resummation is based on a factorization theorem which arises in the limit p_T^veto/Q → 0 [8]. Its schematic form is shown in Figure 1. The main ingredients of the theorem are hard functions H_ij, which encode the virtual QCD corrections to the partonic hard-scattering processes i + j → W+W−, and two beam functions B̄_i and B̄_j, which describe the low-p_T emissions collinear to the two beams. Before writing out the factorization theorem in more detail, let us specify the kinematics of the process at low p_T^veto. The momenta of the incoming protons are p_1 and p_2. The partons emerging from the parton distribution functions (PDFs) carry momenta z_1 p_1 and z_2 p_2. After possible emissions (described by the beam functions B̄_i), the momenta ξ_1 p_1 and ξ_2 p_2 are left to produce the boson pair through a hard interaction H_ij. In the limit of small transverse momenta we can neglect recoil effects, so that the partons are still collinear to the proton momentum after the emissions. We define

ŝ = (ξ_1 p_1 + ξ_2 p_2)^2,   t̂ = (ξ_1 p_1 − q_1)^2,   û = (ξ_1 p_1 − q_2)^2,    (1)

with ŝ + t̂ + û = 2 M_W^2, where q_1 and q_2 denote the momenta of the two W bosons. Note that our definition of the variable ŝ differs from the standard choice (z_1 p_1 + z_2 p_2)^2. The quantity ŝ we define is the one relevant for the boson production process, i.e. the one that enters the hard function. In the small transverse-momentum limit of the emissions, we obtain

ξ_1 p_1^μ ≈ √ŝ e^{+y} n^μ/2,   ξ_2 p_2^μ ≈ √ŝ e^{−y} n̄^μ/2,   i.e.  ξ_{1,2} ≈ √(ŝ/s) e^{±y},    (2)

where n^μ = (1,0,0,1) and n̄^μ = (1,0,0,−1) are two light-cone vectors in the beam directions, y denotes the rapidity of q = q_1 + q_2 in the laboratory frame, and s = (p_1 + p_2)^2. The crucial feature of (2) is that it shows that one can obtain the arguments of the hard function directly from the vector-boson (and proton) kinematics. The same is true for an arbitrary electroweak final state. At low p_T^veto, the differential cross section in the presence of a jet veto has the factorized form [8,9]

dσ(p_T^veto) = Σ_{ij} σ^0_{ij}(Q^2, t̂) P_{ij}(Q^2, t̂, p_T^veto, μ) B̄_i(ξ_1, p_T^veto) B̄_j(ξ_2, p_T^veto),    (3)

understood differentially in the kinematics of the boson system. Here i and j are the flavors of the partons which enter the hard-scattering process after initial-state radiation, and σ^0_{ij}(Q^2, t̂) is the Born-level cross section for the production of the electroweak final state. Since the electroweak final state is a color singlet, we either deal with q q̄ or gg. For W+W− pair production at leading order only the quark channels contribute, but starting from NNLO also the gluon-induced reaction occurs.
The second ingredient in (3) are the beam functions B̄_i(ξ, p_T^veto), which are given by a convolution of a perturbative kernel Ī_{i←k}(z, p_T^veto, μ), describing the emissions, with the standard PDFs φ_k:

B̄_i(ξ, p_T^veto) = Σ_k ∫_ξ^1 (dz/z) Ī_{i←k}(z, p_T^veto, μ) φ_k(ξ/z, μ).    (4)

The bar over these functions indicates that a factor e^{h_i(p_T^veto, μ)} has been extracted from the original definitions of these functions in terms of SCET operators, called B_i and I_{i←k}, so that B_i(ξ, p_T^veto, μ) = e^{h_i(p_T^veto, μ)} B̄_i(ξ, p_T^veto), and analogously for Ī_{i←k}(z, p_T^veto, μ). This factor is normalized such that h_i(p_T^veto, p_T^veto) = 1, and chosen such that the remaining function B̄_i(ξ, p_T^veto) is renormalization-group (RG) invariant. The explicit form of h_i(p_T^veto, μ) as well as the one-loop kernels Ī_{i←k}(z, p_T^veto, μ) are listed in the appendix.
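Numerically, Eq. (4) is just a convolution of a kernel with the PDFs. The sketch below evaluates such a convolution with a toy kernel and a toy PDF; both are placeholders standing in for the one-loop kernels and a fitted PDF set, so the numbers are not physical.

```python
import math

def toy_pdf(x):
    """Toy quark PDF shape (placeholder for a fitted PDF set)."""
    return x**-0.5 * (1.0 - x)**3 if 0.0 < x < 1.0 else 0.0

def toy_kernel(z, alpha_s=0.118):
    """Toy matching kernel: the LO delta-function piece is handled separately;
    this is a placeholder for the O(alpha_s) remainder."""
    return alpha_s / (2.0 * math.pi) * (1.0 + (1.0 - z)**2) / z

def beam_function(xi, n_steps=2000):
    """B̄(ξ) ≈ φ(ξ) + ∫_ξ^1 dz/z K(z) φ(ξ/z), evaluated with a simple midpoint rule."""
    result = toy_pdf(xi)                       # LO term: kernel = δ(1 - z)
    dz = (1.0 - xi) / n_steps
    for step in range(n_steps):
        z = xi + (step + 0.5) * dz
        result += dz / z * toy_kernel(z) * toy_pdf(xi / z)
    return result

print(beam_function(0.01))
```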
The final ingredient in (3) is the prefactor P ij (Q 2 , t, p veto T , µ), which includes the hard function and the resummation of large logarithms.It has the form where the hard function contains higher-order finite virtual corrections to the Born-level cross section.Since these higher-order corrections contain (double) logarithms of Q/µ h , the hard matching scale µ h should be chosen of order Q.The evolution of the hard function to a lower scale µ Q is controlled by an RG evolution equation.The corresponding evolution function U i (Q 2 , µ h , µ), together with the collinear anomaly [27] and the prefactors extracted from the beam functions, is absorbed into the factor E i in (6).The collinear anomaly arises due to light-cone divergences and provides an additional source of large logarithms in processes sensitive to small transverse momenta.The explicit form of the quantity E i reads The evolution factor at NNLL accuracy is given in the appendix.It differs for quark-initiated (i = q) and gluon-initiated (i = g) processes but is independent of the quark flavors.Note that the evolution factor depends on the kinematics of the final state only via the invariant mass Q.The anomaly exponent F i (p veto T , µ, R) resums the large anomalous logarithms in the beam and soft functions, which arise from the rapidity difference between the modes which contribute to the individual functions [27][28][29].Starting from two-loop order (which is needed for NNLL resummation) this exponent depends on the jet radius R, but it is the same for any k T -style sequential jet-clustering algorithm.The explicit form of the two-loop exponent can be found in the appendix.It was calculated in [9] and is related to the function F obtained earlier in [3].We stress that the factorization theorem holds up to power corrections suppressed by p veto T /Q, and up to nonperturbative effects suppressed by Λ QCD /p veto T .For the weak-boson transverse-momentum spectrum, these corrections depend on p 2 T and hence are of second order in p veto T /Q and Λ QCD /p veto T .The definition of the jet veto, on the other hand, involves an absolute value of the jet transverse momentum, and for this reason there can be firstorder power corrections.Non-perturbative corrections to processes involving an anomaly were studied in [30], where it was found that these effects are enhanced by a logarithm of the rapidity difference between the left-and right-collinear emissions and can be viewed as a nonperturbative contribution to the anomaly exponent F i in (8).The leading non-perturbative corrections to jet-veto cross sections are therefore expected to scale as Due to the fact that the correction is of first order and logarithmically enhanced, these effects might not be negligible.For example, assuming Λ NP = 0.5 GeV and p veto T = 20 GeV, one ends up with a 6% effect at Q = 222 GeV, which is the median Q value in W + W − production.Numerically, this is not much smaller than the NNLO correction to the cross section calculated in [21].The value of the non-perturbative quantity Λ NP is unknown, but it could be obtained from the matrix element of two soft Wilson lines along the beam directions, where p jet T is the transverse momentum of the leading jet in the final state X.The phase-space integrals in the matrix element M veto suffer from a rapidity divergence, which needs to be regularized.The parameter Λ NP multiplies the rapidity divergence (see [30] for more details).To get an idea of the size of non-perturbative effects, we have computed the 
hadronization effects to the cross section using Pythia 8 [31] with its default tune.We find that they change the cross section by about 10% at p veto T = 10 GeV and 3% at p veto T = 20 GeV.Above p veto T > 20 GeV, the simple parametrization in (9) with Λ NP = 240 MeV provides a good description of the Pythia hadronization corrections, while a first-order power correction without logarithmic enhancement would underestimate the effects at higher p veto T values.However, one should be careful in relying on Pythia hadronization effects in the context of precision calculations.There are other examples, such as the event-shape variable thrust, where Pythia appears to underestimate the size of these effects [32].
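The size of these effects can be checked with a few lines of code. The sketch below assumes that the leading correction scales as (Λ_NP/p_T^veto) ln(Q/p_T^veto), a form consistent with the 6% and 3% figures quoted above; it is not meant to reproduce the exact parametrization in (9).

import math

def np_correction(lambda_np, pt_veto, Q):
    # Relative non-perturbative correction, assuming the scaling
    # (Lambda_NP / pT_veto) * ln(Q / pT_veto) discussed in the text.
    return lambda_np / pt_veto * math.log(Q / pt_veto)

# 6% effect quoted for Lambda_NP = 0.5 GeV, pT_veto = 20 GeV, Q = 222 GeV
print(np_correction(0.5, 20.0, 222.0))   # ~0.060

# ~3% hadronization effect at pT_veto = 20 GeV for Lambda_NP = 240 MeV
print(np_correction(0.24, 20.0, 222.0))  # ~0.029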
Automated Resummation
We now explain how to automate the resummation by suitably modifying existing fixed-order results. We shall employ two different resummation schemes. In Scheme A, we work with tree-level events obtained from MadGraph5_aMC@NLO [26]. We supply the beam functions from explicit calculations but compute the hard functions automatically and then reweight the events to achieve the resummation. In Scheme B, we use MadGraph5_aMC@NLO in fixed-order mode and compute the NLO cross section with a jet veto. To achieve the resummation we subtract the logarithmically enhanced pieces from the fixed-order cross section and multiply them back in resummed form. In this second scheme, both the hard functions and the beam functions are computed using MadGraph5_aMC@NLO. The second scheme is more convenient for practical computations but limited to NNLL order, while the first scheme allows (in principle) for arbitrary accuracy of the resummation.
Scheme A: NNLL from Reweighting Born-Level Events
The fact that the resummed result (3) has Born-level kinematics in the limit p_T^veto → 0 makes it possible to achieve the resummation of large logarithms by a simple reweighting procedure. If we use a tree-level event generator such as MadGraph, the resummation can be implemented by rescaling the event weights with the ratio of the resummed to the tree-level cross sections at each kinematic point. Specifically, we need to replace the PDFs φ_i used in the leading-order (LO) result with the beam functions B̄_i, and we need to supply the hard matching correction and the resummation factor E_i. For incoming particles of flavors i, j ∈ {q, q̄, g}, the reweighting factor at NNLL order is given in (11). All the kinematic variables entering it are determined by the event kinematics. At leading order ξ_1 and ξ_2 are just the momentum fractions of the incoming particles, ξ_i = 2E_i/√s. Note that we do not need to adopt the same value of the renormalization scale µ as in the Born-level events, which were evaluated at a scale µ_Mad inherent to the MadGraph code. However, in cases such as Higgs production, where the Born-level cross section depends on α_s, we have to multiply by the appropriate power N of the ratio α_s(µ)/α_s(µ_Mad), where N = 2 for gluon-induced processes. We therefore only run MadGraph once, with a fixed reference scale µ_Mad. Scale uncertainties can then be estimated by repeating the reweighting with different values of µ and µ_h.
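A minimal reweighting loop could be structured as follows. This is only a schematic Python sketch: the event attributes and the callables pdf, beam_fn, hard_fn and evolution_fn are hypothetical placeholders for the PDFs, beam functions, one-loop hard function and resummation factor E_i, while the actual implementation described below uses a Fortran code steered by a Python script.

def reweight_events(events, pdf, beam_fn, hard_fn, evolution_fn,
                    pt_veto, mu, mu_h, mu_mad):
    # Rescale Born-level event weights by the ratio of the resummed to the
    # tree-level cross section at each kinematic point.
    for ev in events:
        xi1, xi2 = ev.xi1, ev.xi2      # momentum fractions from the event kinematics
        Q2, t_hat = ev.Q2, ev.t_hat    # invariant mass squared and Mandelstam t-hat
        i, j = ev.flavors              # flavors of the incoming partons

        # replace the PDFs used at Born level by the beam functions
        pdf_ratio = (beam_fn(i, xi1, pt_veto, mu) * beam_fn(j, xi2, pt_veto, mu)
                     / (pdf(i, xi1, mu_mad) * pdf(j, xi2, mu_mad)))

        # hard matching correction and resummation factor E_i
        hard = 1.0 + hard_fn(i, j, Q2, t_hat, mu_h)
        evol = evolution_fn(i, Q2, pt_veto, mu_h, mu)

        ev.weight *= pdf_ratio * hard * evol
    return events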
Let us now detail the numerical implementation of the reweighting factor, starting with the beam functions, which are defined in (4) in terms of convolutions of perturbative kernel functions with PDFs. At one-loop order, they are linear in the logarithm of p_T^veto and can hence be decomposed into a p_T^veto-independent piece b_i(ξ, µ) and a coefficient c_i(ξ, µ) multiplying the logarithm. To perform the reweighting in an efficient way, we compute and tabulate the convolution integrals for b_i(ξ, µ) and c_i(ξ, µ) on a grid of ξ and µ values. Since the beam functions are independent of the final state, this can be done once and for all. Using the same grid as the underlying PDFs themselves, we then use standard PDF interpolation routines to obtain fast and accurate numerical representations of the beam functions. We have implemented the beam functions and the resummation factor E_i(Q², p_T^veto, µ_h, µ, R) in a small Fortran code, which is called by the event reweighting routine written in Python.
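As an illustration of the tabulation step, the sketch below convolves a toy kernel with a toy PDF on a grid of ξ and µ values and builds a spline for fast evaluation; the kernel and PDF used here are placeholders, not the one-loop kernels from the appendix, and the actual implementation is the Fortran code mentioned above.

import numpy as np
from scipy.integrate import quad
from scipy.interpolate import RectBivariateSpline

def convolve(kernel, pdf, xi, mu):
    # Mellin-type convolution: int_xi^1 dz/z K(z, mu) * phi(xi/z, mu)
    integrand = lambda z: kernel(z, mu) * pdf(xi / z, mu) / z
    value, _ = quad(integrand, xi, 1.0, limit=200)
    return value

def tabulate(kernel, pdf, xi_grid, mu_grid):
    # Tabulate the convolution on a (xi, mu) grid and return a spline
    # that can be evaluated quickly during the event reweighting.
    table = np.array([[convolve(kernel, pdf, xi, mu) for mu in mu_grid]
                      for xi in xi_grid])
    return RectBivariateSpline(xi_grid, mu_grid, table)

# toy inputs, standing in for the one-loop kernels and the PDFs
xi_grid = np.linspace(1e-4, 0.99, 50)
mu_grid = np.linspace(10.0, 100.0, 20)
toy_kernel = lambda z, mu: (1.0 + (1.0 - z) ** 2) / z
toy_pdf = lambda x, mu: (1.0 - x) ** 3 / x
beam_spline = tabulate(toy_kernel, toy_pdf, xi_grid, mu_grid)
print(beam_spline(0.01, 50.0)[0, 0])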
The most complicated component of the reweighting factor by far is the hard function H (1) ij (Q 2 , t, µ h ).This is process dependent and its computation requires a one-loop calculation.
Fortunately, the necessary one-loop computations have been automated in the past few years. In particular, the MadGraph5_aMC@NLO framework provides the possibility to evaluate virtual corrections at specific phase-space points [33]. We use this code to evaluate the virtual corrections V_ij for each event. At each phase-space point, the code provides the result in the form of the coefficients C_i of the double-pole, single-pole, and finite terms in the expansion in ε. The scale µ_Mad can be chosen when running the MadLoop code. In the q q̄ channel the double-pole coefficient is fixed by the infrared structure, and the finite part in the expansion of the above expression in ε directly yields the hard function H^(1)_ij. For Z-boson production one has C_0(Q) = −32/3 + 4π²/3. For other choices µ_Mad ≠ Q this result gets modified accordingly. In practice, we first compute the hard function at some value of the reference scale µ_Mad for each event and write the result in the event record. The result at a different scale can then be obtained using the above relation. The reweighting script uses the result for the hard function and combines it with the beam functions and the resummation factor.
To obtain the best possible prediction, we match our result to the NLO fixed-order result for the cross section. This matching allows us to also include terms which are power suppressed as p_T^veto → 0. The simplest way to achieve the matching is to subtract from the resummed result its expansion to NLO and to then add back the full NLO result,

\sigma^{\rm NNLL+NLO}(p_T^{\rm veto}) = \sigma^{\rm NNLL}(p_T^{\rm veto}) + \Big[ \sigma^{\rm NLO}(p_T^{\rm veto}) - \sigma^{\rm NNLL}(p_T^{\rm veto}) \big|_{\rm expanded\ to\ NLO} \Big] .   (16)

Our final NNLL+NLO result resums higher-order terms that are logarithmically enhanced, but also includes the full NLO result. To obtain the expansion of the resummed result, we simply do the reweighting with the fixed-order expansion of the reweighting factor in (11).
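Schematically, the additive matching in (16) amounts to the following; the three inputs are assumed to be available as functions of the veto scale.

def matched_cross_section(pt_veto, sigma_resummed, sigma_resummed_expanded, sigma_nlo):
    # Additive matching: resummed result plus the power-suppressed matching
    # correction, defined as the full NLO result minus the NLO expansion of
    # the resummed result.
    matching_correction = sigma_nlo(pt_veto) - sigma_resummed_expanded(pt_veto)
    return sigma_resummed(pt_veto) + matching_correction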
The NLO result can be obtained from running MadGraph5_aMC@NLO in fixed-order mode.
The difference between the full NLO result and the expansion of the resummed result is called the matching correction.By definition, this correction vanishes as p veto T → 0 and is expected to scale as p veto T /Q.As we will discuss in Section 4.2, it is numerically very small for the values of p veto T which are experimentally relevant.
Scheme B: NNLL+NLO with Automated Computation of the Beam Functions and Matching Corrections
In the reweighting scheme discussed above, we use MadGraph5_aMC@NLO to compute the hard functions but supply the beam functions from an explicit calculation.One can go even further and also compute the beam functions and the matching corrections automatically and in a single step.This is done by first factoring out the hard corrections and then performing a NLO run in the presence of the jet veto.An advantage of this second approach is that the beam functions are computed on the fly and it is therefore easy to use different PDF sets without any need to recompute the beam functions.A slight disadvantage is that one has to run MadGraph5_aMC@NLO in NLO mode.One can thus no longer work with events and will have to perform a new run when changing the cuts.However, if the matching is included in Scheme A described above, then a NLO run is needed also in this case.Note also that Scheme B only works at NNLL accuracy, while Scheme A allows for arbitrary precision if the necessary reweighting factor is supplied.
In order not to contaminate the matching corrections with the large logarithms contained in the hard function, we factor out the prefactor P_ij in (6) and define a reduced cross section σ̄_ij. The reduced cross section contains a term ∆σ = O(p_T^veto/Q) which comprises the power corrections and is given by the matching correction (16) divided by the prefactor. The function P_ij receives one-loop corrections from the hard function and the evolution factor E_i; its perturbative expansion is written out in (19). Provided we choose µ ∼ p_T^veto in the reduced cross section σ̄, all large logarithms are resummed in the RG-invariant prefactor P_ij. Multiplying back the prefactor then yields the full NNLL+NLO cross section in the form given in (20). Note that the matching procedure differs from the other scheme. In (16) above, we performed a purely additive matching, while in (20) the resummation factor E_i appears as an overall factor. This multiplicative matching generates higher-order logarithmic terms also for the power-suppressed contributions of order p_T^veto/Q and higher. These additional terms are not controlled by the factorization theorem (3), which holds only at leading power, but one can hope that at least some of the logarithmic terms at subleading power are universal and will be captured by this treatment. For the case of Higgs production, the multiplicative matching scheme is preferred, since the perturbative corrections to the hard function are very large. In (20) they are extracted as an overall factor. For the q q̄-initiated processes we study in this paper, the two schemes give almost indistinguishable results, as we will see in Section 4.2 below.
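The difference between the two matching procedures can be illustrated schematically as follows. This is a generic sketch of additive versus multiplicative matching, not the exact implementation of Scheme B, whose reduced cross section and prefactor are defined above.

def match_additive(sigma_res, sigma_res_exp, sigma_fo):
    # sigma_res + (sigma_fo - sigma_res_exp): the power corrections enter
    # purely at fixed order.
    return sigma_res + (sigma_fo - sigma_res_exp)

def match_multiplicative(prefactor_res, prefactor_exp, sigma_fo):
    # prefactor_res * sigma_fo / prefactor_exp: the resummed prefactor
    # multiplies the complete fixed-order result, so logarithmic terms are
    # also generated for the power-suppressed pieces.
    return prefactor_res * sigma_fo / prefactor_exp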
To implement (20) in MadGraph5_aMC@NLO we have directly modified its Fortran code by including the logarithmically enhanced terms. The expanded logarithmically enhanced terms, i.e. the second term on the right-hand side of (19), are similar to the compensating Sudakov factor introduced in the FxFx merging prescription, see (2.46) of [34], and they are therefore implemented at the same place in the code. In MadGraph5_aMC@NLO each real-emission phase-space configuration has a corresponding Born kinematics defined by the FKS mapping [35]. Therefore we can always compute the prefactor P_ij using Born kinematics, and it can multiply the complete reduced cross section, including the real-emission contributions. In order to improve the run time, the time-consuming one-loop matrix elements are computed only once for each phase-space configuration, cached in memory, and used also for the (expanded) hard function. However, compared to normal running of MadGraph5_aMC@NLO, we cannot reduce the number of calls to the virtual corrections by using suitable approximations of them, as described in Sec. 2.4.3 of [26], because the reduced cross section is multiplied by them, resulting in positive feedback loops in setting up the approximations. When running MadGraph5_aMC@NLO in fNLO mode, setting the parameter ickkw in the run_card.dat to -1 turns on the inclusion of the logarithmically enhanced terms and sets the hard and soft scales to Q and p_T^veto (given by the ptj parameter in the run_card.dat), respectively. Hard and soft scale variations, as well as PDF uncertainties, can be computed at minimal CPU cost by reweighting [36]. This addition to MadGraph5_aMC@NLO will become public with the next release of the code.
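For orientation, the relevant entries of the run_card.dat would look roughly as follows for a 25 GeV veto (the exact comment strings in the card may differ):

 -1    = ickkw   ! include the logarithmically enhanced terms
 25.0  = ptj     ! jet pT cut, used here as the jet-veto scale pT^veto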
Phenomenological Results
We now proceed to give numerical results for different electroweak-boson production cross sections.Before presenting our final results, we discuss a variety of issues such as the proper choice of matching and factorization scales, the size of the matching corrections and the difference between the two resummation schemes discussed in the previous section.We then present results for the W + W − cross section as well as the cross section including the decay of the W bosons with cuts on the final-state leptons.Since the published measurements [14,15] were taken at √ s = 7 TeV, we will present our results for this center-of-mass energy.For the electroweak parameters we use MadGraph5 default values, in particular α em = 1/132.5,G F = 1.166 × 10 −5 GeV −2 , M W = 80.42 GeV and M Z = 91.19 GeV.
In all of our results below, we work with the MSTW2008NNLO PDF set and its associated value α s (M Z ) = 0.1171 [37].The choice of a NNLO PDF set seems appropriate, because we believe that the resummation captures the most important part of the NNLO corrections.We will define our jets using the anti-k T algorithm with a jet radius of R = 0.4.The only quantity sensitive to the jet radius at NNLL+NLO accuracy is the anomaly exponent F ij (p veto T , µ, R), and it is the same for all k T -style clustering algorithms.As the default scheme for our plots we use Scheme A, since it is easier to disentangle and discuss the individual ingredients of the calculation (NLL versus NNLL resummation, matching to fixed-order perturbation theory) in this scheme.However, we find that both schemes give almost indistinguishable numerical results at NNLL+NLO level.
Resummed Results and Choice of the Hard Scale
In Figure 2 we show the results for the resummed Z-boson and W+W− pair production cross sections at √s = 7 TeV, obtained with n_f = 5 light quark flavors and jet radius parameter R = 0.4. Here and below the two scales µ and µ_h are varied independently by factors of 2 about their default values µ = p_T^veto and µ_h = Q, where Q is the invariant mass of the electroweak final state, i.e. Q² = M_Z² for Z-boson production and Q² = (q_1 + q_2)² for the W+W− final state (defined on an event-by-event basis). The resulting uncertainties are then added quadratically. In addition to the standard scale choice µ_h² ≈ Q² we consider using an imaginary value for the hard matching scale, such that µ_h² ≈ −Q². The corresponding results are shown on the right-hand side of Figure 2. For comparison, we also show the NLO fixed-order results, which will be discussed in more detail in the next section. In all cases, we observe that going from NLL to NNLL accuracy improves the stability of the predictions significantly. Also, the NNLL bands are closer to the fixed-order NLO results than the NLL bands.
The use of an imaginary value of the hard matching scale µ_h has been advocated in the context of Higgs production, because it maps the relevant hard function onto the space-like gluon form factor [38,39]. This Euclidean quantity shows a much better perturbative behavior than the time-like form factor, which suffers from large numerical corrections ∼ (α_s π²)^n due to imaginary parts from Sudakov double logarithms, which arise in time-like kinematics. The same arguments apply to the case of Z production. In [23], the choice µ_h² < 0 was applied to W+W− pair production, and it was argued that this leads to a significant enhancement of the cross section, bringing the theoretical prediction in agreement with LHC measurements.
Indeed, one can observe from Figure 2 that the resummed results for the cross sections obtained with µ_h² < 0 are significantly larger than those obtained with the standard choice µ_h² > 0. For W+W− production with p_T^veto = 25 GeV, the increase in the central value of the NNLL+NLO cross section is about 4.8% (which is of the same order as the recently calculated NNLO corrections [21]).²

²There is an ambiguity when choosing µ_h² < 0, related to the fact that the running coupling α_s(µ²) has a cut along the negative µ² axis. One can either choose the default matching scale above or below the cut, µ_h² = −Q² ± iε. Our values for the cross section obtained within the MadGraph5_aMC@NLO framework correspond to the principal-value prescription, while the authors of [23] adopt the default choice µ_h² = −Q² − iε. At NNLL order, the latter choice yields a result that is 2% higher (at p_T^veto = 25 GeV) than that obtained with the principal-value prescription. This difference would be reduced at higher orders. A detailed numerical comparison with [23] further revealed that their implementation of the beam functions was incorrect, which led to an additional increase of their result for the resummed cross section by about 3.5%.

We stress, however, that in the case of multi-particle final states such as W+W−
the hard function depends on several kinematic scales (ŝ and t in the present case), some of which are time-like and some of which are space-like.Unfortunately, it is impossible to adopt a suitable choice of the hard matching scale, which would map the hard function onto a Euclidean quantity, such that all (α s π 2 ) n terms can be resummed by means of RG evolution equations.It is therefore not clear whether the convergence of the perturbation series can be improved by using the choice µ 2 h < 0. This problem was discussed in detail in the context of Higgs plus jet production in [40].Even though the convergence in the right panels of Figure 2 looks somewhat better than in the plots shown on the left, we have decided to adopt the conventional prescription µ h > 0 for the hard matching scale.Perhaps a more conservative way to assess the scale uncertainty would be to allow for arbitrary complex scale choices Q/2 < |µ h | < 2Q and then give the resulting uncertainty, as was recently proposed in [41].
For Higgs production, the resummation of jet-veto logarithms was performed to higher accuracy by including the two-loop hard and beam functions as well as the RG evolution factor at approximate N 3 LL order [9].The only missing ingredients for full N 3 LL+NNLO accuracy are the three-loop anomaly exponent and the four-loop cusp anomalous dimension, whose effects have been estimated and included in the error budget.It was observed in this reference that the two-loop beam functions decrease the cross section, and we expect a similar effect in the present case.In the future, it should be possible to reach the same level of accuracy also for W + W − production and related processes.The corresponding two-loop hard functions can be extracted from the two-loop virtual corrections, which have recently been obtained in [21,42].The product of beam functions integrated over rapidity could be extracted numerically from NNLO fixed-order codes for Z-boson production such as [43,44], following the procedure employed in [9].This is sufficient to obtain the inclusive W + W − cross section, while a two-loop computation of the beam functions would be required for more exclusive cross-section predictions.Once (approximate) N 3 LL+NNLO predictions for the W + W − cross sections are available, the above-mentioned ambiguities related to the choice of the hard matching scale will be reduced significantly.
Fixed-Order Results and Matching
In order to obtain the best possible predictions, we need to match our resummed results for the cross sections with fixed-order expressions at NLO. The scale dependence of the NLO expression for the W+W− production cross section at √s = 7 TeV is shown in the left panel of Figure 3. We set the factorization and renormalization scales equal (µ = µ_r = µ_f) and vary them from µ = p_T^veto/2 up to µ = 2Q. This is a much larger scale variation than is usually considered, but this wider range seems appropriate since the problem at hand involves physics at both scales. For comparison, we also show the bands one would obtain from a variation of µ by a factor of 2 around either a high default value µ = Q or a low default value µ = p_T^veto. Our broad scale variation is obviously more conservative, since it covers both options. Nevertheless, fixed-order computations usually adopt the high scale µ = Q as the default value, and from Figure 2 it appears that such a choice indeed leads to smaller higher-order corrections. A similar behavior is found for all cases studied in this paper. The invariant-mass distribution of the W-boson pair is shown in the right panel of Figure 3. Defining the average hard scale Q by the median value of this distribution, one obtains Q = 222 GeV. This value will be useful in our phenomenological discussion below.
As discussed earlier and shown in (16), in Scheme A this matching is purely additive, i.e.
The expansion of the resummed result is obtained by performing the reweighting with the reweighting factor expanded to NLO.If the resummation is performed with NNLL accuracy (or higher), the matching correction inside the parenthesis is power suppressed in p veto T /Q.Note that we are free to use a different scale µ m for the matching correction than for the resummed result, since the power corrections in p veto T /Q must be separately scale invariant.To obtain our uncertainty bands, the scales µ, µ h and the matching scale µ m are all varied independently.We then add the resulting uncertainties quadratically.We choose the number of flavors for the resummed results as n f = 5, but since MadGraph5_aMC@NLO cannot produce five-flavor NLO results for W + W − due to the presence of top quark resonant contributions in the NLO corrections, we have calculated the matching corrections with n f = 4 light flavors.
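The quadrature combination of the independent scale variations can be expressed compactly as below; the cross-section values are illustrative placeholders.

import math

def quadrature_uncertainty(central, variations):
    # Combine independent scale variations in quadrature. `variations` is a
    # list of (up, down) cross sections, one pair per scale (mu, mu_h, mu_m).
    plus = math.sqrt(sum(max(hi - central, lo - central, 0.0) ** 2
                         for hi, lo in variations))
    minus = math.sqrt(sum(max(central - hi, central - lo, 0.0) ** 2
                          for hi, lo in variations))
    return plus, minus

# toy numbers in pb, purely illustrative
print(quadrature_uncertainty(30.0, [(31.0, 29.2), (30.8, 29.5), (30.1, 29.9)]))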
While the appropriate scale choice is clear for the case of the beam functions which describe emissions near the scale p veto T , the correct choice of µ m is not immediately obvious, because the matching corrections receive contributions associated with both the low and the high scale.The result for the cross section obtained with a high and a low matching scale is shown in Figure 4, along with the corresponding relative size of the NNLL matching corrections.The matching corrections are well-behaved in both cases.They are very small at the low p veto T values shown in Figure 4 and are therefore difficult to extract numerically.At larger values of p veto T they grow linearly up to 3% at p veto T = 80 GeV.At NNLL order, the matching corrections are small enough that they could be safely ignored for values up to p veto T = 35 GeV.At NLL order, on the other hand, not all leading-power NLO contributions are included in the resummed result, and therefore the predictions depend strongly on the matching scale µ m .Figure 2 shows that the NNLL results lie rather close to the NLO results at the high scale µ = Q.
Since, as we have pointed out above, the fixed-order perturbative expansion appears to work better with a high scale choice, we adopt µ m = Q as our default matching scale for all later predictions.
In Scheme B, we do not have the freedom to choose the matching scale separately, since the matching corrections are not separated out, see (20). Numerically, we find that the results of Scheme A and Scheme B are almost indistinguishable, as can be seen in the left panel of Figure 5. In the right panel of the same figure we show a comparison between our NNLL+NLO prediction for the W+W− cross section and the result obtained after combining the NLO prediction with a parton shower using the MC@NLO prescription [45]. We observe that the latter prediction is lower than our result, in particular at higher values of p_T^veto. This is astonishing at first sight, since one would expect that showering does not affect the cross section at higher p_T^veto values. However, because the shower is unitary, any change of the cross section at low transverse momenta must be accompanied by a compensating change at higher transverse momenta. Looking at the cross section as a function of the p_T of the leading jet, we find that the showered NLO result is higher than the pure NLO result for all p_T^jet > 20 GeV, so that the integral of the cross section for p_T > 20 GeV is larger than the fixed-order result. After unitarization, this in turn implies that the jet-veto cross section, which is the integral over 0 ≤ p_T ≤ 20 GeV, is lower than the fixed-order result. The use of a matched parton shower therefore underestimates the jet-veto cross section. In contrast, we find that our NNLL+NLO resummed prediction lies closer to the fixed-order result indicated by the grey band. Genuine resummation effects are small as long as the fixed-order result for the cross section is computed with a high value µ ∼ Q of the renormalization scale.
Multiple Bosons and Cross Section Ratios
We are now ready to present our final results for a couple of interesting production cross sections involving multiple electroweak gauge bosons.In Figure 6, we show predictions for the Z, W + W − and W + W − W ± production cross sections at the LHC with √ s = 7 TeV; it would be straightforward to rerun our code at different values of the center-of-mass energy.In each case, we present our resummed and matched predictions at NLL+NLO and NNLL+NLO accuracy and compare them with the fixed-order NLO prediction.Notice that the value of the cross section drops by about a factor 10 3 with each additional boson.The triple-boson production cross section is tiny, but it constitutes a background to Higgs production in association with a W ± and subsequent decay H → W + W − .The fact that we can obtain predictions for three-boson final states without any additional effort nicely demonstrates the power of our automated resummation scheme.
We find that the scale uncertainties of our NNLL+NLO predictions for W + W − and W + W − W ± production are estimated to be of similar size, while we obtain a much smaller uncertainty for the case of Z-boson production.This small scale variation should perhaps be taken with a grain of salt.At larger p veto T values, our resummed cross section becomes similar to the fixed-order result, and its scale variation is similar to the scale variation of the fixed-order cross section obtained by performing a correlated scale variation with µ r = µ f .An independent variation of µ r and µ f , which is standard practice in fixed-order computations, would give an uncertainty that is twice as large.On the other hand, we have checked that the known NNLO corrections for Z-boson production are indeed compatible with our small uncertainty band.It is also interesting to note that for W + W − production the scale uncertainties of the fixed-order prediction obtained from correlated and independent variations of µ r and µ f are found to be of similar size.
We also observe that the scale uncertainties of the fixed-order NLO predictions at small p veto T values strongly increase with the number of produced bosons.This is not surprising if we consider the relevant scale ratio Q/p veto T , which governs the size of Sudakov logarithms.Using the median value Q of the invariant-mass distribution to estimate the hard scale, we find Q = M Z for Z production, Q ≈ 2.8 M W for W + W − production, and Q = 5.7 M W for W + W − W ± production.In all cases, the three-momenta at which the bosons are produced scale with the boson mass, but the average scale increases with the number of the produced bosons.Note that after the resummation of Sudakov logarithms has been performed, the width of the uncertainty bands is only weakly dependent on the veto scale.
The relative perturbative uncertainty of our NNLL+NLO prediction for the W + W − production cross section at p veto T = 25 GeV is +3.9% −3.0% .It was advocated in [46] that taking the ratio of the W + W − and Z-boson production cross sections might be a good way to reduce the uncertainty in the prediction of the jet-veto cross sections.This proposal was adopted in the experimental analysis reported in [14].We have thus studied this cross-section ratio in some detail.We find that the relative uncertainty in the cross-section ratio is +5.2% −2.8% , which is even slightly larger than the uncertainty in the W + W − production cross section itself.This makes it clear that taking the cross-section ratio does not help reducing the perturbative uncertainties, the reason being that the scale uncertainties are much smaller for Z-boson production than for W + W − production.Even though the beam functions are the same in both cases, the cross sections involve different hard functions and RG evolution factors, which spoils the cancellation.We will now explain how an improved relation between the two production channels can be obtained, which only suffers from very small theoretical uncertainties.In a first step, it is useful to consider the jet-veto efficiencies defined as σ(p veto T )/σ instead of the cross sections σ(p veto T ) themselves, because then the virtual corrections encoded in the hard functions largely drop out (even though this cancellation cannot be exact, since the hard corrections do not factor out of the total cross section).The inclusive cross section σ is evaluated at the hard scale µ h .We use the NLO (LO) cross section together with the NNLL (NLL) approximation of σ(p veto T ).Our resummed predictions for the ratio of the jet-veto efficiencies for W + W − and Z production are shown in the left plot in Figure 7.By construction, the relative uncertainties from varying µ in this ratio are the same as in the ratio of the veto cross sections.In order to obtain more accurate predictions one needs to ensure that the RG evolution factors cancel out in the ratio.This can be accomplished by considering the ratio of the W + W − cross section to the Z * production cross section with an off-shell Z * boson with invariant mass squared q 2 = Q2 , where Q ≈ 222 GeV is the median of the invariant-mass distribution for the W + W − final state shown in Figure 3.The corresponding ratio of efficiencies is shown in the right plot in Figure 7.It is close to 1 and exhibits very small scale uncertainties.A different way of relating Z and W + W − production cross sections was proposed in [25].These authors rescale the p veto T value used in the W + W − process by a factor M Z /(2M W ) before relating it to the Z-boson production process.This rescaling is chosen such that the Sudakov logarithms have a similar size in the two cases.While [25] finds a nice agreement for the NLO efficiencies obtained using this rescaling prescription, it is clear that the relation cannot be exact, since QCD is not scale invariant.Furthermore, the agreement becomes worse if one rescales the p veto T value with the more appropriate factor M Z / Q.In the middle plot of Figure 7, we show the corresponding ratio of efficiencies, which suffers from sizable scale uncertainties.
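For concreteness, the two rescaling prescriptions can be evaluated numerically with the masses given in Section 4 and the median invariant mass of 222 GeV; this is only a check of the prescriptions, not part of the original analysis.

M_Z, M_W, Q_median = 91.19, 80.42, 222.0
pt_veto_ww = 25.0

# rescaling proposed in [25]: factor M_Z / (2 M_W)
print(pt_veto_ww * M_Z / (2.0 * M_W))      # about 14.2 GeV

# rescaling with the factor M_Z / Q_median instead
print(pt_veto_ww * M_Z / Q_median)         # about 10.3 GeV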
Experimental Cuts
An important advantage of our framework is that we can include the decay of electroweak bosons, together with cuts on the leptonic final state.In the experimental measurements of W + W − production, candidate events are selected with two opposite-sign charged leptons, electrons or muons, and missing transverse momentum coming from the neutrinos in pp → W + W − + X → l ν l ν + X.To account for the detector geometry and to suppress the background from Drell-Yan and top production, a number of cuts are applied to the final state in addition to the jet veto.For example, the ATLAS analysis [14] imposes the following cuts in the e + e − channel: The cuts applied in the µ + µ − channel are fairly similar, while those on the mixed final states e ± µ ∓ are looser, because they have much smaller Drell-Yan background.In Figure 8, we show the cross section for the production and decay pp → W + W − + X → e + e − ν ν + X in the presence of these cuts as a function of the jet-veto scale.The experimental analysis in [14] uses the anti-k T algorithm with R = 0.4 and fixed p veto T = 25 GeV.Comparing this figure with the lower plots in Figure 2, we see that the uncertainties of the cross section are similar to the inclusive case and that the matching corrections remain small also in the presence of the cuts.
The experimental analysis [14] imposes a few additional cuts, in particular a minimum total transverse momentum of the two charged leptons, p_T^{e+e−} > 30 GeV, and a minimum requirement on the missing transverse momentum, p_{T,Rel}^{νν} > 45 GeV.³ The cut p_T^{e+e−} > 30 GeV is somewhat problematic for the theoretical analysis, especially when it is applied to predict the Z-boson background to W+W− production. The difficulty is that we must make sure that the leptonic cuts do not (strongly) affect the hadronic final state. In the case of Z production, p_T^{e+e−} is equal (and opposite) to the transverse momentum p_T^X of the hadronic final state. Imposing a lower bound on p_T^{e+e−} is the same as imposing a lower bound on p_T^X. This interferes with the jet-veto cut, which at NLO corresponds to an upper cut on p_T^X. The factorization formula in [8] does allow for additional cuts on p_T^X in the presence of the jet veto, but the relevant beam functions would be more complicated than those needed without such cuts. For the W+W− production process, the quantity p_T^{e+e−} is not directly related to p_T^X because of the presence of the neutrinos, but the corresponding cut still affects the low-p_T^X region.
Difficulties Associated with Photons
Our framework cannot immediately be applied to processes involving photons.The reason is that photons are massless particles and have hadronic substructure.At high energies, a photon thus needs to be treated as a photon jet, or more precisely a photon surrounded by some hadronic radiation.In fact, many photon-isolation requirements necessitate fragmentation functions.This can be avoided using the photon isolation proposed by Frixione [47], but also in this case the photon has a partonic content and a proper description needs to take into account partons emitted collinear to the photon.This implies that our factorization theorem does not apply, since it assumes that all energetic radiation is collinear to the beam.The photon isolation introduces new small scales to the problem (e.g. the hadronic energy around the photon), which give rise to additional large logarithms not associated with the jet veto.
It is nevertheless interesting to see what happens when we apply our resummation scheme to a process involving photons.To this end, we consider W ± γ production using the same setup as before ( √ s = 7 TeV, R = 0.4, n f = 4) and imposing the isolation requirement proposed in [47], with associated parameters R γ 0 = 0.4, x n = 1.0 and γ = 1.0.The corresponding results are shown in Figure 9.The pp → W γ process suffers from very large NLO corrections (the LO results are similar to the NLL result).The resummed results, on the other hand, are not very different from the LO predictions, so that the matching corrections are huge, indicating that there are indeed other sources of large corrections in this process.Likely these arise due to Sudakov effects associated with photon isolation.However, even the logarithms associated with the jet veto have a more complicated structure once a process involves partons collinear to the photon directions, which becomes possible at NLO.It would be interesting to analyze such photon processes in the context of SCET.In its present implementation our method does not resum all large corrections in these cases.
Conclusion
Higher-order logarithmic resummations in collider physics, both in SCET and using traditional methods, are typically done on a case-by-case basis, similar to the way fixed-order calculations were performed a few years ago.In the meantime, several groups have automated NLO computations in a variety of computer codes.This automation saves time, reduces the possibilities for mistakes and offers the flexibility to also study effects beyond the Standard Model.It is desirable to have the same level of automation for higher-order resummations of large logarithmic corrections.In the present paper, we have achieved this goal for electroweakboson production cross sections in the presence of a jet veto, at NNLL+NLO accuracy.This combination is natural because in the Sudakov region, where ln(Q/p veto T ) ∼ 1/α s , NNLL logarithmic terms have the same parametric scaling as NLO corrections in a region where there are no large logarithms.In contrast, taking resummation effects into account using a parton shower gives a lower parametric accuracy, and the unitarization inherent in the shower approach can sometimes be problematic.In the case of the jet-veto cross section for W + W − production, for example, unitarization leads to cross sections that are systematically lower than the NNLL+NLO results.
Resummations are relevant in kinematical configurations which are close to the Born-level kinematics and can therefore be obtained by reweighting Born-level cross sections with appropriate factors.The most complicated ingredient for NNLL resummations are the one-loop hard functions, which encode the virtual corrections.Their computation has been automated, and we use the MadGraph5_aMC@NLO framework to obtain the hard function required for our analysis.We have also presented a modified scheme, in which the beam functions accounting for collinear emissions and the matching onto fixed-order results is automated and performed using existing fixed-order codes.This is possible, because the hard function and the resummation of large logarithms are just overall factors in the differential cross section.
We have used our method to perform a detailed analysis of resummation effects for the W + W − pair production cross section, for which experimental measurements found a slight excess compared to theoretical predictions based on NLO computations matched to parton showers.We observe that the NLO result with a high value of the renormalization and factorization scales µ r ∼ µ f ∼ Q is in good agreement with the NNLL+NLO resummed predictions, while the results obtained with a matched parton shower are systematically lower.This effect, together with the positive NNLO corrections to the total rate which are now known, helps to bring the Standard Model prediction into better agreement with the measurements.It would be important to include the two-loop virtual corrections into the resummation and to also compute and include two-loop beam functions.This improvement, which is beyond the scope of the present work, would lead to very precise predictions, which could be directly compared with the experimental results.This level of accuracy has already been achieved in Higgs production by extracting the beam functions numerically.It was found that the two-loop corrections to the beam functions were sizable, because they are enhanced by logarithms of the jet radius.In Higgs production, the NNLO corrections to the hard function increase the cross section, while the two-loop beam functions lower it.We expect the same behavior for the W + W − case, and it will be interesting to see the combined effect of these improvements on the final predictions.Also, at NNLO the gg channel starts to contribute to W + W − production and could give rise to important corrections.Since this channel has already been implemented into the MadGraph5_aMC@NLO framework, it will be straightforward to perform the corresponding resummation using our method.
It would also be interesting to generalize our methods to processes with jets in the final state.In addition to hard, beam and soft functions, these processes involve jet functions describing the energetic final-state radiation.Furthermore, the hard function then has nontrivial color structure.Existing programs which compute virtual corrections for NLO processes currently only supply squared matrix elements summed over colors, but they can be modified to provide the color information needed for SCET-based resummation.This color structure is then contracted with the color structure of the soft function after RG evolution.The soft, beam and jet functions will in general need a separate calculation.However, since the jet and beam functions are two-point functions and the soft function is given by a single emission from eikonal lines, these computations are much simpler than full-fledged real-emission computations and could be automated as well.We are confident that such automated resummations will become available in the future and provide higher-order logarithmic resummations for a much wider range of observables.
where r = α s (µ)/α s (µ h ).A similar expression, with the coefficients Γ j replaced by γ i j , holds for the function a γ i .The relevant expansion coefficients of the anomalous dimensions and β-function can be found, e.g., in [50].
The anomaly exponent and the factor h i are given by [27] F i (p veto T , µ) = (A.6) The anomaly coefficient d veto 2 (R) given in [9] is of the form where the expansion of f i (R) for small R reads, in numerical form,
Figure 1 :
Figure 1: Structure and kinematics of the factorization theorem for the W + W − production cross section in the presence of a jet veto.
Figure 2 :
Figure 2: Resummed cross sections for Z-boson production (top) and W + W − pair production (bottom) obtained at NLL (red) and NNLL (blue) order.The bands are obtained by varying the hard matching scale µ h and the factorization scale µ by factors of 2 about their default values |µ h | = Q and µ = p veto T .The grey bands show the fixed-order NLO results with scale variation µ r = µ f ∈ [p vetoT /2, 2Q] for comparison.The panels on the left refer to the standard choice µ 2 h > 0, while those on the right show results obtained using µ 2 h < 0.
Figure 3 :
Figure3: Left: NLO predictions for the W + W − production cross section obtained with a conservative estimate of scale uncertainties (grey), and with scale variations about high (green) and low (magenta) default values; see text for further information.Right: Kinematic distribution in the variable Q of the leading-order cross section.
Figure 4 :
Figure 4: Resummed and matched predictions for the W + W − production cross section (obtained by varying the matching scale about the default value µ m = p veto T and µ m = Q) compared with the fixed-order result at NLO.The panels below the plots indicate the relative size of the power-suppressed matching corrections at NNLL order.
Figure 5 :
Figure 5: Left: Comparison of the resummed and matched NNLL+NLO predictions for the W + W − cross section obtained in Scheme A (additive matching) with Scheme B (multiplicative matching).Right: Comparison of the NNLL+NLO predictions with the NLO result matched to Pythia using aMC@NLO.
Figure 6 :
Figure 6: Resummed and matched predictions for the cross sections for Z, W + W − , and W + W − W ± production, compared with NLO fixed-order predictions.
Figure 7 :
Figure7: Resummed predictions for the ratio of the jet-veto efficiencies for W + W − and Zboson production (left).In the middle plot the p veto T value of the W + W − process is rescaled by a factor M Z /(2M W ), as proposed in[25].The right plot shows the same ratio for W + W − and Z * -boson production, where the off-shell boson has invariant mass QWW = 222 GeV.The bands are obtained by varying the low scale µ about its default value µ = p veto T , while keeping the hard matching scale µ h fixed.
Figure 8 :
Figure 8: Resummed and matched predictions for the pp → W + W − + X → e + e − ν ν + X cross section with the cuts on the leptonic final state described in the text.
Figure 9 :
Figure 9: Theoretical predictions for Wγ production obtained from our resummation scheme. The left plot shows the resummed results without matching to NLO, while the right plot shows the results obtained after the matching has been performed. A proper treatment of production processes with high-energy photons in the final state would require a generalization of the factorization formula (3).
Covasna county in the mirror of economic, social, environmental factors
Globalisation poses new challenges to Covasna County's economy, society and environment. We can observe regional disparities and unresolved economic and social problems in the region. Sectors which represent the county's competitive advantage, such as medical tourism and agricultural production, need further improvement in order to encourage real economic growth. For inducing development in the economic space, good-quality infrastructure, businesses based on local resources and innovation are essential. Covasna County is rich in environmental values, which should be managed in a more targeted way, given the increasing value of untouched nature and clean air. Local initiatives increase the value of both rural and urban space and contribute to the attractiveness of the area; however, in order to promote real economic development, the needs and expectations of the region must be taken into account and competitive products must be created. The cities of Covasna County are now competing for investors and external development resources. In this process the economic, social and environmental characteristics of cities are very important. These factors greatly affect whether Covasna county's cities end up on the winning or the losing side of the competition. It is a great challenge to highlight the unique values of a region and to develop competitive advantages from them.
Introduction
The economic, social and environmental challenges of globalisation have had a great impact on Covasna county in the 21st century. The current situation, since the economic crisis in 2008, can be characterised by growing regional inequalities and unsolved economic problems. In our opinion the path of development is determined by new conditions and challenges. One of the development opportunities of our society is the proper management of common financial and other types of resources (GYÖRGY, 2007; KÁPOSZTA-NAGY, 2014; KÁPOSZTA-NAGY, 2015).
The current topic was chosen for analysis because a development process has started recently which does not have the same impact on all parts of Covasna county; there is a significant disparity among its settlements, and the reason behind this process is a complex interrelation of economic, social and environmental factors. During our investigation it was our goal to shed light on these processes in Covasna county, to analyse these economic, social and environmental factors and to investigate the development strategy of Covasna county. This way an objective picture could be established of the development level of the county, as well as of the trends and possible future changes.
The general introduction of Covasna county
The total area of Covasna county is 3,705 km² and it lies in the mid-areas of Romania, being part of the Central Development Region. It is the smallest county in its region of Romania considering both its population number and size. There are five larger towns in the county: two with the rank of municipality (Sepsiszentgyörgy and Kézdivásárhely) and three others, namely Covasna, Barót and Bodzaforduló. There are 40 smaller towns and 122 villages in the county as well (BOTOS, 2005).
The county has a large agricultural area, which covers more than half (50.3%) of the total area. The majority of the population (210,000 people based on data from the census of 2011) lives in rural or near-rural areas, and 109,366 of them live in rural areas. The population density of the county is 55.6 people per km², which is lower than the country average. From a nationality perspective 22.09% of the population are Romanians, 73.59% are Hungarians, 3.99% are Roma, 0.05% are German and 0.1% have another nationality (insse.ro, 2015).
The economy of Covasna county is an open one, with competition between the actors of the economic sphere. The amount of GDP produced in Covasna county fluctuated between 2007 and 2014, similarly to the economic situation of the whole region.
Material and methods
The study deals with the complex space of Covasna county from economic, social and environmental perspectives; therefore, the spectrum of the analysis is very wide.
The basis of the study is the review of literature dealing with previous investigations in similar topics regarding to this region.
The other main source was the statistical data.The source of economic and social data was mainly the Romanian National Institute of Statistics.These factors were investigated from the time period between 2007 and 2014.Besides the statistical data we analysed literature about Covasna county and the Central Development Region.The development strategies prepared for the region, the county and the towns with special attributes got greater emphasis.To shed more light about the situation of the county, the situation of the whole region and the country was analysed too, to establish the background of the situation analysis.
We prepared and used questionnaires, as well as SWOT analysis for our regional analysis.
From the data gathered the study focuses on those deemed the most relevant.Zipf formula (about the rank and size of settlements) analysis and Lettrich-type employment structure categorisation were also conducted to see interactions between towns inside the county.
Research results
The development level of a region is affected by complex economic, social and environmental factors. During the investigation regarding Covasna county we intended to find out more about these impacts.
In urban environment
There are five towns inside Covasna county, from which two, Sepsiszentgyörgy and Kézdivásárhely, have municipality rank.
Judging by the economic characteristics of the towns we can establish that their economic roles and weights are changing. Industries employing significant numbers of people disappeared, became insignificant or were transformed, which resulted in a steep increase in unemployment levels, and economic development halted (RITTER, 2014).

By using the Lettrich formula we could see how the employment structure has changed and which categories the local towns belong to according to their most important economic sectors. Based on the available data we can state that the tertiary sector took the leading role in the county, having more employed persons than industry.
There are 109,119 people living in towns in the county, which is 49.41% of its total population.The rank-size rule (Zipf-formula) was applied to these towns to investigate the relationship between their sizes and ranks (Figure 1).Based on the rank-size rule it can be established that the town structure of Covasna county is a primacy type network, because the real values stayed below the values the Zipf-formula produced.
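A minimal sketch of the rank-size comparison is given below; the town populations are illustrative placeholders, since only the total urban population of 109,119 is quoted above, not the individual figures used in the study.

# Rank-size (Zipf) rule: the expected population of the town of rank r is
# P_1 / r, where P_1 is the population of the largest town.
towns = {                      # illustrative populations, not the study's data
    "Sepsiszentgyörgy": 54000,
    "Kézdivásárhely": 18000,
    "Covasna": 10000,
    "Barót": 8500,
    "Bodzaforduló": 8000,
}

ranked = sorted(towns.items(), key=lambda kv: kv[1], reverse=True)
p1 = ranked[0][1]
for rank, (name, population) in enumerate(ranked, start=1):
    expected = p1 / rank
    print(f"{rank}. {name}: actual {population}, Zipf-expected {expected:.0f}")
# If the actual populations stay below the Zipf-expected values, the town
# network is of the primacy type, dominated by the largest town.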
It is typical for primacy-type networks that their settlements have not been urbanised for long, that they have relatively simple political and economic structures, and that their dominant settlements were very significant in the past (NAGYNÉ MOLNÁR, 2014).
The towns of Covasna county are also historical micro-regional centres (Sepsiszék - Alsóháromszék - Sepsiszentgyörgy, Kézdiszék - Felsőháromszék - Kézdivásárhely, Orbaiszék - Covasna, Erdővidék - Barót), where the administrative and economic functions are concentrated. These towns, being part of the same settlement network, have strong relationships with each other and with the smaller settlements in their environment, but also with settlements outside the border of the county.
Every settlement has a certain gravitational power based on its resources. The towns of the county also differ in their economic development levels and population numbers.
With the help of a gravitation model we could analyse the gravitation power of the county seat and the other towns.In the current study the gravitation effects between Sepsiszentgyörgy and Kézdivásárhely, and between Sepsiszentgyörgy and Barót, are demonstrated.
Town B: Kézdivásárhely. The gravitational threshold (breaking point) was calculated as the distance of the threshold from town B for the Sepsiszentgyörgy-Kézdivásárhely and Sepsiszentgyörgy-Barót pairs.

By observing the results of the gravitation model it can be seen how the population number (social factor) and the geographic position (environmental factor) of a settlement affect its gravitational power. Based on the calculations, Sepsiszentgyörgy has the largest gravitational power in Covasna county. The town is at the top of the settlement hierarchy of Covasna county: the county-level administrative institutions, the majority of workplaces, the highest level of infrastructure and the greatest attractive power for investors can be found here. The town has a very advantageous geographical situation: it lies 34 km from Brasov, one of the most dynamically developing cities of the Central Development Region. The towns of Covasna county compete for investors and for gaining new (financial) resources. The economic, social and environmental factors are very important in this process. These factors have a great impact on whether the towns of Covasna county end up on the winning side of the competition, or become losers.
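A breaking-point (Reilly-type) calculation of the kind used here can be sketched as follows. The study does not spell out the exact formula or input figures, so the populations and the inter-town distances below are illustrative assumptions only.

import math

def breaking_point_from_B(pop_A, pop_B, distance_AB):
    # Reilly-type breaking point: distance of the gravitational threshold
    # from town B, assuming attraction proportional to population and
    # decreasing with the square of the distance.
    return distance_AB / (1.0 + math.sqrt(pop_A / pop_B))

# illustrative inputs (not the study's exact figures)
pop_sepsi, pop_kezdi, pop_barot = 54000, 18000, 8500
d_sepsi_kezdi, d_sepsi_barot = 37.0, 36.0   # km

print(breaking_point_from_B(pop_sepsi, pop_kezdi, d_sepsi_kezdi))  # threshold from Kézdivásárhely
print(breaking_point_from_B(pop_sepsi, pop_barot, d_sepsi_barot))  # threshold from Barót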
According to György Enyedi (1997), a successful city is able to change its economic structure, it is based on knowledge-based production and high innovative abilities, it provides technologically advanced environment.It is also a place for decision-making and it possesses a large external network.
The evaluation of the questionnaires
The questionnaire contained 21 questions. We gathered six types of information, which were categorised as follows: personal data, jobs, employment opportunities, commuting, satisfaction with life quality, development opportunities, and the openness of locals and their involvement in development processes.
The survey was conducted in two of the towns: Barót and Kézdivásárhely.33 questionnaires were filled out in Kézdivásárhely and 23 in Barót.
The distribution of the respondents by gender: 57% female and 43% male in Barót, 64% female and 36% male in Kézdivásárhely. Of the respondents in Barót, 68% work in the settlement and 32% work in other areas. In Kézdivásárhely 64% work in the settlement and the remaining 36% elsewhere. Among those employed in Barót, some think that they have a chance to change jobs; in Kézdivásárhely this share was 63%.
When discussing development opportunities, economic and infrastructural development projects were considered the most important by the locals.
In rural environment
The majority, 87.02%, of the total area of Covasna county belongs to the countryside (with its 40 communes and 122 villages). The impact of economic, social and environmental factors can be observed in every element of the rural space.
The majority of the active population of the rural areas commutes to the nearby towns, while the ones working at home work in the agricultural sector (animal husbandry, plant production), and only a very few people try to make a living from operating an enterprise.The human capital of the rural areas is very poor, because people with higher education leave the region to find better jobs and wages.The population of the rural areas is aging.
Natural values are plentiful in the rural space. Biodiversity is high; there are large forests with unique, protected animal and plant species. Many protected natural areas can be found in Covasna county, such as the Rétyi Nyír Nature Conservation Reserve (called the Sahara of Háromszék by Balázs Orbán). Mineral springs and unique natural landscape elements are not unusual either (such as the Almás cave in Vargyas).
There is also a wealth of built cultural values in the region, such as the mansions in Bikkfalva and Réty, Szekler gates, monuments and churches (for example, the church of Gelence with its paintings of Saint László, the 14th-century church of Szacsva, the castle-church of Illyefalva and the church of Vargyas).
The values and opportunities of the rural areas in Covasna county are not utilised properly by the local communities. Although these elements would provide a perfect basis for rural or medical tourism, the locals are not prepared to use them appropriately due to the lack of expertise and capital.
The evaluation of the questionnaires
From the rural settlements, 20 questionnaires were filled out in Előpatak, 15 in Kézdialmás, 12 in Réty and 11 in Szacsva. These settlements were surveyed for the following reason: Előpatak, Réty and Szacsva belong to the gravitational area of Sepsiszentgyörgy, and Kézdialmás to that of Kézdivásárhely (the two most developed settlements in the county).
The surveyed settlements have unique resources. Előpatak is famous for its excellent mineral springs, which were well known even in 19th-century Europe. This fame has since faded, however, because political decisions moved the medical institution from this settlement to another location.
Conclusions
During the investigation of the urban and rural areas of Covasna county our aim was to analyse the complex impacts of the economic, social and environmental factors.
The level of economic development is affected by social problems, and this process is further influenced by environmental characteristics (NAGY, 2009). The economic, social and environmental factors of the settlements also have a great impact on their ability to compete for financial resources. The success of the settlements of Covasna county depends on their ability to utilise local resources, the way they develop their natural and built environment, their ability to improve their human resources, and the way they use European Union and national financial resources for local development.
Some towns of the county are more disadvantaged than others. One of them is Barót, which faced unemployment and a shrinking economy after its coal mine was closed. This resulted in its separation from the more developed towns and cities (Sepsiszentgyörgy, Brassó) and in worsening infrastructural endowments. Measures to attract and create new enterprises brought success initially, but economic units not favoured by the European Union (especially in the field of meat production) have closed. Kézdivásárhely has better economic endowments, and a large-scale economic reconstruction took place here, which has by now resulted in better infrastructure, more openness among the people and a better environment. However, significant problems due to the emigration of the young (especially the well educated) can be found in both investigated economic spaces.
One of the primary problems of the rural space is the lack of large-volume investments (due to low-level infrastructure); therefore local financial resources are not abundant. In the case of rural settlements along international roads and railways we can observe more activities aimed at stimulating the economy. The expertise, knowledge and enthusiasm of settlement leaders for developing local communities contribute a great deal to the development of settlements and to achieving sustainable development.
Kézdialmás is a positive example of the latter phenomenon. Thanks to the cooperation of the local leadership and the civilians, the settlement has been able to initiate proper development activities.
Measures aimed at supporting local farmers were carried out, besides the creation of local enterprises. The built environment and infrastructure have also been improved recently, thanks mainly to EU and national financial support.
One of the negative examples is Előpatak, which used to be famous for its medical tourism and had a fully equipped and renovated medical centre; however, due to higher-level political decisions the institution has been moved to another place. The current (much smaller) medical centre was built as part of the Borvizek (mineral waters) project, which included many settlements from Hargita and Covasna counties. The local environmental endowments are advantageous, but there is insufficient economic and social capital for settlement development. The other great challenge is a social one, namely the integration of the Roma people of the settlement, who are usually unemployed or have little to no qualification. The revival of medical tourism and the establishment of new accommodation could be one possible way to revitalise the settlement. Through EU-funded trainings the local Roma people could also be involved in the production of craft products, which would increase their quality of life as well. Inequality occurs not only between counties but between settlements as well. It can be observed at county and regional levels that more developed areas, such as Sibiu, Brasov and Mures counties, have a higher-level infrastructure network than others, which is an advantage in attracting investors and capital. The infrastructure in Covasna county is very lacking; it is satisfactory only in Sepsiszentgyörgy and Kézdivásárhely, and even those require improvement. The infrastructure of the smaller towns, such as Barót, Covasna and Bodzaforduló, and of the rural settlements urgently needs improvement to become more attractive for investors. Social capital is decreasing because the young with higher education are leaving the area, so one of the most important tasks is to keep them in the county by making it more attractive for them. This way, settlements would become even more attractive for investors and new kinds of jobs could be created besides those relying on a low-qualified (and lower-paid) workforce. That would eventually lead to an increase in the welfare of the population.
There have been only very few research and development activities in the county in the investigated period. This topic must be given focus in order to make rural areas more attractive for highly qualified people. The IT sector must be emphasised as well: it is not yet significant in Covasna county, but in other regions it has proved able to bring about robust growth. The natural values of the county are also increasingly appreciated by the locals, a process supported by non-governmental organisations, which can carry out small-scale programmes with EU support. One of these programmes is the Sepsi Green Way programme, which ties together 19 settlements in Covasna county in order to encourage locals and tourists alike to visit and appreciate the natural and cultural values of the area.
Besides the negative results, it was found that the county has many strengths as well. These include agriculture (with competitive potato and sugar beet production) and tourism (medical tourism, the cardiology institute in Covasna, and Sepsiszentgyörgy with its rich cultural programmes, such as the annually organised and internationally known Saint George Days, exhibitions and theatrical performances).
Local initiatives increase the value of both rural and urban spaces and contribute to the attractiveness of regions. However, to support economic development the needs of the region must be taken into account and competitive products must be created.
The effects of economic, social and environmental factors of settlements can only lead to development if utilised properly, rationally and efficiently.
Figure 1: The real population values and the values given by the Zipf formula for the towns of Covasna county.
Table 1: The amount of GDP produced in Romania, in its Central Development Region and in Covasna county (LEU million)
Source: National Institute of Statistics (INSE), 2014. The number of employed people was 45,858 in May 2014, of which 24,000 worked in the industry or services sectors, 20,000 in construction and the rest in the agricultural sector. The net income in Covasna county was LEU 1,241 in May 2014, which is 26% less than the country-level average net income (LEU 1,682). The total annual revenue of the companies located in Covasna county is EUR 1.1 billion, based on data from the National Institute of Statistics (morfondir.ro, 2014). The land use categories of the county are divided as follows: arable land 22.5%, pasture 16.5%, meadows 11.0%, vineyards and orchards 0.3%, forests 44.5%, water surfaces 5.2% (2009) (insse.ro, 2014).
Table 3: The income levels of the households investigated (columns: income level, Barót, Kézdivásárhely)
Source: Own data collection via questionnaires, 2015.
Réty is a very advantageous settlement with regard to its geographic position and natural values (the Rétyi Nyír nature reserve, tourist attractions); however, it cannot be characterised by economic development, because the locals are not as open to tourism as would be needed. The settlement is very close to the county seat, and the population of active age commutes there. Szacsva has very weak social and economic indicators, but it also has great environmental potential. It would be suitable for establishing processing and production plants, which could provide jobs for locals with lower education levels. Throughout the investigation it was striking how low the economic performance and how high the unemployment are in Covasna county.
| 4,523.4 | 2016-06-24T00:00:00.000 | [ "Economics" ] |
Combining chirp mass, luminosity distance and sky localisation from gravitational wave events to detect the cosmic dipole
A key test of the isotropy of the Universe on large scales consists in comparing the dipole in the Cosmic Microwave Background (CMB) temperature with the dipole in the distribution of sources at low redshift. Current analyses find a dipole in the number counts of quasars and radio sources that is 2-5 times larger than expected from the CMB, leading to a tension reaching 5$\sigma$. In this paper, we derive a consistent framework to measure the dipole independently from gravitational wave (GW) detections. We exploit the fact that the observer velocity does not only change the distribution of events in the sky, but also the luminosity distance and redshifted chirp mass, that can be extracted from the GW waveform. We show that the estimator with higher signal-to-noise ratio is the dipole in the chirp mass measured from a population of binary neutron stars. Combining all estimators (accounting for their covariance) improves the detectability of the dipole by 30-50 percent compared to number counting of binary black holes alone. We find that a few $10^6$ events are necessary to detect a dipole consistent with the CMB one, whereas if the dipole is as large as predicted by radio sources, it will already be detectable with $10^5$ events, which would correspond to a single year of observation with next generation GW detectors. GW sources provide therefore a robust and independent way of testing the isotropy of the Universe.
INTRODUCTION
One of the basic assumptions of the ΛCDM cosmological model is that our Universe is homogeneous and isotropic on large scales. The latter follows from the high large-scale isotropy in observations of the Cosmic Microwave Background (CMB) temperature and of the large-scale structure of the Universe. Combining this with the cosmological principle further leads to the homogeneity of the Universe. In the ΛCDM model, the observed dipole anisotropy in the CMB temperature (Aghanim et al. 2020) is due to the fact that we, as observers, are moving with respect to the homogeneous and isotropic background. If this picture is correct, then the same motion should induce a dipole in the distribution of sources, due to aberration and magnification effects (Ellis & Baldwin 1984). This idea has been put to the test in the past years, through measurement of the dipole in the number counts of quasars and radio sources (Colin et al. 2017; Bengaly et al. 2018; Secrest et al. 2021; Siewert et al. 2021; Secrest et al. 2022) at redshifts 0 ≤ z ≲ 3. The direction of these dipoles is well aligned with that of the CMB, but the amplitude is 2-5 times larger than expected, leading to a tension with the CMB dipole reaching up to 5.1σ (Secrest et al. 2022). Supernova type Ia light curves (see e.g. Riess et al. (1995)) provide an alternative means of assessing the dipole anisotropy (Bonvin et al. 2006b). Indeed, measurements from supernova catalogues performed in Singal (2022) and Horstmann et al. (2022) have found a dipole again aligned with the CMB. However, they lead to an amplitude either compatible with the measurement from radio and quasar sources (Singal 2022), or even lower than the CMB value (Horstmann et al. 2022). Another measurement, in Sorrenti et al. (2022), shows an amplitude consistent with the CMB, but a strong tension in the direction. Hence, the inconclusive measurements of the dipole from type Ia supernovae do not resolve any tensions.
The discrepancy in the dipole amplitude could be due to systematic effects in the quasar and radio source data sets, to an imperfect theoretical modelling of the expected dipole in the number counts, which currently neglects evolution effects (Dalang & Bonvin 2022;Guandalin et al. 2022), or to a violation of isotropy in our Universe.In this last case, the large dipole in the quasars and radio sources would not be solely due to the observer velocity, but it would have an intrinsic part generated by a large local anisotropy in the Universe (inconsistent with the ΛCDM predictions).
One way to test these scenarios is to use other data sets to measure the dipole at low redshift, and see if the result is consistent with the CMB dipole, or with the dipoles from quasars and radio sources.A promising avenue is to use gravitational waves (GWs) from binary systems of black holes (BBH) or neutron stars (BNS).In Cai et al. (2018), it was proposed to look for a dipole in the luminosity distance measured from GWs.This paper does not model the kinematic contribution to the luminosity distance dipole, but it forecasts how large a dipole should be to be detected (independently of its origin).In Mastrogiovanni et al. (2023), some of us proposed to use the distribution of BBH to measure the cosmic dipole.We derived a modelling of the signal, taking into account aberration and threshold effects, and showed that with the next generation of interferometers (XG), like the Einstein Telescope (ET) and the Cosmic Explorer (CE) it will be possible to confidently detect the dipole.In Kashyap et al. (2023), the authors propose to use both the number counts of GWs and the chirp mass to measure the dipole.They apply their method to current data and show that no dipole anisotropy is detected.This is in agreement with Stiskalek et al. (2021); Essick et al. (2023); Payne et al. (2020), who found no evidence for an anisotropy in the distribution of current GW sources.
In this paper, we derive a consistent framework to use GW detections to optimally measure the cosmic dipole.In particular, we exploit three of the quantities that can be extracted from the GW waveform and amplitude: the angular position of the binary system, its luminosity distance, and its redshifted chirp mass.These three quantities are all affected by the observer velocity and can therefore be combined to measure the velocity in an optimal way.We build on the results of Mastrogiovanni et al. (2023) to derive a modelling of the mean luminosity distance and the mean chirp mass per angular pixel.This modelling accounts not only for aberration, but also for threshold effects.We show that the distance and chirp mass dipoles are correlated with each other, but both are independent of the number count dipole.Combining them therefore requires taking into account this correlation, in order not to overestimate the constraints.We also show that threshold effects can be robustly modelled and computed, which is essential in order not to bias the measurement of the observer velocity.
We apply our framework to synthetic catalogues of binary systems (BBH and BNS) that would be observed by the next generation of interferometers, like ET and CE, and we forecast how well the dipole can be detected in these catalogues.We explore two scenarios: one where the dipole would be purely kinematic and therefore consistent with the CMB value, and a second one where the dipole is consistent with the results from radio sources.For this second scenario, we take the extreme case of an observer velocity that would be 5 times larger than the CMB one (Bengaly et al. 2018), and we call this the "AGN case".Note that if we take the AGN case at face value, the dipole cannot be only due to the observer velocity (that would be inconsistent with the one extracted from the CMB), but it would have a large intrinsic part due to a strong anisotropy in the large-scale structure, as discussed above.However, when assessing the detectability of such a dipole, it does not matter if it is purely of kinematic origin, or if there is also an intrinsic contribution.Therefore we can simulate the AGN dipole as if it were due to an observer velocity which is 5 times larger than expected.
We find that the chirp mass from BNSs provides the estimator with the lowest variance, i.e. the largest signal-to-noise ratio, followed by the number counts of BBHs and BNSs. This is not surprising given the predominant impact of this parameter on the BNS inspiral waveform; the possibility of exploiting this fact has already been the subject of various investigations (Finke et al. 2022; Chernoff & Finn 1993; Taylor et al. 2012; Taylor & Gair 2012). Our analysis further shows that, combining all estimators, we could detect a dipole consistent with the CMB one at > 1σ significance with more than $10^6$ events, achieved with 5 years of observations of ET and CE. If the dipole is as large as predicted in the AGN case, then we can detect it at > 3σ significance with $10^5$ events already, achieved in a year of observation.
The rest of the paper is structured as follows: in Sec. 2 we derive a theoretical modelling of the luminosity distance and chirp mass dipoles. In Sec. 3 we define estimators of these dipoles and compute their variance and covariance. In Sec. 4 we measure the dipoles from our synthetic catalogues of events and assess the detectability of the six estimators (three for BBHs and three for BNSs) and of their combination. In Sec. 5 we forecast how well the observer velocity can be measured with XG detectors and show that an imperfect knowledge of threshold effects does not degrade the constraints significantly. We conclude in Sec. 6.
THEORETICAL MODELLING OF THE LUMINOSITY DISTANCE AND CHIRP MASS DIPOLE
To compute the impact of the observer velocity on the luminosity distance and the chirp mass measured from GWs, we follow the same steps as in Mastrogiovanni et al. (2023), which computed the dipole in the number of events. Since the observer velocity is significantly smaller than the speed of light, we keep only terms at linear order in $v_0/c$ in the derivation.
Luminosity distance
The luminosity distance of a source situated at conformal distance $\chi$ from the observer is defined through the ratio between the intrinsic luminosity of the source and the observed flux, where $\mathbf{n}$ denotes the direction in which the observer sees the source. The observer velocity affects the flux measured by the observer and consequently modifies the measured luminosity distance. At first order in $v_0/c$ the luminosity distance is given by Eq. (2) (see Bonvin et al. (2006a)), where $\bar{d}_L(z)$ denotes the background luminosity distance in a homogeneous and isotropic universe. Note that other perturbations, in particular the peculiar velocity of the source, affect the luminosity distance besides the observer velocity; see Bonvin et al. (2006a) for the full expression. However, here we are interested in the dipole, which is strongly dominated by the observer velocity, and the other contributions are negligible compared to typical uncertainties of about 20% for luminosity distance measurements from GW events (Callister et al. 2020; Iacovelli et al. 2022).
The luminosity distance enters directly in the amplitude of the GWs, which decays as $1/d_L$. Combining measurements from different interferometers allows one to measure $d_L$ for each binary system. We can then compute the mean luminosity distance from a population of binary systems in direction $\mathbf{n}$, Eq. (3). Here, $N$ denotes the number of events detected at distance $d_L$ with a signal-to-noise ratio (SNR) above a threshold value $\rho_*$, summed over all intrinsic binary masses $m_1$ and $m_2$ (when we refer to "detected" events we mean events above the SNR threshold; the dependence on $\rho_*$ is stated explicitly only when it is not integrated over masses). Following Mastrogiovanni et al. (2023), we adopt a simplified, zero Post-Newtonian (0PN) order for the SNR, which can be computed as in Eq. (5) (Finn & Chernoff 1993), where $\mathcal{M}$ is the redshifted chirp mass as measured in the detector frame (in the following we drop "redshifted" for simplicity). In Eq. (5), the total detector-frame mass of the system and the GW frequency corresponding to the innermost stable circular orbit also enter. As for $\Theta^2$, it is a geometrical factor that accounts for the binary inclination angle and the average detector antenna patterns, where $\iota$ is the angle formed by the normal vector of the orbital plane and the line of sight; the numerical prefactor in that expression represents an average over the detector antenna patterns. The function $\mathcal{F}$ is calculated from the integral of the power spectral density $S_n(f)$, following Maggiore (2007). Since the SNR (5) depends on the luminosity distance and on the chirp mass, which are both affected by the observer velocity, the number of detected events above threshold $\rho_*$ is modified by the observer velocity. These threshold effects add to the effect of aberration, which modifies the number of events per solid angle. As shown in Mastrogiovanni et al. (2023), the final result at linear order in $v_0/c$ is given by Eq. (8).
The function $A(\chi, m_{1,2})$ is defined in Eq. (10). Inserting Eqs. (8) and (2) into Eq. (3) and keeping only linear terms in $v_0/c$, we finally obtain the mean luminosity distance as a monopole plus a fractional dipole, with a prefactor $\alpha_d$ given by the integral in Eq. (14), which involves the mean probability density function of detected sources. Note that in Eq. (13) we have introduced the parameter $\alpha_d$, factorising out a minus sign in such a way that $\alpha_d = 1$ in the case where there are no threshold effects. We see from Eq. (14) that if there are threshold effects but the function defined in Eq. (9) and $A$ are constant in $\chi$, then the integral vanishes and $\alpha_d = 1$. We can indeed rewrite the monopole as in Eq. (15), since the dipole contributions in the luminosity distance and in the number of detected events vanish when integrated over directions (note that here we neglect terms of order $(v_0/c)^2$). Inserting this into Eq. (14), we see that the integral exactly vanishes if these functions are independent of $\chi$. Since they are expected to evolve slowly with $\chi$, we expect threshold effects to be partially suppressed, meaning that $\alpha_d$ will be close to 1 even when they are non-zero. In Sec. 4.4, we compute $\alpha_d$ from our population model of BNSs (where threshold effects are present) and compare this with measurements of $\alpha_d$ from our synthetic catalogues. We find that $\alpha_d$ can be well predicted and, as expected, that it is very close to 1. This is important, because $\alpha_d$ is fully degenerate with the observer velocity $v_0/c$: if we do not know it, we cannot extract information on $v_0/c$ from a population of sources with threshold effects.
Chirp mass
The chirp mass that can be extracted from the GW signal is the redshifted chirp mass, i.e. the product of the intrinsic chirp mass of the binary system and (one plus) the redshift. Since the latter is affected by the observer velocity, to first order the redshifted chirp mass is given by Eq. (17), where $\bar{\mathcal{M}}(\chi, m_{1,2})$ is the background (redshifted) chirp mass in a homogeneous and isotropic universe. Note that, as for the luminosity distance, other perturbations contribute to the redshifted chirp mass, but they can be neglected since they have a negligible impact on the dipole. Analogously to Eq. (3), we can compute the mean detected chirp mass in direction $\mathbf{n}$, averaged over all binary systems (i.e. over all masses $m_{1,2}$). This can be expanded in powers of $\mathbf{n}\cdot\mathbf{v}_0$ as a monopole, $\mathcal{M}^{(0)}$, plus a dipole term. Combining Eqs. (8) and (17) allows us to write the fractional dipole term with a prefactor $\alpha_{\mathcal{M}}$, the chirp mass analogue of $\alpha_d$, defined in Eq. (22). As for the luminosity distance, threshold effects are non-zero only if the functions of Eqs. (9) and (10) vary with distance $\chi$. Kashyap et al. (2023) have also computed the dipole in the mean chirp mass (which they call the mass intensity). Their modelling differs from ours, since they assume that all events above a given mass threshold are detected. In practice, this is not the case: what determines whether an event is detected is the SNR, which depends not only on the mass of the binary system but also on its distance from the observer, as can be seen from Eq. (5). This has an impact on the modelling of threshold effects, leading to a different expression for the dipole signal.
Number counts
The dipole in the GW number counts has been derived in Mastrogiovanni et al. (2023) and follows directly from Eq. (8). The total number of GW events detected in direction $\mathbf{n}$ takes the same monopole-plus-dipole form as the luminosity distance and the chirp mass, with a prefactor $\alpha_N$ defined by the integral in Eq. (25).
STATISTICAL ESTIMATORS OF THE DIPOLES
We now build statistical estimators for the three dipole signals defined in Sec. 2.
Luminosity distance
For each direction $\mathbf{n}'$ in the sky, we can build the observable defined in Eq. (26), where $\theta'$ is the angle between the (a priori unknown) dipole velocity direction and $\mathbf{n}'$. This observable is maximised when evaluated along the dipole direction and, in the absence of threshold effects, it is exactly equal to the observer velocity, $v_0/c$, at the maximum.
Let us now build a statistical estimator for this observable. We divide the sky into angular pixels of equal solid angle, and associate with each pixel a unit vector $\mathbf{n}$ pointing to its centre. Within any pixel there are a number of detected events, each with a corresponding measured luminosity distance. The estimator is then defined in Eq. (27). As shown in Appendix A, if shot noise and the uncertainty in the measurement of the luminosity distance are subdominant (i.e. smaller than their mean), the estimator (27) is unbiased.
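A minimal sketch of a pixel-based estimate of the mean-distance dipole is given below. It is not the paper's exact estimator of Eq. (27): it simply averages the measured luminosity distances in equal-area HEALPix pixels (assuming the healpy package is available) and fits a monopole plus dipole to the pixel means, which illustrates the same idea.

```python
import numpy as np
import healpy as hp

def mean_distance_dipole(theta, phi, d_l, nside=4):
    """Sketch (not the paper's Eq. (27)): average d_L in equal-area HEALPix
    pixels, then least-squares fit a monopole + dipole to the pixel means.
    Assumes every pixel contains at least one event (use a coarse nside)."""
    d_l = np.asarray(d_l, dtype=float)
    npix = hp.nside2npix(nside)
    ipix = hp.ang2pix(nside, theta, phi)                 # pixel index of each event
    means = np.array([d_l[ipix == p].mean() for p in range(npix)])
    nvec = np.column_stack(hp.pix2vec(nside, np.arange(npix)))  # pixel centre vectors
    design = np.column_stack([np.ones(npix), nvec])      # model: a0 + a . n
    coeff, *_ = np.linalg.lstsq(design, means, rcond=None)
    monopole, dipole = coeff[0], coeff[1:]
    return dipole / monopole   # fractional dipole vector, ~ alpha_d * (v0/c) * v_hat
```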
We then compute the variance of this estimator. The estimator can be written as a ratio of two variables, defined in Eq. (28). The variance of such a ratio can be written as in Eq. (30), provided the variances of the two variables are significantly smaller than their means. We can show that the third term in Eq. (30) vanishes and that the second one is smaller than the first by a factor $(v_0/c)^2$ and can therefore be neglected (see Appendix A for a detailed derivation).
The variance of (27) is therefore directly proportional to the variance of its numerator. To compute it, we divide each solid angle into distance bins of size $\Delta\chi$, and rewrite the sum over events as a sum over these bins. The sky is therefore divided into pixels of angular size $\Delta\Omega$ and radial size $\Delta\chi$; each pixel, centred on a direction $\mathbf{n}$ and distance $\chi$, contains a number of detected events with corresponding luminosity distances. Due to shot noise, the number of events fluctuates around the mean from pixel to pixel, Eq. (32). Note that here we neglect the fluctuations due to the uncertainty on the sky localisation: we assume that this uncertainty is of ∼3 degrees, significantly smaller than the size of the angular pixels that we consider. In addition to shot noise, the variance of the estimator is affected by the uncertainty in the measurement of the luminosity distance, Eq. (33), where $\Delta d_L$ is the error in the measurement of $d_L$ for one binary system in a given angular and radial bin. Note that in Eqs. (32)-(34) we neglect the contributions from the observer velocity, which would lead to subdominant terms in the variance: the main contribution comes from a "fluctuation" of the monopole (due to shot noise and measurement uncertainty) that would mimic a dipole. Using the fact that the errors on the counts and on the luminosity distance are uncorrelated and that the number counts follow Poisson statistics, we obtain the variance in Eq. (35) (see Appendix A for more detail), where $N_{\rm tot}$ is the total number of detected events and $\langle(\Delta d_L)^2\rangle$ denotes the typical error on the luminosity distance measurement at distance $\chi$. The term in the first line of Eq. (35) is the contribution from the luminosity distance uncertainty to the variance of the estimator. If $\langle(\Delta d_L)^2\rangle$ is independent of distance, we see that the variance from this term scales as $1/N_{\rm tot}$, as expected for $N_{\rm tot}$ measurements of uncorrelated quantities. The contributions in the second line are due to shot noise: the mean distance in a given solid angle depends on the radial distribution of events. Since shot noise generates fluctuations in the number of events, a given solid angle can have more events closer to the observer, whereas another solid angle can have more events further away. This generates fluctuations in the mean distance that can mimic the presence of a dipole. Note that since we are measuring the mean distance per angular pixel and not the sum of distances per pixel, we are not sensitive to the impact of shot noise on the total number of events per pixel, but rather to its impact on the radial distribution of sources.
Taking the continuous limit, the variance can be rewritten as in Eq. (36), where the radial distribution of detected sources, integrated over masses, appears. In Fig. 1, we plot the variance (36) for BBHs as a function of $N_{\rm tot}$ for different relative errors on the luminosity distance measurements, ranging from 10% to 100% (and constant in $\chi$). We use the radial distribution of sources obtained from our simulated catalogues (see Sec. 4 for more detail). We see that both the shot noise contribution and the distance-uncertainty contribution scale as $1/N_{\rm tot}$. The relative importance of the two terms therefore depends only on the uncertainty on $d_L$. For an uncertainty of up to 20%, the shot noise contribution completely dominates the variance. If the uncertainty reaches 50%, however, its contribution to the variance is no longer negligible and the SNR is degraded. Similar results are obtained for BNSs, see Fig. A1 in Appendix A. The only difference comes from the radial distribution of sources, which differs for BBHs and BNSs and leads to a slightly larger variance for BBHs (due to the wider redshift range of BBHs, which can be observed at higher redshift than BNSs). In the following we assume a 20% uncertainty on the measurement of $d_L$ as a proxy for typical distance uncertainties of GW events (Callister et al. 2020; Iacovelli et al. 2022), meaning that we are in the regime where shot noise completely dominates.
Chirp mass
The same procedure can be applied to the chirp mass. We first build an observable equivalent to (26), Eq. (37). We then repeat the process of dividing the detected observations into sky angular pixels, as in Eq. (27), to associate with this observable the statistical estimator of Eq. (38). As for the luminosity distance, this estimator is found to be unbiased, assuming that shot noise and the uncertainty in the measurement of the chirp mass are subdominant compared to the mean. For the variance, we generalise the computation of Sec. 3.1 and Appendix A, done for the luminosity distance, to data distributed not only in radial bins but also in bins of the two source masses, $m_{1,2}$, which constitute the binary. This is needed since, unlike the luminosity distance, the redshifted chirp mass depends on all of $(\chi, m_{1,2})$. As for the luminosity distance, we neglect subdominant contributions from the observer velocity in the variance. We obtain Eq. (40), where one index labels the radial bins while the index ℓ runs over the $m_{1,2}$ bins; $\bar{\mathcal{M}}_\ell$ is the expected redshifted chirp mass in such a bin, while $\bar{N}_\ell$ is the expected number of detected events in that bin, such that summing over ℓ recovers the expected counts per radial bin. The quantity $\Delta\mathcal{M}_\ell$ denotes the typical uncertainty in the measurement of the chirp mass in a radial bin centred at $\chi$ and in the mass bin labelled by ℓ.

Figure 1. Variance of the luminosity distance estimator for BBHs, plotted as a function of the total number of events $N_{\rm tot}$. The different panels are for different values of the relative error on $d_L$ (assumed to be independent of redshift). We show separately the shot noise contribution and the distance-uncertainty contribution, as well as the total.
Taking the continuous limit, the variance (40) becomes Eq. (41). As for the luminosity distance, the variance contains two contributions: one from the measurement uncertainty of the chirp mass, and one from shot noise. In addition to affecting the radial distribution of events in a given solid angle, shot noise also affects the distribution in the masses $m_{1,2}$. This effect is encoded in the dependence of the distribution function on the masses, and it also generates fluctuations in the mean chirp mass that can mimic a dipole. In Fig. 2 we plot the different contributions as a function of $N_{\rm tot}$, assuming a 10% uncertainty in the measurement of the chirp mass (expected for BBHs) and a 1% uncertainty (expected for BNSs), see Iacovelli et al. (2022). We see that in both cases shot noise completely dominates over the mass uncertainty. Comparing the shot noise contribution for BBHs and BNSs, we see that it is significantly smaller for BNSs, by a factor of 3. This is due to the fact that BNSs have a narrower mass range than BBHs: the mass distribution of BNSs only spans about 2 $M_\odot$, while BBHs are observed between 5 $M_\odot$ and 100 $M_\odot$. Since shot noise changes the mass distribution of events, it has more impact in the latter case. For example, a shot noise fluctuation generating one more event at the higher end of the mass range will affect the mean mass of BBHs much more drastically than the mean mass of BNSs.
Number counts
The dipole estimator for the number counts and its variance have been derived in detail in Mastrogiovanni et al. (2023); we state the final result, Eq. (42), for completeness. The variance of the number count estimator is due to shot noise only and is given by Eq. (44).
Covariance of the estimators
In order to use the distance, mass and number counts estimators together to measure the dipole from the same population of GW sources, it is necessary to compute their covariance.Below we show that the distance and mass estimators are correlated, but that these estimators are both uncorrelated with the number count estimator.Naturally, the estimators for two different populations of sources (for example BBHs and BNSs) are uncorrelated.
Mass-distance covariance
The covariance is calculated following the same steps as for the variance in Secs. 3.1 and 3.2. Here we assume that the uncertainty in the measurement of the chirp mass is uncorrelated with the uncertainty in the measurement of the distance. The chirp mass is mainly measured from the phase, whereas the luminosity distance is measured through the amplitude of the wave. Hence, this is certainly a good approximation for BNSs, which have a long inspiral in the detectors' band and thus a good phase determination. Moreover, inspection of typical BBH signals shows that the $d_L$-$\mathcal{M}$ entries of the normalised Fisher matrix are generally smaller than the diagonal ones, which we have already shown to give subdominant contributions to the total noise (see Figs. 1 and 2). This justifies the assumption of uncorrelated chirp mass and distance measurement uncertainties for BBH sources as well.
We obtain the covariance, which in the continuous limit becomes Eq. (46). From Eq. (46) we see that the two estimators are correlated through shot noise. Indeed, if in a given solid angle there are more events closer to the observer, the mean distance in that solid angle will be smaller than on average, and so will the mean chirp mass. Hence a shot noise fluctuation that can mimic a dipole in one estimator will automatically mimic a dipole in the other estimator as well.
Count-distance and count-mass covariance
We start by computing the covariance between the number count and the distance estimators. Since shot noise is uncorrelated with the measurement uncertainty on the distance, we obtain, following the same steps as previously, the expression in Eq. (47). Using that the number of detected events follows a Poisson distribution, we can easily show that the two terms in Eq. (47) exactly vanish.
A similar calculation shows that the covariance between the mass and number count estimators also vanishes. This is not surprising: contrary to the distance and mass estimators, the number count estimator is not sensitive to the radial distribution of events. Only fluctuations in the total number of events in a solid angle can mimic a dipole in the number counts. On the contrary, the mass and distance estimators are not sensitive to the total number of events in a solid angle (since we divide by the total number of events to obtain the mean) but rather to their radial distribution. As a consequence, a shot noise fluctuation that mimics a dipole in the number count estimator does not necessarily mimic a dipole in the mass and distance estimators, and vice versa.
Optimal combination
In the case where there are no threshold effects, the six estimators (three for BBHs and three for BNSs) are estimators of the same quantity: $v_0/c\,\cos\theta'$. We can therefore look for an optimal estimator of this quantity, i.e. an estimator that maximises the signal-to-noise ratio. In the following we drop the dependence on $\mathbf{n}'$, since the optimal estimator can be defined in exactly the same way for each value of $\mathbf{n}'$. As a first step, we build combinations of estimators that are independent. Since only the distance and the mass estimators are correlated for the same population of sources, we simply need to diagonalise that part of the covariance matrix. We obtain two new estimators (per population), given by Eq. (48). The denominator in Eq. (48) ensures that the new estimators have mean $v_0/c\,\cos\theta'$.
We can now build an optimal estimator of $v_0/c\,\cos\theta'$ by linearly combining the independent estimators as in Eq. (50), with the index labelling all the possible observables (number counts, plus and minus combinations) for both BBHs and BNSs. This combination is the one that maximises the SNR in every direction $\mathbf{n}'$. Note that the normalisation in Eq. (50) is there to keep the mean of the combined estimator equal to $v_0/c\,\cos\theta'$, but it is otherwise irrelevant.
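For independent, unbiased estimators of the same quantity, the SNR-maximising linear combination reduces to inverse-variance weighting. The sketch below illustrates this idea for the decorrelated estimators discussed above; the numerical values are placeholders, and the exact normalisation of the paper's Eq. (50) may differ.

```python
import numpy as np

def combine_estimators(values, variances):
    """Inverse-variance combination of independent, unbiased estimators of the
    same quantity (here v0/c * cos(theta')).  This weighting maximises the
    signal-to-noise ratio; a sketch of the idea behind the optimal combination."""
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    combined = np.sum(w * values) / np.sum(w)
    combined_var = 1.0 / np.sum(w)
    return combined, combined_var

# Illustrative placeholders: dipole estimates and variances for the six
# decorrelated channels (counts, plus, minus) x (BBH, BNS) in some direction n'.
vals = [5.8e-3, 6.3e-3, 6.1e-3, 5.9e-3, 6.2e-3, 6.0e-3]
vars_ = [4e-6, 6e-6, 8e-6, 4e-6, 5e-6, 1e-6]
print(combine_estimators(vals, vars_))
```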
If threshold effects are relevant, the mass, distance and number count estimators are not estimators of the same quantity, since the respective $\alpha$'s are different. In this case, we need to divide each estimator by its respective $\alpha$ before combining them. Since the $\alpha$'s can be determined only with limited precision (using population models, see Sec. 4.4 for more detail), this induces additional contributions to the variance of the optimal estimator and degrades the SNR. In Sec. 5.2, we quantify this effect using a Fisher analysis.
MEASUREMENT OF THE DIPOLE FROM SYNTHETIC CATALOGUES OF GW SOURCES
To test our estimators we build synthetic catalogues of BBHs and BNSs and use our estimators on these simulated events.We compare the measurement with our theoretical modelling for the signal and the variance.
Simulating BBHs and BNSs
To build catalogues of BBH and BNS sources, we draw source-frame masses for BBHs and BNSs from the same probabilistic models used in Mastrogiovanni et al. (2023) (see Appendix A therein). These mass models are consistent with current BBH and BNS detections (Iacovelli et al. 2022). The redshift distribution of GW sources is determined by the merger rate model. In our simulation, the merger redshift is distributed according to a parametric merger rate model controlled by three parameters (with fiducial values 2.7, 3 and 2) and by the differential comoving volume $\mathrm{d}V_c/\mathrm{d}z$. The distribution of $\cos\iota$ is chosen to be uniform. Once a set of BBHs and BNSs is drawn, we add the effect of the observer velocity. Aberration is included by shifting the angular position to $\theta' = \theta - (v_0/c)\sin\theta$, where $\theta$ is the angle between the source position and the observer velocity. The luminosity distance and chirp mass are modified according to Eqs. (2) and (17), respectively. We produce two copies of both the BBH and the BNS catalogue: one with the "CMB value" of the observer velocity, $v_0/c = 1.2\times 10^{-3}$, and one with the "AGN value", which is 5 times larger: $v_0/c = 6\times 10^{-3}$.
We can then calculate the SNR of each event using Eq. (5). We consider a network of XG detectors including ET (Punturo et al. 2010) and two CE (Dwyer et al. 2015; Reitze et al. 2019). The power spectral density of ET is set to the one used in Iacovelli et al. (2022) and Mastrogiovanni et al. (2023), while for CE we take the power spectral density provided by the CE consortium. If the SNR from this network exceeds a detection threshold of $\rho_* = 9$, we label the binary as detected.
Finally, we add an extra step in the simulation to mimic the fact that we will not be perfectly able to measure the sky location, luminosity distance and redshifted chirp mass of the source.Once a binary is detected, we do not save its true values for the sky position, distance and chirp mass but instead, we register a scattered value around the true one.We include Gaussian scatter of the sky location by 3 degrees around the true sky position and of the luminosity distance by 20% of its true value.For the chirp mass, we use a scattering of 1% for BNSs and of 10% for BBHs.These are typical errors that we might expect to obtain with XG detectors (Iacovelli et al. 2022).Note that the 20% uncertainty on the luminosity distance implies that shot noise is the dominant contribution, as can be seen from Figs. 1 and A1.
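The following sketch illustrates the catalogue-level operations described in this section: aberration of the sky position, a first-order dipole modulation of distance and chirp mass, and Gaussian measurement scatter. The sign of the (1 - v0/c cos θ) modulation is an assumption standing in for Eqs. (2) and (17), which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
V0_OVER_C = 6e-3          # "AGN" value used in the text; the CMB value is 1.2e-3

def apply_observer_velocity(theta, d_l, mchirp, v0c=V0_OVER_C):
    """Apply the observer-velocity dipole to one source.
    theta is the angle between the source direction and the velocity.
    The (1 - v0/c cos(theta)) factor is an assumed sign convention standing
    in for the first-order modulations of Eqs. (2) and (17)."""
    theta_obs = theta - v0c * np.sin(theta)          # aberration, as in the text
    mod = 1.0 - v0c * np.cos(theta)                  # assumed first-order modulation
    return theta_obs, d_l * mod, mchirp * mod

def scatter_observables(theta, d_l, mchirp, is_bns):
    """Gaussian measurement scatter quoted in the text: ~3 degrees on the sky
    position, 20% on d_L, and 1% (BNS) or 10% (BBH) on the chirp mass."""
    theta_meas = theta + rng.normal(0.0, np.radians(3.0))
    d_meas = d_l * (1.0 + rng.normal(0.0, 0.20))
    m_sigma = 0.01 if is_bns else 0.10
    m_meas = mchirp * (1.0 + rng.normal(0.0, m_sigma))
    return theta_meas, d_meas, m_meas
```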
The luminosity distances, redshifted chirp masses and sky positions are then used to compute the estimators using Eqs. (27), (38) and (42). The results are shown in Fig. 3 for the case of the AGN observer velocity. We show the values of the estimators obtained for $10^6$ BBH detections (left panel) and $10^6$ BNS detections (right panel) as a function of the angle $\theta'$ between the true direction of the dipole and a chosen direction $\mathbf{n}'$. The spread in the signal comes from the fact that for a given angle $\theta'$ we have pixels in different azimuthal directions, which give slightly different values for the dipole estimator (due to shot noise and measurement uncertainties). At the equator there are clearly more azimuthal pixels than at the pole (exactly at the pole there is just one), leading to a larger spread at the equator.
As discussed in Mastrogiovanni et al. (2023), for BBHs threshold effects do not contribute to the amplitude of the dipole, since all events are above the threshold (see Fig. 2 of Mastrogiovanni et al. (2023)). As a consequence, at the maximum, i.e. when $\mathbf{n}'$ coincides with the direction of the observer velocity, the estimators are roughly equal to $v_0/c = 6\times 10^{-3}$. The peak of the dipole estimators can be slightly shifted from the true position due to the variance. For BNSs, we see that the amplitude of the dipole at the peak is also very close to the observer velocity. This is due to the fact that, as we show in Sec. 4.4, the amplitude of threshold effects is actually small for BNSs, below 10%, which is smaller than the spread in the signal.
Fig. 3 also reports the 1σ, 2σ and 3σ fluctuation levels (grey areas) of the estimators due to shot noise and measurement uncertainties on the luminosity distance and chirp mass. The fluctuation levels are generated by shuffling the GW detections isotropically over the sky a hundred times. We see that the chirp mass from BNSs is the estimator with the smallest variance, consistent with the theoretical results of Sec. 3. For all estimators, the fluctuation levels obtained through sky shuffling agree with the theoretical calculation of the variance from Eqs. (36), (41) and (44), indicated with dashed lines on the plot (see Sec. 4.2 for a more detailed comparison). When the estimator values exceed a certain noise threshold, the cosmic dipole can be detected (see Sec. 4.3 for a more in-depth discussion of detectability).
Comparison of the theoretical variance and covariance with simulations
The first sanity check that we perform is to see if our predictions of the variance and covariance agree with the numerical simulations.
As explained above, the numerical variance and covariance are simply obtained by shuffling the sources isotropically over the sky.This removes the true dipole signal, meaning that the remaining fluctuations are due to shot noise and measurement uncertainties of the luminosity distance and chirp mass.Here we use 2'000 sky shufflings to obtain an accurate numerical estimate of the variance and covariance.
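A sketch of the sky-shuffling procedure used to estimate the noise levels is given below; `estimator` stands for any of the dipole estimators evaluated along a fixed direction and is a hypothetical user-supplied callable.

```python
import numpy as np

def noise_levels_by_shuffling(d_l, mchirp, estimator, n_shuffles=2000, seed=1):
    """Estimate the noise distribution of a dipole estimator by re-assigning
    the detected events to isotropic random sky positions, as described in the
    text.  `estimator(theta, d_l, mchirp)` returns the estimator value along a
    fixed direction n' for the given (shuffled) angles theta."""
    rng = np.random.default_rng(seed)
    n = len(d_l)
    samples = np.empty(n_shuffles)
    for i in range(n_shuffles):
        # isotropic directions: cos(theta) uniform in [-1, 1]
        theta = np.arccos(rng.uniform(-1.0, 1.0, size=n))
        samples[i] = estimator(theta, d_l, mchirp)
    # standard deviation plus a few upper quantiles of the noise distribution
    return np.std(samples), np.quantile(samples, [0.68, 0.95, 0.997])
```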
Theoretically, the variance of the number count estimator is given by Eq. (44) and simply depends on the total number of detected events. On the other hand, the variance of the mass and distance estimators, as well as the covariance between them, has to be computed from the integrals (36), (41) and (46), which require a model for the source distribution in distance and masses. To estimate this, we numerically generate GW events following reference mass and redshift distribution models, and approximate the integrals in (36), (41) and (46) by sampling those distributions. The sampled number of detections is increased until we reach numerical convergence.
The top plots of Fig. 4 show the simulated variance compared with the theoretical one, for BBHs and BNSs, as a function of the total number of events $N_{\rm tot}$. The agreement between the simulated and theoretical variances is excellent. As expected from Eqs. (36), (41) and (44), the variances scale as $N_{\rm tot}^{-1}$. For the number count estimator, the variance is the same for BBHs and BNSs: it depends only on the total number of events. For the chirp mass estimator, on the other hand, the variance for the BNSs is smaller by a factor of 3. As already discussed in Sec. 3.2, this is due to the narrower mass range of BNSs. For the luminosity distance estimator we see that the variance is also slightly smaller for BNSs. Again, this is due to the slightly narrower radial distribution of BNS events compared to BBH events.
Comparing the different estimators, we see that the one with the smallest variance is the mass estimator for BNSs. This is due to the fact that shot noise only affects the mass estimator by changing the radial distribution and the $m_{1,2}$ distribution of sources. If the chirp mass were constant in $m_{1,2}$ and in $\chi$, the last two terms in the second line of Eq. (41) would cancel each other. Although the chirp mass does depend on $m_{1,2}$ and $\chi$, the distribution of BNSs in masses and redshift is narrow enough for a reduction to occur. This leads to a shot noise contribution smaller than the one for the number count estimator, where there is no such reduction. For BBHs, however, the wider range leads instead to an increased variance.
Finally, we see that the variance of the distance estimator is slightly larger than that of the number counts both for BBHs and BNSs, due to the large variation of the distance with $\chi$. The luminosity distance indeed varies as $\chi(1+z)$, while the redshifted chirp mass varies only as $(1+z)$, i.e. significantly more slowly.
Since the means of the three estimators are very similar (see Sec. 4.4), the mass estimator for BNSs is optimal in terms of SNR. On the other hand, BNSs are affected by threshold effects, meaning that the $\alpha$ coefficients need to be modelled if one wants to measure the observer velocity. Including the uncertainty in the modelling of the $\alpha$'s in the analysis generates an extra contribution to the variance, which needs to be accounted for. In Sec. 5 we quantify this effect and show that, despite it, the BNS mass estimator strongly contributes to the constraints on the observer velocity.
The bottom panels of Fig. 4 show the values of the correlations (i.e. the covariance divided by the square root of the respective variances) as a function of the number of BBH and BNS detections.As we can see from the plot, also in the simulations, we obtain that the number count estimator does not correlate with the mass and distance estimators.On the other hand, as expected, we find that the distance and mass estimators are positively correlated, in excellent agreement with the theoretical calculation.As explained before, this is due to the fact that the two observables are similarly sensitive to the radial distribution of sources.
Detection efficiency of the dipole
We now assess the detection efficiency of the dipole from the three estimators, for the BBH and BNS populations. For this we report in Figs. 5 and 6 the detection probability versus false alarm probability (FAP) for several cases. The FAP identifies a threshold for the dipole detection and is defined as the probability that a random fluctuation in the absence of a dipole (due to shot noise or measurement uncertainty) would result in a false positive. The detection probability is defined as the probability that, in the presence of a dipole, the estimator exceeds that threshold.

Figure 7. Detection probability for the cosmic dipole (vertical axis) versus false alarm probability (horizontal axis) using the combined estimator for a mixed population of GW sources. The different columns consider different observation times. Following Iacovelli et al. (2022), for 1 year of observations we have taken $7.5\times 10^4$ BBH detections and $10^5$ BNS detections. The numbers of events expected in 6 months and in 3 and 5 years of observations are found by linearly scaling these fiducial values. The first row corresponds to a dipole consistent with the CMB cosmic dipole and the second to a dipole consistent with the AGN dipole. The solid curve marks the detection probability/FAP relation in the case of fluctuations arising from an isotropic background. The detection probability is calculated by generating 200 population realisations, while the FAP threshold is calculated using 2'000 noise realisations obtained with the sky shuffling method. The vertical solid, dashed and dotted lines mark the 1σ, 2σ and 3σ false alarm probabilities.

[...] related to the fact that the variance and covariance are generated using only 2'000 sky shufflings.
In Fig. 6 we show the results for BNSs. We clearly see that in this case the mass estimator has the highest detection efficiency, better than the number count estimator. This is directly related to the shot noise suppression discussed in Sec. 4.2. The distance estimator performs better for BNSs than for BBHs, due to the smaller redshift range spanned by BNSs, which also reduces the shot noise contribution with respect to the BBH case. Despite this, the efficiency of the distance estimator remains below that of the mass and number count estimators. Combining the three estimators leads to a non-negligible gain in terms of detection efficiency.
Finally, we also simulated a more realistic scenario for the detection of the cosmic dipole, where we do not perform separate analyses for BNSs and BBHs, but instead combine all the BBHs and BNSs detected in a given observing time. Note that in this combination we still apply the estimators separately to the BNS and BBH populations, since we want to preserve the fact that BNSs have a smaller mass range and radial range than BBHs. We consider that in one year the network of ET + 2 CE detectors would be able to observe $7.5\times 10^4$ BBHs and $10^5$ BNSs. In Fig. 7 we show the detection efficiency of the combined estimator for 6 months, 1 year, 3 years and 5 years of observations. On the one hand, we find that, with 5 years of observations and a bit less than $10^6$ sources detected, we can detect the dipole with 1σ significance, but it will be unlikely to reach a high significance of 3σ. On the other hand, we find that a dipole consistent with the AGN one could be detected with high significance already with one year of observations, thanks to the BNS population. Therefore, a non-detection of the cosmic dipole in the first year of XG detectors would automatically rule out the AGN value of the dipole, thus providing a strong indication for the presence of un-modelled systematics in the AGN measurements.
Modelling of the 𝛼's
The results of the previous section show that the mass estimator for BNSs is better than the count estimator, both for BNSs and BBHs, in terms of detectability of the dipole signal. However, BNSs (contrary to BBHs) are affected by threshold effects. In order to use the BNS mass estimator to measure the observer velocity, it is necessary to have a modelling of the coefficients $\alpha$.
The $\alpha$ parameters depend on the population of sources, through the functions defined in Eqs. (9) and (10). We use the mass model defined in Appendix A of Mastrogiovanni et al. (2023) to describe the population of BNSs. We then bin the events in $(\chi, m_{1,2}, \rho_*)$ around $\rho_* = 9$, and compute with finite differences the derivative with respect to $\rho_*$ to obtain the function of Eq. (9). This needs to be done for each $(\chi, m_{1,2})$ bin, at $\rho_* = 9$. We then use interpolation in $\chi$ and $m_{1,2}$ to promote the binned values to a function. The function $A$ depends on $\mathcal{F}$, which quantifies the sensitivity of the detector. We first evaluate $\mathcal{F}$ numerically on a discrete set of points and then interpolate between them, which allows us to attribute one value of $A$ to each event. We can then estimate the integrals (14), (22) and (25) to obtain $\alpha_d$, $\alpha_{\mathcal{M}}$ and $\alpha_N$.

Figure 8. The histogram width is due to shot noise and to the distance and mass uncertainties, which vary over the 200 realisations. The first row of plots considers $10^6$ detections and the second row $10^7$ detections. The first column is for BBHs, while the second is for BNSs. The plots are generated with the AGN value of the observer velocity.
The results are shown in Table 1. We see that the $\alpha$ values are very close to one in all three cases. Threshold effects seem to be slightly suppressed in the mass and distance cases with respect to the number counts, probably due to the fact that the integrals (14) and (22) vanish if the functions of Eqs. (9) and (10) are constant in $\chi$.
Since we know the observer velocity in our synthetic catalogues of BNS events, we can use the dipole to measure the parameters $\alpha_d$, $\alpha_{\mathcal{M}}$ and $\alpha_N$ and compare them with our modelling. In reality we would not be able to do that, and the only quantity that can be measured is the product of $v_0/c$ and the respective $\alpha$. In Figs. 8 and 9 we show the values obtained for the $\alpha$'s assuming an observer velocity consistent with the AGN dipole and with the CMB dipole, respectively. For BBHs, the histograms are centred around 1, as expected. For BNSs, the histograms are slightly displaced due to threshold effects. The peak is in excellent agreement with the theoretical predictions for the $\alpha$'s. This is important for two reasons. First, it shows that threshold effects are indeed small and should not spoil our measurement of the observer velocity too much. In particular, one of the goals of using GWs to measure the dipole is to determine whether GWs are consistent with the AGN dipole or with the CMB dipole. Having threshold effects of the order of 10% means that this test can be done robustly also using BNSs: if we find a dipole consistent with the AGN one, it would be very unlikely that this was due to very large threshold effects increasing the $\alpha$'s by a factor of 5. Second, the plots show excellent agreement between the theoretical modelling and the measured $\alpha$'s. Since in practice the $\alpha$'s can only be modelled (and not measured), it is important to know that this can be done in a robust way. In Fig. 10, we check the dependence of the $\alpha$'s on the population model. We vary the value of one of the parameters of the population model by 10% around its fiducial value and compute the histograms of the $\alpha$'s. Comparing the width of these histograms with the one due only to shot noise and measurement uncertainty (Fig. 8), we see that varying the model does not generate an additional spread in the $\alpha$'s. This means that the uncertainty from the choice of model is smaller than the variance of the dipole.
DETERMINATION OF THE AMPLITUDE OF THE OBSERVER VELOCITY
Let us now use the Fisher formalism to estimate how well we can measure the amplitude of the velocity, v0/c, by combining the different estimators. We consider two cases: the first one where the only unknown is the observer velocity, i.e. we assume that the α parameters are perfectly known; and the second one where these parameters are treated as free parameters that can be determined (through additional measurements or through a theoretical modelling) with some uncertainty. In both cases, we assume that the direction of the dipole has already been determined by maximising the estimators νn′ with respect to n′.
Known 𝛼's
We use the Fisher formalism to estimate the uncertainty on v0/c obtained from the measurement of the dipole amplitude from the number count, chirp mass and luminosity distance of the BBH and BNS populations. The signal therefore contains six measurements. The Fisher information for the (unique) parameter v0/c is then given by the sum of the contributions from the two populations, = BBH, BNS. Here, we have used that the BBH and BNS measurements are uncorrelated (meaning that the Fisher matrix can be written as a sum over the two populations) and that, within one population, the number counts are uncorrelated with the distance and the mass, as shown in Sec. 3.4.2. The error on v0/c is then given by the inverse square root of the Fisher information.
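A minimal numerical sketch of this combination is given below, with illustrative covariances and α-like coefficients (not the paper's values); it only shows how uncorrelated populations add at the level of the Fisher information while the mass and distance estimators remain correlated within each population.

```python
import numpy as np

# Hypothetical per-population quantities (illustrative numbers only):
# s = derivative of each dipole estimator with respect to v0/c
# cov = covariance of (number count, chirp mass, luminosity distance) estimators.
populations = {
    "BBH": {
        "s": np.array([1.0, 1.0, 1.0]),
        "cov": np.array([[2.0e-6, 0.0,    0.0   ],
                         [0.0,    1.5e-6, 0.6e-6],   # mass-distance correlation
                         [0.0,    0.6e-6, 1.8e-6]]),
    },
    "BNS": {
        "s": np.array([1.08, 0.95, 1.02]),           # alpha-like coefficients
        "cov": np.array([[2.2e-6, 0.0,    0.0   ],
                         [0.0,    0.8e-6, 0.3e-6],
                         [0.0,    0.3e-6, 1.0e-6]]),
    },
}

# Fisher information for the single parameter v0/c:
# measurements from the two populations are uncorrelated, so contributions add.
F = sum(p["s"] @ np.linalg.inv(p["cov"]) @ p["s"] for p in populations.values())
sigma_v0_over_c = 1.0 / np.sqrt(F)
print(f"sigma(v0/c) = {sigma_v0_over_c:.2e}")
```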
The error on v0/c is reported in Table 2 for different cases. First, we compute the error from each estimator taken individually. We see that, as expected, the chirp mass estimator for BNSs gives the smallest uncertainty. The number count estimators for BNSs and for BBHs are the next two best. The number count for BNSs is very slightly better than for BBHs, due to threshold effects, which increase the corresponding α from 1 to 1.08. Combining the three BBH estimators improves the constraints by 20% compared to using the number counts alone, while combining the three BNS estimators improves the constraints by 32%. Combining all estimators improves the constraints by 50% compared to the number counts of BBHs alone (studied in Mastrogiovanni et al. (2023)). We also show the results for the top-3 combination, i.e. the mass estimator for BNSs and the number count estimators for BBHs and BNSs. We see that the constraints are very similar to those obtained with all estimators.
The absolute error on the observer velocity is the same for the CMB and AGN cases; however, the relative error is reduced by a factor of 5 for the AGN case, as can be seen from Table 2. Hence a robust measurement of the observer velocity requires 10^6 events if the observer velocity is consistent with the CMB dipole, but only 10^5 events if it is consistent with the AGN one, which agrees with the detection efficiency results in Figs. 5 and 6. This means in particular that if we do not detect a dipole with 10^5 events, the GW dipole is in tension with the AGN dipole.
Adding uncertainties on 𝛼's
Whereas for BBHs the α coefficients are known and equal to 1, for BNSs this is not the case: these coefficients are affected by threshold effects. As shown in Sec. 4.4, they can be computed, assuming a given model for the population of BNSs. The uncertainty on the model will directly impact the determination of the α's. To account for these uncertainties in the Fisher computation, we add the α coefficients to the signal and assign a corresponding error in the covariance matrix. We then compute how this uncertainty degrades the constraints on v0/c. We consider the combination of all estimators and compute the error on v0/c for different values of the uncertainties on the α's. The Fisher matrix now contains four parameters: v0/c and the three BNS α's (the fourth parameter being the BNS mass coefficient). The covariance matrix has the block form of Eq. (60), where the 3 × 3 blocks cov_BBH and cov_BNS are given by Eq. (56). In Eq. (60), the same relative uncertainty is assigned to the determination of each of the α parameters of the BNS population. We consider three cases for this uncertainty: 10%, 20% and 50%. The error on v0/c then follows from the inverse of this Fisher matrix. The results are reported in Table 3. Comparing with the combination of all estimators (second-to-last column) in Table 2, we see that having a 10% uncertainty on all three BNS α's degrades the constraints on v0/c by at most 1% compared to the case where these parameters are assumed to be known perfectly. Increasing this uncertainty to 50% degrades the constraints by at most 5%. Hence, even if our modelling of the threshold effects is not very accurate, the degradation remains small. Comparing Table 3 with the BBH column of Table 2, we see that even in the case where the uncertainty on the α's is as large as 50%, we still gain information by including BNSs. For example, for 10^6 events and a dipole consistent with the CMB one, we gain 29% in the measurement of v0/c by adding BNSs. If we can model the α's with a precision of 20%, this gain increases to 36% (38% for a 10% precision). Seen as a function of the BNS α uncertainty, the constraints from all estimators are upper-bounded by the constraints from the combination of the three BBH estimators. At worst, if the uncertainty on the BNS α's is too large, no information is gained by adding BNSs.
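The sketch below illustrates, with placeholder numbers, how the BNS α's can be promoted to nuisance parameters with a Gaussian prior of given relative width and then marginalised over; the covariances and fiducial values are assumptions, not the entries of Eqs. (56) and (60).

```python
import numpy as np

beta = 1.0e-3                          # fiducial v0/c (illustrative)
s_bbh = np.array([1.0, 1.0, 1.0])      # BBH alphas, fixed to 1
s_bns = np.array([1.08, 0.95, 1.02])   # BNS alphas (nuisance parameters)

cov_bbh = np.diag([2.0e-6, 1.5e-6, 1.8e-6])
cov_bns = np.diag([2.2e-6, 0.8e-6, 1.0e-6])
inv_bbh, inv_bns = np.linalg.inv(cov_bbh), np.linalg.inv(cov_bns)

def marginalised_error(rel_alpha_uncertainty):
    # Parameters: theta = (v0/c, three BNS alphas).
    # Signal model: BBH dipoles = beta * s_bbh, BNS dipoles = beta * alphas.
    F = np.zeros((4, 4))
    # BBH block depends only on v0/c.
    F[0, 0] += s_bbh @ inv_bbh @ s_bbh
    # BNS block: derivative w.r.t. v0/c is alpha_i, w.r.t. alpha_i it is beta.
    d = np.zeros((4, 3))
    d[0, :] = s_bns
    d[1:, :] = beta * np.eye(3)
    F += d @ inv_bns @ d.T
    # Gaussian prior on each alpha with the given relative uncertainty.
    prior_sigma = rel_alpha_uncertainty * s_bns
    F[1:, 1:] += np.diag(1.0 / prior_sigma**2)
    return np.sqrt(np.linalg.inv(F)[0, 0])

for u in (0.10, 0.20, 0.50):
    print(f"alpha uncertainty {u:.0%}: sigma(v0/c) = {marginalised_error(u):.2e}")
```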
Table 2. Error on v0/c obtained from: the individual estimators (first six columns); combining the three BBH estimators; the three BNS estimators; all estimators; or using the top-3 combination of the mass and number count estimators for BNSs and the number count estimator for BBHs. In the top three rows, we consider a dipole consistent with the CMB one, while in the bottom three rows we consider a dipole consistent with the AGN one.
CONCLUSIONS
In this paper we have developed a robust framework to measure the cosmic dipole using GW detections. Contrary to radio sources or quasars, for which only the sky position can be used, GWs have the advantage of providing three quantities that are affected by the observer velocity: sky position, luminosity distance and redshifted chirp mass. We have developed estimators of these three dipoles, and we have calculated their variance and covariance. We have found that the mass and distance estimators are partially correlated, but that both are uncorrelated with the number count estimator. Combining the three of them therefore increases the detectability of the dipole.
BBHs have the advantage over BNSs of being unaffected by threshold effects, since all sources within the frequency range of ET and CE will have an SNR above threshold. On the other hand, a significant fraction of BNSs will have an SNR below threshold, meaning that threshold effects are relevant in this case. The dipole from BNSs can of course be detected even without knowing the amplitude of threshold effects. However, to interpret the results and determine whether the amplitude is consistent or not with the CMB dipole, it is necessary to have a modelling of these effects. We have developed such a modelling and computed the amplitude of threshold effects. For our population model, we have found that these effects are small, of the order of 10% at most for all three estimators. The amplitude of these effects of course depends on the population model that is used; however, we expect that the order of magnitude we estimated will not change when changing the details of the population model. This shows that it is worth including BNSs to measure the observer velocity and test the isotropy of the Universe.
Comparing the three BNS and three BBH estimators, we have found that the BNS chirp mass estimator is the one with the highest detectability, i.e. the lowest variance. This is due to the fact that the variance is fully dominated by shot noise, which generates fluctuations in the radial distribution of sources, consequently changing the mean mass per pixel. Since the intrinsic mass distribution of BNSs is very narrow, this shot noise contribution is mainly due to the redshift dependence of the chirp mass, which is significantly smaller than the spread in luminosity distance. After the BNS chirp mass, the next two best estimators are the number counts of BBHs and BNSs.
Overall, we have found that, combining all events, we need a few 10^6 events to detect a dipole consistent with the CMB one. On the other hand, if the dipole is consistent with the AGN one, we should detect it with 10^5 events. This can be achieved already after one year of observation. In this context, the fact that threshold effects are small is crucial, since it ensures that they cannot boost the dipole by a factor of 5, thus mimicking the amplitude of the AGN dipole (which is 5 times larger than the CMB one). Hence, if we see results consistent with the AGN dipole, we can robustly conclude that it is not due to threshold effects, but rather to a large intrinsic anisotropy of the large scale structure.

The appendix expands the observables at first order in the observer velocity and introduces the dipoles in the luminosity distance and chirp mass distributions of detected events.
For the luminosity distance, the expansion in velocity at fixed observed redshift has been done in Bonvin et al. (2006a); in the last step we used integration by parts and the assumed asymptotic behaviour to make the boundary term vanish. The equivalence of the integrals holds both for the monopole and the dipole terms. Note that if the integration boundaries over the original variable are [0, ∞[, they should in principle become [(0), ∞[ for the x integration. However, at first order in the velocity, we may take integrals over [0, ∞[ for x as well, the correction between the two being of second order.
With this, we can build the monopoles and dipoles in the distribution of detected luminosity distances and chirp masses, analogously to Eqs. (3) and (18). The redshift integrals over the dipoles at fixed redshift are equivalent to the integrals over the dipoles at fixed x, by the above argument.
Figure 3. Value of the dipole estimators as a function of the angle between the true dipole direction and a chosen direction n′. The plots are generated using the AGN value of the observer velocity, and with 10^6 BBH detections (left panel) and 10^6 BNS detections (right panel). The grey shaded areas are the 1σ, 2σ and 3σ intervals associated with the shot noise and measurement uncertainties in the absence of a dipole, obtained by shuffling the sources isotropically over the sky one hundred times. The horizontal dashed lines mark the theoretical expectations for the variance obtained in Sec. 3. Top plot: number count estimator. Middle plot: chirp mass estimator. Bottom plot: luminosity distance estimator.
Figure 4. The plots report the variance (first row) and correlation (second row) of the number count, mass and distance dipole estimators as a function of the total number of detections. The variances and correlations are obtained by isotropically reshuffling a population of GW sources 2,000 times. The blue circles indicate the values obtained for the BBH population, while the orange diamonds show the values for the BNS population. The black lines (dashed for BBHs and dotted for BNSs) indicate the theoretical variance calculated in Sec. 3. We find that the maximum deviation from the expected value of zero for the number count-chirp mass and number count-luminosity distance correlations is 0.05, arising from the limited number of sky shufflings.
Figure 8. The plots indicate the distribution of the α parameters obtained for the three estimators with 200 population realisations. The vertical black lines indicate the theoretical prediction, see Table 1. The histogram width is due to shot noise and distance and mass uncertainty, which vary over the 200 realisations. The first row of plots considers 10^6 detections and the second row 10^7 detections. The first column is for BBHs, while the second is for BNSs. The plots are generated with the AGN value of the observer velocity.
Figure 9. Same as Fig. 8, but with the CMB value of the observer velocity.
Figure 10. The plots indicate the distribution of the α parameters obtained for the three estimators with 200 population realisations. Each realisation also considers a random realisation of the merger-rate model parameter, with an uncertainty of 10% around its fiducial value of 2.7. The vertical black lines indicate the theoretical prediction from Table 1 for the fiducial value. The histogram width is due to shot noise, distance and mass uncertainty, and also the variation of the merger-rate parameter. The top plot is for 10^6 detections while the second one is for 10^7 detections. The plots are generated with the AGN value of the observer velocity.
To compute the error on v0/c, we need to know the values of the α coefficients for the different populations and the different estimators. For the BBHs, since threshold effects are negligible, the three coefficients (number count, distance and mass) are all equal to 1. For the BNSs, we use the values calculated theoretically in Sec. 4.4 and reported in Table 1.

At fixed observed redshift, the dipole of the luminosity distance reads d(z, n) = d̄(z) [1 + n · v0 / (H(z) r(z))] (B7), with H(z) the comoving Hubble parameter and r(z) the monopole of the velocity expansion at fixed z, which from the third expression in Eq. (B2) reads r(z) = ∫_0^z dz′ (…). The redshifted chirp mass does not have a dipole with respect to fixed observed-redshift slices, since it is simply the product of the source chirp mass with (1 + z), which is obviously constant on a slice of constant z. As expected, the dipoles at fixed z are different from the dipoles at fixed x. Let us now see what happens once we integrate over z and x: integrating Eq. (B1) between 0 and ∞ and changing variables, the integral over z of the monopole plus dipole at fixed z equals the integral over x of the monopole plus dipole at fixed x (Eq. B8).
Table 1. Expected values for the BNS α parameters, for a fiducial astrophysical population model.
Table 3. Fisher bound on the error on v0/c, obtained from the combination of the six estimators assuming different uncertainties on the α's for the BNSs. In the top three rows, we consider a dipole consistent with the CMB one, while in the bottom three rows we consider a dipole consistent with the AGN one. | 14,560 | 2023-09-01T00:00:00.000 | [
"Physics"
] |
Ways and Mechanism of Saving and Increasing Employment In The Period of Pandemics In Uzbekistan
The rapid spread of Covid-19 has had a negative impact on the development of the world economy and of the labor market. According to the International Labour Organization, about 25 million people worldwide are unemployed and the income of employees could be reduced by at least 3.4 trillion US dollars. Against the background of these developments in the world economy and labor market, the issues of ensuring stable growth and employment in Uzbekistan are therefore highly relevant today. The article presents information on the measures taken in Uzbekistan to mitigate the pressure placed on the labor market by the pandemic.
1. INTRODUCTION
On a global scale, the reduction in working hours in 2020 reflected both a loss of employment and a decrease in working hours for those who kept their jobs; at the same time, the volume of losses varied significantly across regions. Working time fell the most in North and South America and the least in Europe and Central Asia, where job-retention measures contributed to a smaller decline in working hours, especially in Europe. In the end, 2020 saw an unprecedented decline in employment worldwide, equivalent to the loss of 114 million jobs compared with 2019. In relative terms, the reduction in employment among women (5.0%) was higher than among men, and that among young workers (8.7%) was higher than among older workers [1].
Millions of people in different countries around the world are unemployed because of the coronavirus pandemic. Measures to support those who have lost their jobs are taken according to the capacity of each state. The coronavirus pandemic is leaving millions of people around the world struggling to survive. According to the International Labour Organization, as a result of the reduction in working hours, the livelihoods of 1.6 billion people working in the informal economy are at risk. It was noted that the second quarter of 2020 turned out worse than expected for the labor market, with more than 300 million jobs potentially lost.
I. MATERIAL AND METHODS
The usual appearance of the labor market has changed: demand for labor in many professions has declined sharply and the number of unemployed has increased, while society depends critically on medical workers and on volunteers providing social services to vulnerable populations [2]. After the sharp slowdown in growth noted by the World Bank in 2020, the economy of Uzbekistan is projected to partially recover from the influence of the Covid-19 crisis. Until the economy is fully restored, it is necessary to continue the practice of supporting low-income families and those who have suffered negative impacts of the pandemic on their incomes. The country's medium-term economic prospects are also discussed. Measures should be taken to deepen reforms and improve production efficiency, trade partnerships and integration with the world economy; this supports growth in the private sector. It should be noted that official employment, citizens' revenues, incomes and economic opportunities help to accelerate the poverty reduction process [3]. The report also highlights key issues in the recovery of the Uzbek economy from the pandemic and the work that needs to be done. After the initial stage of the market liberalization process, Uzbekistan is moving to another, more complicated stage concerning land use rights, the labor market and the transformation of the capital market. South Korea, like most other countries, has been experiencing the unpredictable spread of COVID-19 since the first case was diagnosed on January 20, 2020 (see Figure A.1 in the Supplemental Online Appendix). Globally, social distancing policies that suggest (or enforce) staying at home have been implemented, and many facilities such as schools have been closed to contain the infection. Consequently, there has been a significant downturn in economic growth [4]. The medium-term task will be to ensure the inclusiveness and transparency of reforms. By accelerating the transformation of state enterprises and creating a competitive and inclusive, private-sector-led model of economic growth, the state will reduce its role in the economy; as a result, this will help to overcome the drawbacks of the previous model. Although the previous economic growth model ensured high GDP growth in the period from 2000 to 2016, it did not create sufficient economic opportunities for the rapidly growing population. The crisis caused by the Covid-19 pandemic showed the importance of the transition to a market economy. About 9 percent of the country's population still lives below the poverty line (the World Bank threshold for low-income countries of $3.2 per day), and many more citizens live close to this line. During the quarantine restrictions this problem worsened, affecting roughly another million citizens. To reduce these risks, the Government needs to support dynamic growth of the economy as well as improve health care and educational services. In this regard, attention must be paid to reforms that strengthen the social protection system, improve labor market conditions and remove barriers to the development of human capital. Greater participation and a larger share of the private sector in the economy, as well as higher-quality jobs, are important signs of the success of reforms. It is difficult to solve these problems with limited administrative capacity under the ongoing influence of the pandemic.
Economic growth in Uzbekistan declined sharply, from 5.8% in 2019 to 1.6%. This was due to the introduction of quarantine restrictions and the disruptions caused by the pandemic. At the same time, Uzbekistan became one of the few countries in Europe and Central Asia to demonstrate economic growth last year. This was supported by anti-crisis measures, which allowed sustainable growth of agricultural production and increased spending on health care and economic support of households. Due to the pandemic, fiscal and investment constraints and the reduction of exports and imports in 2020 weakened consumption (state and private), which had been the main demand-side driver of economic growth for more than ten years. The unemployment rate increased from 9% to 11.1%. The poverty rate rose to about 9 percent in 2020, having been 7.4 percent before the crisis. This is due to the loss of jobs, income and remittances among the employed population and labor migrants. A significant expansion of social assistance programs has helped the country's affected households to a certain extent. As of August 1, 2020, 617 thousand citizens had been provided with employment; of these, 506 thousand were employed precisely during the pandemic period. A mechanism was introduced for granting subsidies to citizens who returned from external migration or were unable to find work abroad. In particular, citizens were allocated funds for the development of household plots in the amount of 3 to 30 minimum wages. To date, 25 thousand families have been provided with work, and 54 billion soums have been spent. It was noted that China's experience in reducing poverty was studied, and the population received economic support based on financing of up to 10 minimum wages. As a result, 318 cooperatives were created within 3 months, attracting 16 thousand people to work; in total, 26 billion soums were spent. By the end of the year, it is planned to create another 327 cooperatives and attract 17 thousand inhabitants to them. Seventeen of these cooperatives are based on crafts and unite women. The list of self-employment activities was expanded from 24 to 67, and the associated payment was reduced from 1,050,000 to 115,000 soums. Currently, 201 thousand citizens are employed in the republic. By the end of the year, this figure will reach 50 thousand people, the Ministry of Employment and Labor Relations of the Republic of Uzbekistan said.
II. RESULTS AND DISCUSSION

During the pandemic, a distinctive feature of the economic crisis is that it combines a demand problem with a supply problem. This was due to the following factors. First, the demand crisis consists of a decrease in the consumption of goods (mainly durable consumer goods) and services as a result of about 80% of the world being placed under quarantine. Second, as a result, gross demand in the large economies decreased by 5-10% of GDP; according to the Goldman Sachs forecast, in 2020 GDP would fall by 6% in the United States, with China's GDP also affected. Third, developing countries also face economic problems arising from the difficulties in the large economies. These problems are largely related to the decrease in prices for raw materials on world markets, which in 2020 reached their lowest levels in 18 years (The Economist 2020a; CNN 2020). Fourth, the restrictions imposed during the pandemic have led to an increase in the costs associated with protecting the population in almost all countries.
A variety of measures have been taken to support the population in the country. More than 60 categories of self-employed workers were exempted from income tax. During the quarantine period, allowances were assigned to more than 120,000 additional low-income families, and their number currently stands at 600,000 people.
III. CONCLUSION
During the pandemic period, Uzbekistan's experience in the social and medical protection of its population gave the greatest priority, first of all, to public health; as a result of the full mobilization of all available resources in this regard, a gradual lifting of quarantine measures was achieved within a short period of time. At the same time, many modern and effective measures have been developed to ensure the functioning of the economy and all its sectors, banks, producers and business entities. Measures of social and material support of the population were instituted in a systematic manner. The introduction of large-scale support measures for all segments of the economically active population, along with ensuring the stability of all sectors of the national economy in the post-pandemic period, will provide an opportunity to increase competitiveness while building on the current reliefs, benefits and initiatives.
The establishment of new workplaces for the rural population, the solution of their social problems and the increase of labor activity serve to restore a free and comfortable life in our country and to further raise the standard of living of the population. Increasing the employment and incomes of rural dwellers, and with them entrepreneurship, is important for solving the related social problems [5]. Today, the widespread coronavirus pandemic continues to weigh on the world economy. As a result of restrictive measures taken by states, employees working in different sectors of the economy are losing their jobs. Citizens deprived of their work and source of income need social assistance and external support from the state.
The pandemic period teaches us about a new environment, which consists of the following: - the healthy and balanced living habits formed in this period will be reflected in the decision-making and purchasing behavior of consumers in the coming period; - retail purchasing with low human intervention (online purchases) and driverless cars will develop, together with smart city technologies and digital automation, because people adapt to new business models and systems in times of crisis; - the experience of working from home leads to an increase in tools and technologies that allow remote work; - after several months of online distance learning, e-learning grows, and even after the crisis many online courses will continue to be preferred; - the hospital and healthcare industries will consider adapting to the new requirements and jobs arising from this experience; - the adult population, which was forced to adapt to digital economy solutions during the pandemic, is now also becoming the largest group of online consumers.
In order to quickly and effectively combat the economic consequences of the COVID-19 pandemic, the United Nations Development Programme and the Chamber of Commerce and Industry have launched a business clinic program to support small and medium-sized businesses in Uzbekistan, providing information on the state measures to support entrepreneurship. | 2,804.2 | 2021-06-29T00:00:00.000 | [
"Economics",
"Political Science"
] |
Imaging nanoparticle flow using magneto-motive optical Doppler tomography
We introduce a novel approach for imaging solutions of superparamagnetic iron oxide (SPIO) nanoparticles using magneto-motive optical Doppler tomography (MM-ODT). MM-ODT combines an externally applied temporally oscillating high-strength magnetic field with ODT to detect nanoparticles flowing through a microfluidic channel. A solenoid with a cone-shaped ferrite core extensively increased the magnetic field strength (Bmax = 1 T, ) at the tip of the core and also focused the magnetic field in microfluidic channels containing nanoparticle solutions. Nanoparticle contrast was demonstrated in a microfluidic channel filled with an SPIO solution by imaging the Doppler frequency shift which was observed independently of the nanoparticle flow rate and direction. Results suggest that MM-ODT may be applied to image Doppler shift of SPIO nanoparticles in microfluidic flows with high contrast.
(Some figures in this article are in colour only in the electronic version.) Rapid progress in the nanosciences and the requirement to characterize structures and particles have led investigators to utilize several sophisticated imaging modalities, including atomic force microscopy (AFM), magnetic force microscopy (MFM) and scanning near-field optical microscopy (SNOM). Most of these techniques utilize a probe that has a nanometre-sized sharp tip to allow imaging of a gas-solid interface at the nanometre scale. As a micrometre-scale imaging modality, optical coherence tomography (OCT) uses the short temporal coherence properties of broadband light to extract structural information from heterogeneous optically turbid samples such as biological tissue. During the past decade, numerous advancements in OCT have been reported, including increased resolution (2-3 μm) [3] and real-time imaging speeds [9]. To date, OCT is not widely utilized in the nanosciences due to relatively poor axial and lateral resolutions. Recently, OCT and ultrasound (US) imaging methods have been demonstrated
to image cells and tissues containing paramagnetic iron oxide nanoparticles [7,6]. These methods, referred to as magnetomotive OCT and US respectively, utilize an externally applied high-strength magnetic field gradient to activate mechanical movement of the nanoparticles. The nanoparticle movement produces a local time-varying mechanical strain field that is detected with OCT or US.
Magnetic nanoparticles are of great interest because of the unique properties of magnetism in nanomaterials and ability to control at a distance without mechanical contact. Magnetic nanoparticles attract significant interest in the applied sciences due to possible technological applications for information storage, magnetic refrigeration, bioprocessing gas sensors and ferrofluids, to name a few. Ferrofluids have numerous potential application/market areas including ferrofluid steppers, gauges and sensors [17,12]. Monitoring/manipulating of ferrofluidicbased micro-channels is also of interest [4].
Since the ability to characterize fluid flow velocity using OCT was first demonstrated by Wang et al [13], several phase resolved [18], real-time [10,11] optical Doppler tomography (ODT) approaches have been reported. In ODT, the Doppler frequency shift is proportional to the projection of the scattering wavevector (k s −k i ) on the scatterer's flow direction. When the two directions are perpendicular, the projection is zero and no Doppler shift is observed. Because a priori knowledge of the Doppler angle is usually not available, and conventional intensity OCT imaging provides a low contrast image of microfluidic flow, detecting flow in small-diameter microfluidic channels is difficult. In this paper, we demonstrate a novel mechanism to increase contrast in ODT images by using superparamagnetic iron oxide (SPIOs) nanoparticles activated with an externally applied magnetic field. The use of SPIOs as contrast agents for magnetic resonance (MR) imaging of bowel, liver, spleen, lymph nodes, bone marrow, perfusion and angiography [14] has been extensively studied since the early 1990s [2,15]. The magneto-motive ODT method was reported for detecting red blood cells (RBCs) that are important endogenous contrast agents in the biomedical optics field [5]. However, RBCs are very weakly paramagnetic with a magnetic susceptibility (χ ∼ 10 −5 ), whereas SPIOs susceptibility is unity. This means that SPIOs can be actuated with much lower magnetic field intensity.
Herein we present a novel extension of the magnetomotive approach by modulating the Doppler shift to improve magnetic nanoparticle contrast. Contrast in ODT images is enhanced by activating mechanical motion of the nanoparticles with an externally applied high-strength magnetic field gradient. We describe the magneto-motive optical Doppler tomography (MM-ODT) experimental setup and present Doppler images of flowing SPIO nanoparticles under the influence of an externally applied magnetic field gradient.
The material parameter characterizing magnetic materials is the magnetic volume susceptibility, χ, which is dimensionless in SI units and is defined by the equation M = χH where M is the magnetization at the point under study and H is the local density of magnetic field strength. A SPIO with χ ≈ 1 suspended in solution and placed in a magnetic field gradient experiences forces and torques that tend to position and align it with respect to the field's direction. Magnetic energy, U , of a SPIO nanoparticle in an external magnetic field is given by, where m is the magnetic moment, B is the magnetic flux density, V is the particle volume, μ 0 is the permeability of free space and χ is the difference between susceptibility of the nanoparticle and surrounding solution. Magnetic force acting on SPIO nanoparticles becomes: In our experiments we apply a sinusoidal magnetic flux density that is directed principally along the z direction. Hence, we write B(x, y, z; t) = sin(2π f m t)B z (z)k and the magnetic force F m acting on nanoparticles, where f m is the modulation frequency of the applied sinusoidal magnetic field. In addition to the magnetic force, the SPIO nanoparticle experiences a pressure gradient, body, viscous drag forces which combine to produce a dynamic displacement [z(t)] that can be included in the analytic OCT fringe expression [3], I f , where I R and I S are the back-scattered intensities from reference and sample arms, respectively, f 0 is the fringe carrier frequency, n is the medium refractive index, z(t) is the dynamic nanoparticle displacement and λ 0 is the light source centre wavelength. A schematic of the MM-ODT apparatus is shown in figure 1. The ODT light source consisted of a superluminescent diode (B&W TEK, DE) centred at 1.3 μm with a bandwidth of 90 nm. Light was coupled into a singlemode optical fibre based interferometer that provided 1 mW of optical power on the microfluid channel containing SPIO solutions. A rapid-scanning optical delay (RSOD) line was used in the reference arm and aligned such that no phase modulation was generated when the group phase delay was scanned at 4 kHz. Phase modulation was generated using an electro-optic waveguide phase modulator that produced a single carrier frequency (1 MHz). To reduce intensity noise from the OCT interference signal, a dual-balanced photodetector was used. A hardware in-phase and quadrature demodulator with high/bandpass filters was constructed to improve imaging speed. Doppler information was calculated with the Kasai autocorrelation velocity estimator [16]. Labview software (National Instruments, Austin, TX) was used to implement the MM-ODT system with a dual processor based multitasking scheme. The maximum frame rate of the system was 16 frames per second for a 400 × 512 pixel sized image. In the sample path of the interferometer, a collimated beam was redirected to the microfluidic channel by two galvanometers and a scanning system that permitted three-dimensional scanning. The probe beam was focused by an objective lens, which yielded a 10 μm diameter spot at the focal point. A 500 μm inner-diameter glass capillary tube was used as a microfluidic channel and placed perpendicularly to the probe beam. SPIO solutions used for flow studies were injected through the tube at a constant flow rate controlled by a dual-syringe pump (Harvard Apparatus 11 Plus, Holliston, MA) with ±0.5% flow rate accuracy.
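As a rough illustration of the driving term, the sketch below uses the standard induced-moment magnetophoresis expressions, U = −VΔχB²/(2μ₀) and F = (VΔχ/μ₀)(B·∇)B, together with the sinusoidal field described above; treating these as the relevant forms, and the particle and field values used, are assumptions for illustration only, not the article's exact parameters.

```python
import numpy as np

MU0 = 4e-7 * np.pi            # vacuum permeability (T*m/A)

# Illustrative SPIO and field parameters (not the article's exact values).
chi = 1.0                     # susceptibility contrast between particle and fluid
radius = 15e-9                # particle radius (m)
V = 4/3 * np.pi * radius**3   # particle volume (m^3)
B0 = 1.0                      # peak flux density at the core tip (T)
dBdz = 220.0                  # field gradient (T/m), illustrative
f_m = 40.0                    # modulation frequency (Hz)

def magnetic_force(t):
    """Axial force on one SPIO for B(z, t) = sin(2*pi*f_m*t) * B_z(z) * k.

    Standard form F = (V*chi/mu0) * B * dB/dz; the sin^2 time dependence
    makes the force (and hence the displacement) oscillate at 2*f_m,
    consistent with the cos(4*pi*f_m*t) term quoted further below.
    """
    s = np.sin(2 * np.pi * f_m * t)
    return (V * chi / MU0) * (s * B0) * (s * dBdz)

t = np.linspace(0, 0.1, 1000)
F = magnetic_force(t)
print(f"peak force ~ {F.max():.2e} N, oscillating at {2*f_m:.0f} Hz")
```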
A solenoid coil (manufacturer: Ledex, part number: 4EF) with a cone-shaped ferrite core at the centre (figure 2) and driven by a current amplifier supplying up to 960 W was placed underneath the sample during MM-ODT imaging. The combination of the core and solenoid, using high power operation, dramatically increased the magnetic field strength (B max = 1 T and |B| 2 = 220 T 2 m −1 ) at the tip of the core and also focused the magnetic force on the targeted samples. The magnetic force applied to the capillary tube was varied by a sinusoidal current to induce SPIO nanoparticle movement. The capillary tube was placed upon a 1 mm thick base plate. To demonstrate MM-ODT detection of nanoparticles in solution, we first recorded B-mode OCT/ODT images of a rectangular glass capillary tube filled with a flowing near-zero susceptibility turbid solution without and with an external magnetic field as a control sample. The near-zero susceptibility turbid solution was a mixture of deionized water and 0.5 μm latex microspheres (μ s = 5 mm −1 ) at a 13 mm s −1 flowrate. The magnetic flux density and its frequency were approximately 1 T and 40 Hz, respectively. The field gradient, ∂ B/∂z, over 1 mm was 220 kT mm −1 . B-mode OCT/ODT images were acquired over a 650 μm × 650 μm cross section in the microfluidic channel. Figures 3(a) and (b) show B-mode OCT and ODT images without any external magnetic field, whereas figures 3(c) and (d) show B-mode OCT/ODT images with a 40 Hz externally applied magnetic field. No distinguishable Doppler shift is observed in the ODT image with applied magnetic field (figure 3(d)) indicating no interaction between the external magnetic field and turbid solution without nanoparticles.
An SPIO nanoparticle solution was injected through the glass capillary tube by a syringe pump at a constant flow rate. An oscillating Doppler frequency shift resulting from nanoparticle movement could be observed (figure 4) at three flow rates (3, 12, and 30 mm s⁻¹). The angle between the probing beam and the tube was set at 5° so that the Doppler phase shift did not wrap at the highest flow rate (30 mm s⁻¹). In our experiments, the probe beam was first aligned at the centre of the conical ferrite core, and the tube was placed just above the conical tip so that the direction of the field gradient was parallel to the probe beam (along the z direction). M-mode MM-ODT images (figure 4) consisted of 634 × 400 pixels axially and temporally, respectively, resulting in an image acquisition time of 100 ms. The images were recorded 5 s after activation of the magnetic field. A Doppler frequency shift of 100 Hz was continuously observed during the magnetic activation; the flow velocity can then be quantified from this Doppler shift. Superparamagnetic nanoparticles under the influence of a strong magnetic field gradient tend to move toward the field source. As the local concentration of nanoparticles increases in response to an external magnetic field, osmotic and elastic recoil forces from the inner tube boundary increase and hinder further movement into the field. Equivalently, the forces driving the nanoparticles find an equilibrium state where the magnetic force is balanced by the vector sum of the recoil forces.
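The conventional ODT relation v = f_D·λ₀/(2n·cosθ), with θ the angle between the probe beam and the flow direction, can be used to turn the measured Doppler shift into a flow speed; the sketch below assumes this relation and illustrative parameter values (a water-like refractive index and a tube tilted 5° from perpendicular to the beam), since the exact expression used by the authors is not shown here.

```python
import numpy as np

def flow_velocity(doppler_shift_hz, wavelength_m=1.3e-6, n_medium=1.33,
                  doppler_angle_deg=85.0):
    """Flow speed from a measured Doppler shift.

    Conventional ODT relation v = f_D * lambda0 / (2 * n * cos(theta)),
    with theta the angle between the probe beam and the flow direction.
    The refractive index and the 85 deg angle (tube tilted 5 deg from
    perpendicular) are illustrative assumptions.
    """
    theta = np.deg2rad(doppler_angle_deg)
    return doppler_shift_hz * wavelength_m / (2 * n_medium * np.cos(theta))

print(f"v = {flow_velocity(100.0) * 1e3:.2f} mm/s for a 100 Hz Doppler shift")
```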
When only a magnetic force is present (recoil and pressure gradient forces are absent or neglected), direct integration of the magnetic force (Fm) gives z(t) = ε(t) + z 0 cos(4π f m t) where ε(t) = a 0 t 2 , and a 0 , z 0 are constants and dependent on χ, V , B, and f m is the modulation frequency of the magnetic flux density. The initial transient response of nanoparticles to the magnetic field (figure 6) contains both components of z(t). As the low-pass filtered profile (figure 6(b) red line) indicates, the SPIO nanoparticles moved in one direction and soon movement was reduced. However, the Doppler frequency oscillation could be observed at the beginning of magnetic field application. In confined systems such as a microfluidic channel, ε(t) becomes negligible at sufficiently long times (several seconds) because recoil and drag forces impede the free-space acceleration of the nanoparticles. Once forces on the nanoparticles equilibrate, free-space acceleration of the SPIO nanoparticles approaches zero and the sinusoidal variation of the magnetic force dominates nanoparticle displacement.
The nanoparticles used in our experiments are nanoscale while the probing beam had a wavelength of 1.3 μm so that individualized imaging is not possible. However, the data reported here represents an aggregate response to the probe beam. While it has been reported that the introduction of a magnetic field changes the amplitude of the fringe [8], we applied the magnetic field in the same direction as the probing beam, so that phase changes of the nanoparticles, rather than the amplitude change, were produced and monitored. The minimum detectable concentration of nanoparticles was 0.21× 10 12 iron particles μl −1 (0.36 μg iron μl −1 ) which is three times higher than the manufacture recommended dilution level (0.07 × 10 12 iron particles μl −1 ) for human procedures. The minimum detectable number of particles in the coherence volume, defined as beam spot area times coherence length of the MM-ODT, is 165 × 10 3 . The sensitivity of the MM-ODT is about 90 dB, but by increasing the sensitivity of the system, one can expect to detect weaker concentrations. For example, a Fourier Domain OCT with 115 dB sensitivity may be able to detect 1500 particles per coherence volume. With increased sensitivity, spatial resolution can be increased by reducing the beam spot size and coherence length.
We have demonstrated the implementation of MM-ODT for superparamagnetic iron oxide nanoparticle imaging using an external oscillating magnetic field. Doppler shift leading to a phase modulation of the signal due to nanoparticle movement in the flow was introduced by applying a temporally oscillating high-strength magnetic field in the same direction as the probe beam. The controlled and increased Doppler frequency shift in MM-ODT with superparamagnetic nanoparticles may provide a new investigational tool to study superparamagnetic nanoparticle dynamics for nanosciences and various related studies. | 3,220 | 2007-01-03T00:00:00.000 | [
"Physics"
] |
Dark-Singular Straddled Optical Solitons for the Dispersive Concatenation Model with Power-Law of Self-Phase Modulation by Tanh-Coth Approach
: This paper recovers dark-singular straddled optical solitons for the dispersive concatenation model with power-law of self-phase modulation using the tanh-coth approach. The individual dark or singular solitons are not supported by the model for power-law unless this law collapses to Kerr law as proven with the usage of undetermined coefficients, earlier.
Introduction
A decade ago, in 2014, an intriguing model was proposed to explore the propagation of solitons through optical fibers [1,2]. This is a combination of the Sasa-Satsuma equation, the Lakshmanan-Porsezian-Daniel (LPD) model, and the well-known nonlinear Schrödinger's equation (NLSE). This model has now been thoroughly examined, and a plethora of results have been reported. Shortly thereafter, a dispersive variant of the concatenation model came into existence, during 2015 [3][4][5]. It is obtained by conjoining the dispersive fifth-order NLSE, the LPD model, and the Schrödinger-Hirota equation (SHE). It is indeed a dispersive concatenation model since it contains fifth-order and third-order dispersive effects that come from the fifth-order NLSE and the SHE, respectively. Subsequently, this model attracted a lot of attention too [6][7][8][9][10][11][12][13][14][15].
A few preliminary results for the model have been recovered and reported.One main result is addressing the model by the method of undetermined coefficients that yielded single soliton solutions, save bright and singular solitons.It was proved that the model with power-law of self-phase modulation (SPM) does not support dark or singular optical solitons unless this power-law reduces to Kerr law.Hence this paper dives into the tanh-coth approach to retrieve dark-singular straddled optical solitons.The details of the derivation of such straddled solitons are exhibited in the rest of the paper after a quick and succinct introduction to this model.
Governing model
The concatenation model with power-law nonlinearity is formulated as in [15] (Eq. (1)). In equation (1), the independent variables, x and t, stand for the spatial and temporal coordinates, respectively, and the dependent variable, q(x, t), is a complex-valued function that represents the wave amplitude. The linear temporal evolution is represented by the first term, with i = √(−1), and the chromatic dispersion and SPM are represented by the second and third terms, respectively, with their coefficients being a and b. The extension of the NLSE to the SHE is represented by the coefficient of c1. Next, the LPD components and the fifth-order NLSE, which incorporates the dispersive effect from the fifth-order dispersion term, are represented by the coefficients of c2 and c3, respectively. The parameter n represents the power-law of SPM and, from earlier studies, it is well known that soliton solutions exist for 0 < n < 2. Numerical simulations and experimental results also confirm this.
Travelling wave solution
The solutions of Eq. (1) are assumed to be of the form q(x, t) = u(ξ) exp[iθ(x, t)] (Eq. (2)), where ξ = x − γt and the phase θ(x, t) = −kx + ωt + θ0. Also, u(ξ) is the amplitude component of the wave, γ is its speed, k is the soliton frequency, ω is its wavenumber and θ0 is the phase constant. Using Eq. (2) and its derivatives, Eq. (1) transforms into the corresponding real and imaginary-part equations.
Tanh-coth method
We use the ansatz [16,17] Y = tanh(μξ) (23), which leads to a change of variables in the derivatives. For the next step, assume that the solution of Eq. (18) is expressed as a finite series in Y and Y⁻¹ (Eq. (26)). Using the principle of the homogeneous balance method between the nonlinear term v³v′′′′ and the linear term v⁶ from Eq. (18), the balance condition 3N + (N + 4) = 6N gives N = 2. Hence, Eq. (26) becomes v = a0 + a1Y + a2Y² + b1Y⁻¹ + b2Y⁻² (27), where a0, a1, a2, b1 and b2 are constants to be determined. Then, substituting Eq. (27) and its derivatives into Eq. (18), and assuming for simplicity that a1 = b1 = 0, we obtain the following:
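A short symbolic check of the homogeneous-balance count and of the truncated ansatz, assuming the standard tanh-coth series form, is sketched below.

```python
import sympy as sp

N = sp.symbols('N', positive=True)

# Homogeneous balance: for v ~ Y**N with Y = tanh(mu*xi), each derivative
# with respect to xi raises the degree in Y by one, so v'''' ~ Y**(N+4).
deg_nonlinear = 3*N + (N + 4)   # degree of v**3 * v''''
deg_linear = 6*N                # degree of v**6
print(sp.solve(sp.Eq(deg_nonlinear, deg_linear), N))   # -> [2]

# With N = 2 the truncated tanh-coth ansatz reads:
Y, a0, a2, b2 = sp.symbols('Y a0 a2 b2')
v = a0 + a2*Y**2 + b2*Y**-2     # a1 = b1 = 0, as assumed in the text
sp.pprint(v)
```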
Results and discussion
In this section, we present and discuss the results obtained from the analysis of the dark-singular straddled optical soliton solution described by the complex-valued solution (39).The evolution of this solution is illustrated through various subfigures in Figure 1, where we explore the impacts of different parameters, including time (t), power nonlinearity (n), higher-order dispersion (σ 3 ), and nonlinear dispersion (σ 4 ).The detailed analysis of these parameters provides a comprehensive understanding of the behavior and characteristics of the dark-singular straddled optical soliton solution.Figure 1 (a) presents a surface plot of the dark-singular straddled optical soliton solution.This plot illustrates the three-dimensional representation of the soliton's amplitude over time and space.The surface plot clearly shows the evolution of the soliton, highlighting the regions of maximum and minimum amplitude.The visualization helps in understanding the overall structure and dynamics of the soliton, emphasizing the interaction between dark and singular components.Figure 1 (b) depicts a contour plot of the dark-singular straddled optical soliton solution.This plot provides a two-dimensional representation, where contour lines represent the amplitude levels of the soliton.The contour plot is particularly useful for identifying the spatial distribution of the soliton's amplitude and observing the changes in its shape and intensity over time.The contour lines reveal the presence of distinct dark and singular regions within the soliton structure.The 2D plots in Figures 1 (c) to Figures 1 (g) offer detailed insights into the specific effects of various parameters on the dark-singular straddled optical soliton solution.These plots present the soliton's amplitude as a function of space at different instances and parameter settings.Figures 1 (c) and Figures 1 (d) show the 2D plots of the soliton's amplitude at different time variables: t = 0.2, 2.2, 2.4, 2.6.The evolution of the soliton over time reveals the dynamic nature of the dark and singular components.The soliton undergoes periodic changes in amplitude, with the dark region appearing as a dip in amplitude and the singular region characterized by sharp peaks.Figure 1 (e) addresses the effect of power nonlinearity (n) on the dark-singular straddled optical soliton solution.By varying the power nonlinearity variables: n = 0.5, 0.9, 1.5, 2, 2.5, we observe significant changes in the soliton's amplitude and shape.As the power nonlinearity increases, the amplitude of the soliton becomes more pronounced, indicating a stronger interaction between the dark and singular components.Figure 1 (f) illustrates the impact of higher-order dispersion (σ 3 ) on the soliton solution.The higher-order dispersion variables are set to: σ 3 = 1, 2, 3, 4, 5.The results show that higherorder dispersion affects the spreading and localization of the soliton.Increased dispersion leads to broader soliton profiles, whereas lower dispersion maintains a more localized structure.Figure 1 (g) explores the effect of nonlinear dispersion (σ 4 ) on the dark-singular straddled optical soliton solution.The nonlinear dispersion variables are: σ 4 = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9.The plots demonstrate that nonlinear dispersion significantly influences the soliton's stability and amplitude modulation.Higher nonlinear dispersion values enhance the soliton's robustness and intensity.The darksingular straddled optical soliton solution is also analyzed with respect to several key parameter 
variables: k = 1, a = 1, b = 1, c1 = 1, c2 = 1, c3 = 1, σ1 = 1, σ2 = 1, σ9 = 1. These parameters are critical in defining the soliton's characteristics and behavior. The comprehensive analysis of these parameters provides insights into how different physical factors contribute to the formation and evolution of the dark-singular straddled optical soliton. In addition to the surface, contour, and 2D plots, Figure 1 also includes the modulus of the dark-singular straddled optical soliton solution.
The modulus representation emphasizes the amplitude of the soliton, providing a clear visualization of its intensity distribution.The modulus plot is particularly useful for identifying regions of maximum and minimum amplitude, and for understanding the overall energy distribution within the soliton.The detailed analysis presented in Figure 1 highlights the complex and dynamic nature of the dark-singular straddled optical soliton solution.By examining the effects of time, power nonlinearity, higher-order dispersion, and nonlinear dispersion, we gain a comprehensive understanding of how these factors influence the soliton's behavior.The surface plot, contour plot, and 2D plots provide valuable insights into the soliton's structure, evolution, and stability.The findings from this study contribute to the broader understanding of optical solitons and their potential applications in photonics and optical communications.
Conclusion
The dispersive concatenation model with power-law nonlinearity was integrated in the current research, leading to dark-singular straddled optical solitons. The tanh-coth integration technique has enabled this retrieval. The current paper paves the way for further investigations along this avenue. This model will be addressed further along with additional issues such as bifurcation analysis, numerical studies using Laplace-Adomian decomposition and the variational iteration approach, and studying the model with differential group delay and with dispersion-flattened fibers, among several other features [18][19][20][21][22][23]. The results of such research activities will be disseminated and should yield a wider perspective on this model. Those results are currently awaited.
Figure 1. Profile of a dark-singular straddled optical soliton: (a) surface plot; (b) contour plot; (c) 2D plot setting the time variable t = 0; (d) 2D plot with the time variable t set at various intervals; (e) 2D plot with the effect of power nonlinearity; (f) 2D plot with the effect of higher-order dispersion; (g) 2D plot with the effect of nonlinear dispersion.
"Physics"
] |
Electrocatalytic Semihydrogenation of Alkynes with [Ni(bpy)3]2+
Electrifying the production of base and fine chemicals calls for the development of electrocatalytic methodologies for these transformations. We show here that the semihydrogenation of alkynes, an important transformation in organic synthesis, is electrocatalyzed at room temperature by a simple complex of earth-abundant nickel, [Ni(bpy)3]2+. The approach operates under mild conditions and is selective toward the semihydrogenated olefins with good to very good Z isomer stereoselectivity. (Spectro)electrochemistry supports that the electrocatalytic cycle is initiated in an atypical manner with a nickelacyclopropene complex, which upon further protonation is converted into a putative cationic Ni(II)–vinyl intermediate that produces the olefin after electron–proton uptake. This work establishes a proof of concept for homogeneous electrocatalysis applied to alkyne semihydrogenation, with opportunities to improve the yields and stereoselectivity.
Electrochemical experiments
All electrochemical experiments were performed outside of the glovebox, in DMF 0. For experiments assessing post-activity electrodes, at the end of a standard electrolysis, the working electrode (carbon foam) was quickly disconnected, taken out of the solution and was not rinsed. The final electrolyte solution was removed from both chambers of the electrolysis cell. The cell, which was not rinsed, was then filled with the same electrolyte containing the alkyne and acid but exempt of Ni. The nonrinsed, previously used working electrode was plunged in the fresh electrolyte and a new electrolysis performed.
Isolation procedure. The electrolysis was performed as described above but with an alteration to the concentrations. At the end of the electrolysis, the cathodic electrolyte (5 mL) was mixed with 5 mL of deionized water. The mixture warms upon water addition and was allowed to cool down to room temperature. The DMF/water phase was extracted three times with 10 mL of pentane. The combined pentane phases were washed with 10 mL of distilled water and directly filtered over a short silica pad (ca. 9 g). Full elution of the product with additional volumes of pentane was followed by thin-layer chromatography. The desired fractions were combined and the solvent removed on a rotary evaporator. The obtained oil was taken up in ca. 5 mL of diethyl ether and the solvent removed on a rotary evaporator to afford the final product as an oily colorless solid (48.2 mg; 53% yield).
Spectroscopic measurements were recorded during chronoamperometry at -1.9 V (vs reference electrode).
UV-SEC experiments were performed in a thin-layer quartz glass cell (1 mm optical path length, 013511 spectroelectrochemical cell kit, ALS Co., Ltd, Japan) using a gold mesh in the optical path, a platinum wire, and the AgNO3/Ag electrode described above as working, counter, and reference electrodes, respectively. The cell was filled with 1 mL of the electrolytic solution under investigation. The spectra were recorded every 10 s for 10 min.
IR-SEC experiments were conducted in an optically transparent thin-layer electrochemical (OTTLE; Department of Chemistry, University of Reading) cell fitted with NaCl windows, equipped with a gold mesh working electrode in the optical path, a Ag wire pseudo-reference electrode, and a platinum mesh counter electrode. 2 The OTTLE cell was filled with 0.3 mL of the solution under investigation. The spectra were recorded every 30 s for 5 min.
Analytical methods
Samples were analyzed by gas chromatography using gas chromatographs equipped with a flame ionization detector (GC-FID). Integrals of the GC-FID and NMR peaks of the substrates and products were normalized to that of the internal standard (mesitylene) for quantification. The carbon balance, alkyne conversion, alkene yield, faradaic efficiency (F.E.) toward alkenes and turnover numbers (TONs) were calculated from the following quantities: C_i(S), C_t(S), C_i(SH2) and C_t(SH2) are the concentrations of alkyne S or alkene SH2 at the beginning of the reaction (C_i) and at the given time (C_t); n(SH2) is the amount of alkene at a given time; n(Ni) is the amount of Ni at the beginning of the reaction; Q_t is the charge passed through the system at a given time; and F is the Faraday constant (96485 C·mol⁻¹).
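A minimal sketch of these quantifications, written directly from the variable definitions above, is given below; the two-electron factor in the faradaic efficiency (from the 2 e⁻/2 H⁺ stoichiometry of alkyne semihydrogenation), the carbon-balance expression neglecting alkane, and the example input numbers are assumptions.

```python
F_CONST = 96485.0  # Faraday constant, C/mol

def quantify(c_i_S, c_t_S, c_t_SH2, n_SH2, n_Ni, Q_t, n_electrons=2):
    """Conversion, yield, carbon balance, faradaic efficiency and TON.

    Follows the variable definitions in the text; the factor of two in the
    faradaic efficiency assumes the 2 e-/2 H+ stoichiometry of alkyne
    semihydrogenation, and the carbon balance neglects any alkane product.
    Concentrations in mol/L, amounts in mol, charge in C.
    """
    conversion = (c_i_S - c_t_S) / c_i_S
    alkene_yield = c_t_SH2 / c_i_S
    carbon_balance = (c_t_S + c_t_SH2) / c_i_S
    faradaic_eff = n_electrons * F_CONST * n_SH2 / Q_t
    ton = n_SH2 / n_Ni
    return conversion, alkene_yield, carbon_balance, faradaic_eff, ton

# Illustrative numbers only (not taken from the article):
print(quantify(c_i_S=0.05, c_t_S=0.005, c_t_SH2=0.040,
               n_SH2=2.0e-4, n_Ni=1.0e-5, Q_t=45.0))
```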
Carbon balance, alkyne conversion, yields in alkenes were quantified from GC-FID measurements, unless otherwise noted.
The reported Z/E ratios of products are evaluated based on the integration of the signals of the olefinic protons in 1 H NMR, unless otherwise noted. In the case of 4-octenes, the Z/E ratio was calculated from the GC-FID chromatograms using two standard isomers. The presence of detectable amounts of alkane products was assessed using GC-FID when reference alkane compounds are available. When reference alkane compounds are not available, GC-MS was used to provide a qualitative evaluation of the presence of alkane.
Analysis of the post-activity electrodes was performed following methods established in literature. 3,4 The working electrode was namely disconnected right after electrolysis and taken out of the solution.
For electron microscopy, the working electrode was dried under a N2 stream overnight; the dried sample was mounted on an aluminum stub with a gold holder and introduced into the electron microscope. Scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) were performed on an S-5500 microscope (Hitachi, Japan) at an acceleration voltage of 30 kV.
For metal trace titration, the working electrode was digested in a 69% HNO3 solution for 15 min at 200 °C using a microwave oven (MARS6, CEM, USA), and the resulting solution was submitted to analysis by inductively coupled plasma mass spectrometry (ICP-MS; ICPMS-2030, Shimadzu, Japan).
Thermodynamic calculation of standard potential
The interconversion of an alkyne S with the corresponding alkene SH2 in a solvent (s), in the presence of an acid (HA), is described by the half-reaction:

S + 2 HA + 2 e− ⇌ SH2 + 2 A−

with the corresponding standard potential E0(S/SH2, HA, s).
In the case of benzoic acid (BA) as the proton source in DMF, the parameters are as follows: Thus: In a first approximation, we do not account here for the homoconjugation of the acid with its corresponding base, although the phenomenon is known and quantified for BA in DMF.
In the case of the hydrogenation of 4-octyne 1 to 4-octene 1H2 in DMF, the parameters are as follows: (1)
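The numerical parameters themselves are not reproduced in this excerpt. As general background for how the acid enters the calculation, the standard thermodynamic relation for a 2 H+/2 e− couple (an assumption consistent with the half-reaction written above, not a value taken from this work) is:

```latex
% Acid dependence of the standard potential for S + 2 HA + 2 e- <=> SH2 + 2 A-,
% assuming a 2 H+/2 e- stoichiometry; E0(S/SH2, H+, s) and pKa(HA, s) are solvent-specific.
E^{0}_{S/SH_2,\,HA,\,s} \;=\; E^{0}_{S/SH_2,\,H^+,\,s} \;-\; \frac{2.303\,R\,T}{F}\,\mathrm{p}K_{a}(HA,\,s)
```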
Electrochemical behavior of Ni
Fig. S1 CVs of Ni.
Calculation of bipyridine released from Ni
The peak current of a reversible CV wave of a freely diffusing species is given by equation 3-1:10

ip,c = 0.446 F S C0 (F ν D / (R T))^(1/2)    (3-1)

where ip,c is the peak current, F is the Faraday constant (96485.3 C·mol−1), S is the surface area of the working electrode, C0 is the bulk concentration of the analyte, D is the diffusion coefficient of the analyte, ν is the scan rate, R is the universal gas constant (8.314 J·K−1·mol−1), and T is the absolute temperature.
From the cathodic peak current of the bpy0/− couple (E1/2 = −2.60 VFc) observed in CVs of bipyridine at different concentrations (Fig. S3), we estimated a diffusion coefficient of bipyridine under our conditions of D(bpy) = 6.08 × 10−5 cm2·s−1. Using that value, the concentration of bipyridine released from Ni upon addition of 1 could be recovered from the cathodic peak current at −2.65 VFc (Fig. S2b). The values are reported in Table S1 and Fig. S4. Table S1. Concentration of released bipyridine from Ni upon addition of 1.
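The back-calculation described above can be illustrated with the short Python sketch below, which first estimates D from a calibration of peak current versus concentration and then recovers the released bipyridine concentration from a measured peak current; the electrode area, scan rate, and currents are hypothetical placeholders, not values from this work.

```python
import numpy as np

# Randles-Sevcik for a reversible one-electron wave (equation 3-1 in the text):
#   ip = 0.446 * F * S * C0 * sqrt(F * D * nu / (R * T))
F, R, T = 96485.3, 8.314, 298.15          # C/mol, J/(K*mol), K
S, nu = 0.071, 0.1                        # electrode area (cm^2) and scan rate (V/s), hypothetical

# 1) Estimate D from peak currents measured at known bpy concentrations (hypothetical values).
concs = np.array([0.5e-3, 1.0e-3, 2.0e-3]) * 1e-3    # mol/cm^3 (i.e., 0.5-2.0 mM)
ip = np.array([8.0e-6, 16.1e-6, 31.9e-6])            # A
slope = np.polyfit(concs, ip, 1)[0]                  # A per (mol/cm^3)
D = (slope / (0.446 * F * S)) ** 2 * R * T / (F * nu)
print(f"D(bpy) ~ {D:.2e} cm^2/s")

# 2) Recover the concentration of released bpy from a measured peak current.
ip_meas = 5.0e-6                                     # A, hypothetical
C0 = ip_meas / (0.446 * F * S * np.sqrt(F * D * nu / (R * T)))
print(f"released bpy ~ {C0 * 1e6:.2f} mM")           # mol/cm^3 -> mM
```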
iR-corrected potentials applied in electrolysis
In our experimental electrolytic setup (a two-compartment cell separated by a P3 frit), applying iR compensation during electrolysis led to oscillating behavior. For that reason, electrolyses were performed without compensating for the ohmic drop. Nevertheless, an estimate of the potentials corrected for ohmic drop can be obtained from the cell resistance measured prior to electrolysis (Rcell), the potential applied during electrolysis (Eapp), and the current averaged over the electrolysis (<i>).
An estimate of the iR-corrected potential is then given by Eapp,corr = Eapp − <i> × Rcell. The corresponding values are reported in Table S3. We note that these Eapp,corr values represent conservative negative estimates of the potentials applied at initial time, since the magnitude of the current decays as the electrolysis proceeds and the alkyne is consumed.

The electrolysis in an electrolyte saturated with H2 but free of acid (BA), under conditions otherwise identical to our standard ones, shows only minor conversion of the alkyne 1 (3.4%), with olefin evolution below trace levels (Fig. S16). This result further confirms that, in our system, the conversion process is an electrocatalytic hydrogenation and not an electrochemically assisted hydrogenation.

At the end of an electrolysis run under our standard conditions for 45 min (Fig. S18a, mauve area), the working electrode (carbon foam) was quickly disconnected, taken out of the solution, and not rinsed. This procedure is aimed at preventing the re-dissolution of any Ni deposits that can occur in the absence of a (cathodic) applied potential and lead to false-negative rinse tests.4 The final electrolyte solution was removed from both chambers of the electrolysis cell (also not rinsed) and replaced by fresh electrolyte containing 1 and BA but free of Ni. The non-rinsed, previously used working electrode was plunged into the fresh electrolyte and a new electrolysis was performed. This second electrolysis produced only traces of conversion (3.9% at 45 min; Fig. S18a, grey area). A similar experiment stopping the first electrolysis at 15 min (our shortest estimate of the time needed to reach full alkyne conversion) produced a similar result (Fig. S18b), although with some residual activity in the second electrolysis (at a rate at least 20 times slower), likely due to active Ni complex present in the remainder of the initial electrolyte carried over by the non-rinsed electrode and cell.
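As a small numerical illustration of the ohmic-drop estimate defined above, the sketch below evaluates Eapp,corr for hypothetical values of Rcell, Eapp, and the averaged current (none of these numbers are taken from Table S3).

```python
# Hypothetical values; illustrates Eapp,corr = Eapp - <i> * Rcell only.
R_cell = 120.0        # ohm, cell resistance measured before electrolysis (hypothetical)
E_app = -2.30         # V vs Fc+/Fc, potential applied during electrolysis (hypothetical)
i_avg = -2.5e-3       # A, cathodic current averaged over the electrolysis (hypothetical)

E_app_corr = E_app - i_avg * R_cell
print(f"E_app,corr = {E_app_corr:.2f} V")   # -2.00 V: less negative than E_app, as expected
```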
Without and with applied potential
We also note that no induction period is observed prior to activity during electrocatalytic runs with Ni (see Fig. S11 and S15); such an induction period would have indicated degradation of the molecular complex into another species responsible for catalysis.
Because the electrode was not rinsed, such results can indicate either a Ni deposit strongly adsorbed on the electrode or Ni species dissolved in the remaining film of electrolyte covering the electrode. Although we cannot conclude on the exact speciation of these deposits (adsorbed molecular complexes, clusters, or nanoparticles), SEM pictures of an electrode following an electrolysis under our standard conditions (including [Ni]) and disconnected as described above (Fig. S20) do not show evidence of Ni deposits, which suggests that any such deposits would be of small size (<10 nm).
Collectively, these results demonstrate that, even if Ni deposits may form during electrolysis, these deposits are not the species responsible for electrocatalytic alkyne semihydrogenation under our conditions.

The IR-SEC experiment conducted with Ni and 1-phenyl-1-propyne 2 (used to avoid overlap with the solvent signature) produces a band at 1924 cm−1 (Fig. S40b,d), which is not observed in the absence of 2. This value is intermediate between those encountered for unsaturated C−C bond stretching in disubstituted alkynes (2200−2250 cm−1 region) and in 1,2-disubstituted alkenes (1600−1650 cm−1 region).14

An increase is also observed (Fig. S10b); this increase suggests that the reduced Ni hydride is also competent to transfer the hydride to the C−C triple bond. The electrocatalytic activity toward the alkyne is further assessed by electrolysis of 1 with TFA (Ni/1/TFA 1:10:100), which produces the alkene 1H2 in 38% yield and 3.8 TONs (Table S2). This result confirms that TFA is a suitable acid for alkyne semihydrogenation electrocatalyzed by Ni. Altogether, a mechanism for alkyne semihydrogenation shuttling via a Ni hydride complex is thus more plausible with strong acids such as TFA (Fig. S41b).
Additional mechanistic discussion
The observation of an electrocatalytic wave developing from −1.80 VFc at high excess of BA vs 2 (Ni/2/BA 1:5:1−50; Fig. S8a) suggests that a Ni hydride mechanism may also operate with that acid, although at potentials more cathodic than those of the mechanism involving a nickelacyclopropene intermediate. | 2,700.6 | 2022-02-22T00:00:00.000 | [
"Chemistry"
] |
Assessment of the evidence yield for the calibrated PP3/BP4 computational recommendations
Purpose: To investigate the number of rare missense variants observed in human genome sequences by ACMG/AMP PP3/BP4 evidence strength, following the calibrated PP3/BP4 computational recommendations. Methods: Missense variants from the genome sequences of 300 probands from the Rare Genomes Project with suspected rare disease were analyzed using computational prediction tools able to reach PP3_Strong and BP4_Moderate evidence strengths (BayesDel, MutPred2, REVEL, and VEST4). The numbers of variants at each evidence strength were analyzed across disease-associated genes and genome-wide. Results: From a median of 75.5 rare (≤1% allele frequency) missense variants in disease-associated genes per proband, a median of one reached PP3_Strong, 3–5 PP3_Moderate, and 3–5 PP3_Supporting. Most were allocated BP4 evidence (median 41–49 per proband) or were indeterminate (median 17.5–19 per proband). Extending the analysis to all protein-coding genes genome-wide, the number of PP3_Strong variants increased approximately 2.6-fold compared to disease-associated genes, with a median per proband of 1–3 PP3_Strong, 8–16 PP3_Moderate, and 10–17 PP3_Supporting. Conclusion: A small number of variants per proband reached PP3_Strong and PP3_Moderate in 3,424 disease-associated genes, and though not the intended use of the recommendations, also genome-wide. Use of PP3/BP4 evidence as recommended from calibrated computational prediction tools in the clinical diagnostic laboratory is unlikely to inappropriately contribute to the classification of an excessive number of variants as Pathogenic or Likely Pathogenic by ACMG/AMP rules.
INTRODUCTION
Genetic testing identifies many variants of uncertain significance (VUS), of which the majority of coding variants are missense (non-synonymous). In the 2015 recommendations, in silico evidence (PP3 and BP4) was capped at "Supporting" for or against pathogenicity.2 Furthermore, no explicit recommendations concerning the prediction tools or thresholds to be used were specified, enabling non-standardized application of criteria and resulting in inconsistencies in variant classification between clinical diagnostic laboratories.3
Recently, Pejaver et al. (2022) refined the use of computational prediction tools to provide evidence of pathogenicity using the Bayesian adaptation of the ACMG/AMP framework.4,5
For 13 computational prediction tools frequently used in clinical workflows, evidence-based calibrated thresholds were introduced corresponding to "Supporting," "Moderate," "Strong," and "Very Strong" PP3/BP4 evidence strengths, and an indeterminate range was also defined. These thresholds demonstrated that the initial framework underweighted evidence from computational prediction tools, as many had the ability to provide evidence beyond "Supporting" strength.
Since the release of the PP3/BP4 recommendations, we have received questions from users regarding the key steps to implementation, calling for practical guidance on the intended use of the PP3/BP4 recommendations for variant curation in disease-associated genes (see Box 1). In particular, concerns have arisen due to the impression that an excessive number of variants are reaching PP3_Strong. Here, by demonstrating the level of PP3/BP4 evidence allocated to rare missense variants in the genome sequences of patients with rare disease, we specifically aimed to address these concerns.

Can the calibration of these methods be trusted?
- PP3/BP4 are empirically calibrated evidence codes
- Confounders that could be addressed directly were eliminated
- As with any approach, it is expected that the evidence strength provided will be too high or too low for some variants when applying the calibrated PP3/BP4 codes
- The calibrated codes have been extensively validated, including in this study

What are some of the limitations to the calibration?
- Variants used for the calibration may not be representative of novel variants to be classified
- Computational prediction tools were assumed not to have had a major role in the classification of variants used in the calibration
- The calibration provides the evidence strength, on average, across the thousands of genes assessed; however, it is a probability that will vary across genes

Will more calibrations need to be performed in the future?
- New and revised methods will require independent calibration

More detailed answers to these questions are provided in the Supplemental Material.
METHODS
Study participants and data.
and genome-wide. These methods were also repeated for missense variants according to the VEP "most severe consequence" across all transcripts (see Supplemental Material).

Statistical analyses. Proportions between two groups were compared with two-tailed binomial tests with Bonferroni correction for multiple testing. Bootstrap resampling with replacement (1,000 iterations) was performed to provide a 95% confidence interval (CI) for the mean.
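The following Python sketch illustrates the percentile-bootstrap confidence interval described above (1,000 resamples with replacement); the per-proband counts used here are simulated placeholders, not study data.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci_mean(values, n_boot=1000, alpha=0.05):
    """Percentile bootstrap CI for the mean (resampling with replacement)."""
    values = np.asarray(values)
    boot_means = np.array([
        rng.choice(values, size=values.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return values.mean(), lo, hi

# Placeholder example: simulated number of PP3_Strong variants per proband (300 probands)
counts = rng.poisson(lam=1.0, size=300)
print(bootstrap_ci_mean(counts))
```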
RESULTS
Detection of missense variants in disease-associated genes. (Figure 1B and Figure S1). ClinVar provides classifications for 54-72% of the unique variants with PP3_Strong evidence per prediction tool, of which 12-29% are currently reported as P/LP, 63-
Using a less stringent AF threshold of ≤5% resulted in a subtle increase in variants with PP3 evidence in disease-associated genes (median = 1 PP3_Strong, median = 4-6 PP3_Moderate, median = 5-6 PP3_Supporting) (Table S3). Using the VEP "most severe consequence" across all transcripts to detect variants, a rare disease analysis approach that is sometimes used to increase the detection of potentially deleterious missense variants in alternative transcripts versus using only MANE Select transcripts, we also did not see many more variants reaching PP3_Supporting-Strong in disease-associated genes (median = 1 PP3_Strong, median = 3-5 PP3_Moderate, median = 4-5 PP3_Supporting) (Table S4).
"-" indicates that the given prediction tool is not able to provide BP4 evidence of this strength. .
DISCUSSION
The use of computational prediction tools to provide evidence of pathogenicity and benignity within the ACMG/AMP framework was recently refined by Pejaver et al.,4
and certain prediction tools were found capable of reaching "Strong" and "Very Strong" evidence for PP3 and BP4 codes, respectively. These changes were expected to have important implications for the final classification of missense variants in the clinical diagnostic setting, given that previously the codes were capped at "Supporting" and could only be applied if "Multiple" lines of computational evidence support a deleterious effect on the gene or gene product.2 Through various scientific meetings and interactions following release of the recommendations, concerns were raised due to the impression that an excessive number of PP3_Strong variants are generated. To explore these concerns, we assessed the observed number of rare missense variants by PP3/BP4 evidence strength in the genome sequences of 300 research participants with rare disease.
In our analyses, at ≤1% AF, a standard threshold in rare disease analysis, we found a median of one PP3_Strong variant per individual (range 0-4) across ~950 of over .
To better understand why users reported an excess of PP3_Strong variants, we also extended our analyses to more frequent variants up to 5% AF, the threshold for stand-alone evidence of benignity in the ACMG/AMP guidance, and to variants that are missense on alternative transcripts (VEP "most severe consequence"). These analyses did not result in a considerable increase in the number of PP3_Strong variants. Furthermore, though Pejaver et al.
made no recommendation about running computational prediction tools genome-wide, given that the thresholds are calibrated for disease-associated genes only, we applied the same thresholds to variants genome-wide. We found an approximately 2.6-fold increase in the number of PP3_Strong variants genome-wide compared to within disease-associated genes only, consistent with the genome having ~5-fold as many genes as covered by ACMG/AMP classification rules and the prior for pathogenicity genome-wide being ~5-fold lower (~1%)15,18 than for disease-associated genes (~4.5%).
Importantly, deleterious in silico prediction does not equate to pathogenicity and, in the absence of additional evidence, one line of "Strong" evidence from the PP3 code classifies a variant as a VUS in the ACMG/AMP framework. In the case that a variant does reach P or LP classification in combination with other codes, there is a 99% or 90% posterior probability of pathogenicity, respectively, which implies that 1-10% of variants may not actually be causative of disease.
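For context on the posterior probabilities mentioned above, the sketch below applies the Bayesian point-odds combination from the adaptation of the ACMG/AMP framework cited in the text (prior of 0.10 and odds of pathogenicity on an exponential scale anchored at 350 for Very Strong, per Tavtigian et al.); the specific evidence combinations shown are illustrative only.

```python
# Sketch of the Bayesian ACMG/AMP evidence combination cited in the text.
ODDS = {
    "supporting":  350 ** (1 / 8),   # ~2.08
    "moderate":    350 ** (1 / 4),   # ~4.33
    "strong":      350 ** (1 / 2),   # ~18.7
    "very_strong": 350.0,
}
PRIOR = 0.10  # prior probability of pathogenicity in disease-associated genes

def posterior(evidence):
    """Combine independent pathogenic evidence items (list of strength names)."""
    combined_odds = 1.0
    for strength in evidence:
        combined_odds *= ODDS[strength]
    return combined_odds * PRIOR / ((combined_odds - 1) * PRIOR + 1)

# PP3_Strong alone stays below the Likely Pathogenic threshold (0.90), i.e. VUS:
print(round(posterior(["strong"]), 3))                        # ~0.675
# An illustrative combination with additional codes can cross 0.90:
print(round(posterior(["strong", "moderate", "supporting"]), 3))
```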
Limited availability of functional data means that it is often necessary to turn to computational in silico prediction tools for evidence of deleteriousness. The American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG/AMP) provide a sequence variant classification (SVC) framework to combine distinct lines of evidence of pathogenicity or benignity of varying strengths to reach a final variant classification (Benign [B], Likely Benign [LB], VUS, Likely Pathogenic [LP], or Pathogenic [P]).1
Figure 1. A. Rare (≤1% AF) missense variants in disease-associated genes per proband by PP3 evidence strength for analyzed computational prediction tools. B. Rare (≤1% AF) missense variants in disease-associated genes with PP3 evidence per proband by evidence strength and reported mode of inheritance (AD-only and AR-only) for analyzed computational tools. Boxplots correspond to the first, second, and third quartile of data, with whiskers denoting 1.5 × IQR. Outliers are displayed as individual points.
The PP3/BP4 codes should be used within the framework of the ACMG/AMP recommendations, including the updates that have been made by ClinGen, to determine the pathogenicity of a variant. Code combination does, however, require great care and there are a number of important caveats. In particular, (meta)predictors may use data partially captured by other codes, notably key domains and critical residues and population AF, increasing the risk of double-counting of evidence (see Supplemental Material for further recommendations on code combination). The PP3/BP4 calibration by Pejaver et al. does have limitations. It was performed on variants classified in the past several years that were not used in the training sets of the analyzed prediction tools and may be non-representative of novel variants to be classified. Moreover, computational prediction tools were assumed not to have played a major role in the classification of the variants used for the calibration. Given these limitations, we appeal to
Box 1. Key steps in implementing the PP3/BP4 missense variant recommendations

How should PP3/BP4 evidence be used for missense variants?
Genome sequencing (GS) data were obtained from the Rare Genomes Project (RGP) at the Broad Institute of MIT and Harvard. Participant demographics are displayed in Table S1. Sequencing was performed on DNA purified from blood by the Broad Institute Genomics Platform on an Illumina sequencer to 30x average depth. Raw sequence reads were aligned to the GRCh38 reference genome. Variants were called with GATK version 4.1.8.0 7 in the form of single nucleotide variants (SNVs) and small insertions/deletions (indels). Variants were filtered at the site level with GATK Variant Quality Score Recalibration (VQSR).

Missense variant extraction and annotation. Missense variants identified by the Ensembl Variant Effect Predictor (VEP) 8 using MANE Select transcripts 9 were extracted from the GS data. Only variants with genotype quality ≥40, depth ≥10, and allele balance ≥0.2 were retained for analyses. Allele frequency (AF) thresholds of ≤5% and ≤1% global and population-max ("popmax") AF in gnomAD v3.1.2 genomes were applied (the highest allele frequency among non-bottlenecked populations).10 Precomputed scores from the four in silico (meta)predictors able to reach PP3_Strong and BP4_Moderate in the Pejaver et al. calibration were included in the analysis: BayesDel (without minor allele frequency), MutPred2, REVEL, and VEST4. Disease-associated genes were defined from the GenCC Database (last accessed Jul 21, 2023) (3,424 genes: 1,004 autosomal dominant only [AD-only], 1,903 autosomal recessive only [AR-only], and 517 other [includes genes that are both AD and AR]).16

The GS dataset included 300 probands with rare disease. Across protein-coding genes genome-wide, a median of 8,781.5 variants per proband (range 8,383-10,616) passing QC thresholds were detected in MANE Select transcripts. Applying a ≤1% AF threshold in the gnomAD v3 genomes dataset, we found 75,384 unique missense variants across 15,566 genes (median 321 per proband, range 244-847). Within GenCC Moderate, Strong, and Definitive disease-associated genes, the number of unique variants dropped to 17,789 across 2,899 genes, and a median of 75.5 variants per proband (range 53-186). Variant counts following each step in QC and AF filtering are displayed in Table S2.

PP3/BP4 evidence strength of missense variants in disease-associated genes. A median of one variant (mean 0.8-1) per proband reached PP3_Strong per analyzed prediction tool, 3-5 variants (mean 3.4-4.9) reached PP3_Moderate, and 3-5 (mean 3.6-5.2) reached PP3_Supporting (Table 1).
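A minimal sketch of the genotype-quality and allele-frequency filtering logic described above is shown below; the record structure and field names are hypothetical stand-ins, not the actual VEP/VCF schema used in the study.

```python
# Hypothetical variant records; field names are illustrative, not the real VCF/VEP schema.
variants = [
    {"gq": 60, "dp": 25, "ab": 0.48, "af_popmax": 0.0002, "consequence": "missense_variant"},
    {"gq": 35, "dp": 30, "ab": 0.50, "af_popmax": 0.0001, "consequence": "missense_variant"},
    {"gq": 80, "dp": 12, "ab": 0.45, "af_popmax": 0.0300, "consequence": "missense_variant"},
]

def passes_filters(v, af_max=0.01):
    """QC and allele-frequency filters described in the Methods."""
    return (
        v["consequence"] == "missense_variant"
        and v["gq"] >= 40             # genotype quality
        and v["dp"] >= 10             # read depth
        and v["ab"] >= 0.2            # allele balance
        and v["af_popmax"] <= af_max  # gnomAD popmax AF (<=1% here, <=5% in the secondary analysis)
    )

kept = [v for v in variants if passes_filters(v)]
print(len(kept))  # -> 1
```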
Table 1.
Number of rare (≤1% AF) missense variants in disease-associated genes per proband by ACMG PP3/BP4 evidence strength within MANE Select transcripts | 2,685.2 | 2024-03-07T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Anomalous Magnetorheological Response for Carrageenan Magnetic Hydrogels Prepared by Natural Cooling
The effect of the cooling rate on the magnetorheological response was investigated for magnetic hydrogels consisting of carrageenan and carbonyl iron particles with a concentration of 50 wt.%. For magnetic gels prepared via natural cooling, the storage moduli at 0 and 50 mT were 3.7 × 10⁴ Pa and 5.6 × 10⁴ Pa, respectively, and the change in the modulus was 1.9 × 10⁴ Pa. For magnetic gels prepared via rapid cooling, the storage moduli at 0 and 50 mT were 1.2 × 10⁴ Pa and 1.8 × 10⁴ Pa, respectively, and the change in the modulus was 6.2 × 10³ Pa, which was 1/3 of that for the magnetic gel prepared by natural cooling. The critical strains, where G′ is equal to G″ in the strain dependence of the storage modulus, for magnetic gels prepared by natural cooling and rapid cooling were 0.023 and 0.034, respectively, indicating that the magnetic gel prepared by rapid cooling has a hard structure compared to that prepared by natural cooling. In contrast, the change in the storage modulus at 500 mT for the magnetic gel prepared by rapid cooling was 1.6 × 10⁵ Pa, which was 2.5 times higher than that prepared by natural cooling. SEM images revealed that many small aggregations of the carrageenan network were found in the magnetic gel prepared by natural cooling, and continuous phases of the carrageenan network with large sizes were found in the magnetic gel prepared by rapid cooling. It was revealed that magnetic particles in the magnetic gel prepared by rapid cooling can move and form a chain structure at high magnetic fields by breaking the restriction from the continuous phases of carrageenan.
Introduction
Stimuli-responsive soft materials [1][2][3][4][5] change their physical properties in response to stimuli such as light, temperature, and pH. Magnetic gels are among the stimuli-responsive soft materials, and their physical properties change in response to magnetic fields. Magnetic soft materials have attracted great attention as next-generation actuators because their physical properties change instantaneously and dramatically when a magnetic field is applied [6][7][8][9][10][11]. Magnetic hydrogels made of polysaccharides have been widely investigated thus far, and many functions and applications have been reported, such as recoverable adsorbents [12], drug delivery [13], and hyperthermia treatment [14]. These applications take advantage of the bio- and eco-friendly properties that are derived from natural products. In other words, magnetic gels made of natural products are harmless when they are taken into the human body and do not cause pollution when diffused into the environment. When a magnetic field is applied to magnetic gels, the elastic modulus becomes higher than that without the magnetic field. This is called the magnetorheological effect (MR effect). The underlying mechanism is that magnetic particles come into contact with each other and form a chain-like structure.
Because magnetic particles strongly interact with the polymer network, they are subjected to compressive and tensile resistance forces from the polymer network when they are displaced in the gel. In other words, as this resistance force increases, it becomes more difficult for the magnetic particles to move within the gel, and a chain structure of magnetic particles is not formed. As a result, the amplitude of the MR effect is reduced. Actually, the MR effect decreases as the elastic modulus of the matrix increases in magnetic gels [11,15]. Magnetic gels with a matrix of natural polymers, such as carrageenan and agar gels, show a significant change in elastic modulus with the magnetic field compared to those with a matrix of synthetic polymers [16]. This suggests that the network structure of natural polymers allows magnetic particles to move more easily than that of synthetic polymers. This feature is considered to be due to the cross-linking points of natural polymers formed by hydrogen (physical) bonds. Since the force that magnetic particles receive from the magnetic field is much stronger than the binding force, the cross-linking points are broken when the magnetic particles move.
In our previous investigations, carried out at a low magnetic field and a low concentration of magnetic particles, it was shown that the change in storage modulus for carrageenan magnetic hydrogels reached a maximum at around 2.0 wt.% [17]. This is anomalous behavior because the change in modulus normally increases with decreasing polymer concentration. The decrease in the change in storage modulus is caused by an increase in the strength of the carrageenan network, which was proved by an increase in the critical strain. Conversely, we can estimate the strength of the carrageenan network from the critical strain.
In addition to this, there is some research describing how the physical properties of polysaccharide gels are strongly affected by the gelation speed, i.e., the cooling rate. A stiff and heterogeneous gel structure with a high storage modulus is formed at a low cooling rate, and meanwhile a weak and homogeneous gel structure with a low storage modulus is formed at a high cooling rate [18]. It was also reported that the size of the junction zone (cross-linking point) decreased when a carrageenan solution was cooled rapidly [19][20][21]. The changes in the structure of cross-linking points or the strength of cross-links might affect the magnetic response of magnetic gels. In this study, we investigated the effect of the cooling rate on the viscoelastic properties of carrageenan magnetic gels and their magnetic responses to clarify the influence of cross-linking structures on the mobility of magnetic particles within the gel.

Results and Discussion

It was revealed that the storage moduli for magnetic gels prepared at different cooling rates exhibited completely different magnetic field responses. It can be considered that this result reflects the difference in the network structure depending on the cooling rate, as reported in the literature [18]. Figure 1a,b show the strain γ dependence of the storage modulus G′ and loss modulus G″ at 0 and 50 mT for magnetic gels prepared by natural cooling and rapid cooling, respectively. At 0 mT, regions of linear viscoelasticity were seen at strains below 1.7 × 10⁻³ and 2.4 × 10⁻³ in magnetic gels prepared by natural cooling and rapid cooling, respectively. The strains at which G′ intersected G″ were 0.023 and 0.034 for magnetic gels prepared by natural cooling and rapid cooling, respectively. At γ = 1, the storage moduli for these magnetic gels showed the same value. At 50 mT, G′ showed a linear viscoelastic region at strains below 1.3 × 10⁻³ and 2.4 × 10⁻³ for magnetic gels prepared by natural cooling and rapid cooling, respectively. G′ crossed G″ at a strain of 0.016 for the naturally cooled magnetic gel and 0.052 for the rapidly cooled magnetic gel. An interesting behavior was observed at γ = 1, where the storage modulus for the rapidly cooled magnetic gel was higher than that of the naturally cooled one, which was the opposite of the result at low strains. This might be due to a structural change at large strains enabling the movement of magnetic particles. Figure 1c shows the strain dependence of G′ and G″ at 0 mT for carrageenan gels without magnetic particles (abbreviated as "carrageenan gel" hereafter) prepared by natural cooling and rapid cooling. Linear viscoelastic regions were observed at strains below 2.4 × 10⁻³ and 1.7 × 10⁻³ for carrageenan gels prepared by natural cooling and rapid cooling, respectively. The strains at which G′ crossed G″ were 0.014 and 0.013 for carrageenan gels prepared by natural cooling and rapid cooling, respectively. The parameters G′(γ = 1)/G′(γ = 10⁻⁴), representing the amplitude of the Payne effect [22], were 2.9 × 10⁻⁴ and 3.2 × 10⁻⁴ for carrageenan gels prepared by natural cooling and rapid cooling, respectively, and were almost the same. Figure 2a shows the storage modulus at 0 mT in the linear viscoelastic regime (γ = 10⁻⁴) for magnetic gels and carrageenan gels prepared by natural cooling and rapid cooling.
The storage modulus for the magnetic gel prepared by natural cooling was 3.7 ± 0.3 × 10⁴ Pa, which is 2.3 times higher than that prepared by rapid cooling (1.6 ± 0.1 × 10⁴ Pa). The storage modulus for the carrageenan gel prepared by natural cooling was 2.4 ± 0.2 × 10⁴ Pa, which is 1.8 times higher than that prepared by rapid cooling (1.3 ± 0.0 × 10⁴ Pa). Accordingly, this indicates that a secondary structure of the carrageenan network with low elasticity was formed by rapid cooling. A similar phenomenon has been observed for gellan gum [18]. The storage modulus for the naturally cooled magnetic gel was 1.5 times higher than that for the naturally cooled carrageenan gel. The storage modulus for the rapidly cooled magnetic gel was almost the same as that of the rapidly cooled carrageenan gel.

The storage moduli at 0 mT for magnetic gels, G′, can be written following Einstein's equation [23]: G′ = G′matrix(1 + 2.5φ), where G′matrix is the storage modulus of the matrix at 0 mT and φ is the volume fraction of magnetic particles. Since φ was constant at 0.12 in this experiment, the storage modulus for the magnetic gel should be 1.3 times higher than that of the matrix irrespective of the cooling rate. The storage moduli for magnetic gels prepared by natural cooling and rapid cooling were 1.6 and 1.2 times higher than those for carrageenan gels, respectively. This suggests that aggregations of magnetic particles were made in the naturally cooled magnetic gel, and fewer aggregations were made in the rapidly cooled magnetic gel.

Figure 2b shows the storage modulus at 50 mT in the linear viscoelastic regime (γ = 10⁻⁴) for magnetic gels prepared by natural cooling and rapid cooling. The storage modulus for the naturally cooled magnetic gel was 5.6 ± 0.4 × 10⁴ Pa, which is 2.7 times higher than that prepared by rapid cooling (2.1 ± 0.2 × 10⁴ Pa). Figure 2c shows the change in the storage modulus for magnetic gels prepared by natural cooling and rapid cooling. The change in the storage modulus is the difference between the storage moduli at 0 mT (G′0) and at 50 mT (G′50), ΔG′ = G′50 − G′0. The change in the storage modulus was 1.9 ± 0.1 × 10⁴ Pa for the naturally cooled magnetic gel and 5.3 ± 1.0 × 10³ Pa for the rapidly cooled one. Thus, an interesting result was obtained here, where the cooling rate affected the change in the storage modulus by the magnetic field; the modulus change for the naturally cooled magnetic gel was 3.6 times higher than that of the rapidly cooled one. The change in the storage modulus for the naturally cooled magnetic gel was consistent with the value we previously reported [17].
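As a quick numerical check of the Einstein relation quoted above, the short Python sketch below compares the predicted reinforcement factor (1 + 2.5φ) with the modulus ratios reported in the text; it only reproduces the arithmetic and is not part of the original analysis.

```python
# Expected reinforcement from Einstein's equation vs measured modulus ratios (values from the text).
phi = 0.12                                   # volume fraction of carbonyl iron particles
expected_ratio = 1 + 2.5 * phi               # G'/G'_matrix predicted for non-interacting particles
print(f"predicted G'/G'_matrix = {expected_ratio:.2f}")   # 1.30

measured = {
    "natural cooling": 3.7e4 / 2.4e4,        # magnetic gel vs carrageenan gel at 0 mT (~1.5)
    "rapid cooling":   1.6e4 / 1.3e4,        # ~1.2
}
for label, ratio in measured.items():
    print(f"{label}: measured ratio = {ratio:.2f}")
```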
Figure 3 shows the critical strain at 0 mT, γc, for magnetic gels and carrageenan gels prepared by natural cooling and rapid cooling. The critical strain is the yield point at which G′ and G″ intersect in Figure 1, which is the onset of a fluid-like response and has been related to the failure of the network structure [24][25][26]. Accordingly, it can be understood that the critical strain indicates the mechanical strength of the gel network. The critical strains were 0.023 ± 0.001 and 0.034 ± 0.004 for magnetic gels prepared by natural cooling and rapid cooling, respectively. This indicates that the network structure of the rapidly cooled magnetic gel is stronger than that of the naturally cooled one. The critical strain for the naturally cooled carrageenan gel was almost the same as that of the rapidly cooled one (~0.012). This strongly indicates that magnetic particles strongly interacted with the carrageenan matrix in the rapidly cooled magnetic gel; it should be noted that rapid cooling does not make the carrageenan network reinforce itself. For the rapidly cooled magnetic gel, it can be considered that the strong interaction between magnetic particles and the carrageenan matrix reduced the change in the storage modulus by the magnetic field.

Figure 4a exhibits the magnetic field dependence of the storage modulus at a strain of 10⁻⁴ for magnetic gels prepared by natural cooling and rapid cooling. The storage modulus for the magnetic gel prepared by natural cooling at 0 mT was 5.2 × 10⁴ Pa and it increased with the magnetic field to 1.1 × 10⁵ Pa at 500 mT.
On the other hand, the storage modulus for the rapidly cooled magnetic gel at 0 mT was 1.3 × 10⁴ Pa and it increased with the magnetic field to 1.8 × 10⁵ Pa at 500 mT. An interesting behavior was that the differential coefficient dG′/dB for the magnetic gel prepared by natural cooling was high at low magnetic fields and decreased with the magnetic field, which was the opposite of the behavior of typical magnetic soft materials, as seen in the rapidly cooled magnetic gel.
Figure 4b shows the magnetic field dependence of the change in the storage modulus at a strain of 10⁻⁴ for magnetic gels prepared by natural cooling and rapid cooling. The change in the storage modulus for the naturally cooled gel demonstrated a curve with a steep slope at low magnetic fields and a gentle slope at high magnetic fields, as explained in Figure 4a, and reached 6.3 × 10⁴ Pa at 500 mT. The change in the storage modulus for the rapidly cooled magnetic gel demonstrated a curve with a gentle slope at low magnetic fields and a steep slope at high magnetic fields, and reached 1.6 × 10⁵ Pa at 500 mT. In other words, the change in the storage modulus at 500 mT for the rapidly cooled magnetic gel was 2.5 times larger than that of the naturally cooled one. It is very interesting that such a clear difference was observed for these magnetic gels even though the magnetic particle concentration and magnetic field strength were exactly the same. As described in the Introduction, it is usual that the amplitude of the MR effect decreases when increasing the storage modulus at 0 mT, as in the MR response of magnetic elastomers with various plasticizer contents [11]. The influence of the cross-linking density on the MR response was reported several years ago [15]; the amplitude of the MR effect decreased with the cross-linking density for polymethyl siloxane gels cross-linked by boric acid. The MR effect of self-healing and printable elastomers, whose elastic modulus changed with temperature, increased with decreasing elastic modulus [27]. Such decreases in the MR response are caused by the reduction in the mobility of magnetic particles within the polymer network. Accordingly, the anomalous response is produced by the special structure of the carrageenan network, which varies depending on the cooling rate.

Figure 5a demonstrates the magnetic field response of the storage modulus at 50 mT and γ = 10⁻⁴ for magnetic gels prepared by natural cooling and rapid cooling. For both magnetic gels, the storage modulus change was synchronized with the on/off switching of the magnetic field. The storage modulus for the magnetic gel prepared by natural cooling at 0 mT was 4.6 × 10⁴ Pa and it increased to 5.8 × 10⁴ Pa at 50 mT. On the other hand, the storage modulus for the magnetic gel prepared by rapid cooling at 0 mT was 1.4 × 10⁴ Pa and it increased to 1.7 × 10⁴ Pa at 50 mT. By applying the magnetic field repeatedly, the storage modulus at 0 mT for both magnetic gels showed a trend of increasing gradually. This indicates that a certain structure was formed by the application of the magnetic field, and it remained after switching off the field. Figure 5b shows the magnetic field response of the storage modulus at 500 mT and γ = 10⁻⁴ for magnetic gels prepared by natural cooling and rapid cooling. Similar to the response at 50 mT, the storage moduli for both magnetic gels changed in sync with the on/off switching of a magnetic field of 500 mT. The storage modulus at 0 mT for the magnetic gel prepared by natural cooling was 4.5 × 10⁴ Pa and it increased to 6.8 × 10⁴ Pa at 500 mT. On the other hand, the storage modulus at 0 mT for the magnetic gel prepared by rapid cooling was 1.5 × 10⁴ Pa and it gradually increased to 2.0 × 10⁵ Pa at 500 mT. The storage modulus for the magnetic gel prepared by natural cooling apparently decreased with the first application of the magnetic field. This suggests that the original structure was destroyed upon the first application of the magnetic field and the broken structure was not further destroyed by the subsequent applications of the field. The values of the storage modulus for magnetic gels measured in the switching experiments were consistent with those measured in the strain dependence, as shown in Figure 2.

Figure 6 displays the SEM photographs for magnetic gels prepared by natural cooling and rapid cooling in the absence of magnetic fields. A clear difference in the morphology between magnetic gels prepared by natural cooling and rapid cooling can be seen in the photos at low magnification. The morphology for the magnetic gel prepared by natural cooling was heterogeneous, showing many dark parts; meanwhile, that for the magnetic gel prepared by rapid cooling was homogeneous without any dark parts. This feature can clearly be seen in Figure 6b,e. Figure 6e is an enlarged photo of the square indicated by the broken line of Figure 6d. The left half of Figure 6e is the dark part of Figure 6b. That is, the dark part is due not to the heterogeneity of magnetic particles, but to the inhomogeneity of the carrageenan network. Figure 6f clearly shows that the dark part is a continuous phase of the carrageenan in which magnetic particles are embedded, like a chocolate bar with nuts. The continuous phase of the carrageenan was also seen in Figure 6c; however, it was very small, connecting only a few particles. Thus, the area of the continuous phase of carrageenan for the magnetic gel prepared by rapid cooling was far larger than that prepared by natural cooling. It is considered that the movement of magnetic particles in the magnetic gel prepared by rapid cooling is prevented by the continuous phase of the carrageenan, which may have resulted in small MR effects, as observed at 50 mT. In contrast, the MR effect at 500 mT for the magnetic gel prepared by rapid cooling was larger than that prepared by natural cooling. It is also considered that the continuous phase of the carrageenan was broken by the movement of magnetic particles on which a strong magnetic force acts.
Figure 6. SEM photographs of the surface of dried magnetic gels prepared by natural cooling (top) and rapid cooling (bottom) without magnetic fields.

Figure 7 exhibits schematic illustrations representing the surface morphologies of magnetic particles and the carrageenan network, and the MR effect, for magnetic gels prepared by natural cooling and rapid cooling. The storage moduli at 0 mT for magnetic gels prepared by natural cooling and rapid cooling were higher and lower, respectively, than those calculated from Einstein's equation. Therefore, magnetic particles form aggregations in the magnetic gel prepared by natural cooling and they are dispersed randomly in the magnetic gel prepared by rapid cooling. Large continuous phases of the carrageenan were observed in SEM photographs for the magnetic gel prepared by rapid cooling; some small aggregations of carrageenan were randomly distributed in the magnetic gel prepared by natural cooling. The parameters representing the amplitude of the Payne effect for carrageenan gels prepared by natural cooling and rapid cooling were almost the same, suggesting that the brittleness of the carrageenan network is not affected by the cooling rate. It can be considered that, under low magnetic fields, magnetic particles in the magnetic gel prepared by rapid cooling are difficult to move due to the obstruction from the continuous phase of the carrageenan. This may result in the MR effect at 50 mT for the magnetic gel prepared by natural cooling being higher than that prepared by rapid cooling. At high magnetic fields (~500 mT), magnetic particles in the magnetic gel prepared by rapid cooling can move and form a chain structure by breaking the restriction from the continuous phase of the carrageenan. The number density of the chains should be high for the magnetic gel prepared by rapid cooling, since the magnetic particles are well dispersed as primary particles, which may contribute to a large change in the storage modulus.
Conclusions
The effect of the cooling rate on the magnetorheological response of carrageenan magnetic hydrogels was investigated using dynamic viscoelastic measurements and morphological observations. At a low magnetic field, the change in the storage modulus for the magnetic gel prepared by natural cooling was higher than that prepared by rapid cooling. Magnetic particles in the magnetic gel prepared by rapid cooling were difficult to move due to the obstruction from the continuous phase of carrageenan. On the other hand, at a high magnetic field, the change in the storage modulus for the magnetic gel prepared by rapid cooling was higher than that prepared by natural cooling. Magnetic particles in the magnetic gel prepared by rapid cooling could move and form a chain structure by breaking the restriction from the continuous phase of the carrageenan. These results coincided with the critical strain, where the magnetic gel prepared by natural cooling has a tough structure compared to that prepared by rapid cooling. The magnetic field dependence of the storage modulus for the magnetic gel prepared by natural cooling showed a curve with a steep slope at low magnetic fields and a gentle slope at high magnetic fields, which is the opposite of the behavior of typical magnetic gels. This mechanism should be useful for obtaining materials showing large elasticity changes under weak magnetic fields, and it should therefore be clarified in future work.
Preparation of Magnetic Gels
κ-Carrageenan (Mw = 857 kDa, CS-530, San-Ei Gen F.F.I., Osaka, Japan) was used as the polysaccharide matrix. The carrageenan was dissolved in pure water at 100 °C for 1 h to prepare an aqueous solution with a concentration of 2.0 wt.%. Carbonyl iron (CS Grade, BASF SE, Ludwigshafen am Rhein, Germany) with a diameter of 7.0 µm was used as the magnetic particle. The carbonyl iron particles were dispersed in the carrageenan aqueous solution at 100 °C and then mixed using a mechanical mixer for several minutes to obtain the pre-gel solution. The concentration of magnetic particles was kept constant at 50 wt.%, which corresponds to a volume fraction of φ = 0.12. The volume fraction of magnetic particles was determined by the method described in our previous paper [17]. The concentration of magnetic particles during preparation was the same as the final concentration. Immediately after mixing, the pre-gel solution was poured into molds consisting of a silicone spacer and glass plates held at 50 °C or 4 °C. In this paper, the gels prepared in the hot mold and the cold mold are denoted the "magnetic gel of natural cooling" and the "magnetic gel of rapid cooling", respectively. The molds were placed in a refrigerator to allow sufficient time for gelation to finish. The gelation time for the magnetic gel prepared by natural cooling was within 1 min, while that for the gel prepared by rapid cooling was within 5 s. SEM photographs clearly showed that no precipitation of magnetic particles occurred in any of the magnetic gels studied here. The samples were 1 mm thick and 20 mm in diameter. Carrageenan gels without magnetic particles were also prepared in a similar manner to the magnetic gels. The diameter of the carbonyl iron was determined to be 7.4 ± 0.2 µm using a particle size analyzer (SALD-7000, Shimadzu Co., Ltd., Kyoto, Japan), and the saturation magnetization was measured to be 218 emu/g with a SQUID magnetometer (MPMS, Quantum Design Inc., San Diego, CA, USA).
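As a quick sanity check of the stated particle loading (not part of the original paper), the short sketch below converts the 50 wt.% carbonyl-iron loading into a volume fraction; the densities used (7.87 g/cm^3 for carbonyl iron, 1.0 g/cm^3 for the aqueous matrix) are assumed values, since the paper determines the volume fraction by the method of ref. [17].

def volume_fraction(wt_fraction, rho_particle=7.87, rho_matrix=1.0):
    # Convert a particle weight fraction into a volume fraction (densities in g/cm^3)
    v_particle = wt_fraction / rho_particle
    v_matrix = (1.0 - wt_fraction) / rho_matrix
    return v_particle / (v_particle + v_matrix)

print(round(volume_fraction(0.50), 3))   # ~0.11, close to the reported phi = 0.12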
Dynamic Viscoelastic Measurement
The dynamic viscoelastic measurements of the magnetic gels were carried out at a temperature of 20 °C and a frequency of 1 Hz using a rheometer (MCR301, Anton Paar, Graz, Austria) with an electromagnetic system (PS-MRD) and a non-magnetic parallel plate (PP20/MRD). The strain was varied from 10^-5 to 1. The normal force initially applied was approximately 0.3 N. For the strain-sweep measurement shown in Figure 1, a constant magnetic field of 50 mT was applied. The magnetic field was swept from 0 to 500 mT for the measurement in Figure 4, and a pulsatile magnetic field alternating between 0 and 50 mT was applied for the experiment in Figure 5. In a magnetic field of 50 mT, the magnetic force acting on the magnetic particles is weak and the movement of the particles is affected by the viscoelasticity of the matrix; in other words, information on the viscoelasticity of the matrix can be obtained from the amplitude of the elasticity change induced by the magnetic field [17]. On the other hand, in a magnetic field of 500 mT, the magnetic force acting on the particles is strong and their movement is not influenced by the viscoelasticity of the matrix [16]. The magnetic field strength at the sample stage in the rheometer was measured using a Gauss meter (TM-601, Kanetec Co., Ltd., Nagano, Japan); the field strength changed from 0 to 500 mT as the excitation current was varied from 0 to 3.0 A. The mean values and standard errors of the storage and loss moduli were evaluated for three different samples from one batch.
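The following minimal sketch (not the authors' analysis code) shows how the magnetorheological effect at 50 mT and 500 mT can be read off from a field sweep of the storage modulus; the numerical arrays are illustrative placeholders, not measured data.

import numpy as np

B = np.array([0, 50, 100, 200, 300, 400, 500])                              # flux density, mT
G_storage = np.array([8.0e3, 9.5e3, 1.3e4, 2.1e4, 3.0e4, 3.6e4, 4.0e4])     # storage modulus G', Pa

dG_50 = G_storage[B == 50][0] - G_storage[B == 0][0]     # MR effect at the low field (50 mT)
dG_500 = G_storage[B == 500][0] - G_storage[B == 0][0]   # MR effect at the high field (500 mT)
print(dG_50, dG_500)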
Scanning Electron Microscope Observations
To observe the dispersion of magnetic particles and the morphology of the carrageenan network, the surfaces of dried magnetic gels were observed using a scanning electron microscope (SEM) (JCM-6000 NeoScope, JEOL Ltd., Tokyo, Japan) at an acceleration voltage of 15 kV.
"Materials Science"
] |
Radio and Radial Radio Numbers of Certain Sunflower Extended Graphs
For a connected graph G with diameter d and radius ρ, a radio k-coloring is an assignment φ of colors (positive integers) to the vertices of G such that d(a, b) + |φ(a) − φ(b)| ≥ 1 + k for all a, b ∈ V(G), where d(a, b) is the distance between a and b in G. The biggest number in the range of φ is called the radio k-chromatic number of φ, symbolized by rc_k(φ), and the minimum taken over all such colorings is the radio k-chromatic number of G, denoted by rc_k(G). For k = d and k = ρ, the radio k-chromatic numbers are termed the radio number (rn(G)) and the radial radio number (rr(G)) of G, respectively. In this research work, the relationship between the radio number and the radial radio number is studied for connected graphs. Then, several sunflower extended graphs are defined, and upper bounds of their radio numbers and radial radio numbers are investigated.
Introduction
The channel frequency assignment problem was first proposed by Griggs and Yeh [1] in 1992 for amplitude modulation radio stations. Because of co-channel interference, placing the transmitters in a particular geographical area is challenging; the channel assignment problem for radio stations is therefore NP-complete. Moreover, Fotakis et al. [2] proved that even for graphs with diameter 2 the problem is NP-hard. Chartrand et al. [3] presented the graph-theoretical definition of the radio k-chromatic number as follows.
Let G = (V, E) be a connected graph with diameter d and radius ρ. For any integer k, 1 ≤ k ≤ d, a radio k-coloring of G is an assignment φ of colors (positive integers) to the vertices of G such that d(a, b) + |φ(a) − φ(b)| ≥ 1 + k for all a, b ∈ V(G), where d(a, b) is the distance between a and b in G. The biggest natural number in the range of φ is called the radio k-chromatic number of φ, and it is symbolized by rc_k(φ). The minimum value taken over all such radio k-colorings of G is called the radio k-chromatic number of G, denoted by rc_k(G).
Cada et al. [4] proved a general lower bound for any distance graph D(t − 1, t); recently, Bantva [5] improved this general lower bound. Based on different values of k, the radio k-chromatic number is classified into different problems.
For k = d, the radio k-chromatic number is termed the radio number, symbolized by rn(G). It was introduced by Chartrand et al. [6] for the purpose of determining the maximum number of channels for frequency modulation (FM) radio stations with minimum utilization of spectrum bandwidth. The radio number problem has been studied by several researchers [7,8]. In 2017, Avadayappan et al. [9] introduced the concept of radial radio labelling. A mapping φ: V(G) ⟶ N ∪ {0} for a connected graph G = (V, E) is called a radial radio labelling if it satisfies the inequality |φ(a) − φ(b)| + d(a, b) ≥ ρ + 1 for all a, b ∈ V(G), where ρ is the radius of the graph G. The radial radio number of φ, symbolized by rr(φ), is the maximum number mapped under φ. The radial radio number of G, denoted by rr(G), is equal to min{rr(φ) : φ is a radial radio labelling of G}. A few research articles [10,11] have been published in the area of radial radio labelling. In this paper, we study a comparative relation between rn(G) and rr(G). Furthermore, we define and determine the radio and radial radio numbers of certain sunflower extended graphs such as SS(n, Ւ), CS(n, Ւ), and WS(n, Ւ).
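As an illustration of the two labelling conditions above, the short sketch below (not from the paper; it assumes the networkx library and uses the 5-cycle C5 with a hand-picked labelling) checks whether a given vertex labelling is a radio labelling (k = d) or a radial radio labelling (k = ρ).

import networkx as nx
from itertools import combinations

def is_radio_labelling(G, phi):
    # Radio labelling: d(a, b) + |phi(a) - phi(b)| >= 1 + diam(G) for every pair of vertices
    d = dict(nx.all_pairs_shortest_path_length(G))
    diam = nx.diameter(G)
    return all(d[a][b] + abs(phi[a] - phi[b]) >= 1 + diam for a, b in combinations(G.nodes, 2))

def is_radial_radio_labelling(G, phi):
    # Radial radio labelling: d(a, b) + |phi(a) - phi(b)| >= 1 + rad(G) for every pair of vertices
    d = dict(nx.all_pairs_shortest_path_length(G))
    rad = nx.radius(G)
    return all(d[a][b] + abs(phi[a] - phi[b]) >= 1 + rad for a, b in combinations(G.nodes, 2))

G = nx.cycle_graph(5)                        # C5 is self-centred: radius = diameter = 2
phi = {0: 1, 1: 3, 2: 5, 3: 2, 4: 4}         # hand-picked labels 1..5
print(is_radio_labelling(G, phi), is_radial_radio_labelling(G, phi))   # True True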
Relation between the Radio Number and Radial Radio Number
This section deals with certain results connecting rn(G) with rr(G) for any connected graph G.
Definition 1.
The eccentricity of a vertex z in a connected graph G, represented by e(z), is the maximum distance from z to any other vertex in G, that is, e(z) = max{d(z, x) : x ∈ V(G)}.
The maximum eccentricity of the vertices of G is called the diameter of the graph, symbolized by d or diam(G). In addition, the radius of the graph G, symbolized by ρ or rad(G), is the minimum eccentricity of the vertices of G.
The following is a direct consequence of the definitions of the radio number and the radial radio number.
Theorem 1. For any connected graph G, rn(G) ≥ rr(G).
Chartrand et al. [6] proved the following three theorems, which will be used to study the general results for the radial radio number.
Theorem 2.
If G is a connected graph of order n and diameter d, then n ≤ rn(G) ≤ (n − 1)d.
Theorem 4.
Every connected graph G of order n with rn(G) = n is self-centred.
Using Theorem 5 and Definition 2, we obtain the equality case of Theorem 1 as follows.
Theorem 5.
A connected graph G of order n is self-centred if and only if rn(G) = rr(G) = n.
Theorem 6. Let G = (V, E) be a complete k-partite graph of order n; then, rr(G) = k.
The radius of the complete k-partite graph is 1, and all the vertices within the same set U_i, 1 ≤ i ≤ k, are at distance two. Hence, we can label the vertices in each set U_i with i (i = 1, 2, . . ., k). Clearly, the radial radio labelling condition d(a, b) + |φ(a) − φ(b)| ≥ 2 is satisfied for any pair of vertices in G. Hence, rr(G) = k. □
Theorem 7. If G is a connected graph of order n > 1 and radius ρ, then 2 ≤ rr(G) ≤ (n − 1)ρ.
Proof. G is a connected graph that contains at least two vertices. Therefore, the lower bound of the theorem is attained in the particular case of Theorem 6 for complete bipartite graphs. Furthermore, the upper bound is obtained by replacing d with ρ in Theorem 2. Consequently, 2 ≤ rr(G) ≤ (n − 1)ρ for n > 1. □
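A small numerical illustration of Theorem 6 (again assuming networkx; not part of the original paper): for a complete k-partite graph containing a singleton part, so that its radius is 1 as the proof assumes, labelling every vertex of U_i with i satisfies the radial radio condition and the largest label used is k.

import networkx as nx
from itertools import combinations

G = nx.complete_multipartite_graph(1, 3, 4)                        # k = 3 parts; the singleton part gives radius 1
phi = {v: data["subset"] + 1 for v, data in G.nodes(data=True)}    # label every vertex of U_i with i

d = dict(nx.all_pairs_shortest_path_length(G))
rad = nx.radius(G)
ok = all(d[a][b] + abs(phi[a] - phi[b]) >= 1 + rad for a, b in combinations(G.nodes, 2))
print(rad, max(phi.values()), ok)                                  # 1 3 True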
Results and Discussion
In this section, we define and investigate the radio and radial radio numbers of some sunflower extended graphs, namely the star-sun graph SS(n, Ւ), the complete-sun graph CS(n, Ւ), the wheel-sun graph WS(n, Ւ), and the fan-sun graph FS(n, Ւ).
Definition 3.
A sunflower graph consists of a wheel with a centre vertex w n, an n-cycle w 0, w 1, . . ., w n−1, and n additional vertices u 0, u 1, . . ., u n−1, where u i is joined by edges to (w i, w i+1), i = 0, 1, 2, . . ., n − 1, and i + 1 is taken modulo n. It is represented by SF n. The radius, diameter, and number of vertices of SF n are 2, 4, and 2n + 1, respectively.
Definition 4.
A star graph, denoted by S Ւ+1 , is defined as a complete bipartite graph of the form K 1,Ւ , Ւ > 1. In other words, S Ւ+1 is a tree having Ւ leaves and one internal vertex.
Definition 5.
A star-sun graph, denoted by SS(n, Ւ), is a graph obtained from the sunflower graph SF n and n copies of star graph S Ւ+1 by merging the internal vertex of the k th star graph S Ւ+1 and vertex u k− 1 of SF n , 1 ≤ k ≤ n, as shown in Figure 1(a).
Definition 7.
A wheel-sun graph, denoted by WS(n, Ւ), is a graph obtained from the sunflower graph SF n and n copies of wheel graph W Ւ+1 by merging the vertex u k− 1 of SF n and the centre vertex of the k th wheel, where 1 ≤ k ≤ n as shown in Figure 1(c).
Definition 8.
A fan-sun graph is a graph obtained from the sunflower graph SF n and n copies of fan graph F Ւ+1 � P Ւ + K 1 by merging K 1 of the k th fan and the vertex u k− 1 of SF n , 1 ≤ k ≤ n. It is denoted by FS(n, Ւ) as shown in Figure 1(d).
Radial Radio Number of Sunflower Extended Graphs.
The following theorems provide upper bounds for the radial radio numbers of SS(n, Ւ), CS(n, Ւ), and WS(n, Ւ).
Since the radius of the graph is 3, we must verify φ satisfies the radial radio labelling condition d( Let us choose any two arbitrary vertices a and b in the sun-star graph. respectively. Also, a and b are at a distance two. Hence, the radial radio labelling condition becomes d( then a and b are at a distance at least 4. Hence, the radial radio labelling condition is trivially satisfied.
which trivially verifies the radial radio labelling condition.
Case 5: if a is the centre vertex of the wheel and b is any other star vertex, then the distance between them is exactly 3. Also, φ(a) � 0 and φ Case 6: let a and b be the vertices in the n-cycle of the sunflower graph.
, which is enough for verifying the condition.
Radio Number of Sunflower Extended Graphs.
This section provides upper bounds for the radio numbers of CS(n, Ւ), SS(n, Ւ), and WS(n, Ւ).
Case 3: assume that a � u 2s+p and b � u 2t+q , . erefore, in both of them, the inequality is satisfied.
Case 7: let a be the centre vertex of the wheel and b be any other vertex in the graph.
Conclusion
In this paper, we have presented the relation between the radio number and the radial radio number. We have also defined and investigated bounds for the same problems for the graphs CS(n, Ւ), SS(n, Ւ), and WS(n, Ւ). For the fan-sun graph FS(n, Ւ), the problem is still considered an open research problem that needs further investigation. Since the method of finding the radial radio number and radio number of the fan-sun graph is similar to that of the previous theorems, this case is left to interested researchers; further research could also extend our results and identify more relations between the radio number and the radial radio number by studying the same problems for interconnection networks.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
Mohammed K. A. Kaabar contributed to actualization and initial draft, provided the methodology, validated and investigated the study, supervised the original draft, and edited the article. Kins Yenoke validated and investigated the study, provided the methodology, performed formal analysis, and contributed to the initial draft. Both authors read and approved the final version.
"Mathematics"
] |
Prediction of Apoptosis Protein Subcellular Localization with Multilayer Sparse Coding and Oversampling Approach
The prediction of apoptosis protein subcellular localization plays an important role in understanding the processes of cell proliferation and death. Recently, computational approaches to this issue have become very popular, since traditional biological experiments are so costly and time-consuming that they can no longer keep pace with the growth rate of sequence data. In order to improve the prediction accuracy of apoptosis protein subcellular localization, we proposed a sparse coding method combined with a traditional feature extraction algorithm to obtain a sparse representation of apoptosis protein sequences, used multilayer pooling based on different dictionary sizes to integrate the processed features, and applied an oversampling approach to reduce the influence of unbalanced data sets. The extracted features were then input to a support vector machine to predict the subcellular localization of the apoptosis protein. The experimental results obtained by jackknife test on two benchmark data sets indicate that our method can significantly improve the accuracy of apoptosis protein subcellular localization prediction.
Introduction
As a basic constituent of organisms, proteins play a critical role in life activities such as metabolism, reproduction, growth, and development; this is especially true of apoptosis proteins, which are crucial in proteomics. Since the functions of an apoptosis protein are closely related to its subcellular location and different kinds of apoptosis proteins can only function in specific subcellular locations, it is important to predict the subcellular location of a given apoptosis protein with existing methods, which could not only help us to understand the interactions and properties of apoptosis proteins but also reveal the biological pathways involved [1][2][3]. With the application of high-throughput sequencing techniques and the explosion of sequence data volumes, developing an accurate and reliable computational method to predict apoptosis protein subcellular location has been a great challenge for bioinformaticians, accordingly promoting the development of machine learning in this field [4][5][6][7][8].
By analyzing the current state of research, the directions for improving machine learning prediction of apoptosis protein subcellular location over the past decade can be roughly categorized into two classes: sequence feature extraction and the prediction model [5][6][7][8][9][10]. Currently the widely used methods for feature extraction are amino acid composition (AAC) [11,12], pseudo amino acid composition (PseAAC) [13,14], gene ontology (GO) [15,16], position specific scoring matrix (PSSM) [17,18], feature fusion [19,20], and so on. For example, Zhou et al. used the covariant discriminant function based on the Mahalanobis distance and Chou's invariance theorem, combined with the traditional AAC feature, to predict apoptosis protein subcellular location; the prediction accuracy by jackknife test on data set ZD98 was about 72.5% [21]. Wan et al. proposed the GOASVM algorithm based on GO term frequencies and distant homologs to represent a protein in the general form of PseAAC and obtained a high accuracy [22]. Chen et al. used the increment of diversity to fuse N-terminal, C-terminal, and hydrophobic features of apoptosis protein sequences, and the accuracies on ZD98 and CH317 were 90.8% and 82.7%, respectively [23]. Zhao et al. combined the bag-of-words model with the PseAAC method, using K-means to construct the dictionary of sequence features, and obtained a good predictive effect [24]. At the same time, there have also been many efforts to improve the prediction model. For example, Wan et al. proposed an adaptive-decision support vector machine classifier using the annotation information of the GO database and realized the prediction of membrane proteins as well as their multifunctional types [25]; Ali et al. extracted the PseAAC features of protein sequences, combined with location voting, k-nearest neighbor, and probabilistic neural network classifiers, to predict the subcellular localization of membrane proteins [26]. Besides, some other prediction models have been used in this field, such as logistic regression, Bayesian classifiers, and long short-term memory [27][28][29].
A recent review [30] pointed out that, in the last decade or so, a number of web servers were developed for predicting the subcellular localization of proteins with both single and multiple sites [31][32][33][34][35][36]. In general, proteins can simultaneously exist in multiple sites. In this study, however, we did not consider multilabel proteins: the number of multilabel proteins in the existing apoptosis protein databases is not large enough to construct a statistically meaningful benchmark data set, and because the sequence information for multiple locations is more complex and varied than for single locations, using an oversampling approach to copy sequence features could generate inaccurate results.
Summarizing the previous research results, it is not difficult to find that the prediction accuracy is relatively low if only simple methods such as AAC or PseAAC are used to extract sequence features for classification; as for the other feature extraction methods, such as PSSM or feature fusion, although the prediction effect is better, the extraction process is too complicated and time-consuming for practical application. Given that many previous studies have shown that the support vector machine is one of the best classifiers for the prediction of protein subcellular localization [5,9,10,14,17,22], this study focuses on obtaining a higher prediction accuracy on the premise of a simple feature extraction method and a support vector machine for predicting the subcellular localization of apoptosis proteins; therefore, finding an efficient approach to optimize the traditional sequence-based features is the key problem to be solved in this paper.
In the study, we proposed a sparse coding method combined with traditional sequence feature extraction algorithm to extract low-level features of the apoptosis protein sequence, using multilayer pooling based on different sizes of dictionaries to integrate the local and holistic features of the sparse representation. Then the support vector machine was used to complete the final prediction. Given that our adopted benchmark data sets are unbalanced which may influence the classification effects of support vector machine [37], we used an oversampling approach to balance the data sets in the study. Compared with other experimental results with the same support vector machine classifier, the experimental results show that the proposed method can not only simplify the feature extraction process and reduce the time and space complexity of the classifier but also reflect the sequence features more comprehensively and improve the classification performance. The detailed descriptions are shown in the following sections.
Materials and Methods
Datasets
Two widely used benchmark data sets are adopted in this study: ZD98 and CH317. The data set ZD98 was constructed by Zhou and Doctor [21]. It contains 98 apoptosis protein sequences divided into four subcellular locations: cytoplasmic proteins (Cy), mitochondrial proteins (Mi), membrane proteins (Me), and other proteins (Other). The data set CH317 was constructed by Chen and Li [23] and contains a total of 317 apoptosis protein sequences in six subcellular locations: secreted proteins (Se), nuclear proteins (Nu), cytoplasmic proteins (Cy), endoplasmic reticulum proteins (En), membrane proteins (Me), and mitochondrial proteins (Mi). Considering that the above data sets are old, we updated ZD98 and CH317 with reference to Wang et al. [38] and removed some duplicate and erroneous sequences; the specific procedure is not repeated here. After processing, 96 protein sequences remained in the ZD98 data set and 314 protein sequences remained in the CH317 data set. All protein sequences in the two data sets are from the latest version of the UniProt database (Release 2018_12), and the number of protein sequences in each class of the two data sets is shown in Table 1.
Feature Extraction
In order to set up a more accurate mapping between each protein sequence and its corresponding feature vector, multilayer sparse coding was introduced in this study to find the most essential features of the original protein sequence based on a simple feature extraction method. The algorithm mainly includes the following steps: local feature extraction, sparse coding, and pooling, and the process of sparse coding is divided into two stages: dictionary learning and sparse representation. Firstly, the protein sequence is segmented into fragments, and a traditional protein feature extraction algorithm is used to extract the features of these fragments, which are then used for dictionary learning. These local features are trained to construct a dictionary, and the feature representation of the original sequence is sparsely reconstructed from it. Mean pooling is used to reduce the dimensions of the feature matrix, and finally the pooled vectors based on different dictionary sizes are integrated as the ultimate features of the protein sequences. The flow chart of the extraction process is shown in Figure 1.
Local Feature Extraction
Before the sparse coding step, it is necessary to extract the local features of the protein sequence to constitute a training sample set for dictionary learning. Since every protein sequence has a different length and the critical features may be distributed at different positions in the sequence, in this paper we adopted a sliding window segmentation method inspired by Noor to cut all the protein sequences into pieces [39], generating a number of sequence fragments. The size of the sliding window represents the segment length for each protein sequence, and the reference constraint is L_min = min(L_1, L_2, . . ., L_n) and L_min/2 ≤ w ≤ L_min, where L_1, L_2, . . ., L_n are the lengths of the protein sequences in the whole data set, L_min is the length of the shortest sequence, and w is the size of the sliding window; that is, the segment length lies between L_min/2 and L_min, and the exact value is selected from experimental experience.
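A minimal sketch of the sliding-window segmentation follows (not the authors' code; the step size of one residue is an assumption, since the paper does not state it, and the example sequence is hypothetical).

def sliding_window_fragments(sequence, window, step=1):
    # Cut a protein sequence into overlapping fragments of length `window`.
    # The paper only constrains the window to lie between L_min/2 and L_min
    # (L_min = 50 here, so windows of 35-40 residues were used).
    return [sequence[i:i + window] for i in range(0, len(sequence) - window + 1, step)]

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKAL"
fragments = sliding_window_fragments(seq, window=35)
print(len(fragments), len(fragments[0]))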
After the segmentation step, an existing sequence feature extraction method is used to statistically analyze the composition of the sequence fragments and to transform the character sequences into numerical vectors as the local features of the protein. An effective feature extraction method can remarkably increase the final prediction accuracy. Nakashima and Nishikawa [47] first associated the amino acid composition (AAC) with the prediction of protein subcellular location in 1994. The AAC coding method counts the occurrence frequency of each amino acid in the protein sequence, described as x_i = n_i / L (i = 1, 2, 3, . . ., 20), where n_1, n_2, n_3, . . ., n_20 are the numbers of occurrences of each amino acid in the protein sequence and L is the length of the protein sequence, that is, the total number of amino acid residues it contains. The 20 amino acids are first numbered from 1 to 20, and x_i (i = 1, 2, 3, . . ., 20) describes the frequency with which the amino acid numbered i appears in the sequence; each residue of the original sequence is mapped to the number of the amino acid to which it corresponds.
By using AAC to calculate the fragment features of a protein sequence P, we obtain a feature matrix for each original protein sequence constituted by the AAC features of its fragments. The matrix is F ∈ R^(m×d), where m is the number of fragments cut from the protein sequence, d is the feature dimension produced by the AAC algorithm (here d = 20), and each entry is the occurrence probability of an amino acid residue in the corresponding fragment. Each row of the matrix is the feature vector of one sequence fragment. Generally, some of the fragment features are chosen as the local features to construct the dictionary; in this paper, since the number of fragments obtained is not very large, in order to get a better feature representation in sparse coding we chose the local features of all the sequence fragments to form a training sample set X = [x_1, x_2, x_3, . . ., x_N] for dictionary learning, where x_i ∈ R^d (i = 1, 2, 3, . . ., N) is the feature vector of a fragment (one row of F) and N is the total number of fragments from all protein sequences in the data set.
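A short sketch of the AAC feature extraction for the fragments and the resulting per-protein feature matrix (Python with NumPy as an assumed tool; the residue ordering is a fixed convention chosen here, not specified in the paper).

import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues, in a fixed (chosen) order

def aac_vector(fragment):
    # Amino acid composition: occurrence frequency of each residue, x_i = n_i / L
    L = len(fragment)
    return np.array([fragment.count(a) / L for a in AMINO_ACIDS])

def fragment_feature_matrix(fragments):
    # One 20-dimensional AAC row per fragment: the m x 20 matrix F described above
    return np.vstack([aac_vector(f) for f in fragments])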
Sparse Coding
Sparse coding is a branch of deep neural networks, and it contains two main steps: dictionary learning and sparse representation [48]. It can extract the detailed features of the original data set and decompose the input sample set into a linear combination of multiple primitives; the coefficients of the primitives are the features of the input sample. The description can be formulated as X ≈ DS, where X is the matrix of training samples composed of fragment features; D = [d_1, d_2, d_3, . . ., d_K] ∈ R^(d×K) is the primitive matrix, named the dictionary, whose columns d_i are the feature elements (primitives) of the dictionary, K is the size of the dictionary, and d is the feature dimension produced by the AAC algorithm; S = [s_1, s_2, s_3, . . ., s_N] ∈ R^(K×N) is the sparse representation of the original samples, where s_i is the sparse coefficient vector of the i-th feature block in the sparse feature space, that is, the projection of x_i in the sparse feature space, and N is the number of fragments from all protein sequences in the data set. The solution of the dictionary can be expressed as min_{D,S} ‖X − DS‖_2^2 subject to ‖s_i‖_0 ≤ T_0 (6), where ‖·‖_2 is the ℓ2 norm and ‖·‖_0 is the ℓ0 norm of a vector. The constraint in the equation above means that the number of nonzero elements in each s_i must be less than or equal to T_0, which is preset and related to the sparsity rate. Equation (6) is essentially a non-convex optimization problem. There are two common solutions: the first is to relax the constraint and transform it into a convex optimization problem of the following form: min_{D,S} ‖X − DS‖_2^2 + λ Σ_i ‖s_i‖_1 (7), where λ is the balance factor and ‖·‖_1 is the ℓ1 norm of a vector. Equation (7) can usually be solved by a regression algorithm such as LASSO [49]. The second is to solve it with a heuristic greedy algorithm [50]; algorithms of this type include MOD and K-SVD [51]. In this study, in view of the efficiency and operability of the algorithm, we chose K-SVD, that is, the second solution, to learn the dictionary. K-SVD is an extension of the K-means algorithm proposed by Aharon and Elad [52]. It adopts iterative alternating learning and uses singular value decomposition over successive iterations to optimize the primitives of the dictionary, which better fits the original data. K-SVD is mainly divided into the following steps: (1) initialize the dictionary D and set the termination condition of the iteration; (2) fix D and solve for the sparse representation S; (3) fix S and solve for the dictionary D; (4) perform steps (2) and (3) alternately until the end of the iteration.
After obtaining the dictionary, the orthogonal matching pursuit (OMP) algorithm is used to complete the sparse representation of the fragment features of the original protein sequence [53]. The basic idea of OMP is to select the primitive from the dictionary that best matches the original sample, perform a sparse approximation, and obtain the residual; it then selects the next primitive that best matches this residual and iterates in this way until the residual and the sparsity rate meet fixed termination conditions. Samples can then be approximately represented by a linear combination of the selected primitives. All primitives selected in each step are first orthogonalized, which makes the convergence faster [54]. By collecting the sparse features of all the encoded fragments, we obtain an m × K sparse matrix to represent the features of each protein sequence, where m is the number of sequence fragments in the sequence and K is the size of the dictionary; this is the sparse representation of a protein sequence.
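The sketch below shows the data flow of dictionary learning plus OMP-based sparse representation. It is not the authors' implementation: scikit-learn does not provide K-SVD, so MiniBatchDictionaryLearning is used here as a stand-in, and the input matrix, dictionary size, and sparsity level are placeholder values.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X_fragments = rng.random((500, 20))   # placeholder: N fragment AAC vectors (N x 20), not real data

K = 50                                # one of the dictionary sizes K1, K2, K3
dico = MiniBatchDictionaryLearning(
    n_components=K,                   # dictionary size
    transform_algorithm="omp",        # OMP for the sparse representation step
    transform_n_nonzero_coefs=5,      # sparsity level T0 (assumed value)
    random_state=0,
)
S = dico.fit(X_fragments).transform(X_fragments)   # sparse codes, shape (N, K)
print(S.shape)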
Multilayer Pooling
The dimension of the feature matrix obtained by sparse coding is very high; if it were expanded directly, the huge data volume would cause redundant space and time complexity in classification and would be prone to overfitting. Therefore, it is necessary to reduce the dimensions of the feature matrix. Pooling maps a collection of feature vectors into a single vector. There are two common pooling methods, max pooling and mean pooling; aggregating statistics of features at different positions extracts the effective information and reduces the amount of numerical computation [55]. Max pooling takes the maximum value of the feature points in a neighborhood and better retains the edge information of the feature matrix, while mean pooling takes the average value of the feature points in a neighborhood and better extracts the background information [56]. Given that the characteristics of sequence data differ from those of images, we chose mean pooling as the final dimension-reduction method. The formula is p_i = (1/m) Σ_{j=1}^{m} s_{j,i} (i = 1, 2, 3, . . ., K), where p_i is obtained by averaging the coefficients of the i-th dictionary atom over the m fragments of the sequence. After mean pooling, each protein sequence is represented as a K-dimensional feature vector, where K is the size of the dictionary.
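A minimal sketch of the mean pooling just described, together with the concatenation over several dictionary sizes that is detailed in the next paragraph (helper names are assumptions; not the authors' code).

import numpy as np

def mean_pool(sparse_codes):
    # Average the sparse codes of all m fragments of one protein (array of shape (m, K)),
    # giving a single K-dimensional vector
    return sparse_codes.mean(axis=0)

def multilayer_feature(codes_per_dictionary):
    # Concatenate the pooled vectors obtained with different dictionary sizes
    # (30, 50 and 70 in the paper) into one (K1 + K2 + K3)-dimensional feature
    return np.concatenate([mean_pool(c) for c in codes_per_dictionary])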
In order to obtain a more comprehensive feature representation of the original protein sequence, multilayer pooling based on different dictionary sizes is performed, and the pooling results are integrated to capture the local and holistic features separately. The specific description is as follows: in the process of sparse coding, the dictionary sizes are set to K1, K2, and K3, and three dictionaries of different levels are obtained by the K-SVD algorithm. The OMP algorithm is then used to complete the sparse representation of the fragment features based on each dictionary size, and the sparse features are combined to obtain the feature matrix of the original sequence. Finally, each sparse matrix is mean-pooled to extract a feature vector at each level. The vectors from the pooled blocks are concatenated to obtain a (K1 + K2 + K3)-dimensional vector as the final feature representation. In this paper, the values of K1, K2, and K3 were set to 30, 50, and 70, respectively, generating a 150-dimensional vector that is reduced by principal component analysis (PCA) and sent to the classifier for prediction. The general descriptions of sparse coding and pooling are shown in Figure 2.
Oversampling Method
Since the data sets used in this paper are unbalanced, which may lower the prediction accuracy, we referred to similar papers that used oversampling to balance the data set [16,30,43]. In order to further illustrate the effect of our method, a simple oversampling method called the synthetic minority oversampling technique (SMOTE) was applied in this study to decrease the imbalance of our data sets. SMOTE is a classical oversampling method proposed by Chawla et al. [57]. It is widely used because of its good classification effect and simple operation. The basic principle of the SMOTE algorithm is to synthesize new minority samples between neighbouring minority samples and thus reduce the imbalance of the data distribution. The details are as follows (a short library-based sketch is given after this list): (1) For each sample in the minority class, calculate the Euclidean distance to the other samples in the minority class to obtain its k nearest neighbors.
(2) Assuming that the sampling magnification is N, for each minority-class sample x, N (k > N) samples are randomly selected from its k nearest neighbors and recorded as y_1, y_2, y_3, . . ., y_N.
(3) Combine each sample x with its selected neighbors to perform random interpolation and synthesize the interpolated samples x_new = x + rand(0, 1) × (y_i − x), where rand(0, 1) is a random number in (0, 1) and y_i is the i-th selected nearest neighbor of x. (4) Finally, the interpolated samples are added to the original sample set to form a new sample set.
The imbalance degree of the data set determines the value of N: the imbalance level (IL) between the majority and minority classes is calculated as IL = n_majority / n_minority, and N = ⌈IL⌉, the value obtained by rounding IL up. Through the above interpolation operation, the majority and minority samples can be effectively balanced to improve the accuracy of classification. In this study, the minority classes of the two data sets were balanced by SMOTE, and the processed results are shown in Table 2.
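A short sketch of the balancing step using the SMOTE implementation in the imbalanced-learn library (the paper implements the interpolation itself, so the library call is a stand-in, and the feature matrix and class counts below are illustrative placeholders, not the actual ZD98 distribution).

import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.random((96, 150))                                  # placeholder 150-dimensional pooled features
y = np.array([0] * 43 + [1] * 30 + [2] * 13 + [3] * 10)    # illustrative class counts only

X_bal, y_bal = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(np.bincount(y), np.bincount(y_bal))                  # every class oversampled to the majority size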
Classifier and Performance Measures
In order to facilitate comparison with other feature extraction algorithms, we used a support vector machine (SVM) as the classification model in this study. After feature extraction of the protein sequences, the universal LIBSVM package developed by Lin was applied to construct the SVM multiclass classifier [58]. The jackknife test was adopted to examine the effectiveness of the classifier in our experiments; the jackknife test is the least arbitrary and always yields a unique result for a given benchmark data set [59]. Furthermore, for a more comprehensive evaluation, sensitivity (Se), specificity (Sp), Matthews correlation coefficient (MCC), and the overall accuracy (OA) over the entire data set were used as evaluation indexes [20,21,60]. These parameters are defined as Se = TP / (TP + FN), Sp = TN / (TN + FP), MCC = (TP × TN − FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN)), and OA = Σ_{i=1}^{c} TP_i / N, where TP, TN, FP, and FN are the numbers of true positives, true negatives, false positives, and false negatives, respectively, N is the total number of protein sequences, and c is the number of classes.
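The evaluation protocol above can be sketched as follows with scikit-learn. Kernel and penalty parameters are assumed defaults, not the paper's tuned LIBSVM settings; X_bal and y_bal are the placeholder arrays from the previous sketch; and scikit-learn's multiclass MCC is a single overall value, whereas the paper reports per-class MCC.

from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score, matthews_corrcoef

clf = SVC(kernel="rbf", C=1.0, gamma="scale")                       # assumed settings
y_pred = cross_val_predict(clf, X_bal, y_bal, cv=LeaveOneOut())     # jackknife / leave-one-out test

print("OA :", accuracy_score(y_bal, y_pred))
print("MCC:", matthews_corrcoef(y_bal, y_pred))                     # overall multiclass MCC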
Parameters Selection
There are two key parameters in this study. One is the length of the sequence fragments in the local feature extraction. The shortest protein sequence length in the data set is 50, and the fragment length is selected between 25 and 50. Figure 3 shows the prediction accuracy on the data sets ZD98 and CH317 for different fragment lengths.
As shown in Figure 3, when the sequence length is between 35 and 40, the prediction accuracies on the data sets ZD98 and CH317 are the highest and tend to be stable, so this length is the optimal value; the optimal values for the two data sets used in this study are 35 and 40, respectively. When using PCA to select the final feature vectors, the setting of dimension D affects the accuracy of the entire algorithm. The more dimensions are selected, the more features are included, but the training time of the classifier may become too long; the smaller the dimension, the more likely it is that some truly meaningful features are lost, affecting the classification effect. Therefore, an optimal D needs to be sought through experiments. Figure 4 shows the prediction accuracy corresponding to different values of D for the data sets ZD98 and CH317 during PCA feature selection.
As shown in Figure 4, when the dimension of the feature vector is low, the prediction accuracy on the two data sets is relatively low; when the dimension is higher than a certain value, the prediction accuracy also decreases. When the dimension is between 60 and 70, the prediction accuracies on the data sets ZD98 and CH317 are the highest and tend to be steady, and this D is the optimal value. The optimal values for the two data sets are 60 and 65, respectively.
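A sketch of the PCA-dimension sweep used to choose D (the grid values and classifier settings are assumptions; X_bal and y_bal come from the earlier sketches).

from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score

for d in range(40, 81, 5):                               # candidate dimensions D (illustrative grid)
    X_d = PCA(n_components=d).fit_transform(X_bal)
    y_pred = cross_val_predict(SVC(), X_d, y_bal, cv=LeaveOneOut())
    print(d, round(accuracy_score(y_bal, y_pred), 3))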
Results and Discussion
The prediction results of our experiments by Jackknife on the data sets ZD98 and CH317 are listed in Tables 3, 4, and 5. To further illustrate the effectiveness of our method, the prediction results in each subcellular location of two data sets are also listed in Tables 3-5, which are sensitivity, specificity, correlation coefficient, and overall accuracy, respectively.
It can be seen from Table 3 that the method obtained good experimental results on both data sets, with total accuracy rates of 96.7% and 94.8%, respectively. The experiments show that the method can effectively increase the accuracy of protein subcellular localization prediction. At the same time, to facilitate comparison with other methods, we have listed experimental results based on several improved protein sequence feature extraction algorithms from recent years.
In Tables 4 and 5, DCC SVM comes from Liang [40], using the detrended cross-correlation coefficient (2016); OF SVM comes from Zhang [41], using the -Order Factor and principal component analysis (2017); DE SVM comes from Liang [42], fusing two different descriptors based on evolutionary information (2018); BOW SVM comes from Zhao [24], using bag of words (2017); GA SVM comes from Liang [17], using Geary autocorrelation and the DCCA coefficient (2017); OA SVM comes from Zhang [43], using oversampling and pseudo amino acid composition (2018); IAC SVM comes from Zhang [44], integrating auto-cross correlation and PSSM (2018); EI SVM comes from Xiang [45], using evolutionary information (2017); CF SVM comes from Chen [46], using a set of discrete sequence correlation factors (2015). All of these methods use SVM as the final classifier.
It can be seen from Table 4 that the result on the data set ZD98 shows the largest improvement in overall prediction accuracy, increasing by about 6 to 8 percentage points compared with traditional protein sequence feature extraction algorithms such as DCC SVM, OF SVM, and DE SVM. In the subcellular class of cytoplasmic proteins, the prediction accuracy is 100%, which means that all the sequences in this class are predicted correctly, and the overall prediction accuracy is better than that of the other methods as well. Compared with other improved feature extraction algorithms such as BOW SVM, GA SVM, and OA SVM, the accuracy on the same data set is also improved by about 3 to 5 percentage points. The experiments show that the proposed method indeed provides a better source of information for protein sequences and has significant advantages over other similar feature extraction methods. From the comparison in Table 5, we can see that the prediction result on the mitochondrial proteins of data set CH317 reaches 96.4%, which is about 4.1% to 14% higher than the other algorithms. The accuracy in the nuclear class has also increased by up to 14.2%, and the total prediction accuracy is improved by 3.3 to 4.3 percentage points compared with improved algorithms such as IAC SVM, EI SVM, and CF SVM, which further demonstrates that the method can optimize the underlying features of the sequence and effectively improve the prediction accuracy of apoptosis protein subcellular localization. Compared with traditional protein sequence feature extraction methods and their improved versions, our algorithm not only has low time complexity but can also achieve better results based on the simple AAC feature. The background information of the feature representation can also be extracted by mean pooling, reflecting the distribution of sequence features more comprehensively and improving the classification accuracy.
Conclusions
Prediction of apoptosis protein subcellular localization has always been a hotspot for bioinformaticians all over the world. Based on the traditional protein sequence feature extraction algorithm AAC, this paper introduced sparse coding to optimize sequence features and proposed a feature fusion method based on a multilevel dictionary. The main contributions are as follows: sliding window segmentation was first used to extract sequence fragments of the protein sequences, and the traditional feature extraction algorithm was used to encode them; the K-SVD algorithm was then used to learn the dictionary, and the sequence feature matrix was sparsely represented by the OMP algorithm; the feature representations based on different dictionary sizes were mean-pooled to extract both holistic and local feature information; finally, the SVM multiclass classifier was used to predict the subcellular location of the proteins. The experiments show that the proposed method can obtain better results in the prediction success rate of most subcellular classes and has important guiding significance for improving the feature expression of traditional apoptosis protein sequence feature extraction algorithms. Generally speaking, it is a relatively effective method for predicting the subcellular localization of apoptosis proteins.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request, and can also be found at https://github.com/Multisc/Multi_sc_subloc.
"Computer Science"
] |
Cathelicidin-derived antiviral peptide inhibits herpes simplex virus 1 infection
Herpes simplex virus 1 (HSV-1) is a widely distributed virus. HSV-1 is a growing public health concern due to the emergence of drug-resistant strains and the current lack of a clinically specific drug for treatment. In recent years, increasing attention has been paid to the development of peptide antivirals. Natural host-defense peptides which have uniquely evolved to protect the host have been reported to have antiviral properties. Cathelicidins are a family of multi-functional antimicrobial peptides found in almost all vertebrate species and play a vital role in the immune system. In this study, we demonstrated the anti-HSV-1 effect of an antiviral peptide named WL-1 derived from human cathelicidin. We found that WL-1 inhibited HSV-1 infection in epithelial and neuronal cells. Furthermore, the administration of WL-1 improved the survival rate and reduced viral load and inflammation during HSV-1 infection via ocular scarification. Moreover, facial nerve dysfunction, involving the abnormal blink reflex, nose position, and vibrissae movement, and pathological injury were prevented when HSV-1 ear inoculation-infected mice were treated with WL-1. Together, our findings demonstrate that WL-1 may be a potential novel antiviral agent against HSV-1 infection-induced facial palsy.
Introduction
Herpes simplex virus 1 (HSV-1), a member of the family Herpesviridae, subfamily Alphaherpesvirinae, is an enveloped virus with a double-stranded (ds) DNA genome, consisting of a non-spherical capsid, cortex, and capsule (Whitley and Roizman, 2001; Rechenchoski et al., 2017). HSV-1 is a ubiquitous but important human pathogen. Recent epidemiologic studies estimate that over half of the world's population is infected with HSV-1, making it a global health concern (James et al., 2020; Imafuku, 2023). HSV-1 initially prefers to infect the genital and oral mucosa. Painful blisters or ulcers at the site of infection are caused by contagious and long-lasting infections. The virus can then migrate to the sensory ganglia and enter the latent stage, preventing clearance by the immune system (McQuillan et al., 2018). In brief, HSV-1 infection begins with primary infection in the periphery, followed by lifelong latency in the peripheral nervous system, which can cause various clinical signs and symptoms, such as skin lesions, acute retinal necrosis, genital sores, and other pathologies (Khalesi et al., 2023). Furthermore, HSV-1 can cause fatal systemic infections or encephalitis, problems typically associated with immune-naive or immunocompromised patients (Imafuku, 2023). HSV-1 has been reported to be the most common cause of infectious blindness and fatal encephalitis worldwide. It can also cause
Bell's palsy when it infects the facial nerve, an acute spontaneous facial paralysis that accounts for 50-70% of all peripheral facial paralysis (Imafuku, 2023; Khalesi et al., 2023). Reactivation of latent HSV-1 infection in the geniculate ganglion is considered a major cause of Bell's palsy. Approximately 30% of patients with Bell's palsy remain at risk of continued facial paralysis and pain, although most people recover fully on their own (Zhang et al., 2023). The majority of treatment drugs for HSV-1 are acyclovir, a nucleoside analog, and its derivatives; these compounds are activated by the viral thymidine kinase (TK) and inhibit the viral polymerase, thereby preventing the production of infectious virions (Aribi Al-Zoobaee et al., 2020; Stoyanova et al., 2023). The latent stage and the development of resistance are limitations to the use of these drugs. Moreover, ACV can be responsible for renal cytotoxicity with acute renal failure requiring dialysis (Neto et al., 2007). Additionally, drug-resistant HSV-1 strains, especially acyclovir-resistant strains related to mutations of the viral TK or DNA polymerase genes, have become a major challenge for HSV-1 treatment (Gu et al., 2014). Consequently, new treatment strategies are needed, and to this end it is necessary to develop new, effective anti-HSV-1 drugs. Antimicrobial peptides have received increasing attention from the scientific community over the last 20 years due to the worldwide increase in antibiotic resistance among microorganisms. At present, more than 3,200 natural antimicrobial peptides have been reported in the antimicrobial peptide database (https://aps.unmc.edu). Antimicrobial peptides (AMPs) are molecules present in the innate immune system of almost all living organisms, invertebrates and vertebrates alike, and have been identified as agents with therapeutic potential, as they exhibit marked antibacterial, antiviral, antiparasitic, and antifungal properties (Zasloff, 2002; Barashkova and Rogozhin, 2020; Memariani et al., 2020; De Angelis et al., 2021). Cathelicidin is a member of the AMP family, which is part of the immune system and can be produced by a variety of eukaryotic organisms (Chessa et al., 2020). Cathelicidins are well conserved during genome evolution and have similar modes of action (Zhang et al., 2023). They have shown potent antimicrobial activity against bacteria and viruses, which has gradually become an interesting and promising research topic (Cebrián et al., 2023). Cathelicidins are characterized by two functional domains, namely the conserved cathelin-like proregion and the N-terminal active domain region (Kościuczuk et al., 2012). Several studies have demonstrated that cathelicidins exhibit potent anti-endotoxin properties in vitro and in vivo, both by binding bacterial LPS and by intervening in TLR signaling mechanisms (Rosenfeld et al., 2006). Their antiviral mechanisms include the inhibition of viral entry, intracellular viral replication, and assembly, and the induction of the immune response. For instance, CRAMP, a cathelicidin identified in mouse, can affect the survival and replication of influenza A virus (IAV) in host cells by directly disrupting the IAV envelope (Gallo et al., 1997). PG-1, an antimicrobial peptide of the cathelicidin family, has been reported to inhibit viral infection by blocking the adsorption of porcine reproductive and respiratory syndrome virus (PRRSV) on African green monkey embryonic kidney cells (Guo et al., 2015).
In addition, they show a lower tendency to induce resistance than conventional bacterial antibiotics (Mehmood Khan et al., 2023). As a consequence, they are a new generation of antiviral biomolecules with very low toxicity to human host cells and play a role in the treatment of a variety of diseases and symptoms, which can be considered an appropriate choice for the treatment of resistant pathogens in the future (Chen et al., 2013;AlMukdad et al., 2023;Baindara et al., 2023).
LL-37, a peptide derived from the human cathelicidin, has been shown to resist HSV-1 infection (Lee et al., 2014; Roy et al., 2019); however, the therapeutic use of LL-37 is limited by its length of 37 residues, which results in high chemical synthesis costs. Herein, we designed a 16-amino acid peptide, WL-1, based on human cathelicidin LL-37 and its fragments (Li et al., 2006; He et al., 2018). We demonstrate the antiviral activity of WL-1 against HSV-1 infection and show that the administration of WL-1 ameliorates the pathological symptoms of inflammation and facial paralysis induced by HSV-1 infection. In summary, this study shows that WL-1 may be a potent candidate agent against HSV-1 infection.
Materials and methods
Mice
C57 female mice (8 weeks old, weighing 17-19 g) were purchased from SPF (Beijing, China) Biotechnology Co., Ltd. Mice were group-housed at room temperature under a 12-h light/dark cycle, with free access to water and standard animal food. The animal care and experimental protocol were approved by the Animal Care Committee of Shanxi Agricultural University (approval number SXAU-EAW-2022M.RX.00906001).
Cells, viruses, and peptides
U251 cells and Vero cells were obtained from the Kunming Cell Bank, Kunming Institute of Zoology, Chinese Academy of Sciences. All cells were cultured in DMEM medium (Gibco, Waltham, MA, USA) supplemented with 10% fetal bovine serum (FBS), 100 U/mL penicillin, and 100 µg/mL streptomycin in 5% CO2 at 37 °C. HSV-1 was stored at −80 °C in our laboratory. The peptide WL-1 (GWKRIKQRIKDKLRNL) was designed based on human cathelicidin LL-37 and its existing derivative GF-17 (He et al., 2018). Compared with GF-17, three residues were redesigned (F2W, V6K, and F12K) and one residue was removed (V17), in combination with C-terminal amidation (-NH2). The peptide was synthesized by GL Biochem (Shanghai) Ltd. (Shanghai, China) and analyzed by reversed-phase high-performance liquid chromatography (RP-HPLC) and mass spectrometry. The purity of WL-1 was over 98%.
Cell viability was determined by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) reduction assays. In detail, 20 µl of MTT (5 mg/mL) was added to each well. The MTT solution was then removed, and 200 µl of dimethyl sulfoxide (DMSO) was added to solubilize the MTT-formazan crystals in living cells. The absorbance of the resulting solution at 570 nm was measured.
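A small sketch of the viability calculation implied by the assay (the blank correction is an assumption, as the text does not mention a blank well, and the absorbance values are illustrative).

def percent_viability(a570_treated, a570_control, a570_blank=0.0):
    # Viability relative to the untreated control, from A570 readings
    return 100.0 * (a570_treated - a570_blank) / (a570_control - a570_blank)

print(percent_viability(0.81, 0.90))   # ~90% viability, illustrative numbers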
Plaque-forming assays
HSV-1 (MOI = 0.1) was first co-cultured with different concentrations of WL-1 (0, 2, 10, and 50 µM) or acyclovir (50 µM) for 2 h and then diluted to infect Vero cells. The cell culture medium was then replaced with maintenance medium containing 1% methylcellulose. After 3 days of incubation, the cells were fixed with 10% formalin and subsequently stained with 1% crystal violet. Finally, the plaque-forming units were counted.
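For reference, a generic back-calculation of a viral titer from a plaque count is sketched below; the dilution and inoculum volume are illustrative, as the text does not give them.

def titer_pfu_per_ml(plaque_count, dilution, inoculum_volume_ml):
    # Generic plaque-assay formula: titer = plaques / (dilution x inoculated volume)
    return plaque_count / (dilution * inoculum_volume_ml)

print(titer_pfu_per_ml(plaque_count=42, dilution=1e-5, inoculum_volume_ml=0.1))   # 4.2e7 PFU/mL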
Cell infection
Vero and U251 cells were seeded in a 12-well plate with 5 × 10 5 cells/well for 12 h and then infected with HSV-1 at 0.1 MOI combined with WL-1 (0, 2, 10, 50 µM) or acyclovir (50 µM) administration for 24 h. Cells were then lysed in a TRIzol reagent (TIANGEN, Beijing, China), and the total RNA was extracted under the manufacturer's instructions. The relative expression levels of HSV-1 (HSV-1, UL27, UL52, and UL54) were detected by RT-qPCR using GAPDH as a reference gene.
Quantitative real-time PCR (RT-qPCR)
For the RT-qPCR analysis, total RNA was isolated from cells and cDNA was reverse-transcribed using M-MLV reverse transcriptase (Promega, Madison, WI, USA). RT-qPCR was performed on a StepOnePlus Real-Time PCR System (Thermo, Waltham, MA, USA). The primers used are shown in Table 1. The HSV-1 primer was designed to target the ICP0 gene of HSV-1 to represent virus replication.
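Relative expression from Ct values is commonly computed with the 2^-ΔΔCt method; the sketch below assumes that approach (the paper does not state its exact calculation) and uses illustrative Ct values, with GAPDH as the reference gene.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # 2^-ddCt: target gene normalized to the reference gene (GAPDH) and to the control sample
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

print(relative_expression(ct_target=24.1, ct_ref=18.0, ct_target_ctrl=27.5, ct_ref_ctrl=18.2))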
Mouse infection
For ocular scarification infection, as reported in other studies (Li et al., 2016), mice were divided into four groups: HSV-1 only as the model group, HSV-1 plus WL-1 (5 mg/kg) as the experimental group, HSV-1 plus acyclovir (5 mg/kg) as the positive control group, and PBS as the negative control group. The infection dose of HSV-1 was 4 × 10^4 plaque-forming units (PFU). WL-1 was administered via tail vein injection 1 h after HSV-1 infection. Mice were weighed and monitored daily over time, and the survival rate was observed up to day 20. Mice were euthanized on day 5 after infection, and brains were harvested to determine the viral burden by plaque assay. In addition, an enzyme-linked immunosorbent assay (ELISA) was used to measure inflammatory factors in the serum and brain.
Moreover, HSV-1 infection via ear inoculation was used to induce facial paralysis (Takahashi et al., 2001). Following anesthesia with an intraperitoneal injection of sodium pentobarbital (50 mg/kg), the posterior surface of the left auricle was scratched 20 times with a 27-gauge needle, and 25 µl of virus solution (1.0 × 10^6 PFU) was placed on the scratched area. The negative control group was inoculated with normal saline, while the positive control group and the experimental group were inoculated with HSV-1; the experimental group was then treated with WL-1 (5 mg/kg) via tail vein injection 1 h after HSV-1 infection. Next, the blink reflex, vibrissae movements, and nose tip position were carefully observed every 12 h to evaluate facial paralysis. The blink reflex was tested by blowing air toward the eye from a distance of 2 cm with a 5 ml syringe without a needle, and the movement and closure of the eyelid were observed. The number of vibrissae movements on the inoculated side within 30 s was counted and compared with that of the contralateral side, and the nose tip position was recorded. After 72 h, tissue from the facial nerve and its connection to the brain was dissected and observed, and an inflammation score was assigned.
ELISA
The brains of infected mice as described above were excised and then homogenized in 0.9% NaCl with a glass homogenizer. The supernatant was collected after centrifugation at 2,000 g for 15 min. The serum was obtained by centrifugation of blood at 3,000 rpm for 10 min. The concentration of TNFα and IL-6 in the supernatant and serum harvested above was measured by using enzyme-linked immunosorbent assay kits (Dakewe, Beijing, China) according to the manufacturer's instructions.
Statistical analysis
Data are given as mean ± SEM. Statistical analysis was performed using a two-tailed Student's t-test and log-rank test. A p-value of < 0.05 was considered to be statistically significant.
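A minimal sketch of the two tests named above, using SciPy and the lifelines package as assumed tools; all numbers are illustrative placeholders, not study data.

import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

a = np.array([120.0, 98.5, 110.2, 132.8])      # e.g. a cytokine level in the HSV-1 group
b = np.array([60.1, 72.4, 55.9, 68.0])         # e.g. the same readout in the HSV-1 + WL-1 group
t_stat, p_val = stats.ttest_ind(a, b)          # two-tailed Student's t-test
print(p_val)

days  = [3, 4, 5, 20, 4, 6, 20, 20]            # survival time per mouse (days)
event = [1, 1, 1, 0, 1, 1, 0, 0]               # 1 = died, 0 = censored at day 20
group = [0, 0, 0, 0, 1, 1, 1, 1]               # 0 = HSV-1 only, 1 = HSV-1 + WL-1
res = logrank_test(
    [d for d, g in zip(days, group) if g == 0],
    [d for d, g in zip(days, group) if g == 1],
    event_observed_A=[e for e, g in zip(event, group) if g == 0],
    event_observed_B=[e for e, g in zip(event, group) if g == 1],
)
print(res.p_value)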
Cytotoxicity of WL-1
To distinguish antiviral activity from cellular toxicity, the cytotoxicity of WL-1 was tested on various HSV-1 host cells, including neuronal cell (U251) and epithelial cell (Vero; Figure 1). Cells were treated with different concentrations of WL-1 (0, 0.4, 2, 10, 50, and 250 µM) for 24 h, and the cell viability was analyzed by MTT assays. We found that the cell viability was reduced by no more than 10% by 50 µM WL-1 in all cell lines, which suggested that WL-1 did not exert toxic effects on cells up to 50 µM concentration. Therefore, a concentration of WL-1 below 50 µM was used for the subsequent research.
WL-1 inhibits HSV-1 infection in vitro
To determine whether WL-1 has direct anti-HSV-1 activity, HSV-1 was incubated with different concentrations of WL-1 and a plaque assay was performed on Vero cells. As shown in Figure 2A, the number of viral plaques was significantly decreased by WL-1 in a dose-dependent manner. At a concentration of 10 µM, only half of the infectious particles remained, corresponding to 50% antiviral activity at this dose and indicating that WL-1 reduces the viable viral load. To further assess the antiviral activity of WL-1, Vero epithelial cells were infected with HSV-1 at 0.1 MOI for 24 h in the presence or absence of WL-1. Acyclovir, a major clinical drug used in the treatment of HSV-1 infection, was chosen as a positive control (Gurgel Assis et al., 2021). Notably, administration of WL-1 decreased the intracellular load of HSV-1 in Vero cells (Figure 2B). To confirm these results, we also examined the antiviral effects of WL-1 on neuronal U251 cells, an HSV-1-sensitive cell line; similarly, WL-1 significantly inhibited the replication of HSV-1. In addition, RT-qPCR analysis showed that WL-1 markedly reduced the expression of the viral late gene UL27, the early gene UL52, and the immediate early gene UL54, and the inhibition of HSV-1 became stronger with increasing WL-1 concentration (Figure 2C). The antiviral activity of WL-1 was equivalent to or even better than that of acyclovir at an equal concentration in all cells tested. Taken together, these results indicate a strong antiviral effect of WL-1 against HSV-1 infection that depends on the WL-1 dose within a certain concentration range.
WL-1 suppresses HSV-1 infectivity in mice
To define the role of WL-1 against HSV-1 infection in vivo, wild-type mice were infected with 4 × 10^4 PFU of HSV-1 in each eye (via ocular scarification) with or without WL-1 administration, and survival was monitored over time. Deaths among untreated infected mice began at 3 days postinfection, whereas infected mice receiving WL-1 began to die at 4 days postinfection. In addition, the survival rate of HSV-1-infected mice given WL-1 was ∼90% by day 20, markedly higher than that of mice without WL-1 administration (Figure 3A). To investigate whether the attenuated pathogenesis noted above was due to a decreased burden of HSV-1 in mice receiving WL-1, we analyzed HSV-1 loads in the brain 2 days after infection. Plaque assays revealed lower viral titers in the brains of HSV-1-infected mice given WL-1 than in infected mice without WL-1 (Figure 3A). Moreover, we measured inflammatory cytokines in the peripheral blood and brain, a hallmark of the immune response to pathogenic infection. Strikingly, the production of the inflammatory cytokines IL-6 and TNF-α induced by HSV-1 infection was substantially reduced in the peripheral blood in the presence of WL-1 or acyclovir (Figure 3B). In line with these findings, the production of IL-6 and TNF-α in the brain was significantly decreased by the administration of WL-1 or acyclovir during HSV-1 infection (Figure 3C). Overall, HSV-1 infection in the presence of WL-1 administration resulted in increased survival and decreased viral replication and inflammation in
the host, suggesting that WL-1 has the ability to suppress HSV-1 infection.
WL-1 prevents facial palsy induced by HSV-1
Published studies report that HSV-1 can infect the host in various ways and cause different symptoms (Honda et al., 2002; Kastrukoff et al., 2012; Lee et al., 2015; Caliento et al., 2018). An animal model of HSV-1 ear inoculation has been developed to study Bell's palsy caused by HSV-1 reactivation. To examine whether the protective role of WL-1 against HSV-1 was specific to the infection route and whether WL-1 also had an effect on facial palsy, mice were infected with HSV-1 via ear inoculation, a route known to induce facial palsy, with or without administration of WL-1, and facial nerve function indicators were evaluated up to day 3. As reported, HSV-1 infection resulted in a series of facial nerve dysfunctions.
Figure 3. WL-1 inhibits HSV-1 infection in vivo. (A) Mice were infected via ocular scarification with HSV-1 and then administered WL-1 or acyclovir (labeled as ACV) after infection; survival rates were checked daily, and brain virus loads were measured by plaque assay after infection. (B, C) The production of IL-6 and TNF-α in serum (B) and the brain (C) postinfection of HSV-1 was measured by ELISA. Data represent two independent experiments and are presented as mean ± SEM.
These dysfunctions included loss of the blink reflex, unnatural nose position, and weakness of vibrissae movement. There was no significant difference in the time to onset of an abnormal blink reflex between the WL-1-treated and control groups; however, the proportion of mice with a normal blink reflex was significantly higher among WL-1-treated mice than among untreated mice. The nose tip position and vibrissae movement of mice infected with HSV-1 alone became abnormal from 36 h postinfection, whereas almost all WL-1-treated mice remained normal throughout the infection period. In addition, the administration of WL-1 markedly increased the percentage of mice with normal nose position and vibrissae movement 3 days after HSV-1 infection (Figure 4A). Histopathologic examination of the facial nerve showed that mice treated with WL-1 had a lower HSV-1-induced inflammation score, indicating that WL-1 reduced facial nerve injury (Figure 4B). Taken together, these results indicate that WL-1 can resist viral infection and thereby prevent the occurrence of facial paralysis.
Discussion
HSV-1 causes a wide range of infections, from mild to life-threatening, in the human population. Although treatments for HSV-1 infection exist, they are limited by HSV-1 latency and the development of resistance to current therapeutics. Given the ubiquity of HSV-1 infection and the limitations of existing clinical drugs, it is urgent to develop new specific agents against HSV-1 infection (Sadowski et al., 2021). Antimicrobial peptides (AMPs) are widely distributed in many species and represent highly effective natural defenses that are central weapons for the host to resist infection (Nizet, 2006; Mookherjee and Hancock, 2007; Rossi et al., 2012). They have mechanisms of action different from those of traditional antibiotics, show potent and broad-spectrum antibacterial, antifungal, and antiviral activities, and are less prone to inducing drug resistance (Neto et al., 2007; Aribi Al-Zoobaee et al., 2020). The cathelicidin family is one of the most important groups of AMPs in the host immune defense system. So far, several cathelicidin-related drugs are undergoing clinical trials, indicating that cathelicidins have great potential to be developed as new antimicrobial drugs and alternatives to traditional antibiotics and medicines (Steinstraesser et al., 2011; Wang et al., 2019; Mookherjee et al., 2020; Díez-Aguilar et al., 2021; Bhattacharjya et al., 2022; Talapko et al., 2022). It has been reported that cathelicidins possess many important activities, such as broad-spectrum and high antibacterial activity, anti-inflammation, and tissue damage
inhibition (Auvynet and Rosenstein, 2009; Nijnik and Hancock, 2009; Wuerth and Hancock, 2011; Choi et al., 2012; Kahlenberg and Kaplan, 2013). In fact, several cathelicidins have been found to inhibit various viral infections, including hepatitis C virus, vaccinia virus, and human immunodeficiency virus 1. In this study, we showed that WL-1, an antiviral peptide derived from cathelicidins with low toxicity, inhibits HSV-1 infection in vitro and in vivo, adding to the broad-spectrum antiviral activity of cathelicidins. Specifically, WL-1 not only restrained viral replication but also prevented the pathological symptoms induced by HSV-1 infection. In addition, the anti-HSV-1 effect of WL-1 was superior to that of acyclovir, the most widely used nucleoside analog against HSV-1 infection, at the same concentration. Our data demonstrate that WL-1 has great potential to be optimized as an anti-HSV-1 compound. Whether WL-1 has similar effects on other HSV-1 strains and other DNA viruses remains to be evaluated.

Viral infection can be resisted in many ways by antimicrobial peptides, drugs, and other small-molecule inhibitors. For example, the antimicrobial peptide CATH-2 has been reported to modulate the inflammatory response by regulating the secretion of inflammatory cytokines and activation of the NLRP3 inflammasome, resulting in protection against IAV infection (Coorens et al., 2017; Peng et al., 2022). A defense peptide called An1a restricts dengue and Zika virus infection by inhibiting the viral NS2B-NS3 protease (Ji et al., 2019). Acyclovir inhibits the enzymatic activity of HSV thymidine kinase (TK), thereby interrupting viral DNA replication (Vashishtha and Kuchta, 2016; Sadowski et al., 2021). Moreover, the Artemisia argyi leaf extract AEE destroys the membrane integrity of HSV-1 viral particles, resulting in impaired viral attachment and penetration. These findings reveal various molecular mechanisms by which host-defense peptides act against viral infections, including direct viral killing, regulation of viral infection, and participation in host immune regulation. We found that WL-1 treatment reduced viral titers, so we speculate that WL-1 may act against HSV-1 through mechanisms similar to those described for LL-37, which damages the viral membrane envelope or inhibits HSV-1 adsorption to cells (Howell et al., 2004; Lee et al., 2014). Likewise, the administration of WL-1 clearly reduced the viral load, viral gene expression, and production of inflammatory factors in cells and mice, suggesting that WL-1 may also participate in regulating the viral replication cycle and the inflammatory response caused by infection. In short, the mechanism by which WL-1 resists HSV-1 infection is likely complex and requires further research.

To date, many treatments and drugs have shown no significant effect on Bell's palsy, the most common form of peripheral facial palsy (Gronseth and Paduga, 2012; Zandian et al., 2014; Newadkar et al., 2016; Zhang et al., 2019). Antivirals and steroids have been the most commonly prescribed medications in the early stage of Bell's palsy (Jalali et al., 2021). Antiviral therapy alone showed no benefit compared with placebo (Gallo et al., 1997). 
When researchers compared the effect of steroids with placebo, they found that the reduction in the proportion of patients who did not fully recover at 6 months was small and not statistically significant (Gu et al., 2014). Combined oral steroids and antivirals were associated with a lower rate of incomplete recovery when compared with oral steroids alone (Gallo et al., 1997). Another study showed that patients treated with steroids combined with acyclovir had a higher overall recovery rate than those treated with steroids alone, but the difference was not statistically significant (Yeo et al., 2008). Because acyclovir's bioavailability is relatively low (15-30%), new drugs are being investigated (Newadkar et al., 2016). Surprisingly, we found that WL-1 can not only reduce the viral load of HSV-1 but also prevent the occurrence of facial palsy induced by infection, indicating that WL-1 can be used as a potential drug for treating Bell's palsy. Certainly, further studies are needed to evaluate the effect of WL-1 on Bell's palsy using other models.
Conclusion
In summary, the designed peptide WL-1 showed strong antiviral activity against HSV-1 infection and prevented the occurrence of HSV-1-induced facial palsy in mice. Therefore, it may be an excellent candidate or template for the development of a therapeutic agent against clinical HSV-1 infection. Further effort is needed to understand the antiviral mechanism of WL-1.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The animal study was reviewed and approved by the Animal Care Committee of Shanxi Agricultural University, approval number: SXAU-EAW-2022M.RX.00906001.
"Biology"
] |
Physiological Noise Filtering in Functional Near-Infrared Spectroscopy Signals Using Wavelet Transform and Long-Short Term Memory Networks
Activated channels of functional near-infrared spectroscopy are typically identified using the desired hemodynamic response function (dHRF) generated by a trial period. However, this approach is not possible for an unknown trial period. In this paper, an innovative method that does not use the dHRF is proposed: it extracts fluctuating signals during the resting state using the maximal overlap discrete wavelet transform, identifies low-frequency wavelets corresponding to physiological noise, trains them using long short-term memory networks, and predicts and subtracts them during the task session. The motivation for prediction is to maintain the phase information of the physiological noise at the start time of a task, which is possible because the signal is extended from the resting state into the task session. This technique decomposes the resting state data into nine wavelets and uses the fifth to ninth wavelets for learning and prediction. For the eighth wavelet, the difference in prediction error between the signals with and without the dHRF was largest for the 15-s prediction window. Considering the difficulty of removing physiological noise when the activation period is close to that of the physiological noise, the proposed method can be an alternative solution when the conventional method is not applicable. In passive brain-computer interfaces, estimating the brain signal starting time is necessary.
Introduction
In processing functional near-infrared spectroscopy (fNIRS) signals, a task-related hemodynamic signal cannot be identified if a physiological noise period overlaps the designed task period. This study proposes a novel method to identify physiological noise from the resting state and remove it during the task period using wavelet techniques and neural network-based prediction. fNIRS is a brain-imaging technique that uses two or more wavelengths of light in the near-infrared band to measure changes in oxygenated and deoxygenated hemoglobin concentration in the cerebral cortex [1]. When a person moves, thinks, or receives an external stimulus, the nerve cells in the cerebral cortical layers become excited. As the cells require more energy, the oxygenated hemoglobin concentration around the nerve cells increases, and the deoxygenated hemoglobin concentration decreases [1]. Based on this principle, fNIRS can measure brain activity in real time. Because fNIRS is inexpensive, easy to use, and harmless to the human body, it has been used in brain disease diagnosis [2,3], brain-computer interfaces (BCI) [4], decoding of sensory signals [5,6], child development [7], and psychology research [8].
An fNIRS channel consists of one source and one detector. When the light is emitted from a light source, photons pass through several layers, including the scalp, skull, cerebrospinal fluid, capillaries, and cerebral cortex, before returning to a detector. Through this process, the detected light contains various noises that make it challenging to know the hemodynamic responses. These noises include heartbeat, breathing, and motion artifacts [9]; more problematically, very low-frequency noise around 0.01 Hz has been reported [10][11][12].
To improve the accuracy of the measured signal, noise removal and reduction techniques are indispensable. Various techniques can remove physiological noises such as heartbeat, breathing, and Mayer waves. For instance, the superficial noise in the scalp can be removed using short-separation channels [13], additional external devices, or denoising techniques such as adaptive filtering [14] and correlation analysis methods [15]. In addition, since the frequency bands of physiological noise are roughly known, a band-pass filter has become one of the most easily applied noise reduction techniques [16].
A general linear model (GLM) method has been widely used to find the task-related hemodynamic response in the fNIRS signal after preprocessing [17]. The desired hemodynamic response function (dHRF), which should be used for the GLM method, is designed considering the experimental paradigm. However, in the case that the essential frequency of the dHRF overlaps with a specific frequency of physiological noise, the conventional GLM method will not work and may result in mistaking noise for the hemodynamic response. Therefore, a new different denoising technique must be pursued.
A discrete wavelet transform (DWT) is a mathematical tool used to analyze signals in the time-frequency domain [18]. In fNIRS research, DWT has been used for denoising [19,20] and connectivity analysis [21,22]. The maximal overlap discrete wavelet transform (MODWT) is a type of DWT often used in signal processing and time series analysis [23]. It decomposes a signal into a series of wavelet coefficients at different widths and time locations. Unlike the usual DWTs, which use non-overlapping sub-signal windows to perform the wavelet decomposition, the MODWT uses overlapping sub-signal windows. This nested-window approach allows the MODWT to improve time-frequency localization and reduce the boundary effects that can occur in DWTs [24]. Due to this advantage, MODWT has been applied to a wide range of signals, including audio signals [25], weather information [26,27], and biomedical signals [28]. MODWT is powerful when the signals are abnormal or have complex frequency components.
Deep learning, a subfield of artificial intelligence, is based on artificial neural networks. In recent years, brain research has increasingly used it to analyze large, complex data sets, such as those generated by biomedical devices [29]. Research has been conducted to analyze health data such as magnetic resonance imaging (MRI) [30], electrocardiograms (ECG) [31] and electroencephalograms (EEG) [32,33], or to decode brain waves to control BCI [34,35]. Furthermore, analyzing brain neuroimaging data and identifying patterns associated with specific diseases can help with early diagnosis and personalized treatment.
Long short-term memory (LSTM) is a type of recurrent neural network (RNN) architecture designed to overcome the limitations of traditional RNNs in handling long-term dependencies in sequential data [36]. It has been used in a wide range of applications for time-series data classification and forecasting [37][38][39][40]. LSTMs are particularly useful in tasks that require modeling long-term dependencies in sequential data. LSTMs' ability to selectively remember and forget information over time is vital for accurate forecasting.
MODWT-LSTM-based prediction research has shown excellent results in predicting periodic data such as water level [41], ammonia nitrogen [42], weather [43], etc. In brain research, MODWT has been applied as a preprocessing method for EEG-based seizure detection [28,44], Alzheimer's diagnosis [45], and resting state network analysis of fMRI [46]. Since brain signals are measured in time series, active research on brain signal classification [47,48] uses LSTM. However, to our knowledge, this is the first study to predict the noise in fNIRS signals despite many of the noise components being periodic.
In this study, one thousand synthetic datasets are generated, assuming 600 s of rest and a 40-s task. Each dataset is decomposed into eight levels by the MODWT. Five wavelets containing the low-frequency components of the 600-s data are used to train an LSTM network. The trained LSTM networks are used to predict the next 40 s, which presumably represent the low-frequency oscillations. The predicted signals are then subtracted from the task period data. For validation purposes, the predicted signal and the original data are compared by calculating mean absolute errors (MAEs) and root mean square errors (RMSEs). Finally, the proposed method is demonstrated by analyzing actual fNIRS data from humans. This paper is organized as follows: Section 2 describes the proposed method on the synthetic data, Section 3 demonstrates the proposed method with actual fNIRS data, Section 4 discusses the results of this study and its applications, and Section 5 presents conclusions.
Method Development
This section describes the development of the proposed method in the following subsections. The first subsection describes the generation of the synthetic fNIRS data. The second and third subsections explain the operation of MODWT and LSTM, respectively. The fourth subsection describes the validation of the proposed method, and the last subsection presents the results of the synthetic data analysis.
Synthetic fNIRS Data Generation
One thousand synthetic datasets were generated according to the method of Germignani et al. [49] with a sampling frequency of 8.138 Hz. For each dataset, autoregressive noise of order 30 was added to the baseline noise [50]. The synthetic physiological noises included frequency ranges of 1 ± 0.1 Hz, 0.25 ± 0.01 Hz, and 0.1 ± 0.01 Hz for cardiac, respiratory, and Mayer waves, respectively. In addition, a sine wave with a frequency of 0.01 ± 0.001 Hz was generated for the very low-frequency component [11]. The amplitudes of the five signals in a synthetic fNIRS signal were set randomly in the range of 0.01 to 0.03. In this paper, the resting period was set to 10 min, considering that the low-frequency noise of concern is near 0.01 Hz.
For five hundred of the data samples, a desired hemodynamic response function (dHRF) based on a 2-gamma function, consisting of 20 s of task followed by 20 s of rest after the 10-min resting state, was added. The amplitude of this signal was randomized between 0.1 and 0.35 and added to the previously generated noise. All data were set to zero at the starting point before processing the signals. Figure 1 depicts the synthetic signals for the various noises and the resulting assumed HbO signal.
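A minimal sketch of one such synthetic HbO trace is given below. The sampling rate, component frequencies, amplitude ranges, and task timing follow the description above, whereas the exact autoregressive model of [50] and the 2-gamma parameters are simplified assumptions.

```python
# Hedged sketch of a synthetic HbO trace: 600 s rest + 40 s task at 8.138 Hz.
import numpy as np
from scipy.stats import gamma

fs, t_rest, t_task = 8.138, 600.0, 40.0
t = np.arange(0.0, t_rest + t_task, 1.0 / fs)
rng = np.random.default_rng(0)

def osc(f0, df):
    """One physiological oscillation with randomized frequency, amplitude, and phase."""
    f = rng.uniform(f0 - df, f0 + df)
    a = rng.uniform(0.01, 0.03)
    return a * np.sin(2.0 * np.pi * f * t + rng.uniform(0.0, 2.0 * np.pi))

cardiac, resp, mayer, vlf = osc(1.0, 0.1), osc(0.25, 0.01), osc(0.1, 0.01), osc(0.01, 0.001)

# Order-30 autoregressive baseline noise (coefficient values are an assumption).
ar = np.zeros_like(t)
coeffs = rng.normal(0.0, 0.02, 30)
white = rng.normal(0.0, 0.005, t.size)
for i in range(30, t.size):
    ar[i] = coeffs @ ar[i - 30:i][::-1] + white[i]

# 2-gamma HRF convolved with a 20-s task boxcar starting at 600 s (canonical-style parameters assumed).
tt = np.arange(0.0, 30.0, 1.0 / fs)
hrf = gamma.pdf(tt, 6) - gamma.pdf(tt, 16) / 6.0
boxcar = ((t >= t_rest) & (t < t_rest + 20.0)).astype(float)
dhrf = np.convolve(boxcar, hrf)[: t.size]
dhrf = rng.uniform(0.1, 0.35) * dhrf / dhrf.max()

hbo = cardiac + resp + mayer + vlf + ar + dhrf
hbo -= hbo[0]   # zero the starting point, as in the paper
```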
Maximal Overlap Discrete Wavelet Transform
The discrete wavelet transform (DWT) is a signal processing technique that decomposes a signal into different frequency components at multiple levels of resolution. The DWT works by convolving the signal with a set of filters, called wavelet filters, which capture different frequency bands. The signal is decomposed into approximation and detail coefficients [19], which represent low-frequency components and high-frequency components, respectively. This decomposition is applied recursively to the approximation coefficients to obtain a multi-resolution representation. However, the DWT has several drawbacks, including the introduction of boundary artifacts due to the filtering process, the lack of shift invariance in the decomposition, and the potential loss of fine detail at higher decomposition levels. Zhang et al. (2018) [51] utilized the DWT in forecasting vehicle emissions and specifically compared four cases: The autoregressive integrated moving average (ARIMA) model, LSTM, DWT-ARIMA, and DWT-LSTM. They reported that adopting DWT improved the performance overall. Individually, between ARIMA and LSTM, LSTM performed better; between ARIMA and DWT-ARIMA, DWT-ARIMA generated improved results; between LSTM and DWT-LSTM, DWT-LSTM was superior; and between DWT-LSTM and DWT-ARIMA, DWT-LSTM demonstrated the best forecasting.
MODWT is a mathematical technique that transforms a signal into a multilevel wavelet and scaling factor. MODWT has several advantages over DWT. For example, the MODWT can be adequately defined for signals of arbitrary length, whereas the DWT is only for signals of integer length to the power of two.
The MODWT wavelet and scaling coefficients at level j are obtained by circularly filtering the signal X_t with rescaled filters:

W_{j,t} = Σ_{l=0}^{L_j−1} h̃_{j,l} X_{(t−l) mod N},   V_{j,t} = Σ_{l=0}^{L_j−1} g̃_{j,l} X_{(t−l) mod N},

where W_{j,t} is the wavelet coefficient of the tth element at the jth level of the MODWT; V_{j,t} is the scaling coefficient of the tth element at the jth level; h̃_{j,l} ≡ h_{j,l}/2^{j/2} and g̃_{j,l} ≡ g_{j,l}/2^{j/2} are the rescaled high-pass and low-pass filters; h_{j,l} and g_{j,l} are the jth-level DWT high-pass and low-pass filters of length L_j; and L is the maximum decomposition level. The filters are determined by the mother wavelet, as in the DWT [52]. The MODWT-based multiresolution analysis is expressed as follows.
X_t = A_{L,t} + Σ_{j=1}^{L} D_{j,t}, where A_L is the approximation component and D_j (j = 1, 2, ..., L) are the detail components. Figure 2 shows a scheme of the MODWT-based multiresolution analysis. In this study, Sym4 was selected as the mother wavelet because it resembles the canonical hemodynamic response function. Let the number of data points be N; then the maximum decomposition level must be less than log2(N). Considering the shortest resting state of 60 s in our case, the data size is 60 s × 8.13 Hz = 487.8. Therefore, the decomposition level in this work was set to 8, the largest integer less than log2(487.8). The eight-level decomposition results in nine signals, of which only the five belonging to low frequencies are predicted.
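The sketch below shows an equivalent decomposition in Python. PyWavelets does not ship a MODWT routine, so the closely related stationary (undecimated) wavelet transform pywt.swt is used here as a stand-in; the mother wavelet (Sym4), the eight levels, and the selection of the five low-frequency signals follow the description above.

```python
# Hedged sketch: undecimated wavelet decomposition as a MODWT stand-in.
import numpy as np
import pywt

fs, level = 8.138, 8
rng = np.random.default_rng(0)
x = rng.standard_normal(int(fs * 640))        # placeholder trace; in practice the HbO signal above

# pywt.swt needs the length to be a multiple of 2**level, so pad symmetrically and crop afterwards.
pad = (-len(x)) % (2 ** level)
xp = np.pad(x, (0, pad), mode="symmetric")

coeffs = pywt.swt(xp, "sym4", level=level)     # [(cA8, cD8), (cA7, cD7), ..., (cA1, cD1)]
approx = coeffs[0][0][: len(x)]                # deepest approximation, the "ninth" signal
details = [cD[: len(x)] for _, cD in coeffs]   # detail signals for levels 8, 7, ..., 1

# Details 8-5 plus the approximation give the five low-frequency signals
# (from roughly 0.13-0.26 Hz down to below 0.017 Hz) that are fed to the LSTM predictors.
low_freq = [details[0], details[1], details[2], details[3], approx]
```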
Long Short-Term Memory
LSTM is a type of RNN architecture that addresses the vanishing gradient problem and allows long-term dependencies in sequential data to be captured. LSTM consists of memory cells that store and update information over time. The primary function of an LSTM is to use memory cells that can hold information for long periods. Memory cells can selectively forget or remember information based on the input data and past states. This allows the network to learn and remember important information while ignoring irrelevant or redundant information. An LSTM network has three gates (input gate, forget gate, and output gate) that control the flow of information into and out of the memory cells. The input gate i(t) determines which information is stored in the memory cell c(t), the forget gate f(t) determines which information is discarded, and the output gate o(t) controls the output of the memory cell (Figure 3) [53]. The LSTM model is represented by the following equations:

f(t) = σ(W_f x(t) + U_f h(t − 1) + b_f),
i(t) = σ(W_i x(t) + U_i h(t − 1) + b_i),
o(t) = σ(W_o x(t) + U_o h(t − 1) + b_o),
c̃(t) = tanh(W_c x(t) + U_c h(t − 1) + b_c),
c(t) = f(t) ⊙ c(t − 1) + i(t) ⊙ c̃(t),
h(t) = o(t) ⊙ tanh(c(t)),

where c(t − 1) and c(t) are the cell states at t − 1 and t, x(t) is the input, h(t) is the hidden state, W, U, and b are the weights and biases of each gate, σ is the sigmoid function, tanh is the hyperbolic tangent activation function, and ⊙ denotes element-wise multiplication. In this study, three LSTM layers were utilized with the number of hidden units set to [128, 64, 32], and a dropout layer with a probability of 0.2 was employed between the LSTM layers to prevent overfitting (Figure 4). To train the LSTM network, the Adam optimizer was used with a maximum of 100 epochs and a minibatch size of 128. All data were normalized before training. For the synthetic data, nine hundred datasets were randomly selected from the thousand to train the network, and the remaining one hundred were used for testing. For the actual fNIRS data, a leave-one-out method was used to avoid splitting data from the same person between training and testing; for example, to train an LSTM network to predict the 48 channels of one subject, a total of 432 channels (nine subjects × 48 channels) were used. Since each recording was only 600 s long, 570 s of data were used for training, and the trained LSTM network predicted the next 30 s.
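A minimal sketch of this forecasting network, written with tf.keras, is shown below; the layer sizes, dropout, optimizer, and batch settings follow the description above, while the input window length and one-step prediction horizon are illustrative assumptions.

```python
# Hedged sketch of the three-layer LSTM forecaster (window length and horizon are assumptions).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

lookback, horizon = 256, 1    # input samples per window / predicted samples per step (assumed)

model = tf.keras.Sequential([
    layers.Input(shape=(lookback, 1)),
    layers.LSTM(128, return_sequences=True),
    layers.Dropout(0.2),
    layers.LSTM(64, return_sequences=True),
    layers.Dropout(0.2),
    layers.LSTM(32),
    layers.Dense(horizon),
])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")

def make_windows(w, lookback, horizon):
    """Sliding-window input/target pairs from one normalized wavelet component."""
    X, y = [], []
    for i in range(len(w) - lookback - horizon + 1):
        X.append(w[i:i + lookback])
        y.append(w[i + lookback:i + lookback + horizon])
    return np.asarray(X)[..., None], np.asarray(y)

# X, y = make_windows(wavelet_component, lookback, horizon)
# model.fit(X, y, epochs=100, batch_size=128, verbose=0)
# Feeding each prediction back as input extends the forecast over the 30-40 s task window.
```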
Validation
To determine the accuracy of the signal predicted by the LSTM, the mean absolute error (MAE) and root mean squared error (RMSE) were calculated against the original signal, separately for signals with and without dHRF. The data were segmented, analyzed, and predicted to find the resting-state length required to achieve optimal prediction accuracy, as shown in Figure 5. MAE and RMSE can be calculated using the following equations.
MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i| and RMSE = sqrt((1/n) Σ_{i=1}^{n} (y_i − ŷ_i)^2), where y_i is the original signal, ŷ_i is the predicted signal, i is the timestep, and n is the number of data points. The calculated MAEs and RMSEs of the signals with and without dHRF were compared using a two-sample t-test.
Synthetic Data Analysis
The synthetic data were decomposed into nine components using MODWT, and the components used for prediction were the fifth through ninth. The frequency of the fifth wavelet was between 0.13 and 0.26 Hz, the sixth between 0.067 and 0.13 Hz, the seventh between 0.035 and 0.067 Hz, the eighth between 0.017 and 0.035 Hz, and the ninth consisted of signals below 0.017 Hz. Figure 6 shows the prediction results of the signal with and without dHRF. The signal with dHRF showed a significant fluctuation during the task period in the low-frequency signals of Wavelets 6-9, and the predicted signal did not follow this fluctuation.
Figure 7 and Table 1 show the calculated MAEs and RMSEs. In all conditions, the MAEs and RMSEs of the signal with dHRF corresponding to Wavelets 6-9 differed significantly from those of the signal without dHRF. For Wavelet 5, the only statistically significant difference between with and without dHRF was found in the RMSE when the MODWT-LSTM analysis was performed with 300 s of data (Figure 7c). To compare the prediction results across conditions, MAEs and RMSEs for all conditions are shown in Figure 8. The error of the dHRF signal was largest in Condition 2 (MODWT-LSTM with 600 s of data) and smallest in Condition 6 (MODWT-LSTM with 60 s of data). In particular, the difference in prediction accuracy between signals with and without dHRF was largest for Wavelet 8 in all conditions. For the 600 s data prediction results, MAEs and RMSEs were calculated for windows of 1 s, 3 s, 5 s, 10 s, 15 s, and 30 s (Figure 9). In all cases, there were statistically significant differences in Wavelets 6-9 between with and without dHRF. For Wavelet 7, the difference was largest at 1 s and decreased thereafter, whereas for Wavelet 8 the difference appeared from 10 s and was largest at 15 s.
Human Data Application
In this section, actual fNIRS data from human subjects were used to validate the proposed method. The actual fNIRS data were obtained in the authors' previous study, but only resting state data were used [2]. In the first subsection, the fNIRS data acquisition is briefly described. The second subsection describes the results of the application of the proposed method.
fNIRS Data Acquisition
Resting state data with a length of 10 min were selected from ten healthy subjects, five males and five females (age: 68 ± 5.95 years). Prior to the experiment, each subject was fully informed about the purpose of the study, and written informed consent was obtained from each subject. The entire experiment was approved by the ethics committee of Pusan National University Yangsan Hospital (Institutional Review Board approval number: PNUYH-03-2018-003).
Hemodynamic responses in the prefrontal cortex (PFC) were measured with a portable fNIRS device (NIRSIT; OBELAB, Seoul, Republic of Korea) equipped with 24 sources (laser diodes) and 32 detectors (a total of 204 channels, including short-separation channels) at a sampling rate of 8.138 Hz. NIRSIT uses two wavelengths of near-infrared light (780 nm and 850 nm) to measure concentration changes of HbO and HbR. Only the 48 channels with a 3 cm source-detector distance out of the 204 channels were used for this study.
Human Data Analysis
The prediction results for the actual HbO data are shown in Figure 10. Unlike the synthetic data, the amplitude of the ninth wavelet was significantly lower than the other wavelets. A spike appeared in all the wavelets at a particular time, presumably a motion artifact. The MODWT results differed at both ends of the wavelets for the 570 s data and the 600 s data.
Table 2 shows the mean and standard deviation of the MAEs and RMSEs of the predictions on the real HbO data; the average values are plotted for easy comparison (Figure 11). The ninth wavelet had the smallest error but the largest standard deviation across all cases. The fifth and sixth wavelets showed increasing errors until 3 s and 5 s, respectively, after which the errors decreased. The errors of the seventh and eighth wavelets increased as the time window increased.
Discussion
In fNIRS studies, cognitive tasks are used to evaluate cognitive abilities such as working memory, conflict processing, language processing, emotional processing, and memory encoding and retrieval [54]. For example, N-back, Stroop, and verbal fluency tasks evaluate working memory, conflict processing, language processing, etc. Such cognitive tasks are also often used to detect brain diseases such as schizophrenia, depression, cognitive impairment, attention-deficit hyperactivity disorder, etc. [55].
Cortical activations caused by cognitive tasks are investigated by a t-map, a connectivity map, or extracted features from HbO signals [2,3]. The t-map is reconstructed with t-values from the GLM method, indicating the dHRF's weight at each channel. The connectivity map is an image map of correlation coefficients between two channels, which reflects how those two channels are interrelated. Hemodynamic features such as the mean, slope, and peak value have also been used to diagnose brain diseases. Cognitive task analysis can identify activated/deactivated regions and differences between healthy and non-healthy people.
The proposed method was validated in two ways: (i) by comparing synthetic data with and without dHRF, and (ii) by predicting the resting state data. In the synthetic data, the proposed method showed statistically significant differences in the prediction errors between signals with and without dHRF. The prediction errors in the human resting state data also showed concordance with the results of the synthetic data without dHRF. The agreement between the synthetic data without dHRF and the human resting state data demonstrates that the task-related response can also be differentiated by the proposed method.
Since the hemodynamic signal in this study consisted of 20 s of task and 20 s of rest and had a frequency of 0.025 Hz, it was expected that the eighth wavelet would show a significant difference with and without dHRF. As shown in Figure 6, the wavelet decomposition of the signal with dHRF was different from the signal without in the sixth through ninth wavelets. As expected, a statistically significant difference was found in the eighth wavelet, but the sixth, seventh, and ninth wavelets also showed significant differences. This is likely due to the decomposition of the dHRF into multiple levels when performing the MODWT.
The LSTM results show that the difference between with and without dHRF is more pronounced when the number of training data points increases (Figure 7a,b). In addition, the smaller the number of training data points, the smaller the prediction error of the signal with dHRF and the larger the prediction error of the signal without dHRF. This is not surprising, since sufficient data are required for proper training of the LSTM.
To investigate whether the occurrence of hemodynamic signals can be predicted early, MAEs and RMSEs were estimated by dividing the predicted data into windows of 1 s, 3 s, 5 s, 10 s, 15 s, and 30 s. The difference in error between the seventh wavelet with and without dHRF was significant early on, whereas for the eighth wavelet the difference became significant at 15 s, because the dHRF takes more than 10 s to rise to its maximum.
When the proposed method was applied to real data, the error was similar to that of the synthetic data without dHRF. The smallest error occurred in the ninth wavelet, which appears to be due to its low signal strength. Initially, wavelets with higher frequencies produced relatively higher errors, but the opposite was true as the prediction time increased. This suggests that the results of the MODWT change as the data length varies, and this effect is more pronounced at both ends of the data.
Methods to estimate the hemodynamic response and remove noise from fNIRS signals include Kalman filtering [56], Bayesian filtering [57], block averaging [58], general linear models [59], and adaptive filtering [14,60,61]. In addition, initial-dip detection has also been studied for early detection of hemodynamic responses [62,63]. However, these methods rely heavily on the desired hemodynamic function as a reference signal ( Table 3). The hemodynamic signal is designed by gamma functions [64], the balloon model [65], the finite element method [66], the state-space method [67,68], etc. These hemodynamic signals are not suitable for use in unknown areas because they depend on the brain region or task being measured. However, the proposed method is differentiated from existing methods in that it does not require a reference signal and can be applied without external devices.
Conclusions
The following three implications are made: (i) Alleviating the dHRF's trap: In the conventional methods (i.e., general linear model [59], recursive estimation method [60], etc.), the brain signal is identified by comparing HbO signals with a dHRF. If the correlation coefficient between two signals is high, the measured HbO is attributed to the task. The dHRF computed by convolving a gamma function with the task period contains multiple frequencies, not a single frequency. For example, for a 20 s task followed by a 20 s rest, the dHRF has 0.025 Hz (=1/40 s), and all other components are considered noises. Such multiple frequencies are also seen from the synthetic data analysis, showing that the added 0.025 Hz dHRF affected neighboring frequency bands, see Figure 6. Therefore, if the brain signal is identified with only the dHRF, the neighboring signals are unwillingly included (which could be noises). Hence, the proposed method can alleviate the dHRF's trap.
(ii) Can handle an unknown task period: In neuroscience, fNIRS has been used to identify brain regions associated with specific tasks and to understand how neural networks function. In particular, regular examinations in daily life are essential for the early detection of cognitive decline due to brain disease or aging. Research on the classification of cognitive decline and brain disease diagnosis using fNIRS is being actively conducted. However, it is challenging to establish classification criteria because hemodynamic signals vary depending on various factors such as age and gender. In particular, it is necessary to compare behavioral data and fNIRS signals for classification, and the duration of cognitive function tests belonging to neuropsychological tests should be pre-designed. Thus, the proposed method can be used when the task period to be observed is unknown or very long.
(iii) Starting time estimation for passive BCI: Recently, passive BCI has become essential for fault-free automotive cars, pilots, etc. In this case, the brain signal's starting time has to be identified. To estimate the starting time, a moving-window approach can be adopted. If the prediction error becomes large while moving the window, the instance of a significant error can be considered as the starting time of a passive brain signal, and we can generate a BCI command.
The proposed method can overcome the variability of the resting state, which varies from person to person, by predicting the subsequent signal. The predicted signal should be removed from the measured signal, and the remaining signal should be analyzed for brain activity. Although the proposed method has some limitations, e.g., the large volume of training data and the computation time needed to train the model for the first time, it is expected to play a significant role in improving the temporal resolution of fNIRS in the future.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data and code that support the findings of this study are openly available at https://github.com/sohyeonyoo/MODWT-LSTM.
"Engineering",
"Medicine"
] |
Quantitative Assessment of Fundus Tessellated Density and Associated Factors in Fundus Images Using Artificial Intelligence
Purpose This study aimed to quantitatively assess fundus tessellated density (FTD) and associated factors on the basis of fundus photographs using artificial intelligence. Methods A detailed examination of 3468 individuals was performed. The proposed method for FTD measurement consists of image preprocessing, sample labeling, a deep learning segmentation model, and FTD calculation. Fundus tessellation was extracted as the region of interest, and FTD was then obtained by calculating the average exposed choroid area per unit area of fundus. Univariate and multivariate linear regression analyses were conducted for the statistical analysis. Results The mean FTD was 0.14 ± 0.08 (median, 0.13; range, 0-0.39). In multivariate analysis, FTD was significantly (P < 0.001) associated with thinner subfoveal choroidal thickness, longer axial length, larger parapapillary atrophy, older age, male sex, and lower body mass index. Correlation analysis suggested that the FTD increased by 33.1% (r = 0.33, P < 0.001) for each decade of life. Correlation analysis also indicated a negative correlation between FTD and spherical equivalent (SE) in the myopic participants (r = −0.25, P < 0.001) and no correlation between FTD and SE in the hypermetropic and emmetropic participants. Conclusions It is feasible and efficient to extract FTD information from fundus images by artificial intelligence-based image processing. FTD can be widely used in population screening as a new quantitative biomarker for the thickness of the subfoveal choroid. The association of FTD with pathological myopia and lower visual acuity warrants further investigation. Translational Relevance Artificial intelligence can extract valuable clinical biomarkers from fundus images and assist in population screening.
Introduction
Fundus tessellation, defined as the visibility of large choroidal vessels at the posterior fundus pole outside of the peripapillary region, is the only simple way to observe the choroidal vascular structure under direct vision. 1,[2][3][4][5] Previous studies have found that fundus tessellation is closely related to age and myopic refractive error and is treated as one of the important incipient manifestations of pathological myopia. [3][4][5] Fundus tessellation may also be associated with various ocular diseases, such as angle-closure glaucoma, age-related macular degeneration (AMD), pathological myopia, central serous chorioretinopathy, choroidal neovascularization, and uveitis, among others. 1,[6][7][8][9][10][11][12][13][14] All factors that affect the visibility of choroidal vessels can be reflected qualitatively in the fundus tessellation, including atrophy of the choroidal capillaries, the density of choroidal pigment, the distribution of choroidal vessels, and more. 6 Although the morphological characteristics of fundus tessellation can be observed visually, traditional imaging analysis is limited in measurement accuracy. 1,15 Because of technical limitations, it has been impossible to quantitatively extract effective indicators of fundus tessellation from abnormal fundus images for analysis. 7 Recently, with the development of artificial intelligence image processing technology, computer vision and region of interest (ROI) extraction can effectively and efficiently identify texture nuances that cannot be distinguished by the human eye. 8 Artificial intelligence, involving convolutional neural networks and computer vision, has been developed to implement computational systems that extract representations directly from huge numbers of images without explicitly designed hand-crafted features. 16,17 Artificial intelligence techniques trained on fundus images can automatically detect various ophthalmic diseases with competitive or close-to-expert performance. 18,19 This study assesses the distribution of fundus tessellated density (FTD) and its associations with ocular and systemic factors and ocular diseases in population-based epidemiological research, establishing a new quantitative index of fundus tessellation based on artificial intelligence image processing that simply extracts the exposed choroidal area from the fundus image.
Study Population
The Beijing Eye Study is a population-based study conducted in northern China. It was performed in five communities in the urban district of Haidian in north-central Beijing and in three communities in the village area of the Daxing district south of Beijing. An age of at least 50 years was the only inclusion criterion. The study has been described in detail previously. 10,11 The Medical Ethics Committee of Beijing Tongren Hospital approved the study protocol, and all participants gave informed consent according to the Declaration of Helsinki. The current study population was derived in 2011, when participants were invited for the second five-year follow-up examination, at which enhanced depth imaging spectral-domain optical coherence tomography, which can acquire choroidal information, was performed. Of a total population of 4403 individuals aged ≥ 50 years, 3468 (response rate, 78.8%) individuals (1963 female, 56.6%) participated in the examinations. The study was divided into a rural part (1633 subjects, 47.1%; 943 female, 57.7%) and an urban part (1835 subjects, 52.9%; 1020 female, 55.6%). The mean age was 64.6 ± 9.8 years (median, 64 years; range, 50-93 years).
Ophthalmic and General Examinations
All examinations were carried out in schoolhouses or common houses of the eight communities included. Trained research technicians asked the study participants questions from a standardized questionnaire on demographic variables such as age, gender, level of education, occupation, eye disease and systemic disease history, lifestyle, cognitive function, depression, and more. After informed consent was obtained, fasting blood samples were taken for measurement of blood lipids, creatinine, hemoglobin, C-reactive protein, and glucose. Blood pressure and body height and weight were measured and recorded. The ophthalmic examination included measurement of presenting visual acuity, uncorrected visual acuity, and best corrected visual acuity, slit lamp-assisted biomicroscopy of the anterior segment of the eye, biometry applying optical low-coherence reflectometry (Lensstar 900; Optical Biometer, Koeniz, Switzerland), and fundus photography (nonstereoscopic photographs of 45° of the central fundus; fundus camera type CR6-45NM; Canon Inc., Tokyo, Japan).
Automatic refractometry (Auto Refractometer AR-610; Nidek Ltd, Tokyo, Japan) was performed on all participants. If uncorrected visual acuity was 1.0 (i.e., 5/5), subjective refractometry was also performed. The spherical equivalent (SE) was calculated according to the formula SE = spherical degrees + (cylindrical degrees/2). Myopia was defined as SE < −0.25 D and hypermetropia as SE > +0.25 D. All participants were imaged with undilated pupils using a Heidelberg Spectralis (Heidelberg Engineering, Heidelberg, Germany; wavelength: 870 nm; scan pattern: enhanced depth imaging), with the instrument positioned close enough to the eye to produce an inverted image.
Extraction and Quantification of FTD by Artificial Intelligence
In this study, we extracted the exposed choroid from the fundus using artificial intelligence-based image processing and then calculated the average exposed choroid area per unit area of fundus, which we named the FTD. Figure 1 shows the flow chart of the proposed algorithm. The process of obtaining the FTD is composed of preprocessing, sample labeling, a deep learning segmentation model, and FTD calculation. The image preprocessing (Fig. 3) involves four steps: ROI establishment, denoising, normalization, and enhancement. The sample labeling is a semiautomatic step consisting of automatic sample labeling and manual label correction. The deep learning segmentation model consists of model training and feature segmentation and extraction. The process of calculating the FTD includes the following parts:
Image Preprocessing
• ROI establishment: ROI establishment extracts the effective area of the fundus image and removes invalid areas such as the background, thereby reducing interference with the extraction of the exposed choroid. We first perform channel separation of the color fundus image; the background area is dark in the red channel. We then apply threshold segmentation to the red channel image to obtain ROI candidate areas based on the average gray value and the area ratio of the dark region. Finally, we screen the ROI candidate areas by their morphological features and location to establish the ROI. • Denoising: Denoising reduces noise introduced during image acquisition. We realize it with a low-pass filtering method that converts the image from the spatial domain to the frequency domain; denoising is achieved by suppressing the high-frequency components. • Normalization: We adjust the color, brightness, and size of each image to a uniform range through average calibration to reduce differences between images and deviations in brightness and color. Brightness normalization is achieved by converting the image from RGB to LAB color space, calibrating the average value of the L channel, and then transferring the image back to RGB space. • Enhancement: Within the ROI, we use the Contrast-Limited Adaptive Histogram Equalization (CLAHE) algorithm to enhance the image. It divides the image into small blocks, performs contrast-limited gray-scale enhancement on each block, and then interpolates gray levels between adjacent blocks to eliminate gray-level differences at block boundaries. A minimal preprocessing sketch is given after this list.
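The sketch below illustrates the ROI, normalization, and enhancement steps with OpenCV; the threshold, target brightness, and CLAHE parameters are assumptions chosen for illustration, and the frequency-domain denoising step is omitted for brevity.

```python
# Hedged preprocessing sketch; "fundus.jpg" and all parameter values are placeholders.
import cv2
import numpy as np

img = cv2.imread("fundus.jpg")                     # BGR color fundus photograph

# 1. ROI: the background is dark in the red channel, so threshold it and keep the largest blob.
red = img[:, :, 2]
_, fg = cv2.threshold(red, 30, 255, cv2.THRESH_BINARY)
n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
roi = (labels == largest).astype(np.uint8) * 255

# 2. Brightness normalization: shift the mean of the L channel (LAB space) to a target value.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)
shift = 128.0 - float(L[roi > 0].mean())
L = np.clip(L.astype(np.float32) + shift, 0, 255).astype(np.uint8)
norm = cv2.cvtColor(cv2.merge([L, a, b]), cv2.COLOR_LAB2BGR)

# 3. Enhancement: CLAHE applied inside the ROI (here on the green channel).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(norm[:, :, 1])
enhanced[roi == 0] = 0
```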
Sample Labeling
Sample labeling includes two parts, automatic sample labeling and manual label correction, and is therefore semi-automatic. Automatic labels are obtained by channel separation of the color fundus images followed by channel subtraction. The automatically generated labels are then corrected manually to produce the final sample images.
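As a rough illustration of the automatic-labeling idea (channel separation followed by channel subtraction), one possible realization is sketched below; the specific channel pair and threshold are assumptions, since the paper does not state them, and the result would still require manual correction.

```python
# Illustrative sketch of the automatic labeling step; channel pair and threshold are assumed.
import cv2

def auto_label(bgr, thresh=30):
    b, g, r = cv2.split(bgr)
    diff = cv2.subtract(r, g)                           # exposed choroid tends to appear reddish
    _, label = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return label                                        # to be corrected manually afterwards
```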
Deep Learning Segmentation Model
• Model training: We use the labeled samples as training samples for a deep learning semantic segmentation network (a ResNet-based fully convolutional network, ResNet-FCN). High-level features are first extracted with ResNet18 and then upsampled by deconvolution to obtain the segmentation area, producing a tessellation ("leopard spot") confidence map that gives, for each fundus pixel, the probability of belonging to exposed choroid; the exposed choroid of the fundus is finally obtained by threshold segmentation of this map. A minimal sketch of such a network is given below. • Feature segmentation and extraction: The trained model is then used to extract the exposed choroidal area (Fig. 4).
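The sketch below illustrates a ResNet18-backed fully convolutional segmentation network of the kind described above (PyTorch). It is an illustration of the architecture, not the authors' implementation; the decoder layers, upsampling scheme, input size, and 0.5 threshold are assumptions.

```python
# Minimal ResNet18-FCN sketch for exposed-choroid segmentation (assumed hyperparameters).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ResNet18FCN(nn.Module):
    def __init__(self, n_classes=1):
        super().__init__()
        backbone = resnet18()
        # keep everything up to the last residual stage (drops avgpool and fc)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Sequential(
            nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, n_classes, 1),
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        feat = self.encoder(x)                    # (N, 512, h/32, w/32)
        logits = self.head(feat)
        # upsample back to the input resolution (plays the role of the deconvolution step)
        logits = nn.functional.interpolate(logits, size=(h, w),
                                           mode="bilinear", align_corners=False)
        return torch.sigmoid(logits)              # per-pixel confidence map

# thresholding the confidence map yields the exposed-choroid mask
model = ResNet18FCN()
probs = model(torch.randn(1, 3, 512, 512))
mask = (probs > 0.5).float()
```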
FTD Calculation
Based on the extracted exposed choroidal area, we calculate the exposed choroidal area per unit area of fundus to obtain the FTD (ρ): ρ = S1/S, where S1 is the area of the extracted exposed choroid and S is the area of the ROI obtained in preprocessing.
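On binary masks this reduces to a pixel-count ratio; the following small sketch assumes nonzero pixels denote the segmented regions.

```python
# FTD = exposed-choroid pixels / ROI pixels (sketch; mask convention assumed).
import numpy as np

def ftd(choroid_mask: np.ndarray, roi_mask: np.ndarray) -> float:
    s1 = np.count_nonzero((choroid_mask > 0) & (roi_mask > 0))
    s = np.count_nonzero(roi_mask > 0)
    return s1 / s if s else 0.0
```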
Performance Evaluation of Segmentation Model
To test the performance of the segmentation model, three general indicators (accuracy, sensitivity, and specificity) were calculated to evaluate the model results. The accuracy was 0.9652, the sensitivity was 0.7247, and the specificity was 0.9605.
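The paper reports these indicators without spelling out the formulas; the sketch below computes them pixel-wise against manually corrected labels using the standard definitions.

```python
# Standard pixel-wise accuracy, sensitivity, and specificity for a binary segmentation.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)
    tn = np.count_nonzero(~pred & ~truth)
    fp = np.count_nonzero(pred & ~truth)
    fn = np.count_nonzero(~pred & truth)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, sensitivity, specificity
```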
Statistical Analysis
Statistical analysis was performed using a commercially available statistical software package (SPSS for Windows, v. 26.0; IBM-SPSS, Chicago, IL, USA). Only data from right eyes were included in the current analysis. First, we examined the mean values (presented as mean ± standard deviation) of FTD, and analysis of variance was applied to compare FTD among the different age groups. Second, we performed univariate linear regression analyses with FTD as the dependent parameter and ocular and systemic parameters as independent parameters; Pearson correlation and multiple regression analyses were used to examine variation in FTD relative to age and refractive error. Third, we performed a multivariate linear regression analysis using the stepwise method, with FTD as the dependent parameter and the explanatory variables that were significantly associated with FTD in the univariate analyses selected as independent parameters. All P values were two-sided and considered statistically significant when <0.05; 95% confidence intervals are presented.
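The analysis was run in SPSS; as a hedged illustration of a univariate screen followed by a multivariate model, a Python sketch with statsmodels might look like the following. The file name and column names are hypothetical, and SPSS's stepwise procedure is approximated here by a simple significance filter.

```python
# Sketch of univariate screening followed by a multivariate regression (illustrative names).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ftd_dataset.csv")          # hypothetical file
candidates = ["age", "sex", "axial_length", "sfct", "bmi", "spherical_equivalent"]

# univariate linear regressions: keep variables with P < 0.05
kept = []
for var in candidates:
    model = sm.OLS(df["ftd"], sm.add_constant(df[[var]]), missing="drop").fit()
    if model.pvalues[var] < 0.05:
        kept.append(var)

# multivariate model on the retained explanatory variables
multi = sm.OLS(df["ftd"], sm.add_constant(df[kept]), missing="drop").fit()
print(multi.summary())                        # coefficients with 95% confidence intervals
```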
Baseline Characteristics
Among 3468 eyes of 3468 participants, FTD was available for 3074 individuals (88.6%) (1733 female [56.4%]). FTD measurements were not available for eyes with opacities of the optic media, either because no fundus photographs could be taken or because the image quality was insufficient. Ocular disease, including lesions of the optic nerve or macula, was not a reason for exclusion as long as the quality of the fundus image was assessable. The mean age was 64.1 ± 9.7 years (median, 63 years; range, 50-93 years); the mean SE was −0.13 D.
FTD increased with age (F = 179.71, P < 0.001) (Table 3). When the study population was stratified into 10-year age groups, correlation analysis showed that FTD was positively correlated with age (r = 0.33, P < 0.001) (Fig. 2). Regression analysis suggested that FTD increased by 33.1% for each decade of life. The FTD of males (0.15 ± 0.08) was significantly greater than that of females (0.13 ± 0.08) (t = 7.12, P < 0.001) (Table 3).
The regressions of the associations of FTD with axial length and of FTD with refractive error showed a curvilinear course. Correlation analysis showed a negative correlation between FTD and SE among all participants (r = −0.25, P < 0.001). For the myopic population, the mean SE was −2.16 ± 2.40 D, and correlation analysis showed a negative correlation between FTD and SE in the myopic participants (r = −0.25, P < 0.001). There was no correlation between SE and FTD in the hypermetropic and emmetropic participants (P > 0.05), as shown in Table 4.
Discussion
To our knowledge, this is the first study to measure FTD quantitatively from color photographs using artificial intelligence-based image processing. Previous studies of FTD were qualitative or semiquantitative. 12,15,[20][21][22] Spaide 20 reported that OCT examination showed fundus tessellation in 28 cases of choroidal thinning; Yoshihara and colleagues 21 observed 100 cases with an average age of 25.8 ± 3.9 years and found that the degree of fundus tessellation was significantly correlated with subfoveal choroidal thickness. The high correlation between fundus tessellation and choroidal thickness has also been confirmed by hospital-based clinical studies in AMD, glaucoma, and high myopia patients. 12,15,22 Our study, with its population-based recruitment and relatively large sample, extends the findings of these previous, mostly hospital-based investigations: FTD was not associated with open-angle (P = 0.95) or angle-closure glaucoma (P = 0.10). Although FTD was related to intermediate (b = 0.09, P < 0.001) and late (b = 0.24, P < 0.001) AMD in the univariate analysis, the AMD variables were removed from the multivariate model, in which SFCT remained the most significant association.
An advantage of this study is that it is the first to quantify FTD using artificial intelligence-based extraction; consequently, there is no comparable study against which to assess the precision of the FTD measurements. In addition, the sample size of this study is large. Our results are consistent with a series of previous qualitative and semiquantitative studies, supporting the use of FTD as a quantitative biomarker of choroidal thickness in the general population. In large-scale epidemiological investigations, FTD can be extracted from the fundus images of people of different ages and refractive errors to estimate SFCT and to preliminarily screen for abnormal SFCT. In high myopia patients, changes in SFCT can be evaluated quantitatively through regular follow-up of FTD. Because of its wider availability in screening, FTD obtained from color fundus images may be expected to replace SFCT (measured by EDI-OCT) as a new quantitative index for choroidal analysis.
In our population-based study, we found that mean FTD was 0.14 ± 0.08, ranging from 0 to 0.39. In the multivariate analysis, FTD was significantly greater in older, male individuals with lower body mass index. The ocular parameters with strong statistical associations were thinner SFCT, longer axial length, and larger parapapillary atrophy (P < 0.001).
Several studies, including the current one, have consistently found that fundus tessellation is aggravated significantly with aging. 1,2,20 In our recent quantitative assessment, the mean FTD was 0.12 ± 0.06 in the 50-59 year age group, 0.14 ± 0.08 in the 60-69 year age group, and 0.13 ± 0.08 in the group aged 70 years and older. Regression analysis suggested that FTD increased by 33.1% for each decade of life. This suggests that FTD may serve as a visible indicator of aging.
Besides SFCT and age, the present multivariate analysis also showed that FTD was significantly associated with longer axial length and larger parapapillary atrophy, which are considered important indices of the early phase of myopic development. 5,23,24 Further stratified analysis showed a negative correlation between FTD and SE in the myopic participants, but this correlation was not found in the hypermetropic and emmetropic participants (Table 4). Moreover, the FTD of the myopic population (0.16 ± 0.09) was significantly (P < 0.001) larger than that of the hypermetropic (0.13 ± 0.08) and emmetropic (0.13 ± 0.08) groups. Owing to the larger sample size of our study, the association between FTD and SE was more significant than in other studies. 11,24,25 These associations can be explained by atrophy of the choroid in myopic eyes, especially highly myopic eyes.
Potential limitations should be mentioned. First, differences between participants and nonparticipants may have introduced selection bias, although the response rate of 78.8% in the Beijing Eye Study 2011 was reasonable. Second, FTD was assessed only in the right eye of each individual, so inter-eye differences could not be analyzed. Third, it is important to assess the reproducibility of a new quantitative measurement technology. In a recent separate study, the FTD measurements showed high repeatability over 10 re-examinations, with an intraclass correlation coefficient of 1.00 (own data).
Fourth, our investigation included all eligible subjects from the study region, as required for a population-based study. Thus, cases with diseases that cause choroidal thickening or thinning may have affected the FTD.
In conclusion, in this elderly Chinese population the mean FTD was 0.14 ± 0.08. After adjusting for ocular and systemic parameters, FTD was related to thinner SFCT, longer axial length, larger parapapillary atrophy, older age, male sex, and lower body mass index. Because color fundus imaging is widely used in screening and easy to assess, FTD may be expected to replace SFCT (measured by EDI-OCT) as a novel quantitative index for choroidal analysis. Its association with pathological myopia and lower visual acuity warrants further investigation. | 4,074 | 2021-08-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Peptide Epimerization Machineries Found in Microorganisms
D-Amino acid residues have been identified in peptides from a variety of eukaryotes and prokaryotes. In microorganisms, UDP-N-acetylmuramic acid pentapeptide (UDP-MurNAc-L-Ala-D-Glu-meso-diaminopimelate-D-Ala-D-Ala), a unit of peptidoglycan, is a representative example. During its biosynthesis, D-Ala and D-Glu are generally supplied by racemases from the corresponding L-isomers. However, we recently identified a unique unidirectional L-Glu epimerase catalyzing the epimerization of the terminal L-Glu of UDP-MurNAc-L-Ala-L-Glu. Several such enzymes, which introduce D-amino acid residues into peptides via epimerization, have been reported to date. These include an L-Ala-D/L-Glu epimerase, which is possibly used during peptidoglycan degradation. In bacterial primary metabolism, to the best of our knowledge, these two machineries are the only examples of peptide epimerization. However, a variety of peptides containing D-amino acid residues have been isolated from microorganisms as secondary metabolites. Their biosynthetic mechanisms have been studied, and three different peptide epimerization machineries have been reported. The first is the non-ribosomal peptide synthetase (NRPS). Excellent studies with dissected modules of gramicidin synthetase and tyrocidine synthetase revealed the reactions of the epimerization domains embedded in these enzymes, and the information obtained is still used to predict epimerization domains in uncharacterized NRPSs. The second comprises the biosynthetic enzymes of lantibiotics, which are ribosomally produced peptide antibiotics containing polycyclic thioether amino acids (lanthionines). A mechanism for the formation of the D-Ala moiety in lanthionine by two enzymes, dehydratases catalyzing the conversion of L-Ser into dehydroalanine and enzymes catalyzing nucleophilic attack of the cysteine thiol on dehydroalanine, has been clarified. Similarly, formation of a D-Ala residue by reduction of a dehydroalanine residue has also been reported. The last type of machinery comprises radical S-adenosylmethionine (rSAM)-dependent enzymes, which catalyze a variety of radical-mediated chemical transformations. In the biosynthesis of polytheonamide, a marine sponge-derived, ribosomally produced peptide composed of 48 amino acids, a rSAM enzyme (PoyD) is responsible for unidirectional epimerization of multiple different amino acids in the precursor peptide. In this review, we briefly summarize the discovery and current mechanistic understanding of these peptide epimerization enzymes.
We next examined whether XOO_1319 had UDP-MurNAc-L-Ala-L-Glu epimerase activity. Recombinant XOO_1319 was incubated with UDP-MurNAc-L-Ala-L-Glu (6) in the presence of Mg2+ and ATP, and the chirality of the terminal Glu of the product was examined. We confirmed that the terminal L-Glu of the substrate was converted into D-Glu, demonstrating that XOO_1319 is a novel type of glycopeptidyl-glutamate epimerase. We thus designated XOO_1319 as UDP-MurNAc-L-Ala-L-Glu epimerase, MurL. We also examined whether MurL catalyzed the D→L reaction with UDP-MurNAc-L-Ala-D-Glu (5) as the substrate; however, no epimerase activity was detected. Interestingly, MurL required ATP and Mg2+ for its activity, and AMP was generated as a side product, suggesting that the substrate is activated by adenylation. However, the detailed reaction mechanism of this enzyme is not clear at this stage, because MurL lacks known conserved domains and cofactor-binding domains.
MurL orthologs are distributed among bacteria such as Gammaproteobacteria, Actinobacteria, and Alphaproteobacteria. Orthologs with low similarity are also found in a plant-pathogenic fungus, cyanobacteria, and an amoeba. Gerlt et al. (2012) discovered "enolase superfamily" enzymes catalyzing at least 11 different reactions, such as mandelate racemase, galactonate dehydratase, glucarate dehydratase, muconate-lactonizing enzymes, N-acylamino acid racemase, β-methylaspartate ammonia-lyase, and o-succinylbenzoate synthase (Babbitt et al., 1996). High-resolution X-ray structures showed that the reactions catalyzed by these enzymes are initiated by a common step: Mg2+-assisted, general base-catalyzed abstraction of the α-proton of a carboxylic acid and stabilization of an enolate anion intermediate. The fate of the intermediate is determined by the active site of each enzyme to produce the specific product.
L-Ala-D/L-Glu EPIMERASE
In their study of "enolase superfamily" enzymes, Gerlt et al. discovered that E. coli and Bacillus subtilis possess orthologs of these enzymes, YcjG and YkfB, respectively. To predict the functions of these enzymes, they examined genes located in the flanking regions of each gene. In the case of ycjG, two ORFs were identified: an ortholog of an endopeptidase catalyzing hydrolysis of the amide bond of D-Glu-meso-DAP in the peptidoglycan component, and an ortholog of a dipeptidyl peptidase. In B. subtilis, YkfC, which is homologous to the same endopeptidase, is encoded next to YkfB. Moreover, there were no reports of a peptidase catalyzing the cleavage of L-Ala-D-Glu (8), and "enolase superfamily" enzymes catalyze reactions initiated by abstraction of the α-proton to form an enolate anion intermediate. Taking these observations together, they hypothesized that YcjG and YkfB were L-Ala-D/L-Glu epimerases, which convert L-Ala-D-Glu (8) into L-Ala-L-Glu (9) for degradation and recycling of peptidoglycan. Both recombinant enzymes were incubated with L-Ala-L-Glu (9) in D2O. After the reaction, epimerization of the L-Glu residue was confirmed by NMR and MS analysis. They also investigated substrate specificity with dipeptides composed of different amino acids. YcjG showed broad substrate specificity toward dipeptides with L-Ala at the N-terminus but narrow specificity toward dipeptides with L-Glu at the C-terminus. In contrast, YkfB had a narrow substrate specificity at both the N- and C-terminal positions. The kinetic parameters suggested that L-Ala-D/L-Glu was the intrinsic substrate for both enzymes (Figure 2).
The crystal structures of YcjG (apo form) and YkfB (apo form and the L-Ala-D/L-Glu complex) revealed that their overall structures were mostly identical to those of other enolase superfamily enzymes (Gulick et al., 2001;Klenchin et al., 2004). The structures comprised an N-terminal capping domain and a C-terminal (β/α) 7 β-barrel, and the active site was located in the barrel domain. As expected for the epimerization reaction, the Mg 2+ ion formed a bidentate interaction with the α-carbonyl group of the Glu of the substrate and the α-carbon center to be epimerized was located between two conserved lysine residues (K162 and K268 in YkfB). Together with the fact that mutation of either of the two lysine residues reduced the epimerization activity, a two-base mechanism for the epimerization reaction was proposed (Figure 2). It is worth noting that these two lysine residues are also conserved among enolase superfamily enzymes and have been shown to be important for the abstraction of the α-proton of a carboxylic acid.
Orthologs of YkfB have been found in several microorganisms such as Bacillus halodurans, Clostridium acetobutylicum, Clostridium difficile, and Thermotoga maritima. Of these, the ortholog in Thermotoga maritima was characterized and shown to have high epimerase activity against L-Ala-D/L-Phe, L-Ala-D/L-Tyr and L-Ala-D/L-His (Kalyanaraman et al., 2008).
NON-RIBOSOMAL PEPTIDE SYNTHETASE (NRPS)
Non-ribosomal peptide synthetases (NRPSs) are large modular enzymes, and a typical module consists of an adenylation (A) domain, a peptidyl carrier protein (PCP) domain, and a condensation (C) domain (Crosa and Walsh, 2002; Finking and Marahiel, 2004; Marahiel, 2016). The A domain activates the carboxylic acid of an amino acid with ATP by adenylation and determines the amino acid to be selected and activated. Unlike ribosomes, NRPSs can utilize non-proteinogenic amino acids as building blocks. The C domain catalyzes peptide bond formation between an upstream peptidyl-PCP and a downstream aminoacyl-PCP. Besides these basic domains, additional domains catalyzing N-methylation, epimerization, thiazoline/oxazoline formation, etc. have been reported.

FIGURE 2 | Reaction of L-Ala-D/L-Glu epimerases. A two-base reaction mechanism typical of enolase superfamily enzymes was postulated from X-ray crystallography and mutagenesis studies. The residue numbers are for YkfB.
The epimerization mechanism was investigated in pioneering studies by Walsh et al. with gramicidin S (10) synthetase and tyrocidine (11) synthetase (Figure 3). For the analysis they used the first module of gramicidin S synthetase, which is composed of three domains in the following order: an A domain for recognition and adenylation of L-Phe, a PCP domain for tethering L-Phe, and an E domain for epimerization of L-Phe (A-PCP-E) (Stachelhaus and Walsh, 2000). By analysis of single-turnover catalysis using rapid chemical quench techniques, they showed that the reaction proceeds in the following order: disappearance of the substrate L-Phe, transient appearance and disappearance of L-Phe-AMP, and formation of L-Phe-PCP (13) and D-Phe-PCP (14). They also showed that the C2 domain immediately downstream of the E1 domain is D-specific for the peptidyl donor and L-specific for the aminoacyl acceptor (DCL) (Clugston et al., 2003).
Tyrocidine (11) is a cyclic decapeptide produced by Bacillus strains, and its amino acid sequence is D-Phe-L-Pro-L-Phe-D-Phe-L-Asn-L-Gln-L-Tyr-L-Val-L-Orn-L-Leu. Walsh et al. also investigated the epimerization mechanism of the D-Phe residues at positions 1 and 4 (Luo et al., 2002). Although the mechanism of formation of the 1st D-Phe and its condensation with the 2nd amino acid (L-Pro) was the same as for gramicidin S synthetase, as described above, the 4th D-Phe was revealed to be introduced after condensation. They demonstrated that the C4 domain in the TycB subunit of tyrocidine synthetase utilized D-Phe-L-Pro-L-Phe as the peptidyl donor and L-Phe as the aminoacyl acceptor (LCL), showing that the two acyl units tethered to PCP domains were condensed before the epimerization (Figure 3). The E4 domain then acts on D-Phe-L-Pro-L-Phe-L-Phe-PCP (15) to yield a 1:1 mixture of D-Phe-L-Pro-L-Phe-D-Phe-PCP (16) and D-Phe-L-Pro-L-Phe-L-Phe-PCP (15). Subsequently, the downstream C5 domain selectively utilizes only D-Phe-L-Pro-L-Phe-D-Phe-PCP (16) as a substrate. These studies lead to the general conclusion that the timing of epimerization can be estimated from the relative positions of the E and C domains.
In the biosynthesis of pyochelin (12), yersiniabactin, and micacocidin, all of which have a benzoyl moiety at the N-terminus and D-thiazolines, derived from cyclodehydration of the N-acylcysteinyl precursor, at the second position, typical E domains are absent from the NRPSs. Walsh et al. examined the epimerization mechanism and showed, by in vitro experiments using D2O and MS analysis, that the recombinant PchE subunit of pyochelin synthetase possesses the epimerase activity (Patel et al., 2003). They confirmed that the PchE subunit did not epimerize the Cys-PCP intermediate but did epimerize N-benzoyl-Cys-PCP, the intermediate formed after condensation between the upstream benzoyl-PCP and Cys-PCP. Based on these results and mutagenesis studies, they suggested that a methyltransferase-like domain in the PchE subunit catalyzes the epimerization.
ENZYMES FOR LANTIBIOTIC BIOSYNTHESIS
In 1994, Nes et al. cloned the structural gene of the bacteriocin lactocin S and found that the codons corresponding to all three D-Ala residues in lactocin S were serine codons, suggesting that the D-Ala residues are generated by post-translational modification of serine residues in the primary translation product (Skaugen et al., 1994). Ten years later, van der Donk et al. solved this mystery through in vitro studies (Xie et al., 2004). They prepared recombinant LctA and LctM, which are the 51-amino acid prepeptide and the probable multifunctional enzyme catalyzing the dehydration and cyclization reactions, respectively, that form lanthionine in lacticin 481 (class II lanthipeptide) biosynthesis. When LctA and LctM were incubated in the presence of ATP and Mg2+, they detected a series of products whose reduced masses corresponded to the elimination of one to four water molecules from LctA, resulting from the formation of dehydroalanine (Dha, 17) and dehydrobutyrine (Dhb, 18) from serine and threonine, respectively. They also detected ADP in the reaction mixture, suggesting that the substrate is activated by phosphorylation, which was later confirmed with a synthetic phosphorylated substrate (Chatterjee et al., 2005). Several further experiments were then performed to support formation of the lanthionine structure, because the dehydrated compound and the lanthionine-containing product formed by addition of cysteine residues to Dha/Dhb have the same mass. Finally, they confirmed formation of the lanthionine structure using mass spectrometric studies together with biochemical analysis and bioassays.
Recently, van der Donk et al. reported that the substrate peptides control the stereoselectivity of the enzyme-catalyzed Michael-type addition during the biosynthesis of the class II lanthipeptides cytolysin and haloduracin (Tang et al., 2015). During the biosynthesis of lanthionine, the thiol of cysteine attacks dehydroalanine, usually resulting in the formation of DL-lanthionine (D configuration at the α-carbon originating from Ser and L configuration at the α-carbon of Cys). However, in the case of cytolysin L, which has three lanthionine structures (A to C rings), they showed by heterologous co-expression in E. coli of a precursor peptide with CylM, the enzyme catalyzing lanthionine formation, that the methyllanthionine (A ring) and lanthionine (B ring) residues have the LL configuration, whereas the C ring has the DL configuration. They also showed that two consecutive dehydro amino acids in a Dhb-Dhb-X-X-Cys motif (where X represents amino acids other than Dha, Dhb, and Cys) in the precursor peptide are a key sequence for formation of the LL configuration, based on the sequence homology of the rings containing LL-methyllanthionine residues in cytolysin and haloduracin, suggesting that the sequence of the substrate peptide determines the stereoselectivity of lanthionine and methyllanthionine formation. Moreover, the importance of the two consecutive Dhb residues for the reaction was demonstrated by replacing the second Dhb of haloduracin with Ala. Thus, the stereochemistry of these Michael-type additions is unusually controlled by the substrate.
In class I lanthipeptide biosynthesis, two separate enzymes, LanB and LanC, a dehydratase and a cyclase, respectively, are employed for thioether-linkage formation. Unlike LctM-type enzymes, LanB-type enzymes activate the hydroxyl group of Ser/Thr residues using glutamyl-tRNA (Garg et al., 2013; Ortega et al., 2015). Nair et al. succeeded in reconstituting the cyclization step of NisC in nisin biosynthesis in vitro (Li et al., 2006; Li and van der Donk, 2007). Moreover, X-ray crystal structures clarified its mechanism, in which an active-site zinc ion bound by a cysteine-cysteine-histidine triad activates the cysteine thiol for nucleophilic attack on Dha/Dhb.
Besides the class I and II lanthipeptide biosynthetic enzymes, class III lanthipeptide biosynthetic enzymes composed of three functional domains, an N-terminal lyase domain, a central kinase domain, and a putative C-terminal cyclase domain, have been reported (Meindl et al., 2010). However, the cyclase domain lacks many of the conserved active-site residues found in class I and II enzymes. Recently, a class IV synthetase (LanL) containing an N-terminal lyase domain, a kinase domain, and a C-terminal cyclase domain similar to LanC was also reported (Goto et al., 2010, 2011). However, in both cases, the detailed reaction mechanisms of lanthionine structure (D-Ala) formation remain unknown.
To date, three enzymes that catalyze D-Ala formation from Dha without thioether linkage have been reported. Hill et al. identified an enzyme, LtnJ, responsible for the conversion of Dha to D-alanine in lacticin 3147 biosynthesis (Cotter et al., 2005). Vederas et al. recently identified CrnJ in the carnolysin biosynthetic gene cluster from Carnobacterium maltaromaticum (Lohans et al., 2014); by heterologous expression of the carnolysin cluster, they showed that D-alanine and D-aminobutyrate are formed from serine and threonine, respectively, by CrnJ. An NADPH-dependent Dha reductase catalyzing the conversion of Dha into D-Ala was reported by van der Donk et al. (Yang and van der Donk, 2015). A gene cluster for probable lantibiotic biosynthesis found in the cyanobacterium Nostoc punctiforme contains a gene encoding a Ser/Thr-rich prepeptide, whose Ser/Thr residues can be dehydrated into Dha and Dhb by a dehydratase also encoded in the cluster. However, there are no Cys residues in the prepeptide, which would normally be used for thioether linkage. Therefore, they hypothesized that the cluster produces linear D-amino acid-containing peptides. Through an in vitro study with recombinant enzymes and MS analysis, they confirmed that NpnJA, a dehydroalanine reductase, catalyzes the conversion of Dha into D-Ala. Recently, they also characterized a CrnJ-type reductase, BsjJB from Bacillus cereus, in vitro (Huo and van der Donk, 2016). The mechanisms of D-amino acid residue formation discovered in lanthipeptide biosynthesis are summarized in Figure 4.
RADICAL S-ADENOSYLMETHIONINE (rSAM) ENZYMES
Polytheonamides (19) are marine sponge-derived peptides composed of 48 amino acids (Figure 5A) (Hamada et al., 2005). Uniquely, they contain many modified amino acids, including eight tert-leucine, three β-hydroxyvaline, and six γ-N-methylasparagine residues; moreover, 18 amino acids are in the D-form. Considering its complicated structure, polytheonamide was proposed to be biosynthesized by an NRPS. In 2012, however, Piel et al. succeeded in isolating its biosynthetic gene cluster (Freeman et al., 2012). They hypothesized that polytheonamide is a ribosomally synthesized and post-translationally modified peptide (RiPP), because polytheonamide is larger than other known peptides synthesized by NRPSs. They carried out PCR with primers designed from the amino acid sequence of a hypothetical precursor peptide composed of L-amino acids, using the metagenome of the sponge as a template [later, a symbiotic microorganism was shown to produce polytheonamide (Wilson et al., 2014)]. They successfully amplified a specific DNA fragment encoding an ORF whose C-terminal amino acid sequence was perfectly consistent with the unprocessed polytheonamide precursor. In the flanking region, however, only 11 putative ORFs were found, even though 48 post-translational modifications, including 18 epimerizations, are necessary for maturation of polytheonamide. To examine the function of each of these genes, they expressed the individual ORFs in E. coli. No recombinant PoyA, the prepeptide of polytheonamide, was obtained, but the yield and solubility of PoyA improved dramatically when PoyD, a rSAM enzyme, was co-expressed. Because the isolated PoyA had the same mass as calculated, they hypothesized that PoyD was an epimerase catalyzing D-amino acid introduction, which results in no mass change. After acid hydrolysis and derivatization to examine the chirality of the products, they showed that the majority of the 18 amino acids of the precursor were epimerized, indicating that PoyD is an epimerase and that the epimerization from the L- to the D-configuration is unidirectional.
They then suggested that PoyD catalyzes all epimerizations in polytheonamide biosynthesis, based on co-expression of PoyD with full-length PoyA or truncated PoyA variants (Morinaka et al., 2014). Moreover, they showed that the leader region of PoyA, which is homologous to the α-subunit of nitrile hydratase proteins and composed of more than 100 amino acids, has an important function in the epimerization reaction. They utilized OspA and OspD, orthologs of PoyA and PoyD, respectively, from Oscillatoria sp. and confirmed that OspD introduced two epimerizations into OspA. They constructed OspA mutants by Ala replacement and co-expressed them in E. coli together with ospD. Although most of the mutants retained their epimerization activities, some mutants showed strongly reduced epimerization efficiencies.
Recently, another rSAM epimerase, YydG, encoded in the yyd operon, was characterized in vitro (Benjdia et al., 2017). The yyd operon in B. subtilis was originally identified in the course of studying the cell-wall stress response of firmicutes and is positively regulated by cell-envelope stress signals, activating gene expression to maintain cell-wall integrity (Butcher et al., 2007). Because this operon contains genes encoding a putative precursor peptide (YydF), a rSAM enzyme (YydG), a protease (YydH), and ABC transporters (YydI and YydJ), a modified peptide was predicted to be the biosynthetic product. However, the type of modification was not clear, because the corresponding metabolites had not been identified and YydG has no significant homology with any characterized rSAM enzyme. To characterize the function of YydG, Berteau et al. expressed the protein in E. coli. After incubating YydG with the full-length YydF peptide in the presence of DTT, they identified 5′-deoxyadenosine (5′-dA) and a modified YydF as the reaction products; the modified peptide has a molecular weight identical to that of the YydF peptide. Further analysis utilizing D2O and LC-MS/MS demonstrated that YydG epimerized the Cα of two residues (Val36 and Ile44) in the YydF peptide. They also showed that the purified and anaerobically reconstituted protein contained two [4Fe-4S] clusters.
They then further investigated the reaction mechanism of YydG. Considering that most rSAM enzymes initiate their reactions by generating a 5′-deoxyadenosyl radical, the reaction was envisioned to proceed by Cα hydrogen atom abstraction followed by introduction of a hydrogen atom from the opposite face. Indeed, using isotopically labeled YydF substrates, they showed that the 5′-deoxyadenosyl radical directly abstracts the Cα hydrogen atom. They also revealed that the hydrogen atom used to quench the Cα radical is provided by the thiol of the Cys233 residue of YydG (Figure 5B).
CONCLUDING REMARKS
In this review, we summarized the mechanisms of peptide epimerization found in prokaryotic microorganisms. To date, several enzymes that post-translationally introduce D-amino acids into peptides have also been identified in eukaryotes (Ollivaux et al., 2014). The first such enzyme was discovered in the venom of a web spider; this epimerase exhibits homology to serine proteases, particularly in the region of the conserved catalytic triad, and acts on an amino acid residue near the C-terminus of the peptide substrate. The second example is the platypus-venom epimerase, whose active site shows similarity to those of aminopeptidases; like aminopeptidases, this enzyme acts on an amino acid residue near the N-terminus of the substrate. In addition, a peptide epimerase with weak homology to the N-terminal domain of the human IgG-Fc binding proteins was identified in frog skin secretion. Although these enzymes are completely distinct proteins and have different active-site residues, a two-base mechanism, i.e. deprotonation/reprotonation of the α-carbon of the amino acid to be isomerized, has been proposed in all cases. Conversely, several different machineries have been identified in microorganisms. Thus, prokaryotes and eukaryotes employ different strategies for the epimerization of peptides, and microorganisms have evolved more divergent machineries.
AUTHOR CONTRIBUTIONS
YO and TD wrote the manuscript; YO prepared the figures. All authors approved the final manuscript.
FUNDING
This work was supported in part by Grants-in-Aid for Research on Innovative Areas from MEXT, Japan (JSPS KAKENHI Grant Number 16H06452 to TD) and Grants-in-Aid for Scientific Research from JSPS (15H03110 to TD and 16K18692 to YO). | 4,949 | 2018-02-06T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Size-Tunable Nanoneedle Arrays for Influencing Stem Cell Morphology, Gene Expression and Nuclear Membrane Curvature.
High-aspect-ratio nanostructures have emerged as versatile platforms for intracellular sensing and biomolecule delivery. Here, we present a microfabrication approach in which a combination of reactive ion etching protocols was used to produce high-aspect-ratio, nondegradable silicon nanoneedle arrays with tip diameters that could be finely tuned between 20 and 700 nm. We used these arrays to guide the long-term culture of human mesenchymal stem cells (hMSCs). Notably, we used changes in the nanoneedle tip diameter to control the morphology, nuclear size, and F-actin alignment of interfaced hMSCs and to regulate the expression of nuclear lamina genes, Yes-associated protein (YAP) target genes, and focal adhesion genes. These topography-driven changes were attributed to signaling by Rho-family GTPase pathways, differences in the effective stiffness of the nanoneedle arrays, and the degree of nuclear membrane impingement, with the latter clearly visualized using focused ion beam scanning electron microscopy (FIB-SEM). Our approach to designing high-aspect-ratio nanostructures will be broadly applicable to the design of biomaterials and biomedical devices used for long-term cell stimulation and monitoring.
This means that, by definition, cells aligned to the x-axis are mapped to the same value (0), which is why the peak at 0° appears roughly twice the height of those at 90°. During imaging for the image-based cell profiling there was slight uncertainty in the registration of the nanoneedle array to the microscope camera/image frame, hence the slight broadening of the peaks in some cases (e.g. the difference in height between substrates iv and v may not be significant).

(a) Representative fluorescence images of stained lipid (red) in hMSCs cultured on flat, nanopillar, and nanoneedle substrates, and quantification of the extracted stain using absorbance measurements (mean ± SD, N = 2). This shows that all substrates were able to support chemically stimulated adipogenesis, and there was no evidence of material-driven differentiation under basal conditions. Scale bars = 400 µm.
(b) Alizarin Red S staining used to visualize calcium deposits after 21 days of osteogenic differentiation. Representative digital camera images of stained calcium (red) deposited by hMSCs cultured on flat, nanopillar and nanoneedle substrates, and quantification of extracted stain using absorbance measurements (mean ± SD, N = 2). This shows no evidence of material-driven differentiation under basal conditions, while flat and nanopillar substrates were able to support osteogenesis. Nanoneedles, meanwhile, showed greatly reduced calcium staining, suggesting that this material substrate impairs differentiation down this lineage. Scale bars = 4 mm.
EXPERIMENTAL METHODS
Stiffness of nanopillars and nanoneedles. The theoretical stiffness and deformation profile of nanopillars and nanoneedles were solved using Euler-Bernoulli beam theory. The governing equation relates the applied moment (M), the elastic modulus (E) of the material, and the second moment of the area (I) to the deflection (v) of the beam: E·I·(d²v/dx²) = M. Each of the parameters (M, E, and I) can vary as a function of position along the beam. Since the system was modeled for a concentrated load at the apex of the structure, a position-dependent moment develops, i.e. M(x). The moment is evaluated as M(x) = F·x, where F is the concentrated load and x is the distance from the applied load. The elastic modulus (E) of silicon was assumed to remain constant throughout the etching process; thus, E is taken to be 129.5 GPa, which is the value provided by the manufacturer. Finally, the second moment of the area, which is a geometric factor, changes along the axis of symmetry, i.e. I(x). For a solid conical structure with a circular cross-section whose diameter varies linearly from the tip diameter (A) to the base diameter (B) over the length (L), I(x) = π·[A + (B − A)·x/L]⁴/64. Implementing these conditions, the governing equation for the deflection of a solid conical structure becomes d²v/dx² = 64·F·x / (π·E·[A + (B − A)·x/L]⁴). The following boundary conditions were applied: Eq. 4 states that at x = L the slope of the deflection is equal to 0, i.e. dv/dx = 0 at x = L; furthermore, Eq. 5 states that the deflection at x = L is equal to 0, i.e. v(L) = 0, meaning no deformation at the base of the structure. Following integration and implementation of these boundary conditions, the closed-form solution for the deflection of a solid conical beam (Eq. 6) is obtained. The solution was validated using three different methods. The first was to set A = B, i.e. a cylindrical beam, and to compare the beam deflection of Eq. 6 to the well-known cantilever solution [1], v(x) = F·(x³ − 3·L²·x + 2·L³)/(6·E·I). Figure S12 demonstrates perfect agreement between the solutions. The second validation step was to compare this generalized solution for the deflection at any point along the length of a conical beam to the solution of McCutcheon [2], which gives only the apical deflection of a solid cone.
The subscript (A) denotes properties of the tip, i.e. tip deflection (vA) and apical second moment of the area (IA). Figure S13 again demonstrates perfect agreement with the known solution. The final validation step was to demonstrate that the solution is accurate for any point along the length of the beam and thus gives a deflection profile of the entire beam. To achieve this the beam was modeled, meshed, and solved using FEBio [3] , an open-source finite element solver. Each beam was meshed with 100 elements in the longitudinal axis (x) and 64 radial elements. The material was modeled as an isotropic linear elastic solid with a Poisson's ratio of 0.3, and physical dimensions: L = 1 mm, A = 0.1 mm, B = 0.2 mm.
A traction force (F = 0.01 N) was imposed at the apex of each cone with the base remaining fixed (zero displacement and rotation). Deformation profiles for each element along the longitudinal axis are plotted in Figure S14 and demonstrate good agreement. It should be noted that discrepancies do exist between the two solutions owing to the modeling assumptions, e.g. Euler's solution assumes infinitesimal strains and small rotations. Therefore Eq. 6, and any solution based on Euler's beam theory, is only valid for small deflections. While it is not proven here, we assume that the hMSCs produce small traction forces and thus small beam deflections, allowing us to use Eq. 6.

High-content screening of cell images and modeling of cell features. The following describes the image analysis, the subsequent data handling, and the modeling work. This was carried out broadly in two sections: 1. Automated image analysis using a custom image-analysis pipeline in CellProfiler (CellProfiler 3.0.0, Broad Institute). [4] 2. Data analysis and modeling using custom scripts written in R (R 3.5.2, R Core Team: http://www.Rproject.org/).
Image-based cell profiling was conducted as described below.

Pre-processing of images: Each microscope image was pre-processed using ImageJ to split each combined field-of-view into separate image files for each channel, using a macro written in ImageJ. [5]

Pipeline construction: Immunofluorescence microscopy images were imported into CellProfiler. Initially, a small test set of images, representative of each of the experimental conditions, was manually pre-selected to tune the pipeline parameters. In the following, parameters were tuned interactively using CellProfiler's graphical user interface.
Summary of pipeline (bold text refers to binary objects generated by the pipeline): File import: o Metadata is assigned to image files, channels are mapped to their respective fields-of-view.
o Pixel intensities are rescaled to the range 0 and 1 for calculation purposes (default approach for CellProfiler).
Primary segmentation of nucleus o Cell nuclei were identified from the image of DAPI fluorescence. The thresholding method was a two-class adaptive Otsu threshold. An adaptive thresholding approach, using a window size of 50 pixels, was qualitatively observed to be more robust across a wide variety of substrate types.
o Objects smaller than 4.1 µm or larger than 20.3 µm were considered segmentation errors and discarded.
Secondary segmentation of cell body o The cell body for each cell was identified by using the nucleus as a kernel to perform a secondary segmentation on the tubulin channel image.
o A global minimum cross-entropy thresholding approach, combined with a propagation algorithm to help distinguish between clumped objects, was found to give the best results.
o The threshold of this step was tuned so that the resulting binary object captured the majority of the cell body region, but not necessarily all protrusions.
Secondary segmentation of cell body + cell protrusions o Using the cell body as a kernel, another secondary segmentation was performed on a maximum projection of the actin, tubulin , and YAP channels. Prior to the projection, all channels were normalized to their maximum/minimum values to maximize the contribution of each to the projection. This projection was only used for this segmentation, not for any subsequent intensity measurements.
o Segmentation was performed using an adaptive Otsu algorithm, two-class, with a windowsize of 10 pixels.
o This object incorporates both the cell body and the total protrusion area. o In addition, the required threshold level was also recorded. Morphology and intensity features (see Table S1) were measured for each of the binary objects defined above. Additional measurements made by the pipeline, but not included in the analysis: Perinuclear area determination: o Commonly, the perinuclear area, defined by a fixed-width ring surrounding the nucleus, is used to normalize for cell-thickness variations. [6] o For highly rounded or elongated cells on nanoneedles, in particular where the cell nucleus is frequently collocated with the cell membrane in highly elongated cells, it was found particularly challenging to consistently define a perinuclear region without introducing segmentation artifacts, hence this approach was not used here. o Initial tests identified that while many features, such as intensity texture parameters, provided good class separation during linear discriminant analysis, these features were also convoluted with the appearance of the nanoneedle array in the underlying image data.
Hence it was unclear if the signal being measured was from the cell texture, from the pattern of nanoneedles, or most likely, a mixture of both.
Batch analysis:
Due to the number of images involved, pipeline processing was subdivided into smaller batches using a custom script written in Microsoft PowerShell. Each batch was run as a single command-line call to a new CellProfiler instance, assigned asynchronously to individual logical processor cores of a server (Microsoft Windows Server 2012 R2, Intel Xeon CPU E5-2630 @ 2.6 GHz with 12 logical cores, 96 GB RAM). Analysis took approximately 19.5 hours, after which the individual data files were concatenated using another PowerShell script. This resulted in a single csv file for each binary object, plus additional folders and files containing image and pipeline metadata.
General note on data plots derived from the image-based cell profiling measurements: Data shown in the manuscript were exported from R as csv files and plotted using OriginPro. Other plots were generated using the packages ggplot2 [7] and ggcorrplot (https://CRAN.R- ).
R packages used:
o ggplot2 (3.1.0) [7]
o dplyr (0.
o caret (6.0-81) [9]
o ggcorrplot (0.
Data cleaning:
o Any cell where the median intensity of either the nucleus or the cytoplasm, on any of the actin, tubulin, or YAP channels, was less than double the channel background intensity was removed due to the poor signal-to-noise ratio.
o Finally, all object datasets were cross-referenced to remove any cell that did not have a complete set of measurements across all features and objects, to ensure one-to-one mapping between datasets in the analysis.
Image and cell numbers: The measurement modules included in CellProfiler produce a very large number of data fields per cell (> 1,500 for the pipeline used here). Not all of these are useful, and some features are highly correlated, which can cause significant problems in the proper modeling of the data. Some features are discrete, or cannot be appropriately transformed for the modeling undertaken here (for example the Euler number of the binary objects). These features were excluded from the analysis. Hence, the number of features was reduced by selecting those which form a complementary set of data about each cell. Table S1 details the features selected.
Data transformation and normalization:
Many of the measured features, in particular the shape parameters, have highly skewed distributions. Such distributions violate the assumption of normally distributed data for many statistical techniques.
Measurements were transformed to be more normal-like by applying a generalized logarithm (glog) function, as described by Laufer et al. [10,11]: glog(x) = log((x + √(x² + c²)) / 2), where x is the data point to be transformed and c is a scaling factor. The scaling factor is the third percentile of the empirical distribution function of the measured variable. At c = 0 this equation reduces to the normal logarithm, but for values of c > 0 the function remains defined for values of x ≤ 0, avoiding infinity errors in the subsequent logarithm.
In addition, a robust Z-score was used to normalize batch-to-batch variations across the three technical replicates included in the image-based cell profiling. Using the flat substrate of each replicate as the control, a robust Z-score for each feature was calculated as: robust Z-score = (value of data point − median of the same variable on the flat control) / (median absolute deviation of the same variable on the flat control). The robust Z-score uses the median and median absolute deviation, rather than the mean and standard deviation typically used in a Z-score, to minimize the effect of outliers.
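A compact sketch of these transformation and normalization steps is given below (NumPy/pandas). The glog form follows the reconstruction above and the per-feature scaling by the third percentile; the column handling is illustrative.

```python
# Sketch of the glog transform and robust Z-score normalization (illustrative data handling).
import numpy as np
import pandas as pd

def glog(x, c):
    """Generalized logarithm: behaves like log(x) for large x, defined for x <= 0 when c > 0."""
    return np.log((x + np.sqrt(x ** 2 + c ** 2)) / 2.0)

def transform_feature(values: pd.Series) -> pd.Series:
    c = np.percentile(values, 3)          # scaling factor: 3rd percentile of the feature
    return glog(values, c)

def robust_z(values: pd.Series, flat_control: pd.Series) -> pd.Series:
    med = flat_control.median()
    mad = (flat_control - med).abs().median()
    return (values - med) / mad
```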
Identification and removal of highly-collinear features:
Before the data can be modeled, highly correlated features should be removed. Manual feature selection (described above) helps to reduce the risk of multicollinearity between features during modeling; however, this alone may not identify all strongly correlated features. Failing to remove such features can result in highly unstable models, making their interpretation meaningless. Multiple techniques exist for tackling this problem; here we use an approach discussed by Caicedo et al. [10] Spearman's rank correlation coefficient was calculated for all pairs of features; an example is shown in Figure S13. See Table S1 for the full list of input features, and Supplementary Table 2 for the list of features kept in each model.
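One possible greedy implementation of such correlation-based pruning is sketched below (pandas). The 0.9 cutoff is an assumed illustrative value; the exact threshold used in the study is not stated here.

```python
# Sketch: drop one feature from each highly correlated pair (Spearman correlation).
import pandas as pd

def drop_collinear(features: pd.DataFrame, cutoff: float = 0.9) -> pd.DataFrame:
    corr = features.corr(method="spearman").abs()
    cols = corr.columns
    to_drop = set()
    for i in range(len(cols)):
        if cols[i] in to_drop:
            continue
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > cutoff:
                to_drop.add(cols[j])      # keep the first member of each correlated pair
    return features.drop(columns=sorted(to_drop))
```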
Linear discriminant analysis model construction and validation:
We are most interested in the ultimate behavior of cells on the substrate, so the modeling here considered only cells from the 72-hour timepoint. The following analysis uses the cleaned, transformed and normalized data, as described above.
Linear discriminant analysis (LDA) was used here because we have a number of non-metric classes (nanoneedles of different sharpness and a flat control) together with a large number of metric variables, and because the resulting model can be interpreted to infer information about the underlying features. [12] LDA acts to maximize the separation between data points belonging to different classes (in this case, different substrates). It does this by creating one or more discriminant functions that can be used to score each cell. Each function is a linear combination of variables, where each variable represents one of the features included in the model (e.g. cell area, nucleus solidity, etc.). Each variable is weighted by a coefficient, which represents the degree to which that variable contributes to the discriminant function. For a given cell, the discriminant function scores represent the class the cell either belongs to (during supervised training) or has been assigned to (during model validation).
To build each model, the relevant dataset was randomly split into two halves: fifty percent of the cells were designated as the training dataset, and the remainder were designated as the test (also known as hold-out) dataset. An LDA-based classifier was trained on the training dataset using the MASS R package. [9] The trained classifier was then used to predict the class of each cell in the test dataset. This approach allows the specificity and sensitivity of the model to be assessed directly, by measuring how many cells were correctly and incorrectly classified, as we ultimately know which substrate they belonged to (a code sketch of this procedure is given after the figure caption below). As a further validation, each model was trained again on a dataset with randomized class labels, which resulted in a model with no significant separation of the test dataset. The stability of the model was assessed by running the script three times, using different seed values for the random number generator in R; the random number generator is used to sample data values to create the training and test datasets, so this is a check that our interpretation of the results is valid for more than a single training/test split. The overall accuracy of the five-, three-, and two-class models remained within 2% for all three iterations, i.e. building the model from a sample of the data does not strongly affect the final model accuracy. Three classification models were built (see Figure 2c in the main text): a five-class model, a three-class model, and a two-class model incorporating cells from two substrate types (sharp nanoneedles and a flat substrate). In each case, the optimum number of discriminant functions for each model was the number of classes minus one, i.e. four, two, and one, respectively, as checked using the caret R package. [9] Figure S15 shows the confusion matrices for each of the three models. These matrices represent how many cells were correctly identified or, if misclassified, to which class they were incorrectly assigned. The three models together illustrate that it is relatively easy to identify cells on flat, nanopillar, and sharp nanoneedle substrates (the three extreme cases), whereas classification becomes considerably harder between sharp and blunt types of nanoneedles. This is consistent with flat, nanopillar, and sharp nanoneedle substrates being the most distinctly different. Further details about the specificity and sensitivity (figures of merit derived from the confusion matrix) for each model and each class are given in Tables S3 to S5.

Figure S15. Confusion matrices (heatmaps) for the five-, three-, and two-class models, respectively.
The number and color fill of each square indicates the number of cells that were classified to a given class.
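The original classifier was trained with the MASS package in R; the following scikit-learn sketch is only a rough Python analog of the 50/50 train/test procedure and the randomized-label control described above, with illustrative inputs rather than the study's data.

```python
# Rough scikit-learn analog of the LDA train/test check and shuffled-label control.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

def lda_check(X, y, seed=0):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)
    clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
    pred = clf.predict(X_test)
    return accuracy_score(y_test, pred), confusion_matrix(y_test, pred)

def shuffled_control(X, y, seed=0):
    # negative control: randomized class labels should give chance-level accuracy
    rng = np.random.default_rng(seed)
    return lda_check(X, rng.permutation(y), seed)
```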
Interpretation of the LDA model: In order to simplify the interpretation of the discriminant functions, discriminant loadings were calculated. For each model, the scores assigned to each cell in the test dataset were correlated with the original measurements of each feature to produce the discriminant loading. These loadings represent the strength of the correlation between the discriminant function and the underlying data.
The five- and three-class models are described by more than one discriminant function. To further aid the interpretation of these models, each discriminant loading was weighted by the relative contribution of its discriminant function. For example, in the three-class model, the first discriminant function accounts for roughly 90% of the between-class variance; hence, the discriminant loadings attributed to this function contribute more to the overall class separation.
Here we use a weighted approach reported by Hair et al., referred to as the potency value: [12] potency value of a variable on a function = (discriminant loading)² × (eigenvalue of that discriminant function / sum of all eigenvalues across all significant functions). These can be combined to form a composite potency index for each variable: composite potency index of a variable = sum of the potency values of that variable across all significant discriminant functions.
The composite potency index is a relative measure of the importance of a given variable to the overall discriminant functions.

Table S1. Description of the main image-based cell profiling features included as initial inputs to the LDA model.

Cell form factor: Calculated as 4π × cell body area / perimeter². The form factor equals 1 for a circle and is lower for irregularly shaped cells.

Cell mean radius: Calculated as the mean of all the distances between every pixel inside the cell body and the nearest pixel outside of the cell body.
Cell perimeter: Perimeter of the cell body.
Cell major axis length, cell minor axis length, cell orientation: An equivalent ellipse is fitted to the cell body. The properties of this ellipse are used to determine a number of parameters, including the major and minor axis lengths and the orientation of the cell with respect to the image frame. These values provide information about the size, elongation, and relative orientation of cells. Note: measures of orientation are not included in the modeling due to uncertainty in absolute orientation, but the description is included here for completeness.
Cell solidity: The convex hull of the cell body is calculated. Here, the convex hull can be visualized by imagining a rubber band stretched around the cell, defining a region that fully encloses the cell body. The solidity is then the ratio: cell body area / cell body convex hull area. A solidity of 1 represents a cell whose edge does not fold back in on itself; irregularly shaped cells have solidities of less than 1.

Nucleus morphology

Nuclear area, nuclear compactness, nuclear eccentricity, nuclear extent, nuclear form factor, nuclear mean radius, nuclear major axis length, nuclear minor axis length, nuclear perimeter, nuclear solidity: As above, but for the nucleus.

Cell body + cell protrusions morphology
Maximum radius: The maximum distance from any pixel inside the cell body + cell protrusions to a pixel outside of the object. Here, it represents a measure of maximum cell size.

Cell protrusion ratio: As defined above.

Cell body intensity

CTCF actin intensity, CTCF tubulin intensity, CTCF YAP intensity: Corrected total cell fluorescence, as defined above.

Mass displacement actin, mass displacement tubulin, mass displacement YAP: For a given channel, this represents the distance between the center of mass of the greyscale intensities inside the cell body and the geometric center of the cell body. A mass displacement of zero indicates that the majority of the measured intensity is clustered perfectly in the center of the cell; larger values suggest the intensity is clustered towards the edge of the cell body.

Nucleus intensity

YAP ratio: As defined above.

Cytoplasm intensity

Actin/tubulin ratio: As defined above.

Voronoi cell area

Local cell density: As defined above.
Five-class: cell eccentricity, cell major axis length, cell mean radius, cell minor axis length, cell solidity, local cell density, cell protrusion ratio, cell CTCF actin, cell CTCF tubulin, cell mass displacement actin, cell mass displacement tubulin, cell mass displacement YAP, cytoplasm ratio of actin to tubulin, nuclear extent, nuclear form factor, nuclear minor axis length, nuclear solidity, nuclear/cytoplasm YAP ratio.
Three-class: cell eccentricity, cell major axis length, cell mean radius, cell minor axis length, cell solidity, local cell density, cell protrusion ratio, cell CTCF actin, cell mass displacement actin, cell mass displacement tubulin, cell mass displacement YAP, cytoplasm ratio of actin to tubulin, nuclear extent, nuclear form factor, nuclear minor axis length, nuclear solidity, nuclear/cytoplasm YAP ratio.
Two-class: cell eccentricity, cell extent, cell form factor, cell major axis length, cell mean radius, cell minor axis length, local cell density, cell protrusion ratio, cell CTCF actin, cell mass displacement actin, cell mass displacement tubulin, cell mass displacement YAP, cytoplasm ratio of actin to tubulin, nuclear extent, nuclear form factor, nuclear minor axis length, nuclear solidity, nuclear/cytoplasm YAP ratio. | 5,243 | 2020-04-24T00:00:00.000 | [
"Engineering",
"Materials Science",
"Biology"
] |
N-Doped Graphene-Decorated NiCo Alloy Coupled with Mesoporous NiCoMoO Nano-sheet Heterojunction for Enhanced Water Electrolysis Activity at High Current Density
N-doped graphene-coated structure and mesoporous nano-sheet can efficiently boost active sites and stability for hydrogen and oxygen evolution reaction. NiCo@C-NiCoMoO/NF exhibits low overpotentials for HER (266 mV) and OER (390 mV) at ± 1000 mA cm−2. For water electrolysis, it can hold at 1000 mA cm−2 for 43 h in 6.0 M KOH + 60 °C condition. Developing highly effective and stable non-noble metal-based bifunctional catalyst working at high current density is an urgent issue for water electrolysis (WE). Herein, we prepare the N-doped graphene-decorated NiCo alloy coupled with mesoporous NiCoMoO nano-sheet grown on 3D nickel foam (NiCo@C-NiCoMoO/NF) for water splitting. NiCo@C-NiCoMoO/NF exhibits outstanding activity with low overpotentials for hydrogen and oxygen evolution reaction (HER: 39/266 mV; OER: 260/390 mV) at ± 10 and ± 1000 mA cm−2. More importantly, in 6.0 M KOH solution at 60 °C for WE, it only requires 1.90 V to reach 1000 mA cm−2 and shows excellent stability for 43 h, exhibiting the potential for actual application. The good performance can be assigned to N-doped graphene-decorated NiCo alloy and mesoporous NiCoMoO nano-sheet, which not only increase the intrinsic activity and expose abundant catalytic activity sites, but also enhance its chemical and mechanical stability. This work thus could provide a promising material for industrial hydrogen production.
Introduction
Water electrolysis (WE) can convert renewable sources (i.e., solar, wind) into H 2 with clean and high energy density, but the sluggish kinetics of hydrogen and oxygen evolution reaction (HER and OER) at cathode and anode will hinder its efficiency [1][2][3][4]. Although Pt-/Ir-/Ru-based materials are the best choice to accelerate these two half-reactions, the large-scale hydrogen production is still limited by its shortage and high price [5][6][7]. Therefore, developing highly efficient non-precious metal materials to replace the noble metals for reducing cost and improving the performance of WE are necessary [8,9].
Recently, 3D transition metal-based (TMB) catalysts have been regarded as prospective alternatives to noble metals due to their abundance and low cost [10][11][12][13][14]. However, 3D TMB catalysts are unstable under strongly alkaline conditions. To address this problem, some researchers reported a novel strategy of constructing 3D TMB catalysts encapsulated in N-doped graphene to improve the stability and catalytic activity [15]. Deng et al. prepared an ultrathin graphene layer encapsulating an FeNi alloy, which efficiently optimizes its surface electronic structure [16]; it attains a low overpotential (280 mV) at 10 mA cm−2 for OER and can be maintained for 24 h. Mu et al. fabricated a hollow porous Mo2C@C nanoball, which displayed low overpotentials for HER in 1.0 M KOH (115 mV) and 0.5 M H2SO4 (129 mV) solution at −10 mA cm−2 [17]. Furthermore, other investigators have also used N-doped carbon-encapsulated 3D TMB catalysts, which can optimize the distribution of electrons on the metal surface and prevent metal dissolution under strongly alkaline conditions to enhance the catalytic performance [18][19][20]; however, most of these works focus on the catalytic performance at low current density and still need a high potential to drive WE. Therefore, it is worthwhile to develop 3D TMB materials with excellent WE catalytic activity at high current density [21][22][23][24].
Mesoporous-based materials have been studied for enhancing WE performance, because they have a large specific surface area to expose abundant catalytic active sites, increase the contact area with the electrolyte, and promote gas and electrolyte diffusion at high current density [25][26][27][28][29]. Du et al. reported a Co4N-CeO2 porous nanosheet self-supported on a graphite plate (Co4N-CeO2/GP), which shows low overpotentials for HER (24 mV) and OER (239 mV) at ±10 mA cm−2 [30]. It can work at 500 mA cm−2 for 50 h as cathode and anode, exhibiting long-term durability. Ren et al. synthesized a ternary 3D Ni2(1−x)Mo2xP nanowire with mesoporous structure; at −500 and −1000 mA cm−2, it exhibits low overpotentials for HER (240 and 294 mV) in 1.0 M KOH solution [31]. Although researchers have synthesized many mesoporous materials with better electrocatalytic performance, the activity and durability at high current density still cannot meet the demands of industrial WE. In addition, most of these catalysts are used only for HER or OER instead of overall water splitting.
In this work, we synthesize a highly efficient N-doped graphene-decorated NiCo alloy coupled with mesoporous NiCoMoO nano-sheet grown on 3D nickel foam as bifunctional catalyst (NiCo@C-NiCoMoO/NF) for WE. At ± 1000 mA cm −2 , it exhibits excellent catalytic activity with low overpotentials for HER and OER (266 and 390 mV). More importantly, under 6.0 M KOH solution and 60 °C, it needs ultralow voltage of 1.90 V to reach 1000 mA cm −2 and can maintain for 43 h as anode and cathode.
Synthesis of NiCo@C-NiCoMoO/NF Nano-Sheet
All reagents came from Aladdin Reagent Co., Ltd. and were used without purification. First, nickel foam (NF, 2.0 × 4.0 cm2) was treated in ethanol, 3.0 M hydrochloric acid, and ultrapure water with ultrasonication, respectively. Second, the cleaned NF was put into a 30 mL mixed solution (ethylene glycol and ultrapure water) with sodium molybdate dihydrate, urea, and nitrate hexahydrate. Third, the mixed solution was transferred into a steel autoclave for 12 h at 180 °C. After cooling to 25 °C, the NF was washed with ethanol and ultrapure water and dried overnight under vacuum at 80 °C. Finally, the dried sample was treated at 450 °C under a 5% H2 + 95% Ar atmosphere for 2 h (heating rate 3 °C min−1); the obtained sample is denoted NiCo@C-NiCoMoO/NF, and its mass loading on NF was ≈ 10.5 mg cm−2. Besides, the dried sample was also heated at 350 and 550 °C. The NiCo-NiCoMoO/NF nano-sheet was prepared in ultrapure water with the Ni, Co, and Mo sources.
Electrochemical Tests
All the electrochemical tests [linear sweep voltammetry (LSV), chronopotentiometry (CP), and electrochemical impedance spectroscopy (EIS)] used a standard three-electrode system [counter electrode: graphite bar; working electrode: the as-prepared samples (the tested area is 0.5 cm2); reference electrode: reversible hydrogen electrode] on an electrochemical workstation (ZAHNER, Germany) in 1.0 M KOH solution saturated with N2. EIS was tested with the three-electrode system from 100,000 to 0.1 Hz; the test potential was −0.2 and 1.5 V for HER and OER, respectively (the amplitude is 5 mV). The following formula was used for the iR-corrected potential: E_corr = E_mea − iR_s (1), where E_mea is the actually measured potential and R_s is the solution resistance. Besides, the same conditions were used for the two-electrode system. The equation η = b log|j| + a (2) was used to assess the Tafel plots, where b is the Tafel slope and j is the current density. The turnover frequency (TOF) and mass activity (MA) of the catalyst for HER and OER were calculated based on the reported literature [32][33][34][35].
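The sketch below illustrates Eqs. (1) and (2) on synthetic polarization data: the iR correction followed by a linear Tafel fit. The current-potential values, solution resistance, and fitting window are assumptions for illustration only, not measurements from this work.

```python
import numpy as np

# Hypothetical HER polarization data: measured potential (V vs RHE) and current density (mA cm^-2).
eta_true = np.linspace(0.02, 0.30, 60)              # true overpotential used to synthesize data
j = -0.5 * 10 ** (eta_true / 0.065)                 # cathodic current, 65 mV/dec synthetic kinetics
area = 0.5                                           # cm^2, tested electrode area
R_s = 0.8                                            # ohm, solution resistance from EIS (assumed)
E_mea = -eta_true + (j * 1e-3 * area) * R_s          # measured potential includes the iR drop

# Eq. (1): iR correction, E_corr = E_mea - i * R_s, with i in amperes.
i = j * 1e-3 * area
E_corr = E_mea - i * R_s

# Eq. (2): Tafel analysis, eta = b*log10|j| + a, fitted over a low-current window.
eta = -E_corr
sel = (np.abs(j) > 1) & (np.abs(j) < 50)             # fit window in mA cm^-2 (assumed)
slope_mV, intercept = np.polyfit(np.log10(np.abs(j[sel])), eta[sel] * 1e3, 1)
print(f"Tafel slope ≈ {slope_mV:.1f} mV dec^-1")     # should recover ~65 mV/dec
```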
Besides, 20 wt% Pt/C (anode) and 40 wt% IrO 2 /C (cathode) were used as noble metal ink (bought from Aladdin with no further treatment). Ethanol (0.96 mL) and 5.0 wt% Nafion (40.0 μL) mixed solution was applied to disperse this noble metal catalyst; then, it was dropped on NF (0.5 cm 2 ) and named as Pt/C/NF and IrO 2 /C/NF.
Physicochemical Characterization
The N-doped graphene-decorated NiCo alloy coupled with mesoporous NiCoMoO nano-sheet grown on 3D nickel foam was synthesized via a facile two-step method (Fig. 1). Figure S1a, b displays the SEM images of NiCo@C-NiCoMoO/NF (annealed at 450 °C), which show that the nanoparticles are uniformly anchored on the self-supported mesoporous nano-sheet, different from the NiCoMoO nano-sheet precursors with smooth surfaces (Fig. S1c, d). The XRD patterns in Fig. S2 show three diffraction peaks belonging to the (111), (200), and (220) planes of NiCo, respectively [36]. Furthermore, the other peaks can be assigned to Ni2Mo3O8 (PDF#37-0855) and Co2Mo3O8 (PDF#34-0511). The XRD results indicate that the sample is composed of NiCo alloy, Ni2Mo3O8, and Co2Mo3O8.
The TEM, high-angle annular dark field scanning TEM (HAADF-STEM), and high-resolution TEM (HRTEM) images ( Fig. 2a- [37]. In Fig. 2d, the nanointerface existing between the NiCo alloy and NiCoMoO can facilitate the redistribution of electrons to form electron-rich and electron-poor species, which can optimize the H* and H2O/OH− adsorption energies to enhance the performance for HER and OER [22,[38][39][40]. Furthermore, the NiCo alloy is clearly coated by graphitic carbon (~ 4 layers) in Fig. 2e, which can efficiently optimize the distribution of electrons on the catalyst's surface to improve the catalytic activity for WE and prevent metal dissolution under strongly alkaline conditions [41].
The graphitic carbon of NiCo@C-NiCoMoO/NF is also evaluated by Raman spectroscopy in Fig. S3; the area ratio of the D and G bands is 1.36 at 450 °C, which is larger than for the samples prepared at 350 and 550 °C (1.16 and 1.28), suggesting a larger number of structural defects to enhance the catalytic activity for WE. The EDS elemental mappings (Fig. 2h-m) demonstrate that the Ni, Co, Mo, O, C, and N elements are evenly distributed on the NiCo@C-NiCoMoO/NF nano-sheet.
Furthermore, it also exhibits a mesoporous structure (2-15 nm) in Figs. 2b, c and S4a-c, as can also be seen from HAADF-STEM (Figs. 2f and S4d, e). To further study the mesoporous structure of the NiCo@C-NiCoMoO/NF nano-sheet, the pore volume/size (0.18 cm3 g−1/6.83 nm) and specific surface area (102.96 m2 g−1) are characterized by N2 adsorption/desorption measurements, and most of the mesopore size distribution lies in the range 1.0-14.0 nm (Fig. S5a, b and Table S1). This mesoporous nano-sheet possesses a large specific surface area to expose more catalytic active sites and enhance the activity for WE. Additionally, it can increase the contact area with the electrolyte to accelerate the release of H2/O2 bubbles and improve the performance for WE at high current density [26,28].
Meanwhile, the NiCoMoO nano-sheet precursors were also annealed at 350 and 550 °C to study the effect of post-treatment at different temperatures on the crystal structure, morphology, pore volume/size, and specific surface area (Fig. S5 and Table S1). When the precursors are annealed at 350 °C, the XRD peak intensities of Co2Mo3O8, Ni2Mo3O8, and NiCo are too weak (Fig. S2), the nano-sheets are smooth (Fig. S6a, b), and the mesopore size is mainly concentrated around 11.1 nm (Fig. S5c, d). When annealed at 550 °C, the XRD peak intensities of Co2Mo3O8, Ni2Mo3O8, and NiCo are strong, the nano-sheets are broken (Fig. S6c, d), and the material has a macroporous structure (Fig. S5e, f). Thus, temperature plays an important role in the formation of this novel structure.
Subsequently, the electronic interaction and elemental states of the Ni, Co, Mo, O, C, and N elements in NiCo@C-NiCoMoO/NF (Figs. 3 and S7) are probed by XPS. Interestingly, for NiCo@C-NiCoMoO/NF, the Ni 2p peaks show a ≈ 0.4 eV positive shift compared with those of NiCo-NiCoMoO/NF (Fig. 3a). For Co 2p, the peaks of NiCo@C-NiCoMoO/NF also show a ≈ 0.5 eV positive shift compared with those of NiCo-NiCoMoO/NF (Fig. 3b). This is because of the different electronegativities of Ni/Co (1.91/1.88), C (2.55), and N (3.04). Thus, the N-doped carbon can efficiently optimize the electronic structure on the surface of the NiCo alloy, which could be beneficial for enhancing the performance for WE.
Particularly, the redistribution of electrons could lead to charge transfer from NiCo to the N-doped carbon, forming electron-rich N-doped graphene and electron-poor NiCo species, which can optimize the adsorption energies of H*, H2O, and OH− for HER and OER [38,39]. The high-resolution XPS (HRXPS) spectra of Mo 3d are fitted into six main peaks at Mo6+ (235.1/232.0), Mo5+ (233.3/230.2), and Mo4+ (232.5/229.4 eV), respectively (Fig. 3c). Besides, the surface of the catalyst is oxidized when exposed to air, resulting in the high valence states of Ni, Co, and Mo (Fig. 3a-c). As shown in Fig. 3d, the C 1s spectrum has three peaks at C=O (288.4 eV), C-N (286.2 eV), and C-C (284.8 eV), which further prove the existence of the N-doped graphene. Moreover, the N 1s peaks are located at 401.3 and 398.5 eV in Fig. 3e, assigned to graphitic-N and pyridinic-N, which can have an important effect on the catalytic activity for HER and OER [42]. Therefore, we can draw the conclusion that the N-doped graphene-decorated NiCo alloy coupled with mesoporous NiCoMoO nano-sheet is successfully prepared. Combining the SEM, BET, XRD, and Raman (Figs. S1-S6) results, the following formation mechanism can be proposed: the different adsorption enthalpies of Ni, Co, and Mo can lead to part of the Ni and Co atoms segregating from the precursors to form the NiCo alloy [43,44]. When the precursors annealed at [37]. Besides, the NiCo alloy can catalyze the organic carbon to form N-doped graphene [45]. The formation of mesopores is caused by dehydration of the precursors during the high-temperature calcination process, and the pore size is related to the temperature. However, when the precursors are annealed at 350 °C, the NiCo alloy cannot be reduced from the precursors, and the surface organic carbon cannot form much N-doped graphene. This decreases the intrinsic activity for WE. It can also be seen from the SEM and BET results (Figs. S6a, b and S5c, d) that the material cannot be dehydrated to form a mesoporous nano-sheet structure at this low temperature, which cannot provide enough specific surface area for exposing more active sites. When the precursors are annealed at 550 °C, the Ni and Co atoms are quickly reduced to form strong NiCo alloy and NiCoMoO phases (Fig. S2), and the nano-sheet is quickly dehydrated to form a macroporous structure and is almost broken into nanoparticles (Fig. S6c, d).
This lowers the intrinsic activity for WE and does not provide a large specific surface area for exposing more active sites. In summary, the N-doped graphene-decorated NiCo alloy, mesoporous NiCoMoO nano-sheet, and heterostructures are formed at 450 °C, which have the highest intrinsic activity and specific surface area. The heterostructures show good electrochemical activity for HER and OER, which is confirmed by the LSV, Tafel, and EIS characterization (Figs. S11 and S22).
HER Catalytic Performance of NiCo@C-NiCoMoO/NF
The HER electrocatalytic activity of the samples is evaluated by a three-electrode system in 1.0 M KOH solution saturated with N2. Obviously, NiCo@C-NiCoMoO/NF requires only low overpotentials of 39 and 266 mV at −10 and −1000 mA cm−2 (Figs. 4a and S8), lower than those of NiCo-NiCoMoO/NF (η−10 = 75 mV; η−1000 = 303 mV). Thus, the HER activity of NiCo@C-NiCoMoO/NF is significantly improved after the NiCo alloy is coated by N-doped graphene, especially at high current density, which could be attributed to the N-doped graphene structure, which optimizes the surface electronic distribution and improves the activity and conductivity of the catalyst. Furthermore, the overpotentials of NiCo@C-NiCoMoO/NF are lower than those of the precursors (η−10 = 141 mV; η−1000 = 443 mV) and NF (η−10 = 223 mV; η−1000 = 561 mV), and close to Pt/C/NF (η−10 = 30 mV; η−1000 = 231 mV). The overpotential at −1000 mA cm−2 is better than most reported in the literature (Fig. 4b), which indicates that it could meet the demand of catalytic activity at high current density on an industrial scale. Figure S9 displays the LSV curves of NiCo@C-NiCoMoO/NF with/without iR correction for HER. The Tafel slope obtained from the LSV curve is used to further study the kinetics of HER (Fig. 4c). The Tafel slope of NiCo@C-NiCoMoO/NF is only 63.50 mV dec−1, outperforming NiCo-NiCoMoO/NF (98.62 mV dec−1), the precursors (117.19 mV dec−1), and NF (159.99 mV dec−1), and similar to Pt/C/NF (42.24 mV dec−1). The smaller Tafel slope suggests that NiCo@C-NiCoMoO/NF can more readily overcome the kinetic barriers of HER. As shown in Table S3, the TOF and MA values of NiCo@C-NiCoMoO/NF at overpotentials of 50, 100, 150, and 200 mV also indicate its high catalytic activity for HER, better than most results reported in the literature (Table S4).
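For reference, the sketch below shows one common way TOF and mass activity are computed from a geometric current density; the active-site density is an assumed placeholder, and only the mass loading (≈10.5 mg cm−2) is taken from the text, so the numbers are purely illustrative.

```python
F = 96485.0                      # C mol^-1, Faraday constant

# Assumed inputs (illustrative, not measured values from this work):
j = 100.0                        # mA cm^-2, geometric current density at a chosen overpotential
area = 0.5                       # cm^2, tested electrode area
loading = 10.5e-3                # g cm^-2, catalyst mass loading on NF (value quoted in the text)
n_sites = 2.0e-7                 # mol cm^-2, surface active-site density (assumed)

I = j * 1e-3 * area              # current in A
# HER: 2 electrons per H2 molecule
tof = I / (2 * F * n_sites * area)          # s^-1, H2 molecules per site per second
mass_activity = j / (loading * 1e3)         # A g^-1: (mA cm^-2) / (mg cm^-2) = A g^-1
print(f"TOF ≈ {tof:.3f} s^-1, mass activity ≈ {mass_activity:.1f} A g^-1")
```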
EIS is used to estimate the kinetics of HER. Figure S10 displays that NiCo@C-NiCoMoO/NF has the smallest charge transfer resistance (R_ct) compared with the other samples, revealing its best electron transfer rate. For the precursors annealed at different temperatures, the LSV curves, Tafel slopes, and EIS for HER are displayed in Fig. S11, which demonstrate that the sample annealed at 450 °C exhibits the best activity.
The electrochemically active surface area (EASA) is evaluated from the double-layer capacitance (C_dl), which is obtained by cyclic voltammetry (CV) in the non-Faradaic region (Fig. S12). NiCo@C-NiCoMoO/NF has the largest C_dl value (28.81 mF cm−2), higher than NiCo-NiCoMoO/NF (17.60 mF cm−2), indicating that the N-doped carbon-decorated NiCo alloy can effectively boost the intrinsic activity and speed up the HER process. The LSV curves are normalized by EASA in Fig. S13; apparently, the intrinsic catalytic activity of NiCo@C-NiCoMoO/NF is better than that of NiCo-NiCoMoO/NF.
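A minimal sketch of how C_dl (and an ECSA estimate) can be extracted from CV data in the non-Faradaic region; the capacitive currents and the reference specific capacitance are assumed values.

```python
import numpy as np

# Hypothetical capacitive currents (mA cm^-2) read at a fixed potential in the non-Faradaic
# window, for several CV scan rates (mV s^-1).
scan_rates = np.array([20, 40, 60, 80, 100])          # mV s^-1
delta_j = np.array([1.1, 2.3, 3.5, 4.6, 5.8])         # (j_anodic - j_cathodic), mA cm^-2

# C_dl is half the slope of delta_j vs scan rate: delta_j = 2 * C_dl * v.
slope, _ = np.polyfit(scan_rates * 1e-3, delta_j, 1)  # scan rate in V s^-1 so slope is in mF cm^-2
C_dl = slope / 2.0
print(f"C_dl ≈ {C_dl:.2f} mF cm^-2")

# ECSA estimate relative to a reference specific capacitance (assumed, e.g. 40 uF cm^-2).
C_s = 0.040                                           # mF cm^-2
print(f"ECSA ≈ {C_dl / C_s:.1f} cm^2 per cm^2 of geometric area")
```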
In Fig. 4d, we examine the HER durability of NiCo@C-NiCoMoO/NF in 1.0 M KOH solution by CP measurement at −1000 mA cm−2; it displays excellent stability after 340 h of continuous operation, with the potential changing by only 18 mV. Furthermore, the overpotential at −1000 mA cm−2 and R_ct at −0.2 V (vs RHE) after the stability test are negligibly changed, indicating its good stability. The SEM (Fig. S15) and HRTEM (Fig. S16) images of NiCo@C-NiCoMoO/NF after the HER stability test show that it maintains the pristine morphology. In addition, the HRXPS (Fig. S17) spectra of Mo, Ni, and Co for NiCo@C-NiCoMoO/NF show no obvious change. These results confirm that NiCo@C-NiCoMoO/NF exhibits outstanding HER durability in 1.0 M KOH solution. The reason could be that the N-doped graphene-decorated NiCo alloy framework can prevent the metal from dissolving in the strongly alkaline solution, thus improving the chemical stability. Furthermore, the self-supporting mesoporous nano-sheet has a large specific surface area to increase the wettability of the catalyst, facilitate the release of H2 bubbles, avoid the use of binder to improve the electron transfer efficiency, and prevent the active substance from spalling to enhance the mechanical stability [31,[47][48][49][50][51][52]].
OER Catalytic Performance of NiCo@C-NiCoMoO/NF
We evaluate the OER catalytic performance of NiCo@C-NiCoMoO/NF in the same solution. Figures 5a and S18 show the iR-corrected LSV curves. Similar to the HER performance, NiCo@C-NiCoMoO/NF has low overpotentials (260 and 390 mV) at 10 and 1000 mA cm−2, which are smaller than those of NiCo-NiCoMoO/NF (280 and 459 mV), indicating that the N-doped graphene can efficiently adjust the surface electronic structure of the catalyst to enhance its intrinsic activity. In addition, it outperforms the precursors (η10 = 320 mV; η1000 = 554 mV), IrO2/C/NF (η10 = 290 mV; η1000 = 476 mV), and NF (η10 = 340 mV; η1000 = 624 mV). Importantly, the overpotential at 1000 mA cm−2 is better than most reported in the literature, as shown in Fig. 5b. Furthermore, the LSV curves of OER for NiCo@C-NiCoMoO/NF with/without iR correction are shown in Fig. S19, and the LSV curves of OER for NiCo@C-NiCoMoO/NF and NiCo-NiCoMoO/NF normalized by EASA are shown in Fig. S20, similar to the HER performance. We obtain the Tafel slopes from the LSV curves to evaluate the kinetics of OER (Fig. 5c). Figure S21 displays that NiCo@C-NiCoMoO/NF has the smallest R_ct compared with the other samples, revealing that it possesses the best electron transfer rate. The above results also suggest that the N-doped graphene-decorated NiCo alloy coupled with mesoporous NiCoMoO nano-sheet can effectively speed up the OER process. Additionally, the results in Fig. S22 and Table S5 display its fast OER kinetics, which is better than most reported in the literature (Table S6) [8,47,50,[53][54][55][56]].
The stability is also essential to evaluate the performance of the catalyst, especially at high current density. As shown in Fig. 5d, NiCo@C-NiCoMoO/NF can operate for 340 h at 1000 mA cm−2 with a potential change of only 12 mV, displaying outstanding stability. Furthermore, we also study the catalytic activity after the stability test by LSV and EIS curves (Fig. S23), and it shows negligible change. The outstanding durability of NiCo@C-NiCoMoO/NF could be assigned to the N-doped graphene-decorated NiCo alloy, which can avoid corrosion in the harsh alkaline environment, thus improving the chemical stability. Besides, the self-supporting mesoporous nano-sheet can enhance the mechanical stability, since it has a large specific surface area to increase the contact area with the electrolyte and promote the release of O2 bubbles.
After the durability tests, SEM images of NiCo@C-NiCoMoO/NF maintain the pristine morphology (Fig. S24), and HRTEM images show that the mesoporous nano-sheet structure keeps well (Fig. S25), suggesting its excellent stability. In addition, the peak of the Ni 0 and Co 0 disappeared (Fig. S26a, b), which indicates that the surface of catalyst is oxidized during the OER process, and it could form the Ni/CoOOH [46]. As displayed in Fig. S26c, the XPS spectra of Mo 4+ are also oxidized to Mo 6+ and Mo 5+ , further suggesting the surface oxidation.
WE Catalytic Performance of NiCo@C-NiCoMoO/ NF
Based on the excellent performance of NiCo@C-NiCoMoO/NF toward HER and OER in alkaline solution, a two-electrode system is used to evaluate the WE performance by using it as a bifunctional catalyst (Fig. 6a). In Fig. 6b, at 100 mA cm−2 in 1.0 M KOH solution at 30 °C, the WE performance of NiCo@C-NiCoMoO/NF (1.71 V) is better than that of the Pt/C/NF‖IrO2/C/NF couple (1.80 V); it is also lower than most reported data, as shown in Fig. 6c. Interestingly, NiCo@C-NiCoMoO/NF only requires a low potential of 2.01 V to deliver 1000 mA cm−2 and can perform for 295 h with negligible change (50 mV, Fig. 6d), indicating that it is promising for industrial hydrogen production.
Additionally, as exhibited in Fig. S27, the amounts of H2 and O2 are acquired by the water drainage method at 0, 25, 50, 75, 100, and 125 min while operating at ±10.0 mA. Figure S27a, b shows that the volume ratio of H2 to O2 is about 2:1, which is consistent with the theoretical value, suggesting nearly 100% Faradaic efficiency for WE.
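The sketch below shows the Faradaic-efficiency bookkeeping behind such a drainage measurement, comparing collected gas volumes with the charge passed; the collected volumes are hypothetical.

```python
# Estimate Faradaic efficiency from gas collected by water drainage at constant current.
# All numbers below are illustrative, not the measured volumes from this work.
F = 96485.0          # C mol^-1
R = 0.082057         # L atm mol^-1 K^-1
T = 298.15           # K
P = 1.0              # atm

current = 10.0e-3    # A (operated at +/- 10.0 mA)
t = 125 * 60         # s, collection time

charge = current * t
n_H2_theory = charge / (2 * F)          # mol, 2 e- per H2
n_O2_theory = charge / (4 * F)          # mol, 4 e- per O2

V_H2_measured = 9.0e-3                  # L, hypothetical collected volume
V_O2_measured = 4.4e-3                  # L

n_H2 = P * V_H2_measured / (R * T)
n_O2 = P * V_O2_measured / (R * T)
print(f"FE(H2) ≈ {100 * n_H2 / n_H2_theory:.1f} %")
print(f"FE(O2) ≈ {100 * n_O2 / n_O2_theory:.1f} %")
print(f"H2:O2 volume ratio ≈ {V_H2_measured / V_O2_measured:.2f}")
```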
Subsequently, NiCo@C-NiCoMoO/NF is tested in 6.0 M KOH + 60 °C (Fig. 6b, e); it only needs 1.90 V at 1000 mA cm −2 and can keep for 43 h without obvious attenuation. Therefore, NiCo@C-NiCoMoO/NF with excellent performance provides a promising material for WE to hydrogen production.
Conclusions
In summary, NiCo@C-NiCoMoO/NF, a unique N-doped graphene-encapsulated structure and self-supported mesoporous nano-sheet, is prepared by solvothermal method and annealing treatment. As bifunctional catalyst, it displays outstanding HER and OER performance in 1.0 M KOH solution, which only needs overpotentials of 266 and 390 mV at ± 1000 mA cm −2 , and shows superior stability for 340 h with no evident activity decrease. More importantly, when applied as anode and cathode in 6.0 M KOH + 60 °C, it exhibits a low potential of 1.90 V at 1000 mA cm −2 and can work for 43 h without obvious attenuation, exhibiting performance close to actual application. Therefore, this work may provide a promising catalyst with high catalytic activity and stability for industrial water electrolysis.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 5,789 | 2021-02-19T00:00:00.000 | [
"Materials Science",
"Chemistry",
"Engineering"
] |
Bifurcation Analysis in a Kind of Fourth-Order Delay Differential Equation
A kind of fourth-order delay differential equation is considered. Firstly, the linear stability is investigated by analyzing the associated characteristic equation. It is found that there are stability switches for the time delay and Hopf bifurcations when the time delay crosses through some critical values. Then the direction and stability of the Hopf bifurcation are determined using the normal form method and the center manifold theorem. Finally, some numerical simulations are carried out to illustrate the analytic results.
Introduction
Sadek [1] considered the following fourth-order delay differential equation: By constructing Lyapunov functionals, a group of conditions was given to ensure that the zero solution of (1.1) is globally asymptotically stable when the delay τ is suitably small; but if the sufficient conditions are not satisfied, what are the behaviors of the solutions? This is an interesting question in mathematics. The purpose of the present paper is to study the dynamics of (1.1) from the viewpoint of bifurcation. We will give a detailed analysis of the above-mentioned question. By regarding the delay τ as a bifurcation parameter, we analyze the distribution of the roots of the characteristic equation of (1.1) and obtain the existence of stability switches and Hopf bifurcation when the delay varies. Then, by using the center manifold theory and normal form method, we derive an explicit algorithm for determining the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions.
We would like to mention that there are several articles on the stability of fourth-order delay differential equations, we refer the readers to 1-8 and the references cited therein.
The rest of this paper is organized as follows. In Section 2, we firstly focus mainly on the local stability of the zero solution. This analysis is performed through the study of a characteristic equation, which takes the form of a fourth-degree exponential polynomial. Using the approach of Ruan and Wei 9 , we show that the stability of the zero solution can be destroyed through a Hopf bifurcation. In Section 3, we investigate the stability and direction of bifurcating periodic solutions by using the normal form theory and center manifold theorem presented in Hassard et al. 10 . In Section 4, we illustrate our results by numerical simulations. Section 5 with conclusion completes the paper.
Stability and Hopf Bifurcation
In this section, we will study the stability of the zero solution and the existence of Hopf bifurcation by analyzing the distribution of the eigenvalues. For convenience, we give the following assumptions, where φ and f are both continuous functions whose third-order derivatives at the origin exist. We rewrite (1.1) in the following form: ẋ = y, ẏ = u,
2.1
It is easy to see that (0, 0, 0, 0) is the only trivial solution of system (2.1), and the linearization around (0, 0, 0, 0) is given by ẋ = y,
Proof. When τ = 0, (2.3) becomes (2.4). By the Routh-Hurwitz criterion, all roots of (2.4) have negative real parts if and only if the stated inequalities hold. The conclusion follows from (H1) and (H2).
Let iω (ω > 0) be a root of (2.3); then we have the corresponding equation. Separating the real and imaginary parts gives
Direction and Stability of the Hopf Bifurcation
In this section, we will study the direction, stability, and the period of the bifurcating periodic solution. The method we used is based on the normal form method and the center manifold theory presented by Hassard et al. 10 .
We first rescale time by t → t/τ to normalize the delay so that system (2.1) can be written in the form ẋ = τy,
3.1
The linearization around (0, 0, 0, 0) is given by (3.2), and the nonlinear term is given accordingly. The characteristic equation associated with (3.2) is (3.4). Comparing (3.4) with (2.3), one can find that γ = τλ, and hence (3.4) has a pair of imaginary roots ±iτ_k^j ω_k when τ = τ_k^j for k = 1, 2, 3, 4, j = 0, 1, 2, …, and the transversality condition holds. By the Riesz representation theorem, there exists a matrix whose components are bounded variation functions η(θ, μ) in θ ∈ [−1, 0] such that the linear part can be written as an integral with respect to η. In fact, we choose
3.8
For ϕ ∈ C¹([−1, 0], ℂ⁴), define (3.9). Hence, we can rewrite (3.1) in the following form:
By direct computation, we obtain that q(θ) is the eigenvector of A(0) corresponding to iτ₀ω₀, and q*(s) is the eigenvector of A* corresponding to −iτ₀ω₀. Moreover, (3.16) holds. Using the same notation as in Hassard et al. [10], we first compute the coordinates to describe the center manifold C₀ at μ = 0. Let w_t be the solution of (3.1) when μ = 0. Define the center-manifold coordinates accordingly. On the center manifold C₀, z and z̄ are local coordinates for C₀ in the directions of q* and q̄*. Note that W is real if w_t is real. We consider only real solutions. For a solution w_t in C₀ of (3.1), since μ = 0,
3.20
We rewrite this as ż(t) = iτ₀ω₀ z(t) + g(z, z̄) (3.21),
where
(3.23) Comparing the coefficients of (3.20) and (3.21) and noticing (3.23), we have the expressions for the g coefficients. By (3.10) and (3.21), it follows that
3.29
Thus (3.30), and we have
Then we have
3.32
So we only need to find out W
3.33
Comparing the coefficients with 3.26 , we get
3.35
Then we can get
3.36
Notice that
We obtain
where
Consequently, from (3.32),
3.40
Substituting g₂₀, g₁₁, g₀₂, and g₂₁ into (3.41), we can obtain Re C₁(0). Then we obtain the sign of
3.42
By the general theory due to Hassard et al. [10], we know that the quantity β₂ determines the stability of the bifurcating periodic solutions on the center manifold, and μ₂ determines the direction of the bifurcation; and we have the following. (ii) If β₂ < 0 (> 0), then the bifurcating periodic solutions of system (1.1) are asymptotically stable (unstable).
An Example and Numerical Simulations
In this section, we give an example and present some numerical simulations to illustrate the analytic results. Example 4.1. Consider the following equation: Clearly,
According to the formula given in Section 3, we can obtain that −0.217 0.098i, g 02 g 11 g 02 E * 0.
Conclusion of 4.1
The zero solution of system (4.1) is asymptotically stable when τ ∈ [0, 0.061) ∪ (0.158, 0.596). The Hopf bifurcation at the origin when τ = τ_k^0 is supercritical, and the bifurcating periodic solutions are asymptotically stable.
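To make the role of the critical delay values concrete, the sketch below numerically locates purely imaginary roots and the associated critical delays for a characteristic equation of the generic form λ⁴ + aλ³ + bλ² + cλ + d e^{−λτ} = 0. The coefficients are illustrative placeholders; the exact characteristic equation and coefficient values of system (4.1) are not reproduced in the text above.

```python
import numpy as np

# Illustrative coefficients for lambda^4 + a*lambda^3 + b*lambda^2 + c*lambda + d*e^(-lambda*tau) = 0.
a, b, c, d = 3.0, 4.0, 2.0, 1.0

# Putting lambda = i*omega and separating real and imaginary parts gives
#   omega^4 - b*omega^2 = -d*cos(omega*tau),   c*omega - a*omega^3 = d*sin(omega*tau).
# Squaring and adding eliminates tau; with s = omega^2 the condition is a quartic in s.
coeffs = [1.0, a**2 - 2*b, b**2 - 2*a*c, c**2, -d**2]       # s^4 + ... - d^2 = 0
s_roots = np.roots(coeffs)
omegas = [np.sqrt(s.real) for s in s_roots if abs(s.imag) < 1e-9 and s.real > 0]

for w in omegas:
    # Critical delays: tau_j = (angle + 2*pi*j)/omega, with the angle chosen so that both
    # the sine and cosine conditions above hold simultaneously.
    angle = np.arctan2((c*w - a*w**3) / d, -(w**4 - b*w**2) / d)
    taus = [(angle + 2*np.pi*j) / w for j in range(3)]
    taus = [t for t in taus if t > 0]
    print(f"omega = {w:.4f}, first critical delays: {[round(t, 4) for t in taus]}")
```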
The following are the results of numerical simulations of system (4.1).
(i) We choose τ = 0.4 ∈ (0.158, 0.596); then the zero solution of system (4.1) is asymptotically stable, as shown in Figure 2.
(ii) We choose τ = 0.64, near τ_2^0 = 0.596; a periodic solution bifurcates from the origin and is asymptotically stable, as shown in Figure 3. A minimal integration sketch for this kind of simulation is given below.
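For completeness, here is a minimal fixed-step integration sketch of a fourth-order linear DDE of the generic form x⁗ + a x‴ + b x″ + c x′ + d x(t − τ) = 0, rewritten as a first-order system; the coefficients and initial history are assumptions, not those of system (4.1).

```python
import numpy as np

# Illustrative coefficients and delay; not the values of system (4.1).
a, b, c, d = 3.0, 4.0, 2.0, 1.0
tau, dt, T = 0.4, 1e-3, 60.0

n_hist = int(round(tau / dt))
steps = int(round(T / dt))
x = np.zeros(steps + 1); y = np.zeros(steps + 1)
u = np.zeros(steps + 1); v = np.zeros(steps + 1)
x[0], y[0], u[0], v[0] = 0.1, 0.0, 0.0, 0.0          # constant history x(t) = 0.1 for t <= 0

for k in range(steps):
    x_delay = x[k - n_hist] if k >= n_hist else 0.1   # constant initial history
    dx, dy, du = y[k], u[k], v[k]
    dv = -(a * v[k] + b * u[k] + c * y[k] + d * x_delay)
    x[k+1] = x[k] + dt * dx
    y[k+1] = y[k] + dt * dy
    u[k+1] = u[k] + dt * du
    v[k+1] = v[k] + dt * dv

# Decay toward 0 suggests stability; a sustained oscillation suggests a bifurcating periodic solution.
print(f"x(T) = {x[-1]:.4e}")
```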
Conclusion
In this paper, we consider a certain fourth-order delay differential equation. The linear stability is investigated by analyzing the associated characteristic equation. It is found that there may exist stability switches when the delay varies, and a Hopf bifurcation occurs when the delay passes through a sequence of critical values. Then the direction and the stability of the Hopf bifurcation are determined using the normal form method and the center manifold theorem. Finally, an example is given and numerical simulations are carried out to illustrate the results. By using Lyapunov's second method, Sadek [1] investigated the stability of system (1.1). The main result is as follows. (i) There are constants α₁ > 0, α₂ > 0, α₃ > 0, α₄ > 0, and Δ > 0 such that the stated bounds hold for all y and for all x, where ε is a positive constant satisfying the stated inequality, with d₀ determined by α₁, α₂, α₃, and α₄. (iii) φ(0) = 0 and φ(y) ≥ α₃ > 0 for all y, and 0 ≤ φ(y) − φ(y)/y ≤ δ₁ < 2Δα₄/(α₁α₃²) for all y ≠ 0.
Comparing Theorem 5.1 with Theorem 2.5 obtained in Section 2, one can find that if the sufficient conditions to ensure the global asymptotic stability of system (1.1) given in [10] are not satisfied, we can still obtain the stability of system (1.1); but here the stability means local stability, and the system undergoes a Hopf bifurcation at the origin. Moreover, here we only need to impose conditions on f(x) and φ(x) at the origin, so the conditions are relatively weak.
Comparing Theorem 5.1 with Theorem 2.5 obtained in Section 2, one can find out that if the sufficient conditions to ensure the globally asymptotical stability of system 1.1 given in 10 are not satisfied, we can also get the stability of system 1.1 , but here the stability means local stability, and the system undergoes a Hopf bifurcation at the origin. Otherwise, here we just need to give the condition on the origin of f x and φ x , the condition is relatively weak. | 2,147.6 | 2009-04-26T00:00:00.000 | [
"Mathematics"
] |
Adaptive Image Compressive Sensing Using Texture Contrast
The traditional image Compressive Sensing (CS) conducts block-wise sampling with the same sampling rate. However, some blocking artifacts often occur due to the varying block sparsity, leading to a low rate-distortion performance. To suppress these blocking artifacts, we propose to adaptively sample each block according to texture features in this paper. With the maximum gradient in 8-connected region of each pixel, we measure the texture variation of each pixel and then compute the texture contrast of each block. According to the distribution of texture contrast, we adaptively set the sampling rate of each block and finally build an image reconstruction model using these block texture contrasts. Experimental results show that our adaptive sampling scheme improves the rate-distortion performance of image CS compared with the existing adaptive schemes and the reconstructed images by our method achieve better visual quality.
Introduction
The core of traditional image coding (e.g., JPEG) is the image transformation based on the Nyquist sampling theorem. It can recover an image without distortion only when the number of transformation coefficients is greater than or equal to the total number of image pixels. However, limited by computation capability, a wireless sensor cannot tolerate excessive transformations, so traditional image coding is not fit for wireless sensors with a light load [1,2]. Besides, because the information is concentrated in a few transformation coefficients, the quality of the reconstructed image deteriorates greatly once several important coefficients are lost. Recently, the rapid development of Compressive Sensing (CS) [3,4] has introduced a new way to overcome these defects of traditional image coding. Breaking the limitation of the Nyquist sampling rate, CS accurately recovers signals using partial transformations. The superiority of CS lies in the fact that it can compress an image by dimensionality reduction while transforming it, which has attracted many researchers to develop CS-based low-complexity coding [5,6].
Many scholars are devoted to improving the rate-distortion performance of image CS. A popular approach is to construct a sparse representation model to improve the convergence of minimum ℓ1-norm recovery; for example, Chen et al. [7] predict the sparse residual using multihypothesis prediction; Becker et al. [8] exploit the first-order Nesterov method to perform efficient sparse decomposition; Zhang et al. [9] use both local sparsity and nonlocal self-similarity to represent natural images; Yang et al. [10] use a Gaussian mixture model to generate a sparser representation. From different perspectives, these sparse representation schemes achieve some improvement of rate-distortion performance. However, their disadvantage is a rapid increase of computational complexity with spatial resolution; for example, the algorithm proposed by Zhang et al. [9] requires about an hour to recover an image of 512 × 512 in size. To avoid the high computational complexity, some works try to improve the quantization performance according to the statistics of CS samples. An efficient quantizer can reduce the amount of bits; for example, Wang et al. [11] exploit the hidden correlations between CS samples to design progressive fixed-rate scalar quantization; Mun and Fowler [12] and Zhang et al. [13] use Differential Pulse-Code Modulation (DPCM) to remove the redundancy between block CS samples. By reducing statistical redundancies, these quantizers obtain some performance improvements with a low computational complexity. Despite its smaller computational burden, the quantization scheme offers limited improvement of rate-distortion performance due to the lower redundancies in CS samples [14]. From the above, we can see that there is a tradeoff between computational complexity and quality improvement for image CS. We expect to find a scheme which strikes a balance between the two. Compared with sparse representation and quantization schemes, feature-based adaptive sampling achieves a satisfying improvement of rate-distortion performance without introducing excessive computations. Its idea is to increase the efficiency of CS sampling by suppressing useless CS samples. The sampling rate of each block is allocated according to various image features; for example, Zhang et al. [15] determine the sampling rate of each block depending on the varying block variance; Canh et al. [16] exploit edge information to adaptively assign the sampling rate for each block. Block variance and edge information represent low-level vision. They preserve the low-frequency information but neglect the high-frequency texture details attractive to human eyes. Oriented by these two feature measures, lots of CS samples are invested into blocks with simple patterns, which results in an undesirable reconstruction quality. To overcome the defect of the traditional sampling scheme, useful features should be extracted to express high-level vision. Directed by interesting features, an efficient adaptive scheme can guarantee the recovery of high-frequency details.
Texture as a visual feature is used to reveal similar patterns independent of color and brightness, and it is the mutual inherent property existing in object surfaces; for example, tree, cloud, and fabric have their own texture details.Texture details contain important information on structures of object surfaces, revealing relations between object and its surrounds.Texture details represent high-frequency components which are more attractive to human eyes.In this paper, we propose to set the sampling rate of each block based on texture details.We design texture contrast to measure the varying texture features and assign a high sampling rate to the block with a striking texture contrast.We remove the redundant CS samples of each block with a low-texture contrast.When reconstructing the image, the distribution of texture contrasts is used to weight the global reconstruction model.Experimental results show that the proposed method improves the visual quality of reconstructed image compared with the adaptive schemes based on block variance and edge features.
Adaptive Block CS of Image
The framework of adaptive block CS is shown in Figure 1. At the CS encoder, the natural scene is first captured by CMOS sensors as a full-sampling image x of size N × N; that is, the total number of pixels is N·N. Then, image x is divided into small blocks of n × n in size, and x_i represents the vectorized signal of the i-th (i = 1, 2, …, B, B = N²/n²) block obtained through raster scanning. Next, the number M_i (≪ n²) of CS samples for each block is set according to image features. We construct a random transformation matrix Φ_i of size M_i × n² for each block. Finally, the CS-samples vector y_i of each block, of length M_i, is computed by formulation (1), in which the elements of Φ_i obey a Gaussian distribution. We define the sampling rate as in (2). The CS-samples vectors of all blocks are transmitted to the CS decoder. When receiving the CS samples of each block, we construct the minimum ℓ2-ℓ1 norm reconstruction model (3), in which ‖·‖₂ and ‖·‖₁ are the ℓ2 and ℓ1 norms, respectively, Ψ is the block transformation matrix, for example, a DCT or wavelet matrix, and a fixed regularization factor weights the ℓ1 term.
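A minimal sketch of the block-wise sampling in Eqs. (1)-(2): each block is flattened by raster scanning and projected with its own Gaussian matrix. Block size, sampling rates, and the normalization of Φ are illustrative assumptions.

```python
import numpy as np

def block_cs_sample(img, n_block=8, rates=None, rng=None):
    """Block-wise CS sampling: y_i = Phi_i @ x_i with Gaussian Phi_i (Eq. 1).

    `rates` is a per-block sampling rate (Eq. 2); a scalar or None means uniform sampling.
    """
    rng = np.random.default_rng(rng)
    H, W = img.shape
    blocks = (img.reshape(H // n_block, n_block, W // n_block, n_block)
                 .transpose(0, 2, 1, 3).reshape(-1, n_block * n_block))
    if rates is None or np.isscalar(rates):
        rates = np.full(len(blocks), 0.3 if rates is None else rates)
    samples, matrices = [], []
    for x_i, s_i in zip(blocks, rates):
        M_i = max(1, int(round(s_i * n_block * n_block)))        # samples for this block
        Phi_i = rng.standard_normal((M_i, n_block * n_block)) / np.sqrt(M_i)
        samples.append(Phi_i @ x_i)
        matrices.append(Phi_i)
    return samples, matrices

# Example: uniform 30% sampling of a random 64x64 "image".
img = np.random.default_rng(0).random((64, 64))
y, Phi = block_cs_sample(img, n_block=8, rates=0.3)
print(len(y), y[0].shape)        # 64 blocks, each with round(0.3*64) = 19 samples
```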
Because the objective function of model (3) is convex, it can be solved by using the Gradient Projection for Sparse Reconstruction (GPSR) algorithm [17] or the Two-step Iterative Shrinkage Thresholding (TwIST) algorithm [18]. CS theory states that the signal can be recovered precisely by using model (3) if condition (4) holds, in which the first quantity is the sparse degree of the i-th block and the other is some constant [19]. Due to the nonstationary statistics of natural images, the sparse degree of each block is distributed nonuniformly. From (4), we can see that blocks with a large sparse degree cannot be accurately reconstructed once the sampling rate is too low; that is, a fixed number of block CS samples is not enough to capture all the information of the original image. Therefore, the sampling rate of each block should be set adaptively according to its own sparse degree. A straightforward method to acquire the block sparse degree is to count the significant transformation coefficients. However, this obviously negates the superiority of CS theory: once the encoder performs a full transformation, image CS has no advantage over traditional coding. Therefore, it is impractical to directly obtain the block sparse degree using a full transformation. To avoid a full transformation at the encoder, some image features are exploited to indirectly reveal the block sparse degree, for example, the block variance or the number of edge pixels. In this indirect way, we can obtain some improvement of rate-distortion performance; however, these features only reveal the variation of local pixel values, which improves the objective quality of the reconstructed image but results in poor visual quality, shown especially by the occurrence of many blocking artifacts. In view of the above, a proper feature is required to guide the adaptive sampling so as to improve the rate-distortion performance as well as guarantee better visual quality.
Proposed Adaptive Sampling Scheme
Each block in a natural image has different texture details. Rich-texture blocks are more attractive to human eyes due to the existence of more high-frequency components, while low-texture blocks, which contain mostly low-frequency components, tend to be neglected by human eyes. Therefore, uniform sampling will degrade the quality of the reconstructed image. To solve this defect of uniform sampling, we propose to measure the varying texture features of each block and then use them to guide the adaptive sampling and reconstruction. The flow of our method is presented in Figure 2. At the CS encoder, we first generate the texture-feature map v of the full-sampling image x. Then, we compute the block texture contrast according to v. Afterward, the number of block CS samples is determined adaptively by the block texture contrast. Finally, the partial transformation matrix Φ is constructed to perform random sampling. At the CS decoder, the block texture contrast is estimated again from the allocated block sampling numbers. The estimated block texture contrast is used to weight the reconstruction model so as to improve the visual quality of high-texture regions.
3.1.
Computing Block Texture Contrast. The computation required for texture analysis should not be too large, in order to guarantee a low encoding complexity. To avoid excessive computations, we use the maximum gradient value in the 8-connected region of each pixel to measure the texture variation of that pixel; that is, Eq. (5), in which I_{i,j} is the luminance value at pixel position (i, j), I_{m,n} is the luminance value at a pixel position (m, n) in the 8-connected region of (i, j), and |·| is the absolute-value operation. The texture variation of each pixel in x can be computed by using (5) and used to construct the matrix v as in (6). The matrix v is shrunk by hard-thresholding with a threshold to generate the texture-feature map v_t as in (7), in which the threshold value is set from 0 to 1. In the texture-feature map v_t, the value 0 means no difference between the current pixel and its neighbors, and the value 1 means a big difference between the current pixel and its neighbors. The energy of texture features in each block is computed as in (8), in which Λ(x_i) denotes the pixel position set of x_i. We define the normalized texture-feature energy as the texture contrast, as in (9). Figure 3 shows feature maps based on block variance, edge, and texture, among which the edge feature is extracted by using the Sobel operator [20]. It can be seen that texture contrasts are highlighted in the regions of hair and eyes with rich texture details, and the edge features are also present in the texture-feature map. However, the maps of block variance and edge show fewer texture details, making features based on block variance and edge unsuitable to dominate the adaptive sampling, which is meant to intensively capture the information in rich texture details. In view of the above analysis, the proposed texture contrast can guide CS sampling to capture rich-texture blocks at a high sampling rate. As the core of traditional image coding, the fast DCT transformation has computational complexity O(n log₂ n). For texture extraction, however, it is only O(n), showing a low computational complexity at the CS encoder to compute the texture contrast of each block.
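The following sketch implements the texture-contrast computation of Eqs. (5)-(9) as described above: the maximum absolute difference over the 8-connected neighborhood, hard thresholding, block energy, and normalization. The input scaling and the exact normalization are assumptions.

```python
import numpy as np

def texture_contrast(img, n_block=8, thr=0.15):
    """Block texture contrast as sketched in Eqs. (5)-(9).

    `img` is assumed to be a grayscale image scaled to [0, 1]; `thr` is the hard threshold
    (0.15 in the experiments). Exact normalization details are assumptions.
    """
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge")
    # Eq. (5): maximum absolute difference to the 8-connected neighbours of each pixel.
    v = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
            v = np.maximum(v, np.abs(img - shifted))
    v_t = (v > thr).astype(float)                     # Eq. (7): hard-thresholded feature map
    # Eq. (8): texture-feature energy of each block; Eq. (9): normalized texture contrast.
    e = (v_t.reshape(H // n_block, n_block, W // n_block, n_block)
            .sum(axis=(1, 3)).ravel())
    return e / max(e.sum(), 1e-12)

img = np.random.default_rng(1).random((64, 64))
w = texture_contrast(img)
print(w.shape, round(w.sum(), 6))                     # one weight per block, sums to 1
```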
Adaptive Sampling and Reconstruction.
Due to the nonstationary statistical characteristics of natural images, the block sampling rate varies with the texture contrast, introducing difficulties in controlling the bit rate. To handle that, we set a total sampling rate S for the whole image and then determine the total number M of CS samples as in (10), in which N² is the total pixel number. The number of CS samples for each block can be computed by using the block texture contrast as in (11), in which M₀ is the initial sampling number of each block and round[·] is the rounding operator. After determining the block sampling rate by (11), some high-texture blocks are assigned excessive CS samples, bringing a high-quality recovery of the texture region. However, the blocks in non-texture regions are assigned fewer CS samples, leading to a worse reconstruction quality there. The big difference of reconstruction quality makes the texture region salient to human eyes, and thus degradation of visual quality happens. To solve that, we set the upper bound U of the sampling number to be 0.9n² for each block. Once the block sampling number exceeds the upper bound, its sampling number is limited to U. The redundant CS samples are uniformly assigned to the blocks whose sampling numbers are smaller than U. After reallocating the redundant CS samples, if the sampling numbers of some blocks exceed U again, we repeat the above steps until the sampling number of each block is smaller than U. According to the number of block CS samples, we construct the random transformation matrix Φ_i and obtain the block CS-samples vector y_i by performing (1). When the block CS samples y_i are received at the CS decoder, (3) can be used to reconstruct the image block by block. However, the minimum ℓ2-ℓ1 norm reconstruction model has different convergence performances for various block sparse degrees, giving rise to blocking artifacts in the reconstructed image. To reduce blocking artifacts, we can perform adaptive global reconstruction; that is, the image is recovered once by using all block CS samples. First, all block CS samples are arranged in a column as in (12). Supposing (13), we then introduce the elementary matrix I to rearrange the column vectors block by block into a raster-scanning column vector of the image as in (14). Combining (12), (13), and (14), we get (15). We construct a global reconstruction model (16), in which Ψ is the transformation matrix of the whole image x. With the block sampling number, which reveals the distribution of block texture contrast, we derive the estimator of block texture contrast from (11) as in (17). Using this estimator of block texture contrast, we weight the first term in (16) as in (18). By (18), we see that a larger ŵ forces the random projection of block x_i to be closer to the CS-samples vector y_i. According to the Johnson-Lindenstrauss (JL) theorem [21], the Euclidean distance between two blocks is similar to that between the corresponding CS-sample vectors [22]; that is, the weighting coefficients can enforce the rich-texture block to approach the original block and relax the requirement on the Euclidean distance between the low-texture block and its original. Therefore, this weighting constraint adaptively adjusts the reconstruction quality of each block according to the distribution of block texture contrasts. To simplify (18), we construct the diagonal matrix W as in (19), in which diag(·) is an operator that generates a diagonal matrix from the input vector. By using the diagonal matrix W, (18) takes the form (20). Supposing ỹ = Wy and Ω = WΘ, we can get (21). We can see that the weighting
reconstruction model is still a minimum ℓ2-ℓ1 norm model. Therefore, the traditional CS reconstruction algorithm can still be used to solve (21).
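A sketch of the sample allocation of Eqs. (10)-(11) together with the upper-bound redistribution described above. The initial-allocation constant and the redistribution order are assumptions where the text leaves them unspecified.

```python
import numpy as np

def allocate_samples(w, n_block=8, total_rate=0.3, n_pixels=64 * 64):
    """Allocate block sample counts from texture contrasts `w` (Eqs. 10-11) with the
    0.9*n^2 upper bound and uniform redistribution of the excess."""
    M = int(round(total_rate * n_pixels))              # Eq. (10): total number of CS samples
    n_blocks = len(w)
    M0 = int(round(0.3 * M / n_blocks))                # initial samples per block (n taken as block count here)
    m = M0 + np.round(w * (M - M0 * n_blocks)).astype(int)   # Eq. (11)
    U = int(0.9 * n_block * n_block)                   # upper bound per block

    while (m > U).any():
        excess = int((m[m > U] - U).sum())
        m[m > U] = U
        under = np.flatnonzero(m < U)
        if under.size == 0:
            break
        add = excess // under.size
        m[under] += add
        m[under[:excess - add * under.size]] += 1      # spread the remainder
    return np.clip(m, 1, U)

w = np.random.default_rng(2).dirichlet(np.ones(64))    # stand-in texture contrasts summing to 1
m = allocate_samples(w)
print(m.sum(), m.max())                                # close to round(0.3*4096), capped at 57
```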
Experimental Results
Our method is evaluated on a number of grayscale images of 512 × 512 in size, including Lenna, Barbara, Peppers, Goldhill, and Mandrill. These test images have different smooth, edge, and texture details. For the adaptive sampling scheme, the parameters are set as follows: the initial sampling number M₀ of each block is set to round(0.3M/n), and the CS samples are quantized by 8-bit scalar quantization. For the adaptive reconstruction scheme, (21) is solved by using the GPSR algorithm [17], and the transformation matrix Ψ uses the Daubechies orthogonal wavelet of length 4. In all experiments, the block size is set to 8, and we set the total sampling rate from 0.1 to 0.5. The Peak Signal to Noise Ratio (PSNR) between the reconstructed image and the original image is used in the objective evaluation, but all PSNR values are averaged over 5 trials since the reconstruction quality varies with the randomness of the random transformation matrix Φ_i. All experiments run under the following computer configuration: Intel(R) Core(TM) i3 @ 3.30 GHz CPU, 8 GB RAM, Microsoft Windows 7 32 bits, and MATLAB Version 7.6.0.324 (R2008a).
4.1. Select Threshold. In our adaptive sampling scheme, the threshold is the only adjustable parameter. Figure 4 shows the impact of different threshold values on the PSNR value when the sampling rate is 0.1, 0.3, and 0.5, respectively. It can be seen that each test image has higher PSNR values with S being 0.3 or 0.5 and the threshold ranging from 0.1 to 0.3, which indicates that our method significantly improves the reconstruction quality at a moderate threshold value. However, when S is set to 0.1, higher PSNR values appear with the threshold being near 0.55, and the PSNR value reduces slightly as the threshold increases from 0.55, which indicates that the threshold should be greater at a low sampling rate. The threshold value is related to the richness of texture details. The greater the threshold is, the more the feature points gather in the rich-texture region, while conversely the feature points spread to the edge and smooth regions. Therefore, when the sampling rate S and the threshold are greater, the reconstruction quality of the texture region improves effectively, but not that of the other regions, thus degrading the objective quality of the whole image. On the contrary, a small threshold could weaken the reconstruction quality of the texture region, which suppresses the improvement of reconstruction quality as well. When the sampling rate S is set to be small, limited by the number of CS samples, fewer CS samples can be assigned to the texture region once the threshold value is low, so the reconstruction quality cannot be improved significantly. Apparently, better objective quality requires a greater threshold. Given the above analysis, we set the threshold at 0.15 in our adaptive scheme, in order to guarantee robust reconstruction quality.
Performance Evaluation of Adaptive Sampling.
Figure 5 shows the reconstructed Lenna images using different sampling schemes at the CS encoder when S is set at 0.3. For the nonadaptive sampling scheme, some blurring occurs in the reconstructed image, and edge and texture details are not well preserved. For the adaptive sampling schemes, the block-variance-based reconstructed image has obvious blocking artifacts. The same holds for the edge-based reconstructed one, though it is more visually pleasant than the block-variance-based one. However, with blocking artifacts being suppressed, our scheme achieves better visual quality.
Besides, among the four schemes, our method obtains the highest PSNR value, 1.44 dB and 1.15 dB gains, respectively, compared with the block-variance and edge feature schemes.
Overall Performance Evaluation.
To evaluate the rate-distortion performance of the proposed CS codec, including adaptive sampling and reconstruction, we select sparse representation and quantization schemes as benchmarks.
For the sparse representation scheme, scalar quantization is used to quantize the CS samples; the benchmarks of evaluation are the Multi-Hypothesis Smoothed Projected Landweber (MH_SPL) algorithm proposed by [7] and the NESTA algorithm proposed by [8]. For the quantization scheme, the DPCM quantizer proposed by [12] is used as the benchmark of evaluation, and its corresponding reconstruction algorithm is the NESTA algorithm; this combination is named DPCM + NESTA. The transformation matrix Ψ remains the same as in the proposed algorithm. Figure 6 shows the average rate-distortion performance of the different reconstruction algorithms. It can be seen that the proposed method improves the PSNR value as the bit rate increases. When the bit rate is higher than 1.3 bpp, the PSNR value of our method outperforms the other algorithms, and the gap between them gradually increases. According to the analysis of computational complexity in [17], the global reconstruction can be done in O(N log₂ N) operations.
Then we get a total computational complexity of O(N) + O(M) + O(N log₂ N). Table 2 lists the reconstruction times for different schemes at different sampling rates. When the total sampling rate S is 0.1, the execution time of our method is close to that of MH_SPL for the different test images, but much less than those of NESTA and DPCM + NESTA. As the total sampling rate S increases, the execution time of our method gradually increases as well. When the total sampling rate S is 0.5, our method takes 164.33 s on average to reconstruct an image. Compared with the MH_SPL algorithm, our method requires more reconstruction time at a high sampling rate. Therefore, the improvement of PSNR value for our method requires a large amount of computation.
Conclusion
In this paper, we propose to adaptively sample and reconstruct images based on texture contrast.At the CS encoder, we first compute the texture contrast of each block, and then we set the sampling rate of each block adaptively according to the distribution of block texture contrasts.At CS decoder, the texture contrast of each block is used to weight the reconstruction model.Experimental results show that the proposed adaptive sampling and reconstruction algorithm can effectively improve the quality of reconstructed image.
Our method has better rate-distortion performance than the sparse-representation and quantization schemes. Image coding is the intended application of our adaptive CS sampling scheme, so the fully sampled image is available at the encoder. In compressive imaging, however, the encoder cannot access the fully sampled image, and our method loses its efficacy. Our further study will therefore aim at realizing adaptive CS sampling in the analog domain so that the method can also be applied to compressive imaging.
Figure 1: Framework of adaptive block CS.
Figure 2: Flow of the proposed algorithm.
Figure 3: Comparison of feature maps based on block variance, edge, and texture for 512 × 512 Lenna.
Figure 4: PSNR curves of the reconstructed images with different thresholds when the sampling rate is 0.1, 0.3, and 0.5, respectively.
Figure 5: Comparison of visual qualities of 512 × 512 Lenna sampled by different methods when the sampling rate is 0.3.
Figure 7: Comparison of visual qualities of Mandrill reconstructed by different methods when the sampling rate is 0.3.
Table 1 lists the PSNR values of the reconstructed images.
Table 1: Comparison of PSNRs (dB) of reconstructed images when using different algorithms.
Table 2: Comparison of execution times (s) to reconstruct an image when using different algorithms.
4.4. Computational Complexity. The proposed adaptive CS scheme involves extraction of the texture feature, adaptive sampling, and global reconstruction. The extraction of the texture feature requires a number of operations proportional to the total number of pixels in the test image, and the computational complexity of the adaptive sampling is proportional to the total number of CS samples. We use the GPSR algorithm to solve the global reconstruction model (21), in which the transformation matrix Ψ is constructed from the Daubechies orthogonal wavelet. Our method performs well for each test image when the sampling rate S is 0.1, although the PSNR values of the NESTA and DPCM + NESTA algorithms differ little from that of our algorithm. However, when the sampling rate S is 0.3 or 0.5, our method achieves an obvious PSNR gain compared with the other algorithms. Figure 7 shows the visual results of the Mandrill image reconstructed by the different methods; it can be observed that the proposed method has better visual quality, and in particular the texture details are better preserved compared with the other algorithms. | 5,343.8 | 2017-01-01T00:00:00.000 | [
"Computer Science"
] |
INFLUENCE OF POLYLACTIDE MODIFICATION WITH BLOWING AGENTS ON SELECTED MECHANICAL PROPERTIES
The article presents research on the modification of PLA with four types of blowing agents with different decomposition characteristics. Modifications were made in both cellular extrusion and cellular injection molding processes. The obtained results show that the dosing of blowing agents influences the mechanical properties and the structure morphology. The differences between the two cellular processes are also visible and significant.
INTRODUCTION
Modification of polymeric materials is intended to give the products specific properties or, in the case of hard-to-recycle materials, to facilitate their processing. The modification process involves changing the technological conditions of the manufacturing process and the processing tools, as well as adding auxiliary materials such as fillers, plasticizers, and stabilizers. The auxiliary agents can be divided into two main groups: processing and functional. The processing agents comprise processing stabilizers and processing modifiers. Functional agents include property stabilizers and property modifiers [3,11,12].
Functional modifiers are primarily property stabilizers that prevent undesired behavior of the material during its further use. The property stabilizers include, among others, anti-aging agents that prevent the release of harmful decomposition products, metal deactivators that prevent the aging of electric cables, and water-resistant stabilizers that prevent the effect of water [2,9].
A large separate group comprises the property modifiers. They enable changing the optical, mechanical, surface, and flammability properties.
Modifiers also include porous and microporous blowing agents (porophors). Their main product is a gas which, under appropriate process conditions, expands and produces porosity. The obtained product changes its structure from solid to cellular; the material must be shaped so that the released gas is retained while the material cools down and the microporous structure consolidates. This type of modifier affects many properties, such as density, hardness, elasticity, tensile strength, and stiffness [7,8,10,14,15].
Modification with blowing agents can be carried out using the two main plastics processing methods, i.e., extrusion [4,11,12] and injection molding [1,5,6,13]. In both cases it is important to choose the type and quantity of the dosed blowing agent, and hence to adjust the processing parameters taking into account the characteristics and the maximum decomposition temperature of the agent.
In the experiments, various types of chemical blowing agents were used: Expancel 950 MB, manufactured by Akzo Nobel; Hydrocerol 530 and Hydrocerol ITP 810, manufactured by Clariant Masterbatches GmbH; and LyCell-F017, manufactured by Ly-TeC GmbH.
Hydrocerol 530 is an exothermic blowing agent with nucleating properties. It comes in granular form, with spherical grains of 2.4 to 2.8 mm in diameter. In order to obtain high foaming efficiency, the processing temperature should range between 120 and 170°C. The active substances in this blowing agent are a mixture of appropriately proportioned chemical compounds such as azodicarbonamide.
LyCell-F017 is an endothermic blowing agent. It has the form of pellets with a diameter of 1.2 to 1.8 mm and a length of 2.3 to 2.5 mm. This blowing agent is a mixture of sodium hydrogen carbonate and 2-hydroxypropane-1,2,3-tricarboxylic acid (citric acid).
Expancel 950 MB is a blowing agent in the form of spherical thermoplastic polymer capsules (microspheres) that contain a hydrocarbon gas; it acts endothermically. The Expancel microspheres do not coalesce because the capsules retain their barrier properties, which prevents release of the enclosed gas. Expancel 950 MB is a mixture containing 65% microspheres in a copolymer of ethylene and vinyl acetate (EVA).
The decomposition products of the applied blowing agents mainly include carbon dioxide (CO2), a small amount of water (H2O), and nitrogen (N2). Selected properties of the discussed blowing agents are listed in Table 1.
In line with the adopted research program, the polymeric material for cellular extrusion and cellular injection molding was modified by feeding the blowing agents into it during mechanical mixing. The blowing agents used in the cellular processes were fed into the processed polymer in quantities of 0.5%, 1.5%, and 3% by mass (w/w). In order to obtain high process efficiency for the above blowing agents, the processing temperature range was 160-185°C.
The experimental tests were conducted on a laboratory technological line for profile extrusion, its main component being a twin-screw extruder EHP 2x20 Sline (Fig. 1) produced by Zamak Mercator (Poland). The extruder's plasticizing unit had four heating zones; the screw had an L/D ratio of 25 and an outside diameter D of 20 mm. The rotational speed of the extruder screw ranged from 0 to 200 rpm and was adjusted continuously. The technological line also included a head for profile extrusion. The head had a replaceable extruder die to enable extrusion of profiles of different sizes and shapes, both symmetric and asymmetric.
The extruder die used for tape profile extrusion had a width of 15.5 mm and a height of 2.0 mm. The extruder head had two heating zones with two corresponding ring-shaped heaters mounted on the head body. The extrusion line also included a cooling device with a length of 1740 mm, a width of 220 mm, and a depth of 200 mm. In the tests, we also used a belt haul-off; the belt had a width of 100 mm and a length of 2000 mm.
The extrusion process was carried out under the developed and imposed conditions set on the extrusion line. These were as follows: the temperatures of the heating zones in the plasticizing unit were 125, 140, 150, 160, 165, 175, 180, and 195°C, respectively; the temperatures of the three heating zones of the head were 190, 175, and 165°C, respectively. The set screw rotational speed was varied within the range of 50-100 rpm, and the temperature of the cooling agent was 12-14°C.
PLA modification was also performed in the injection molding process using the same blowing agents (Table 1). The test stand consisted of a screw injection molding machine, an Arburg Allrounder 320 C, equipped with an injection mold, as shown in Figure 2. The machine has a single-screw plasticizing system with a screw diameter of 36 mm. The moveable subassembly of the injection mold, in which two mold cavities are located, is mounted to the moveable table of the machine. The mold cavity has the shape and dimensions shown in Figure 3.
The mold cavity dimensions are as follows: length 150 mm, width from 10 to 20 mm, thickness 4 mm. Point ejector pins with a diameter of 6 mm are mounted directly in the mold cavities. In the fixed subassembly of the mold, there are runner channels which supply the polymeric material to the mold cavities; the channels are in direct contact with the injection nozzle of the plasticizing system.
The following parameters of the cellular injection molding process were adopted: injection and clamping time, 2 s; cooling time of the molded piece, 30 s; temperature of the thermostated mold, 19 ± 1°C. The Allrounder 320 C injection molding machine does not allow direct control and readout of the injection pressure and back pressure. The value specifying the injection pressure is the pressure in the hydraulic system, which during the tests was 5 MPa. In the experiments, the temperatures of the investigated polymers were set in the individual heating zones of the plasticizing system as follows: zone I, 160°C; zone II, 170°C; zone III, 180°C; zone IV, 185°C.
Within the framework of the research program, tensile strength and elongation at break were measured. The mechanical properties were determined on a Zwick/Roell Z010 tensile testing machine according to PN-EN ISO 527-1:1998 and PN-EN ISO 1798:2001. The strength properties of the injection molded parts under static tension were determined using the Zwick Z010 machine equipped with 10 kN screw wedge grips and the associated accessories. The measurements were performed at a tensile speed of 10 mm/min and within a measuring load range of 0-500 N. The shape and dimensions of the specimens complied with the relevant standard. The specimen thickness corresponded to the thickness of the injection molded part; it was measured and recorded each time, together with the width of the measuring length, right before the tests.
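A minimal sketch of how the two reported quantities can be derived from a recorded force-extension curve is given below, assuming the engineering definitions of ISO 527 (stress referred to the initial cross-section, strain referred to the gauge length). The break criterion, the function name, and the numerical data are illustrative assumptions, not values from the tests described above.

```python
import numpy as np

def tensile_properties(force_N, extension_mm, width_mm, thickness_mm, gauge_mm):
    """Engineering tensile strength (MPa) and elongation at break (%) from a
    recorded force-extension curve; the break point is taken as the last
    recorded sample (a simplification of the standard's break criterion)."""
    area_mm2 = width_mm * thickness_mm             # initial cross-section
    stress_MPa = np.asarray(force_N) / area_mm2    # N/mm^2 == MPa
    strain_pct = np.asarray(extension_mm) / gauge_mm * 100.0
    return stress_MPa.max(), strain_pct[-1]

# Example with made-up data for a 10 mm x 4 mm specimen and 50 mm gauge length
force = np.array([0.0, 150.0, 320.0, 450.0, 480.0, 410.0])   # N
ext = np.array([0.0, 0.4, 0.9, 1.6, 2.1, 2.4])                # mm
sigma_M, eps_b = tensile_properties(force, ext, 10.0, 4.0, 50.0)
print(f"tensile strength = {sigma_M:.1f} MPa, elongation at break = {eps_b:.1f} %")
```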
The structure of the extrudate was observed in transmitted light on a Nikon ECLIPSE LV100ND metallographic microscope with a DS-U3 digital camera control unit. Significant differences in the mechanical strength of the porous parts, both extruded and injection molded, can be observed. For example, the blowing agent Hydrocerol 530 (Fig. 7), with its endothermic decomposition characteristics, dosed in the range from 0 to 3.0%, causes a decrease in tensile strength of 55% in the extruded parts and of about 30% in the injection molded parts. In the case of polylactide with 3% content of the blowing agent LyCell F017 (Fig. 6), the tensile strength decreases on average by 16% in the extruded parts and by 17% in the injection molded parts.
The blowing agent was dosed at 0.5-3.0% by weight so as to produce extrudates and injection moldings with a solid surface and a cellular core. The shape and outside dimensions of the products agreed with the shape and dimensions of solid products made of the tested PLA.
RESULTS AND DISCUSSION
The results of determining selected mechanical properties of the injection molded and extruded parts, obtained at different contents of the blowing agent in the processed polymers, are shown in Tables 2 and 3 and Figures 5-8.
With increasing content of the blowing agent in the injected material, the value of the yield strength decreases. Elongation at break decreases monotonically and non-linearly over the whole range of increasing blowing agent content. This relationship is similar for each of the materials and blowing agents.
It has been observed that increasing the blowing agent content in the extruded product decreases the tensile strength in a non-linear manner over the whole range of blowing agent content in the polymer.
The macroscopic structure of the produced cellular tapes was examined at a stand for the analysis of polymer cellular structure images. The stand comprised a metallographic microscope, optical image recording devices, and a computer equipped with specialist software, on which examples of the cellular structures were recorded. A discernible effect of the blowing agent used, its type, and its decomposition characteristics on the obtained morphology of the porous molded pieces was observed. Based on the analysis of the photographs taken, the morphology of the injection molded parts was assessed as a function of the blowing agent content, starting from 0.5%. The number of pores and their share of the cross-sectional area of the molded parts can also depend on the cooling intensity. Fast cooling hampers the nucleation and growth of the pores, especially of those located closer to the surface layer (Figs. 12, 13). In the case of the porous parts produced using a blowing agent with endothermic decomposition characteristics, the gas release during processing comes to an end once the energy supply is stopped. The obtained porous structure is uniform; the pores have a spherical or quasi-spherical shape and similar sizes, irrespective of their location in the product.
CONCLUSIONS
The tensile strength results presented in this work depend strongly on the characteristics of the blowing agent used. An agent added at 3%, irrespective of the characteristics of its activity, contributes to a significant deterioration in the strength properties of the tested PLA compositions.
The porous structure is an advantage of the extruded and injection molded parts, as it reduces the amount of polymeric material needed for their production. Owing to the use of chemical blowing agents, porous parts have, among other things, lower weight, enhanced damping properties, and lower processing shrinkage. The cellularity determines the gaseous phase content of a cellular product and thereby the decrease in its density. The content of the blowing agent in the polymer determines both the continuity of the skin over the whole cross section and the uniform distribution of the pores and their similar sizes.
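As an illustration of the link between cellularity and density stated above, the short sketch below evaluates the gas-phase volume fraction implied by a density drop; the density values used are hypothetical and are not measurements from this study.

```python
def porosity(solid_density, cellular_density):
    """Volume fraction of the gaseous phase (cellularity) implied by the
    density drop of a cellular part relative to the solid polymer."""
    return 1.0 - cellular_density / solid_density

def density_reduction_pct(solid_density, cellular_density):
    """Relative density decrease of the cellular part, in percent."""
    return 100.0 * porosity(solid_density, cellular_density)

# Example: solid PLA at about 1.24 g/cm^3, a hypothetical cellular part at 1.05 g/cm^3
print(f"porosity = {porosity(1.24, 1.05):.2%}")
print(f"density reduction = {density_reduction_pct(1.24, 1.05):.1f} %")
```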
A significant influence of the type of processing and the technological conditions on the obtained mechanical strength of the tested PLA compositions was also observed. In the extrusion process, the profile is extruded freely and is cooled directly in the water of the cooling bath. As a result, a product with a porous structure over the whole cross section, including the top layer, is formed. In the injection molding process, the polymer compositions are cooled in a closed injection mold. Indirect cooling of the composition, through its contact with the cold surfaces of the mold cavities, results in products having a thick solid top layer and a porous core. This is why the strength properties of the extruded compositions are reduced compared with the injection molded ones. The difference in the strength properties of the same PLA composition obtained in these two processes is up to 50%.
The strength properties discussed in this paper also depend to a great extent on the characteristics of the blowing agent used. This is largely due to the thermal properties of the polymeric materials and the effect of the blowing agents on the polymers used in the process. However, this dependence has not yet been thoroughly investigated and will therefore be the subject of further studies.
Fig. 1. View of the technological line for cellular extrusion: head with a die for tape production, with the produced tape, PLA + 3% Expancel 950 MB.
Table 1. Selected properties of the blowing agents used in the cellular injection molding process.
Table 2. Research results of the mechanical properties of the injection molded PLA parts, average values given with an accuracy of 1 MPa.
Table 3. Research results of the mechanical properties of the extruded PLA parts, average values given with an accuracy of 1 MPa. | 3,061.8 | 2017-12-05T00:00:00.000 | [
"Materials Science"
] |
Technology of wear resistance increase of surface elements of friction couples using solid lubricants
Based on the results of experimental investigations into increasing wear resistance using lamellar solid lubricants, a technology is developed for increasing the wear resistance of the surface elements of friction couples by applying solid lubricants followed by surface plastic deformation, which provides sufficient bond strength between the solid lubricant and the element surface and increases operational life.
Introduction
Films can be formed by supplying particles into the loaded contact, by rubbing particles into a surface, or by pressing solid lubricant materials. Adhesively bonded films are produced by depositing solid lubricant materials onto a substrate with the addition of an organic or metallic binder.
Savage found that a film of graphite deposited on copper by a graphite brush has its basal crystal planes almost parallel to the surface. It was shown that this orientation is inclined by 5-10° to the substrate surface in the direction of sliding. Similar results were obtained for MoS2. A more detailed X-ray diffraction study of MoS2 films formed on copper showed that they contain a single-crystal layer 2-5 microns thick with the basal planes parallel to the sliding surface. Interestingly, this oriented layer lies on top of an unoriented one (Figure 1). The authors believe that the high-energy bonds at the crystal edges provide adhesion of the oriented layer to the surface and increase the cohesive strength of the oriented film. A lubricant film 2.4 microns thick consists of approximately 300 separate S-Mo-S layers. As the film wears, the oriented layer approaches the metal substrate. The fact that this layer is rather thick (2 microns) indicates that sliding is accompanied by extensive plastic deformation. Films obtained by rubbing were first studied by Johnston and Moore [1]. A cylinder covered with a fabric saturated with MoS2 was rubbed against copper surfaces of different roughness under various loads. After the first 100 passes the space near the asperities was filled with lubricant, which made the surface smoother. In subsequent passes the transfer of MoS2 took place onto MoS2 itself, and the film thickness continued to increase even after 7000 passes. Other researchers, however, have shown that there is a limiting film thickness, which depends on the load. Naturally, the rougher the surface, the more material is required to cover it. Completely different films were formed in dry and humid atmospheres. In a normal atmosphere a denser packing of particles was achieved, which is attributed to the influence of adsorbed moisture providing better bonding of the MoS2/MoS2 basal planes. Lancaster showed that on smooth surfaces fragments of lubricant material about 10 microns in size were transferred to a film 0.05 microns thick. If a film formed by rubbing is simultaneously subjected to frictional action, it is eventually destroyed by wear. Although the solid lubricant compact wears at a rather high rate, very little worn material is transferred to the film. Difficulties are often encountered in restoring the film with the help of free particles; the degree of restoration apparently depends on the character of the pressure distribution. The inability to maintain the film in this way indicates that interfacial sliding takes place, since shear can occur only with strong adhesion in the MoS2/MoS2 interfacial region.
Thus, at the first stage of operation of a solid lubricant, a thin oriented film is formed on both surfaces. Sliding then takes place either between the films or between the substrate and the film. Subsequent sliding results in gradual wear of the film until the solid lubricant material is completely exhausted or the film is destroyed, depending on the atmospheric conditions and the sliding parameters. Fusaro studied in detail the wear and failure of films of MoS2 and of fluorinated graphite. Wear manifests itself as a gradual reduction of the film thickness caused by radial and tangential displacement of material out of the contact zone, driven by the normal and frictional loads.
The second kind of wear is cracking and spalling of the film (similar to fatigue). Interestingly, the same behavior is found in soft materials with moderate wear (Ag on Fe). The wear of the film is essentially the same as for bulk materials; moreover, the wear intensities have been shown to be similar. Under these conditions, more isotropic materials (metals, organic substances, glasses) fail by flow, while MoS2 and other layered materials tend to fail by fatigue of the oriented outer layer.
The wear of bonded and unbonded films proceeds in two stages. The process described above continues until the asperities of the substrate become exposed. Thereafter the durability of the film depends on the ability of the solid lubricant material in the vicinity of an asperity to cover its top.
A continuous film can fail for different reasons, of which two processes are dominant. First, the heat generated during sliding softens the film and promotes its oxidation. Second, interaction with the environment can change the structure of the film. This applies in particular to MoS2, whose oxidation in water vapor, in air, or in oxygen environments leads to degradation of the film.
Formulation of the problem
Already the first works on inorganic solid lubricants mentioned layered lattice structures as the basic criterion for selecting lubricant components. It soon became clear, however, that it is not the structure in itself but the nature of the bonding that matters. Materials with a hexagonal "layered" structure proved effective if the bonds between the layers were weak and those within a layer relatively strong. Materials with strong interlayer bonds, such as boron nitride (borazon), are not effective as solid lubricants. Thus, Holinski and Gansheimer and other authors relate the lubricating action of MoS2 to the strong polarization of the sulfur atoms, which makes it possible to form the layered structure. Graphite does not have weak interlayer bonding if its layers are not covered with an adsorbed moisture film. Early studies of PTFE attributed its low friction to the minimal intermolecular forces resulting from the shielding of the charge on the carbon atoms by the large fluorine atoms. The mechanism of the lubricating action of PTFE is essentially the same as that of MoS2, except that PTFE consists of weakly bonded chains rather than layers. The formation of a transferred PTFE layer on the counter-specimen at the initial stage is essential for the effectiveness of the lubrication; thereafter sliding actually takes place between PTFE and PTFE. When bulk PTFE slides on PTFE, two friction regimes are observed: high friction with characteristic intensive transfer, and low friction with oriented shear films in the interfacial region. It will be shown below that MoS2 and other solid lubricant materials (for example, Ag on Fe) behave similarly. The approach based on the concept of weak interlayer bonding was developed in depth by Jamison, who concluded that the effectiveness of solid lubricants is due to this weak bonding. However, MoS2 has a structure that is unique among layered lubricant materials and makes it especially effective. In essence, the lubricating ability depends on the distance between the basal crystal planes, which is a function of the electronic structure of the metal. In MoS2 the molybdenum atoms lie above and below "holes" in the nearest layer rather than above or below other molybdenum atoms. This specific structure is attributed to spin-paired electrons, which ultimately causes the absence of residual interlayer bonds. It has further been shown that this type of structure can be obtained by intercalating copper and silver atoms into layered structures with relatively strong interlayer bonds, such as NbS2 and NbSe2. In such cases the friction coefficient decreases from 0.30 to 0.10. Intercalation of chlorides and metals into graphite also increases its wear resistance and load-carrying capacity.
Research in a somewhat different direction has shown that MoS2 films obtained by sputtering have no lubricating ability if they are used in the amorphous state (418 K). It was also found that lubricant films are effective at a thickness of 200 nm, which corresponds to approximately 300 MoS2 layers. However, if the sputtering was carried out at a higher temperature (423 K), the wear resistance decreased even though the friction coefficient remained constant. This result is attributed to the presence of a porous, irregular film with a low sulfur content.
Fleischauer carried out detailed research on films obtained by sputtering and found that they can occur in two versions: crystalline grains with the basal planes parallel to the surface or perpendicular to it. The serviceability of such films differs, and this distinction is caused by the different chemical potentials of the crystal planes and their edges. Weak bonds and low chemical activity are characteristic of the basal planes, while the edges of the planes form strong bonds and oxidize actively. Thus, the lubricating ability is essentially connected with the orientation of the crystals.
3. The study of the structure of the modified lead-tin-base bronze
Bowden and Tabor were the first to develop the theory of the lubricating action of thin films, based on experiments with films of indium, lead, and copper on substrates of steel, nickel, copper, and lead [2]. Films of different thickness were used, and steel indenters of different radii served as counter-specimens. They found that in all cases the friction force depends on the width of the friction track (the contact area). This, of course, confirms that F = A S_f, where F is the friction force, A the contact area, and S_f the shear strength of the film material. By varying parameters such as film thickness, specimen geometry, load, and substrate hardness, they were able to change the contact area and hence the friction force. Their concept of the lubricating action of thin films reduced to f = S_f / P = S_f / H_s (2), where f is the friction coefficient, P the contact pressure, and H_s the hardness of the substrate. In other words, the contact area depends on the hardness of the substrate, while the shear strength of the film determines the specific friction force; when the film itself governs the contact, f = S_f / H_f, where H_f is the hardness of the film material. For example [2], for friction of spheres (r = 8 and 3 mm) on lead films (1-12 microns thick) on a steel substrate, the contact area is determined primarily by the elastic deformation of the steel, following the Hertz relation A ∝ (L R / E)^(2/3) (4), where A is the area, L the load, R the sphere radius, and E the elastic modulus. Plastic deformation of the film begins to influence the contact area when the contact radius is less than five times the film thickness. Thus, the contact area can depend on both elastic and plastic deformation.
The experiments of Bridgman and other researchers have shown that the shear strength grows with increasing pressure, so that in the relations above P is to be taken equal to the hardness of the film. In practice, however, one usually deals with surfaces that are either initially flat or formed during the wear of a curvilinear indenter, so the real contact area can be determined either by the film [see equation (7)] or by the geometrical contact area. The friction coefficient then becomes a function of the load (or pressure) (Figure 2), and, as numerous studies have shown, the frictional behavior is best described by an equation of the form f = α + S / P (8), from which it follows that the friction coefficient decreases with increasing pressure while S and α remain constant. If the material is used as a film, its shear strength and the parameter α are used in the calculations. The quantity S can be taken as the shear strength of the film, S_f, or of the interfacial region, S_i, if in the latter case sliding takes place there. The pressure P also takes different meanings: at low loads, P in equation (7) can be set equal to the hardness of the film. In this case the contact surface (A_r) is discrete and limited to the tops of the asperities, so the friction should be the same as for bulk samples of the lubricant material sliding on each other. For very thin films, elastic or plastic deformation of the substrate can influence the contact area up to the limiting value P = H_s; in this case the friction should be much lower, as shown in Figure 3.
As the load grows, the contact area increases and the friction remains constant up to some critical pressure P = P*. Above this pressure A_r = A_a (the nominal contact area) and P = L/A_a. The friction then decreases until, at sufficiently high pressure, it approaches the value f = α. Thus, for flat surfaces three equations can be used, with A_a a constant determined by the geometry of the system.
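The two regimes described above can be illustrated numerically. The sketch below uses the friction coefficient of 0.04 and the film hardness of about 600 MN/m2 quoted later in the text for oriented MoS2 films, and infers the remaining constants from them; it reproduces only the trend (constant friction below P*, then a decrease toward α), not the measured curves.

```python
import numpy as np

# Illustrative parameters for an oriented MoS2 film; the friction coefficient
# (0.04) and the film hardness (~600 MN/m^2) are the values quoted in the text,
# while alpha and S_0 are chosen here so that the two regimes join at P* = H_f.
H_f   = 600.0                  # film hardness, MN/m^2
alpha = 0.02                   # limiting friction coefficient at high pressure
f_low = 0.04                   # friction coefficient in the sliding regime
S_0   = (f_low - alpha) * H_f  # pressure-independent part of the shear strength

def friction_coefficient(P):
    """Two-regime picture sketched above: f is constant while the real contact
    area is governed by the film hardness (P < P* = H_f); above P* the whole
    nominal area is sheared and f = alpha + S_0 / P decreases toward alpha."""
    P = np.asarray(P, dtype=float)
    return np.where(P < H_f, f_low, alpha + S_0 / P)

pressures = np.array([100.0, 560.0, 1000.0, 2800.0])   # MN/m^2
for P, f in zip(pressures, friction_coefficient(pressures)):
    print(f"P = {P:6.0f} MN/m^2  ->  f = {f:.3f}")
```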
Conclusion
It can thus be assumed that there are two friction regimes for solid lubricant materials: a sliding regime at low pressure and a shear regime at high pressure. In the latter case a mechanism of viscous or plastic flow operates, and the friction coefficient is directly proportional to the contact area. This area is determined by the nominal area for flat samples or by elastic deformation for a curvilinear contact. In the sliding regime the contact area is determined either by the hardness of the lubricant film or, for very thin films, by the hardness of the substrate. The sliding mechanisms are described by models 3, 4, or 5 (see Table 1), and the adhesive strength S_i should be used as the shear strength S_f. If the above considerations are applied to real materials, curves such as those shown in Figure 3 are obtained for silver, MoS2, lead, and a model viscous material (200 N·s/m2). The corresponding data on hardness and shear strength are taken from several sources, and the values of the parameter α are taken from the work of Bridgman. These curves are based on simple considerations and do not claim rigor; however, they illustrate the trend in the behavior of solid film lubricants. Note that at small loads the friction should be very high if pure shear or flow takes place. Evidently, in this case seizure or a transition to another type of sliding will occur. Usually there is a transition to slip, or a substantial reduction of the contact area caused by a decrease of the normal pressure owing to deformation of the material.
Numerous literature data for MoS2 films (experiments in a dry atmosphere) are approximated accurately enough by equations (9a) and (9b). The friction coefficient is 0.04 up to pressures of about 560 MN/m2, after which it begins to decrease. It is interesting to note that the hardness of oriented MoS2 films is 600 MN/m2, so for MoS2 P* = H_f (the hardness of the film). Both Barry and Binkelman, and Reed and Shaw, found that at low pressures the friction does not depend on the hardness of the substrate; thus equation (9b) does not apply. All the low-pressure experiments were carried out with different nominal areas, so the friction does not depend on the nominal contact area either. This confirms that the hardness of the film determines the real contact area, a circumstance that predetermines the subsequent tribological behavior.
An MoS2 film is not ideally smooth. At low loads the asperities of the film carry the load and, by deforming, form the real contact area. The shearing of the individual contact spots produces the measurable friction force. As the load grows, the real contact area increases proportionally, and the friction coefficient therefore remains constant. Finally, when the pressure becomes equal to the hardness of the MoS2 film, the whole contact area is engaged. The friction then begins to decrease, since now both A_r and S_f are constant. If the pressure exceeds P*, the friction decreases towards the value α, since A_r S_f / L tends to zero. If the shear strength did not increase with pressure, α would be zero and the friction coefficient would tend to zero. Thus, the low friction coefficient of MoS2 is due to the high hardness of the oriented film.
For very soft substrates the friction can increase, as shown in Figure 4 (based on the data of Barry and Binkelman). If the hardness of the substrate is less than that of MoS2, the friction grows, probably because of deformation of the substrate; the thicker the film, the less significant this effect. Figure 4. Influence of substrate hardness on the friction coefficient of MoS2 ("+" - spherical MoS2 particles): 1 - lead; 2 - babbitt; 3 - silver; 4 - copper; 5 - silver plate; 6 - brass; 7 - aluminium; 8 - bronze; 9 - steel 1020; 10 - molybdenum; 11 - titanium; 12 - TZM; 13 - tungsten; 14 - hardened steel. For very hard substrates (more than 8 GN/m2) no essential reduction of friction was observed, although brass and bronze give lower and titanium higher values than predicted by the dependence shown in Figure 4. At pressures exceeding 560 MN/m2 the continuous MoS2 film is sheared. The friction coefficient is directly proportional to the contact area at a given load; the contact is determined by the nominal area of the sample or by the elastic deformation of a concentrated contact. For softer substrates plastic deformation can also occur, which increases the friction coefficient somewhat. However, when f approaches α = 0.02, the influence of pressure becomes insignificant. A low friction coefficient (0.02) at high pressure (2.8 GN/m2) was recorded by Peterson and Johnson for very thin films formed on the tops of the substrate asperities. In this case the pressure is equal to the hardness of the substrate and the real contact area is small; for continuous films other values are obtained. Thus, in the regime of shearing of adhesive junctions the friction is determined primarily by the pressure, and for a given pressure it remains constant.
The data for tin films are presented in Figure 6; there is satisfactory agreement between equations (9a) and (9b) and the experimental results. At low pressure the friction coefficient turns out to be 0.40, which is close to the value obtained by Rabinowicz for friction of a bulk tin sample on steel (0.29-0.51). The data for lead are presented in Figure 7. Although on the whole the tendency described above is apparent, there are some discrepancies, especially at low pressure: the friction coefficient lies in the range 0.40-0.70 instead of the predicted 0.20. It is nevertheless lower than the value of 1.30 obtained in the experiments of Tsus for lead sliding on steel. This fact becomes clear if one proceeds from the adhesive theory of friction: the strong bonding of lead to steel leads to an increase of the contact area under the action of the tangential force. The same behavior was observed by Kato, who worked with thick films and noted that thicker films give a higher friction coefficient, close to that of a bulk lead sample. Since the growth of adhesive junctions is not characteristic of tin and MoS2, their frictional behavior can be predicted; this is probably due to low adhesion (Sn/Fe and MoS2/MoS2) or to the fact that the deformation of these materials does not lead to an increase of friction. Some researchers have recorded high friction (0.30) for MoS2 sliding on MoS2 films in a humid atmosphere [5]; transfer of large fragments was observed, which is attributed to increased MoS2/MoS2 adhesion. Other examples of such an influence of adhesion are also known. Silver behaves similarly to tin or MoS2 when used as a film for lubricating steel or nickel, but in friction against an aluminium surface its behavior is similar to that of lead. For the same reason aluminium films are ineffective lubricant materials.
Acknowledgments
The analysis of the dependence of friction on pressure for solid lubricant films allows their frictional behavior to be better understood. The assembled data make it possible to offer a simple theory consistent with the numerous literature data. According to this theory, the friction and wear of solid lubricant films are a simple adaptation of their behavior in bulk form; this adaptation is limited by the growth of junctions, which can take place for thin films.
Certainly, this concept is speculative and is based on limited data obtained under different conditions. These results should be reproduced on one set of equipment over a wide range of loads, with control and measurement of the real contact area. Measurement of the microhardness of the surfaces after sliding is necessary to determine the hardness of thin films. The concept of junction growth is not suitable for a number of deformation processes, for example for the contact of materials with hexagonal and cubic structures.
Five different models of solid lubricant films have been proposed. An extensive review of the literature was carried out to determine which of the models agrees with the basic knowledge of the tribological behavior of solid lubricant materials. The behavior of films with allowance for adhesion to the substrate, the features of film formation, their wear and failure, crystal structure, the influence of atmospheric factors, and the frictional characteristics are discussed. On the basis of the limited data, a conclusion is drawn about the applicability of the standard adhesive theory of friction to thin films. It is proposed to distinguish two friction regimes: sliding at small loads and shear at large loads. The transition from one regime to the other occurs when the pressure becomes approximately equal to the hardness of the film. In the sliding regime the real contact area is determined by the nominal one, which depends on the elastic deformation of the substrate material. Exudations and surface oxides change the frictional behavior of solid lubricant films. | 5,192.6 | 2017-06-01T00:00:00.000 | [
"Materials Science"
] |
Performance of an Array of Oscillating Water Column Devices in Front of a Fixed Vertical Breakwater
The present study explores the performance of an array of cylindrical oscillating water column (OWC) devices, having a vertical symmetry axis, placed in front of a bottom-seated, surface-piercing, vertical breakwater. The main goal of this study is to investigate a possible increase in the power efficiency of an OWC array when a barrier to wave propagation is placed in the vicinity of the array, aiming at amplifying the scattered and reflected waves originating from the presence of the devices and the wall. To achieve this goal, a theoretical analysis is presented in the framework of linear potential theory, based on the solution of the corresponding diffraction and pressure-radiation problems in the frequency domain, using image theory, the matched axisymmetric eigenfunction expansion formulation, and the multiple scattering approach. Numerical results are presented and discussed in terms of the expected power absorption of the OWCs, comparing different array characteristics, i.e.: (a) the angle of incidence of the incoming wave train; (b) the distance from the breakwater; and (c) the geometric characteristics of the different arrangements. The results show that, compared with the isolated OWC array (i.e., without the wall), the power efficiency of the OWCs in front of a breakwater is amplified at specific frequency ranges.
Introduction
Sea waves carry enormous power, and therefore the construction of structures to mitigate that power is not easily accomplished. Breakwaters are widely used in coastal and offshore engineering and are frequently applied in coastal protection and restoration schemes. They are barriers, either surface-piercing or submerged, usually placed perpendicular to the dominant direction of the incoming waves, which absorb, diffract, and reflect part of the wave energy, reducing the amount of energy that reaches the shoreline.
Breakwaters are primarily classified according to their structural features as fixed structures (restrained against the wave impact) or floating structures, and within these broad classes they can be further subdivided according to their construction materials, shape, etc. [1]. Specifically, fixed breakwaters operate by reflecting the incoming wave train as bottom-mounted rigid structures; rubble mound, seawall, and barrier types of breakwaters fall into this category. Rubble mound breakwaters have probably existed for around 3000 years [2] and are still applied to shelter coastlines from wave action. In ancient times, breakwaters around the Mediterranean Sea were built of blocky stones, sometimes with cementitious infill [3], while in recent times numerous design methods for the hydraulic performance and structural stability of rubble mound breakwaters have been developed, e.g., [4][5][6][7]. A seawall breakwater is the most common defensive structure, acting as one large solid block at locations where ocean environments dominate the coast. It is usually composed of prefabricated reinforced concrete caissons, representing a better alternative in terms of performance, construction rapidity, standardization, environmental implications, and construction and maintenance costs when compared with the rubble mound type [8]. Many analytical and laboratory studies and field observations have been undertaken concerning the design and construction of vertical breakwaters (seawalls); indicative recent studies are [9][10][11]. On the other hand, in situations where complete protection from the waves is not required, thin barriers, impermeable or permeable, supported on piles can be used as breakwaters. Pile breakwaters comprise a series of piles which partially attenuate the wave energy through the turbulence and eddies created around the solids, while also effectively preventing sediment siltation [12][13][14]. In addition, the barrier type of breakwater includes perforated and slotted structures, operating as permeable breakwaters, whose wave reflection increases as their porosity increases [15,16].
Although a breakwater is constructed to minimize the wave action in the area behind it, in front of the structure the incoming wave energy is amplified due to the scattered and reflected waves originating from the presence of the vertical wall. This phenomenon has triggered increased interest in wave energy conversion systems operating near and/or on breakwaters. Moreover, the installation of wave energy converters (WECs) in front of and/or on breakwaters is facilitated by easier electricity transmission to the mainland, allowing common usage of infrastructure (i.e., electrical cables, power transfer equipment, etc.).
In the context of a breakwater-WEC system, numerous projects and studies have been presented globally, emphasizing the wall's positive effect on the converter's efficiency. Although several different types of WECs are under development, only a few typologies have commonly been used in conjunction with coastal protection structures, namely: (a) overtopping devices (OTD); (b) the oscillating wave surge converter (OWSC); (c) point absorbers (i.e., heaving devices); and (d) oscillating water column devices (OWC).
Rubble mound breakwater-OTDs utilize a frontal sloping plate that leads the incident waves to overtop into one or more storage basins placed at a higher level than the seawater level. In its natural way back to the sea, the water passes through turbines, generating electricity [17]. Indicative studies concerning the design optimization of an OTD for efficiency maximization are [18][19][20][21]. Regarding the OWSC operation, the converter typically has one end fixed to the sea bed while the other end is free to move. The wave energy is absorbed by the relative motion of the body (i.e., the converter comes in the form of floats, flaps, or membranes) compared to the fixed point. Analytical formulations on the effect of an onshore barrier (i.e., a straight coast and a vertical breakwater) on the device's efficiency are presented in [22,23].
As far as point absorber devices are concerned, they extract energy from their oscillation in the heave direction, which is driven by their interaction with the wave field. The arrangement of a breakwater-heaving device system involves partially floating bodies placed parallel to the predominant wave direction, in front of a vertical seawall. The bodies' motions due to the scattered and radiated waves from the array's members and their interaction with the vertical wall allow the conversion of the floaters' kinetic energy to electricity using hydraulic or mechanical transmission [24]. The most recent indicative studies concerning the hydrodynamic analysis and efficiency estimation of arrays of heaving devices placed in front of a linear vertical wall are presented in [25,26].
An OWC device is a partially submerged, hollow structure, open to the sea below the water line. The vertical motion of the sea surface alternately pressurizes and depressurizes the air inside the OWC's chamber, generating a reciprocating flow through a self-rectifying turbine installed beneath the roof of the device. In view of the multiple capabilities offered by the breakwater-OWC system, several different designs have been presented in the literature, most of them concerning OWCs integrated into a vertical breakwater. Specifically, in [27,28], theoretical studies of an OWC standing at the tip of a breakwater and along a straight coast are presented, respectively, whereas in [29][30][31], an OWC integrated into a flat breakwater is theoretically and experimentally investigated. In [32,33], a detailed analysis concerning the structural and economic feasibility of an OWC integrated within a Mediterranean port is presented, and in [34], a linearized theory of an array of OWCs installed on a straight coast is described. In addition, the effect of a breakwater on OWC performance is examined using CFD analysis under the action of regular and irregular waves in [35]. Finally, in [36], a modified integrated breakwater-OWC system is investigated using numerical and experimental simulations in terms of its power performance. As far as the performance of an OWC device placed in front of a vertical breakwater is concerned, only a few studies are presented in the literature. In [37], a theoretical analysis of a vertical OWC device placed in front of a vertical wall is presented, whereas in [38], the efficiency of an array of five OWCs placed parallel to a vertical breakwater is examined for installation at the port of Heraklion (Crete).
The scope of this work is to examine the effect of a bottom mounted, surface piercing, breakwater of infinite length, on the power efficiency of an array of OWCs placed in a random location in front of the wall. The examined converters consist of an exterior partially immersed toroidal body supplemented by a coaxial interior, bottom mounted, free-surface piercing vertical cylinder. In the annulus between the internal cylinder and the external torus, a finite volume air chamber is formed in which the oscillating air pressure is developed. An analogous OWC type has been examined to operate alone in the open sea, combined with a wind turbine supported on the converter's interior concentric cylindrical body, and as part of a Jacket platform [39][40][41][42]. A theoretical model is presented, taking into account the wave hydrodynamic interactions among the OWCs and the fluid flow in front of the breakwater. Several distances between the bodies and the breakwater are examined, along with different array configurations (i.e., OWCs in a rectangular, parallel, and perpendicular arrangement to the front wall) and wave heading angles, to assess the array's efficiency towards its optimization. The presented results show that the power efficiency of an array of OWCs in front of a breakwater is amplified compared to the one in unbounded waters (i.e., without the presence of the vertical wall).
This work is organized as follows: Section 2 describes the solution of the corresponding diffraction and pressure-radiation problems, while in Section 3, the OWCs' hydrodynamic characteristics and absorbed power are presented. Section 4 provides and discusses the numerical results and finally the conclusions are drawn in Section 5.
Formulation of the Hydrodynamic Problem
The diffraction and the pressure-radiation problems under consideration are examined within the context of the arrangement shown in Figure 1. An array of N similar OWCs is assumed, situated in the vicinity of a vertical, bottom-mounted, surface-piercing breakwater of infinite length. The water depth is denoted by h, and the sea bottom is assumed flat and horizontal. The outer and inner radii of each device's chamber are denoted by α and b, respectively, whereas the distance between the bottom of the external torus and the seabed is denoted by hc. The radius of the interior, bottom-seated, coaxial cylindrical body is denoted by c. In addition, the distance between the center of the converter closest to the wall and the breakwater is denoted by Lw, and the distance between adjacent OWCs by Lb. Small-amplitude harmonic waves (with angular frequency ω, wave height H, and wave length λ) are incident on the breakwater at an angle β. A global, right-handed Cartesian coordinate system Oxyz is introduced with its origin located at the bottom of the breakwater and its vertical axis Oz directed upwards, while N local cylindrical coordinate systems (r_q, θ_q, z), q = 1, 2, ..., N, are defined with origins at the intersection of the sea bottom with the vertical axis of symmetry of each converter. The geometric layout of the breakwater-OWC system is illustrated in Figure 1. In the present analysis, the fluid is assumed inviscid and incompressible and the flow irrotational, so that linear water wave theory may be employed. Under this assumption, the time-harmonic fluid flow around each device q, q = 1, 2, ..., N, is described by a velocity potential φ^q, which can be decomposed on the basis of linear modelling as [43]: φ^q = φ_0 + φ_S^q + Σ_{p=1,...,N} p_in^p φ_R^{qp} (1). In Equation (1), φ_0 stands for the velocity potential of the undisturbed incident harmonic wave; φ_S^q is the scattered potential around the q OWC, assuming atmospheric air pressure inside the chamber; φ_R^{qp} denotes the pressure-dependent radiation potential around the q OWC when it is considered as an open-duct body (i.e., atmospheric air pressure inside the chamber) due to unit air pressure p_in^p in the chamber of the p device. The term φ_D^q = φ_0 + φ_S^q denotes the diffracted component of the corresponding total wave potential around the q body.
The potentials φ_0, φ_S^q, and φ_R^{qp} are solutions of the Laplace equation in the entire fluid domain and satisfy the appropriate boundary conditions on the sea bed and the water free surface, the kinematic condition on the mean wetted surface of each body, and the no-flux boundary condition on the breakwater's surface [44]. In the present study, the method of images is applied together with the assumption of a fully reflecting breakwater. Specifically, the presence of the breakwater in front of the OWC array is represented by taking into consideration the image "virtual" devices with respect to the wall, without the presence of the wall. The equivalent array of 2N devices is exposed to the action of two-directional surface waves (i.e., one train propagating at angle β and a second at angle 180 − β) [23,37] (see Figure 2). Herein, based on the method of images, 2N local cylindrical coordinate systems (r_q, θ_q, z), q = 1, ..., 2N, are defined with origins at the intersection of the sea bottom with the vertical axis of symmetry of each OWC. Thus, Equation (1) can be written for the equivalent array as Equation (3), i.e., φ^q = φ_0 + φ_S^q + Σ_{p=1,...,2N} p_in^p φ_R^{qp}, q = 1, ..., 2N. In Equation (3), φ_0, φ_S^q, and φ_R^{qp} correspond to the velocity potential of the undisturbed incident harmonic wave, the scattered potential around the q OWC, q = 1, 2, ..., 2N, and the pressure-dependent radiation potential around the q OWC, q = 1, 2, ..., 2N, respectively. In addition, p_in^p corresponds to the air pressure in the chamber of the p device, p = 1, ..., 2N.
The boundary conditions comprise: (a) the linearized condition on the water free surface (Equation (4)); (b) the no-flux condition on the sea bed (Equation (5)); and (c) the kinematic condition on the mean wetted surface of each device (Equation (6)). Finally, a radiation condition stating that propagating disturbances must be outgoing is imposed.
In Equation (4), g is the gravitational acceleration, ρ is the water density, and δ_qp is the Kronecker delta. In addition, in Equation (6), the term ∂/∂n_q denotes the derivative in the direction of the outward unit normal vector n_q to the mean wetted surface of the q OWC. The wave interaction phenomena among the OWCs (i.e., the original and the image solids) are taken into consideration through the physical idea of multiple scattering. Specifically, by properly superposing the incident wave potential and the propagating and evanescent modes that are scattered and radiated by the OWCs, exact representations of the fluid's velocity potentials around each device can be obtained, based on the hydrodynamic characteristics of a single device. The latter are derived through the use of matched axisymmetric eigenfunction expansions [45] for the velocity potential around each single OWC, considered alone in the wave field. Based on this method, the flow field around each OWC is subdivided into coaxial ring-shaped fluid regions, defined as I, II, and III (see Figure 1b), in each of which appropriate series representations of the velocity potential can be established. These series representations are solutions of the Laplace equation; they satisfy Equations (4)-(6), the radiation condition at infinity, and the continuity of the velocity potentials and their radial derivatives at the vertical boundaries between neighboring fluid regions.
The method for the solution of the diffraction and pressure-radiation problems of a single OWC device, along with the implementation of the multiple scattering approach for an array of OWCs, has been thoroughly described in [46]. Nevertheless, by way of example, the velocity potentials around an isolated OWC device are presented in Appendix A.
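A minimal sketch of the image construction used in this section is given below: the centers of the N devices are mirrored across the breakwater plane to form the equivalent 2N-device array, which is then exposed to the two headings β and 180 − β. The coordinate convention (wall at x = 0) and the function names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def image_array(centers_xy, wall_x=0.0):
    """Equivalent 2N-device array of the image method: each OWC center is
    mirrored with respect to the vertical breakwater (here the plane x = wall_x)."""
    centers = np.asarray(centers_xy, dtype=float)
    images = centers.copy()
    images[:, 0] = 2.0 * wall_x - images[:, 0]   # mirror the x-coordinate
    return np.vstack([centers, images])

def equivalent_headings(beta_deg):
    """The equivalent 2N-device array is exposed to two incident wave trains,
    at angles beta and 180 - beta."""
    return beta_deg, 180.0 - beta_deg

# Example: three OWCs on a line normal to the wall, closest center at 4a from it
a = 1.0
centers = [(4*a, 0.0), (8*a, 0.0), (12*a, 0.0)]
print(image_array(centers))
print(equivalent_headings(30.0))
```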
Array's Efficiency
Due to the water oscillation inside the OWCs' chambers, the dry air above the free surface is pushed through an air turbine located at the top of each chamber, producing an air volume flow in each chamber, denoted by q^q. According to Equation (7), q^q is obtained by integrating the vertical velocity of the water surface in the q OWC, q = 1, ..., N, over the cross-sectional area of the inner water surface of the q device, the vertical velocity being evaluated from the velocity potential inside the OWC's chamber (i.e., fluid domain III).
Since the velocity potential around each device of the array is a superposition of the diffraction and the pressure-radiation velocity potentials (see Equation (1)), the air volume flow inside the q OWC, q = 1, ..., N, can similarly be decomposed, as expressed by Equation (8), into a diffraction air volume flow and a pressure-dependent volume flow, whose coefficient is known as the radiation admittance [47].
The radiation admittance of the q OWC, q = 1, ..., N, due to a unit air pressure head inside the p device, p = 1, ..., N, can also be written as a function of the radiation conductance and the radiation susceptance coefficients (Equation (9)). Based on the method of images, the considered breakwater-OWC system interacting with an incident wave of angle β is equivalent to an array of 2N devices, mirrored with respect to the breakwater, exposed to the action of two wave trains at angles β and 180 − β without the presence of the vertical wall. Thus, according to Equation (10), the diffraction volume flow inside the q OWC, q = 1, ..., N, equals the sum of the corresponding diffraction volume flows inside the q device for wave propagation angles β and 180 − β, when the device is assumed to be part of the array of 2N bodies. Each of these two contributions is derived from Equation (7) for the corresponding wave heading angle.
The radiation admittance of the q-th OWC device, q = 1, …, N, placed in front of a vertical wall can also be derived through the method of images. More specifically, the radiation conductance and radiation susceptance coefficients of the q-th OWC, q = 1, …, N, due to a unit air pressure head inside the p-th OWC, p = 1, …, N, are obtained by properly summing the corresponding coefficients of the q-th OWC due to a unit air pressure head inside the p-th OWC with those of the q-th OWC due to a unit air pressure head inside the p-th OWC's image device, denoted as p', p' = N+1, …, 2N [37,38].
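To make the image-method bookkeeping concrete, the following minimal Python sketch combines open-water results computed for the equivalent 2N-body array into the breakwater-array quantities described above. It is illustrative only: the variable names and data shapes are assumptions, and the underlying 2N-body hydrodynamic coefficients must be supplied by a solver such as the one described in [46].

```python
import numpy as np

def breakwater_quantities(qd_beta, qd_mirror, Y_2N, N):
    """Method-of-images superposition for N OWCs in front of a fully reflecting wall.

    qd_beta   : (2N,) complex diffraction volume flows of the 2N-body open-water
                array for heading angle beta
    qd_mirror : (2N,) complex diffraction volume flows for heading 180 deg - beta
    Y_2N      : (2N, 2N) complex radiation admittance matrix of the 2N-body array
    N         : number of real OWCs in front of the breakwater
    Returns the (N,) diffraction flows and the (N, N) radiation admittance matrix
    of the breakwater-OWC system (cf. Equations (9)-(10)).
    """
    # Diffraction flow in the q-th real device: sum of the two incident wave trains.
    qd = qd_beta[:N] + qd_mirror[:N]
    # Admittance of device q due to pressure in device p: real device p plus its image p' = p + N.
    Y = Y_2N[:N, :N] + Y_2N[:N, N:2 * N]
    return qd, Y
```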
In the present paper, an air turbine that exhibits an approximately linear relationship between the inner air pressure and the volume flow (e.g., a Wells-type air turbine) is assumed to be placed in each OWC's duct, between the chamber and the outer atmosphere; that is, the volume flow equals the complex pneumatic admittance Λ of the air turbine times the inner air pressure head (Equation (11)) [48]. The real part of Λ is related to the pressure drop through the turbine, whereas the imaginary part represents the effect of air compressibility inside the chamber. In the presented numerical results (see Section 4), the pneumatic admittance of the air turbine is assumed to attain an optimum value, Λopt, as presented in [49], which maximizes the power efficiency of a similar OWC device considered alone in the wave field, without the presence of the breakwater.
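Combining the flow decomposition of Equation (8) with the linear turbine relation of Equation (11) leads to a small linear system for the unknown inner pressure heads. The sketch below shows one common way this system is solved; the sign convention for the radiation term and the variable names are assumptions rather than a reproduction of the paper's code.

```python
import numpy as np

def inner_pressures(qd, Y, turbine_admittance):
    """Solve for the complex inner air pressure heads of N OWCs.

    Assuming the total flow in each chamber is q = qd + Y @ p (cf. Equation (8)) and
    the turbine enforces q = Lambda * p (cf. Equation (11)), the pressures follow from
    (Lambda * I - Y) p = qd. A different sign convention for the radiation flow simply
    flips the sign of Y.
    """
    N = len(qd)
    A = turbine_admittance * np.eye(N) - Y
    return np.linalg.solve(A, qd)
```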
Having determined all the pressure coefficients of an array of N OWCs in front of a vertical breakwater, the wave power absorbed by each device, q = 1, …, N, can be written as in Equation (12). Here, ω stands for the wave frequency. In Equation (12), the absorbed power is used as the measure of the q-th OWC device's efficiency, since the losses that occur in the energy conversion chain are not taken into consideration in the present paper.
To evaluate the constructive or destructive effect of a vertical seawall on the WECs' power efficiency, a "q-factor" is introduced, defined as the ratio of the total wave power absorbed by the OWC array to N (the number of OWCs in the array) times the power absorbed by the same converter in isolation [26,50] (Equation (13)). For q-factor values greater than one, the scattered and reflected waves due to the presence of the solids and the vertical wall have a constructive effect on the array's efficiency, increasing the total absorbed wave power compared to the corresponding efficiency of N OWCs placed in isolation (alone in the wave field). On the other hand, when the q-factor is smaller than one, the wave interaction phenomena between the devices and the breakwater have a destructive effect on the array's efficiency compared to the wave power absorbed by N isolated OWCs.
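As an illustration of how the q-factor is evaluated once the inner pressures are known, the sketch below uses the standard frequency-domain expression for the power absorbed by a linear (Wells-type) turbine, P = (1/2) Re{Λ} |p|²; the exact form of Equation (12) is not reproduced here, so this should be read as a plausible stand-in rather than the paper's implementation.

```python
import numpy as np

def absorbed_power(p, turbine_admittance):
    """Time-averaged power absorbed by one OWC with a linear turbine q = Lambda * p,
    using the common expression P = 0.5 * Re(Lambda) * |p|**2."""
    return 0.5 * np.real(turbine_admittance) * np.abs(p) ** 2

def q_factor(pressures, power_isolated, turbine_admittance):
    """q-factor of Equation (13): total power absorbed by the array in front of the
    wall divided by N times the power absorbed by the same converter in isolation."""
    total = sum(absorbed_power(p, turbine_admittance) for p in pressures)
    return total / (len(pressures) * power_isolated)
```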
Test Cases
The theoretical method described in the present paper is applied to an array of OWCs placed in front of a breakwater. The examined converter's external and inner radii equal α and b = 0.9α, respectively. The distance between the bottom of the external torus and the seabed equals hc, and the water depth equals h = 7.14α. The inner cylindrical body is assumed to be seated on the sea bottom, with a radius of c = 0.4α. The OWCs' air turbine characteristics are taken equal to the Λopt value of a similar OWC in isolation at its pumping resonance wave frequency [51], whereas the distance between the center of the converter closest to the wall and the breakwater is Lw, and the distance between adjacent OWCs is Lb = 4α (see Figures 1 and 2).
The calculation of the Fourier coefficients of the velocity potentials (see Appendix A) around each OWC device in the array is the most significant part of the presented theoretical analysis because of their influence on the accuracy of the solution. Here, for the I and III ring elements n = i = 60 terms are retained, whereas for the II fluid domain n = 100. The considered number of azimuthal modes is m = ±7, and the number of wave interactions was taken equal to 7. The presented numerical results were obtained using the in-house developed computer software HAMVAB [52]. The software, which relies on analytical representations of the velocity potential around each cylinder-type OWC device of the array, was preferred in the present contribution over other available numerical tools applicable to general 3D geometries since, at the same accuracy, it is usually less CPU-time-consuming (i.e., the solution of the diffraction and radiation problems at each wave frequency requires about 40 s).
In the following subsections, the power absorbed by the OWC array is presented for several examined parameters, namely: (a) the number of converters; (b) the array orientation to the breakwater; (c) the wave heading angle; (d) the distance between the devices and the breakwater; (e) the devices' draught; and (f) the distance between the devices.
Effect of the OWCs' Orientation to the Breakwater
In this subsection, the effect of the OWC-array orientation with respect to the breakwater on the array's efficiency is presented. The examined converters are placed in front of a vertical breakwater in: (a) a parallel-to-the-wall arrangement; (b) a perpendicular-to-the-wall arrangement; and (c) a rectangular arrangement in front of the wall. Furthermore, in each of these arrangements, several numbers of OWC devices are considered, i.e., configurations 1, 2, 3, 4, 5. In Figure 3, the examined orientations of the OWC array arrangements with respect to the breakwater are depicted. Herein, the distance between the center of the converter closest to the wall and the breakwater equals Lw = 3α, whereas the distance between the bottom of the external torus and the seabed equals hc = 6.14α. According to the presented analysis, the pumping resonance of the water column inside the oscillating chamber occurs at a wave frequency equal to 2.62 rad/s for α = 1 m; thus, the air turbine coefficient inside the OWCs equals 10.60 m^5/(kN·s). Figure 4 depicts the modulus of the inner air pressure head inside the 1st OWC of the array (q = 1), normalized by the incident wave amplitude, along with the q-factor (as described in Equation (13)) for each examined OWC configuration in the parallel-to-the-wall array arrangement. The results are plotted against the non-dimensional wave number, kα, in the range kα ∈ [0.05, 1.5] and for wave heading angles β = 0, π/6, π/4 (here k stands for the wave number). Similarly, Figures 5 and 6 present the corresponding inner air pressure of the 1st OWC device and the q-factor for the perpendicular and rectangular array arrangements, respectively. It can be seen from Figure 4 that the breakwater has a significant effect on the inner air pressure head of the 1st OWC device. The values of the air pressure inside the OWC in front of a breakwater do not follow a variation pattern similar to the corresponding values for the unbounded-water case. Specifically, as kα tends to zero, the values of the air pressure inside the OWC placed in front of a breakwater attain almost twice the values of the air pressure inside the same OWC without the presence of the breakwater, for all the examined wave heading angles and OWC configurations (i.e., 1, …, 5 bodies in front of the breakwater). Furthermore, it can be seen that the wave interaction phenomena between the OWCs and the vertical wall also affect the values of the inner air pressure of the 1st converter at higher values of kα. The inner air pressure tends to zero for kα values in the range [0.5, 0.6] for β = 0, π/6, π/4. This behavior does not appear in the case of the isolated OWC without the presence of the wall. The zeroing of the air pressure head can be attributed to the standing wave due to the presence of the vertical breakwater; in particular, it appears when the distance between the initial and the image converter equals half a wavelength [53]. Furthermore, it is also notable that for kα ≈ 0.7, the inner air pressure of the converter, whether it is considered part of an array or isolated in the wave field, exhibits a resonant peak regardless of the wave heading angle. This resonance is associated with the pumping modes of the interior basin due to the existence of the moonpool [54].
As far as the number of OWCs in each examined configuration is concerned, it can be seen that, as the number of OWCs in the array increases, the values of the inner air pressure inside the 1st OWC device exhibit an oscillatory behavior around the corresponding values of the single breakwater-OWC case for large values of kα.
Concerning the effect of the breakwater on the array's power efficiency, it can be seen from the q-factor figures that whether this effect is constructive or destructive depends on the examined wave number. Specifically, at small wave numbers, the efficiency of the array is four times higher than the power absorbed by an isolated OWC (without the presence of the vertical wall). On the other hand, at kα ∈ [0.5, 0.6], the q-factor attains values lower than one; thus, the breakwater has a destructive effect on the array's power efficiency. Furthermore, at higher wave numbers, i.e., kα > 0.6, the presence of the breakwater increases the array's absorbed power compared to the isolated case, since the q-factor exceeds one. Nevertheless, as the wave heading angle increases, the wave numbers at which the q-factor exceeds one are shifted to higher values.
As far as the perpendicular arrangement of the OWCs in front of the breakwater is concerned, a conclusion similar to that of Figure 4 can be drawn regarding the effect of the vertical wall on the values of the inner air pressure of the 1st OWC (i.e., the one closest to the breakwater). Specifically, as in the parallel-arrangement case, for kα tending to zero the doubling of the air pressure values for the breakwater-OWC case, compared to the no-wall case, is again notable. In addition, the air pressure again vanishes at the wave number for which the wavelength equals twice the distance between the initial and the image converter. Furthermore, the resonances at the pumping wave frequency, i.e., kα ≈ 0.7, are also depicted regardless of the number of converters in the array. On the other hand, the q-factor graphs do not follow the same pattern as the corresponding graphs for the parallel arrangement. The q-factor values oscillate around the corresponding values of the single OWC-breakwater case. These oscillations become more pronounced as the number of OWCs in the array increases and can be attributed explicitly to the scattered and reflected waves, which seem to strongly affect the wave field around the converters with respect to the incident wave direction. Concerning the examined wave heading angles, it can be seen that the constructive effect of the breakwater on the array's efficiency, compared to the isolated OWC case, increases as the wave heading angle and the wave number increase. Furthermore, compared to the parallel OWC arrangement, here the q-factor values do not vanish at kα ∈ [0.5, 0.6], since the distances between the OWCs and the breakwater are not the same for each OWC (i.e., the distance between the initial and image converters does not correspond to half a wavelength for each OWC in this wave number band). As far as the remaining examined kα values are concerned, the q-factor does not seem to attain significantly higher values than the corresponding factor of the parallel arrangement (see Figure 4), despite the fact that the wave interactions between the bodies and the breakwater are amplified in this arrangement. Hence, it can be concluded that the increased interaction phenomena between the OWCs and the breakwater are not always beneficial for the array's efficiency at every wave frequency.
Continuing with the results of the rectangular arrangement, presented in Figure 6, it can be noted that the graphs of the air pressure inside the 1st OWC device follow a pattern similar to the aforementioned arrangements (i.e., parallel and perpendicular). Due to the presence of the breakwater, the air pressure attains values twice those of the no-wall case at wave numbers tending to zero, regardless of the wave heading angle and the number of bodies in the array. Additionally, the resonance of the inner air pressure at the pumping wave frequency, i.e., kα ≈ 0.7, is notable. However, this resonance is dictated by the wave heading angle and the wave interaction phenomena between the converters and the vertical wall, since as the wave angle increases, the air pressure values at kα ≈ 0.7 decrease. Furthermore, the zeroing of the air pressure values at kα ∈ [0.5, 0.6] is also notable in the rectangular arrangement. This was also the case for the parallel and perpendicular arrangements. Thus, it can be concluded that the wave number at which the OWC's inner air pressure vanishes is determined by the device's distance from its image device and not by the examined array configuration or its orientation to the incident waves. As far as the array's efficiency is concerned, for small wave numbers the breakwater has a constructive effect on the array's power performance regardless of the examined body configuration or array arrangement, attaining absorbed power values four times higher than the no-wall cases. On the other hand, as the wave number increases, the distances between the devices and the vertical wall, as well as the wave heading angles and the number of bodies in the array, do not always have a constructive effect on the array's power efficiency.
In conclusion, when comparing the results from Figures 4-6, it can be seen that for wave numbers tending to zero the three examined configurations attain similar results. For wave numbers in the range kα ∈ [0.1, 0.3], the parallel-to-the-breakwater OWC arrangement performs better than the rectangular or perpendicular arrangements, attaining higher q-factor values. On the other hand, the breakwater has a destructive effect on the efficiency of the array arranged parallel to the vertical wall for wave numbers in the range kα ∈ [0.5, 0.6], whereas for the rectangular and perpendicular arrays the presence of the vertical wall has a positive impact on the array's power performance in the same wave number range. At wave numbers around the pumping resonance wave frequency, i.e., kα ≈ 0.7, the three examined arrangements attain similar results. Nevertheless, the array's efficiency increases as the number of bodies in the array increases. Finally, at large wave numbers, i.e., kα > 1, the array's power efficiency is mainly affected by the wave heading angle, i.e., the q-factor values increase as the wave heading angle increases.
Effect of the OWCs' Distance from the Breakwater
In this subsection, the effect of the distance between the OWCs and the breakwater on their wave power efficiency is examined. Here, a parallel array arrangement of five similar OWCs in front of a breakwater is considered (see Configuration 5 in Figure 3a). The characteristics of the examined OWCs are presented in Section 4.1, whereas here hc = 6.14α. The examined distances from the wall equal Lw = 3α, 6α, 9α, 12α, and the wave heading angles β = 0, π/6, π/4. Additionally, the air turbine coefficient inside the OWCs equals 10.60 m^5/(kN·s). Figure 7 depicts the modulus of the air pressure head inside the 1st OWC of the array, as well as the q-factor, for each examined distance between the devices and the breakwater and for the examined wave heading angles. It can be noted that the distance between the center of the devices and the breakwater significantly affects the air pressure head inside the 1st OWC device. Specifically, the values of the air pressure when the array is placed farther from the wall (i.e., Lw = 6α, 9α, 12α) oscillate around those of the Lw = 3α case. It can also be seen that the larger the distance of the converters from the wall, the stronger the oscillatory behavior of the air pressure RAOs. As far as the examined wave heading angles are concerned, it can be seen from the left side of Figure 7 that the inner air pressure graphs follow a similar pattern regardless of the wave heading angle. Nevertheless, it should be mentioned that the wave numbers at which oscillations in the air pressure occur are shifted to higher values as the wave heading angle increases. Concerning the efficiency of the array at the various distances from the breakwater, it can be seen (right side of Figure 7) that the examined distances between the devices and the vertical wall cause the array's absorbed power to increase at some kα values and to decrease in other wave number ranges, compared to the efficiency of five OWCs in isolation. This is due to the waves reflected from the wall, which impose a dominant oscillating behavior on the array's absorbed power values. Moreover, the larger the distance of the OWCs from the vertical wall, the earlier the oscillatory behavior of the absorbed wave power occurs. As far as the incident wave angles are concerned, the wave numbers at which oscillations in the power efficiency occur are shifted to higher values as β increases. Finally, it can be concluded that the efficiency results for the distances close to the wall (i.e., Lw = 3α, 6α) follow a steadier pattern at every examined wave number compared to the results of the large-distance cases (i.e., Lw = 9α, 12α), which exhibit a strongly oscillatory behavior.
Effect of the OWCs' Draught
Next, the effect of the OWCs' draught on their wave power efficiency is examined. Here, a parallel arrangement of an array of five similar OWCs in front of a breakwater is again considered (see Configuration 5 in Figure 3a). The characteristics of the examined OWCs are presented in Section 4.1, whereas Lw = 3α. The examined OWC draughts equal hc = 6.14α, 5.64α, 5.14α, 4.64α, and the wave heading angles β = 0, π/6, π/4. Due to the variation of the converter's draught, and consequently of its pumping resonance wave frequency, the air turbine characteristics for the OWC devices equal Λ = 10.6, 9.04, 10.47, 18.49 m^5/(kN·s) for the examined draughts hc = 6.14α, 5.64α, 5.14α, 4.64α, respectively, for α = 1 m. Figure 8 depicts the inner air pressure head of the 1st OWC device (left column) and the array's total absorbed power (as presented in Equation (12)), i.e., P(ω)/(H/2)^2 (right column), for the various examined OWC draughts and wave heading angles. From the depicted results, it can be seen that the draught of the OWCs affects the inner air pressure head as well as the array's efficiency. Specifically, as the draught of the converter increases, the pumping resonance wave frequency is shifted to lower values; thus, the wave numbers at which the inner air pressure values resonate are shifted to lower values too. Furthermore, it can be noted that as the draught of the OWC increases, the air pressure head inside the 1st OWC decreases. As far as the examined wave heading angles are concerned, β has a minor effect on the inner air pressure head for kα < 0.5. On the other hand, for kα > 0.5, the inner air pressure head inside the 1st OWC decreases as the wave heading angle increases. Concerning the absorbed wave power, the wave numbers at which the absorbed power is maximized (sharp peaks) do not remain constant as the floater's draught varies, since the pumping resonance wave frequencies are shifted to lower values. Furthermore, it is noted that the increase of the OWCs' draught has a destructive effect on the array's efficiency, as it results in a significant decrease of P(ω). In conclusion, increasing the OWCs' draught does not increase the array's efficiency; on the contrary, as the draught decreases, the array's absorbed power increases over a range of wave numbers.
Effect of the Distance between the OWCs
This subsection examines the effect of the distance between the devices on their wave power efficiency. Here, a parallel arrangement of an array of five similar OWCs in front of a breakwater is considered (see Configuration 5 in Figure 3a). The characteristics of the examined OWCs are presented in Section 4.1. Additionally, the distance between the center of the converter closest to the wall and the breakwater equals Lw = 3α, whereas the distance between the bottom of the external torus and the seabed equals hc = 6.14α. The air turbine coefficient inside the OWCs equals 10.60 m^5/(kN·s). Several distances between the converters of the array are examined, i.e., Lb = 4α, 8α, 16α, 32α. Figure 9 depicts the modulus of the air pressure head inside the 1st OWC of the array, as well as the q-factor, for each of the examined distance cases at zero wave heading angle. It can be seen that the inner air pressure of the 1st device does not seem to be affected by the distance between the OWCs, since similar results are obtained for each examined distance. Furthermore, the presented results follow the same variation pattern, regardless of the distance between the OWCs, at most of the examined wave numbers. It should be noted that the values of the air pressure inside the 1st OWC for the Lb = 32α case are similar to the corresponding values of an isolated OWC in front of a breakwater (see Figure 4a). On the other hand, the q-factor is significantly affected by the distance between the devices, especially at higher wave frequencies (i.e., kα > 0.7), where the q-factor values exhibit an oscillatory behavior. This behavior becomes more pronounced as the distance between adjacent OWCs increases. In contrast, for kα < 0.7, the efficiency of the array does not seem to be affected by the distance between the devices.
Conclusions
In this study, the efficiency of an OWC array placed in front of a bottom-mounted, surface piercing, fully reflecting, vertical breakwater is investigated in the frequency domain. The examined OWC-array consists of identical OWC devices containing an exterior partially immersed toroidal body supplemented by a coaxial interior, bottom mounted, free-surface piercing cylinder. In the annulus between the internal cylinder and the external torus, a finite volume air chamber is formed in which the oscillating air pressure is developed. A theoretical formulation based on the image method has been applied to simulate the effect of the vertical wall on the array's power absorption, whereas the wave interaction phenomena between the vertical wall and the converters have been taken into account using the multiple scattering approach.
Three different types of array configurations in front of the vertical wall have been studied, namely a parallel, a perpendicular, and a rectangular arrangement. Furthermore, five OWC array configurations have also been examined (i.e., arrays consisting of 1, 2, …, 5 bodies) for various distances between the devices and the breakwater, wave heading angles, device draughts, and distances between adjacent devices. Based on the theoretical computations shown and discussed in the dedicated sections, the main finding of the present research contribution concerns the significant effect of the breakwater on the array's efficiency, which is amplified compared to the performance of isolated OWCs (without the presence of the vertical wall). The power performance amplification is strongly affected by the array's orientation to the breakwater, the number of bodies that make up the array, the wave heading angle, the devices' distance from the breakwater, and the devices' draught. Therefore, it can be concluded that the installation of an OWC array in front of a vertical breakwater can be an effective way to improve its power absorption efficiency.
However, the infinite-wall assumption realized theoretically by the image method should be further examined with regard to possible limitations arising at specific wave frequencies. Following the remarks in [55] concerning an array of cylindrical bodies in front of a vertical breakwater, discrepancies between the results for a finite- and an infinite-length breakwater occur. These discrepancies are more pronounced at wave numbers tending to zero, whereas at higher wave numbers the results from both methods converge.
The present research will be continued by examining the effect of a V-shaped breakwater of arbitrary angle on the power efficiency of an array of OWCs and comparing it with the results of the present study.
Funding: This research received no external funding.
Conflicts of Interest:
Nomenclature
Wavelength
rk, θk, zk: Local co-ordinate system of the k-th OWC
Φ: Time-harmonic complex velocity potential
Velocity potential of the undisturbed incident harmonic wave
Scattered velocity potential of the q-th OWC
Diffraction velocity potential of the q-th OWC
Radiation velocity potential resulting from the inner air pressure in the p-th OWC
Amplitude of the oscillating pressure head in the chamber of the p-th OWC
g: Gravitational acceleration
ρ: Water density
Unit normal vector
δq,p: Kronecker's symbol
Mean wetted surface of the q-th OWC
I: The infinite ring element around the q-th OWC
II: The ring element below the q-th OWC
III: The ring element inside the chamber of the q-th OWC
Time-dependent air volume flow
Vertical velocity of the water surface in the OWC
Cross-sectional area of the inner water surface inside the OWC
Diffraction volume flow of the q-th OWC
Pressure-dependent volume flow of the q-th OWC
Radiation conductance of the q-th OWC
Radiation susceptance of the q-th OWC
Λ: Complex pneumatic admittance (air turbine coefficient)
Λopt: Air turbine coefficient optimum value
P(ω): Absorbed wave power by each OWC of the array
q-factor term
| 10,126.2 | 2020-11-12T00:00:00.000 | ["Engineering"] |
Adenovirus-vectored novel African Swine Fever Virus antigens elicit robust immune responses in swine
African Swine Fever Virus (ASFV) is a high-consequence transboundary animal pathogen that often causes hemorrhagic disease in swine with a case fatality rate close to 100%. Lack of treatment or vaccine for the disease makes it imperative that safe and efficacious vaccines are developed to safeguard the swine industry. In this study, we evaluated the immunogenicity of seven adenovirus-vectored novel ASFV antigens, namely A151R, B119L, B602L, EP402RΔPRR, B438L, K205R and A104R. Immunization of commercial swine with a cocktail of the recombinant adenoviruses formulated in adjuvant primed strong ASFV antigen-specific IgG responses that underwent rapid recall upon boost. Notably, most vaccinees mounted robust IgG responses against all the antigens in the cocktail. Most importantly and relevant to vaccine development, the induced antibodies recognized viral proteins from Georgia 2007/1 ASFV-infected cells by IFA and by western blot analysis. The recombinant adenovirus cocktail also induced ASFV-specific IFN-γ-secreting cells that were recalled upon boosting. Evaluation of local and systemic effects of the recombinant adenovirus cocktail post-priming and post-boosting in the immunized animals showed that the immunogen was well tolerated and no serious negative effects were observed. Taken together, these outcomes showed that the adenovirus-vectored novel ASFV antigen cocktail was capable of safely inducing strong antibody and IFN-γ+ cell responses in commercial swine. The data will be used for selection of antigens for inclusion in a multi-antigen prototype vaccine to be evaluated for protective efficacy.
Introduction
The African Swine Fever Virus (ASFV) is a high-consequence Transboundary Animal Disease (TAD) pathogen that causes hemorrhagic fever in swine and has mortality rates approaching 100% [1]. There is no vaccine or treatment available for this disease. The ASFV is a large, enveloped, double-stranded DNA icosahedral virus which exclusively infects the mammalian family of suids and argasid ticks of the genus Ornithodoros. This pathogen is responsible for major economic losses in endemic areas (sub-Saharan African countries and Sardinia) and poses a high risk to swine production in non-affected areas as it continues to spread globally [2]. Therefore, it is imperative that appropriate counter-measures are developed to reduce the prevalence of this disease in endemic areas, prevent further outbreaks in affected countries, and safeguard the swine industries in non-affected areas.
Development of an efficacious vaccine for ASFV is still a challenge. There is strong evidence to suggest that protection against ASFV can be induced, since attenuated virus has been shown to protect against parental or closely related virulent isolates [3][4][5]. Attenuated vaccines, however, are yet to be rigorously tested in the field in readiness for deployment. Development of an affordable DIVA (Differentiating Infected from Vaccinated Animals) ASFV subunit vaccine is a more attractive option, especially for use in non-endemic areas in case of an outbreak.
Subunit vaccines based on one or two ASFV antigens have so far failed to induce immunity strong enough to confer significant protection among vaccinees [6][7][8][9]. However, immunizing swine with DNA plasmids expressing a library of restriction-enzyme-digested ASFV-genome fragments conferred protection in a majority (60%) of the vaccinees against lethal challenge [10]. This result, though in favor of developing subunit-based vaccines for ASFV, also highlights the main challenges associated with them, i.e., identification of protective antigens as well as of a suitable delivery vector to induce strong protective responses. It is envisaged that successful development of an effective subunit vaccine will require empirical identification and validation of multiple suitable antigens that will induce significant protection in the majority of the vaccinees.
We have previously shown that immunizing swine with a cocktail of replication-deficient adenoviruses expressing the ASFV antigens p32, p54, pp62, and p72 elicited robust antigen-specific antibody, IFN-γ+ cellular, and cytotoxic T-lymphocyte (CTL) responses [11]. We used the E1-deleted/replication-defective human adenovirus (Ad5) vector since it is safe, gives high protein expression levels, and replicates at high titers in complementing cells, making production scalable and reproducible [12,13]. In addition, the efficacy of adenoviruses in swine immunizations has previously been demonstrated in the successful development of a recently USDA-licensed recombinant Foot and Mouth Disease vaccine [14,15]. In this study, we evaluated the immunogenicity of seven ASFV vaccine candidates selected based on published literature (Table 1).
The ability of these antigens to induce antibody and T-cell responses in commercial swine has not been evaluated so far. The antigen EP402R has been previously evaluated; however, only the extracellular domain was included, expressed as a fusion chimera along with other ASFV antigens, p32 and p54. In this study, we altered the EP402R protein sequence to delete the proline-rich repeats in the cytoplasmic tail, and the resultant protein was designated EP402RΔPRR. The proline-rich repeats have been shown to interact with the adaptor protein SH3P7 in host cells, and it is theorized that this interaction could, in part, be responsible for the immunomodulatory role of the EP402R protein [27]. Thus, deletion of the proline-rich repeats is expected to abrogate immunomodulatory effects when the EP402R protein is included in a multi-antigen subunit vaccine.
The focus of this work was to evaluate the immunogenicity of seven novel ASFV antigens in commercial swine using replication-deficient adenovirus as a delivery platform, with the end goal of identifying candidates for rationally designing a prototype multi-antigen ASFV subunit vaccine.
Generation of recombinant adenoviruses expressing ASFV antigens
The amino acid sequences of the ASFV antigens (Georgia 2007/1 isolate) were obtained from GenBank (Accession FR682468). The EP402RΔPRR sequence was generated by deleting the proline-rich repeats from the EP402R cytoplasmic domain [27]. Since the K205R and A104R polypeptides are short, they were fused in frame to generate a chimeric sequence, designated K205R-A104R. The coding sequences of the target antigens (A151R, B119L, B602L, EP402RΔPRR, B438L, and K205R-A104R) were then modified to add, in frame, a FLAG- and an HA-tag at the N- and C-termini, respectively, and the resultant amino acid sequences were used to generate synthetic genes which were codon-optimized for protein expression in swine. Synthesis, codon-optimization, cloning into the pUC57 vector, and sequence verification of these genes were outsourced (GenScript, NJ, USA). Each target gene was then amplified by PCR using attB1-FLAG-specific forward and attB2-HA-specific reverse primers and subcloned into the Gateway pDonR221 vector (Invitrogen) as per the manufacturer's protocols. Positive pDonR clones were validated by sequencing and used to transfer the gene cassette into the adenovirus backbone vector, pAd/CMV/V5-DEST (Invitrogen), by homologous recombination. Validated positive pAd clones were then used to generate recombinant adenoviruses, designated AdA151R, AdB119L, AdB602L, AdEP402RΔPRR, AdB438L, and AdK205R-A104R, using the ViraPower Adenoviral Expression System (Invitrogen). Antigen expression by the adenoviruses was confirmed by immunocytochemistry of infected Human Embryonic Kidney (HEK)-293A cells (ATCC CRL-1573). One clone of each recombinant adenovirus was selected based on protein expression and amplified in a T75 tissue culture flask to make a working stock. The working stock was then used to infect up to 40 T175 flasks to generate bulk virus for immunization. The infected cells were harvested, lysed by freeze-thawing three times, and the lysate was recovered. The titer of the virus in the cell lysates was determined by immunocytochemistry [16].
Table 1 (excerpt): B119L - critical for virus assembly; 90% of deletion mutants are crippled and fail to generate viable viral particles [4], [17]. B602L - chaperone for p72 (major capsid protein); repression leads to decreased p72 expression and inhibition of pp220 and pp62 processing; deletion severely alters viral assembly; recognized by domestic pig and bush pig hyper-immune sera [18], [19], [20], [21].
Generation of recombinant ASFV antigens
The genes for antigens A151R, B602L, EP402RΔPRR, B438L, and K205R-A104R were PCR-amplified from the respective pDonR clones using FLAG-specific forward and HA-specific reverse primers. The resultant PCR products were cloned into the pFastBac™ HBM TOPO shuttle vector (Invitrogen). Positive clones were identified by PCR screening, sequence-verified, and used to generate recombinant baculoviruses using the Bac-to-Bac HBM TOPO Secreted Expression System (Invitrogen). Protein expression by the generated viruses was confirmed by immunocytochemistry of infected Sf-9 cells. One clone of each baculovirus was then scaled up and used to infect High Five cells (Invitrogen) to generate recombinant proteins. These proteins were affinity-purified using the anti-FLAG M2 affinity gel (Sigma, A2220). Recombinant B119L was affinity-purified similarly, but from AdB119L-infected HEK-293A cell lysates.
Validation of protein expression
Immunocytochemistry. Protein expression by the recombinant adenoviruses was evaluated by immunocytochemistry as described previously [28]. Briefly, HEK-293A cell monolayers infected with the recombinant adenoviruses were probed with mouse anti-HA-alkaline phosphatase conjugate (Sigma, St. Louis, MO) diluted 1:1,000 in blocking buffer (PBS with 5% fetal bovine serum). Duplicate infected HEK-293A cell monolayers were first incubated with a gamma-irradiated convalescent swine serum (1:250 dilution) [11]. Following three washes, the cells were further incubated with a 1:500 dilution of alkaline phosphatase-conjugated goat anti-porcine IgG (Southern Biotech, Cat# 6050-04) for 1 hr. Following washes as above, Fast Red TR-Naphthol AS-MX substrate (Sigma, F4523) was added to the cells to detect the alkaline phosphatase activity. Protein expression by the recombinant baculoviruses was similarly evaluated by infecting Sf-9 cells. Mock-infected cells served as negative controls.
Swine immunizations. Twenty weaned swine were randomly distributed into the treatment and control groups (n = 10). The treatment group was immunized with the Ad-ASFV cocktail (1 × 10^11 IFU of each construct) formulated in ENABL adjuvant (Benchmark Biolabs, Cat# 7010106-C6). The control group received Ad-Luc (6 × 10^11 IFU) formulated as above. The inoculum (5 ml) was injected intramuscularly (1-2 ml/site) in the neck area behind the ears. The animals were then boosted similarly after 8 weeks. Blood was collected for sera and PBMC isolation once pre-immunization, then biweekly post-prime, and then weekly post-boost for 3 weeks to run ELISAs and IFN-γ ELISPOTs. The animals were euthanized at 4 weeks post-boost.
ELISA
Antigen-specific antibody responses were evaluated by an indirect ELISA as previously described [11]. Briefly, microplates coated overnight at 4 °C with 100 μl of 1 μg/ml affinity-purified antigen in bicarbonate coating buffer were washed and blocked with 10% non-fat dry milk in PBS with 0.1% Tween 20 for 1 hr. Sera were diluted 1:100 (week 4 post-prime) or 1:8,000 (week 2 post-boost) in blocking buffer and added at 100 μl per well in triplicate. After incubation for 1 hr at 37 °C, the plates were washed and incubated for another hour with 100 μl/well of a 1:5,000 dilution of peroxidase-conjugated anti-swine IgG (Jackson ImmunoResearch, Cat# 114-035-003). Following washes, the plates were developed with Sure Blue Reserve TMB substrate (KPL, Cat# 53-00-02) and the reaction was stopped with 1N hydrochloric acid. The absorbance at 450 nm was read using a BioTek microplate reader (Synergy H1 Multi-Mode reader). The IgG response of each animal to each antigen was calculated as the mean absorbance of the test sera minus the mean absorbance of the cognate pre-immunization sera. To determine antigen-specific IgG end-point titers, sera from blood collected two weeks post-boost were serially diluted two-fold starting at 1:4,000 up to 1:4 × 10^6. The pre-immunization serum was similarly diluted. The end-point titer was calculated as described previously [11].
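For readers implementing the end-point titer rule used here (a "pre-bleed mean + 3 standard deviations" cut-off, see also the Fig 3 legend), the following Python sketch applies that rule to serially diluted test sera. The function name, data layout, and the use of a single cut-off across all dilutions are illustrative assumptions; the published calculation in [11] may differ in detail.

```python
import numpy as np

def endpoint_titer(test_od_by_dilution, prebleed_od):
    """Return the reciprocal end-point titer of a test serum.

    test_od_by_dilution : dict mapping reciprocal dilution (e.g. 4000 ... 4_000_000)
                          to a list of replicate A450 readings
    prebleed_od         : replicate A450 readings of the cognate pre-immunization serum
    The end-point titer is the highest dilution whose mean absorbance still exceeds
    the pre-bleed mean plus three standard deviations; None if no dilution qualifies.
    """
    cutoff = np.mean(prebleed_od) + 3 * np.std(prebleed_od)
    positive = [d for d, ods in test_od_by_dilution.items() if np.mean(ods) > cutoff]
    return max(positive) if positive else None
```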
Indirect Fluorescence Antibody assay (IFA)
Pretreated Teflon-coated slides with fixed ASFV (Georgia 2007/1)-infected and mock-infected VERO cells (ATCC CCL-81) were used to perform the IFA as previously described [11]. Briefly, the slides were incubated with sera from two weeks post-boost, diluted 1:250, for 1 hr at 37 °C. ASFV-specific convalescent serum (1:10,000) was used as a positive control and normal swine serum (1:250) (GIBCO) was used as a negative control. Following extensive washes with D-PBS, the wells were incubated with FITC-conjugated goat anti-swine sera (Kirkegaard and Perry, Cat No. 02-14-02) for 45 minutes at 37 °C, washed again, and mounted with Prolong Gold antifade reagent with DAPI (Invitrogen, Cat. No. PT389868). The cells were visualized at 40X magnification using an Olympus immunofluorescence microscope (model BX-40) and photographed with an Olympus digital camera (model DP 70). The IFAs were conducted at Plum Island Animal Disease Center.
Western blot with ASFV-infected cell lysates
Lysates from ASFV Georgia 2007/1 (VERO cell-adapted)-infected VERO cells were used to perform a western blot as previously described [11]. Briefly, the prepared cell lysates were electrophoresed on a NuPAGE 4-12% Bis-Tris gel (1.0 mm × 2D well) for 35 min, followed by transfer to 0.2 μm PVDF membranes (Invitrogen #LC2002) for 1 hour. The membranes were then blocked for 1 hr in blocking buffer (PBST + 5% non-fat dry milk) and transferred to the Protean II slot-blotter. Sera from week 2 post-boost were diluted 1:250 in blocking buffer and added to individual wells for 1 hr at room temperature with shaking. After washing the wells three times with PBST, the membranes were removed from the blotting apparatus and incubated for 1 hr with a 1:2,000 dilution of goat anti-swine-HRP (KPL #14-14-06). Following washes, the membranes were developed using DAB (Sigma #D4293). ASFV-specific convalescent serum (1:10,000) was used as a positive control and normal swine serum (1:200) was used as a negative control. Background reactivity to host-cell antigens was gauged similarly using mock-infected lysates. The western blot analysis was carried out at Plum Island Animal Disease Center.
IFN-γ ELISPOT assays
Antigen-specific IFN-γ+ cell responses were evaluated by an enzyme-linked immunospot (ELISPOT) assay using the Mabtech kit (Cat# 3130-2A), as per the manufacturer's instructions and as previously described [11]. Briefly, whole blood-derived PBMCs resuspended in complete RPMI-1640 medium were added to wells of MultiScreen-HA plates (Millipore) at a density of 250,000 cells/well. Affinity-purified antigens were added to the cells at a final concentration of 2.5 μg/ml in triplicate. Phytohemagglutinin (PHA) mitogen (5 μg/ml) was used as a positive control, whereas medium served as the negative control. The spots were counted by an ELISPOT reader and AID software (AutoImmun Diagnostica V3.4, Strassberg, Germany). The mean number of IFN-γ+ Spot-Forming Cells (SFC) for each sample was calculated by subtracting the mean number of spots in the negative control wells from the mean number of spots in the sample wells. The data are presented as mean number of SFC per 10^6 PBMCs.
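The spot-count normalization described above can be summarized in a short sketch; whether negative background-corrected counts are clipped to zero is an assumption, as is the function name.

```python
import numpy as np

def sfc_per_million(antigen_wells, media_wells, cells_per_well=250_000):
    """Antigen-specific IFN-gamma spot-forming cells (SFC) per 1e6 PBMCs:
    mean spots in antigen-stimulated wells minus mean spots in media-only wells,
    scaled from the plated cell number (250,000/well) to one million cells."""
    specific = np.mean(antigen_wells) - np.mean(media_wells)
    return max(specific, 0.0) * (1_000_000 / cells_per_well)
```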
Statistical analysis
The differences in the mean antigen-specific antibody and IFN-γ+ responses between the treatment and the control groups were analyzed by an unpaired t-test with Welch's correction, and a P value of less than 0.05 was considered significant. The analysis was performed with GraphPad Prism version 6.05.
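The same comparison can be reproduced outside GraphPad Prism, for example with SciPy's Welch-corrected unpaired t-test; the helper below is a minimal sketch, not the analysis script used in the study.

```python
from scipy import stats

def compare_groups(treatment, control, alpha=0.05):
    """Unpaired t-test with Welch's correction (unequal variances assumed)."""
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
    return t_stat, p_value, p_value < alpha
```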
Ethics statement
All animal procedures were conducted as per Animal Use Protocol 2014-0020, reviewed and approved by the Texas A&M University Institutional Animal Care and Use Committee (IACUC). The Texas A&M IACUC follows the regulations, policies, and guidelines outlined in the Animal Welfare Act (AWA), the USDA Animal Care Resource Guide, and the PHS Policy on Humane Care and Use of Laboratory Animals. The animals were monitored twice daily for clinical signs and to document any localized and/or systemic adverse effects. At the termination of the study, the animals were euthanized with an overdose of sodium pentobarbital.
Recombinant constructs encoding ASFV antigens
Codon-optimized synthetic genes encoding the antigens A151R, B119L, B602L, EP402RΔPRR, B438L, and K205R-A104R, fused in frame to FLAG and HA tags, were used to generate recombinant adenoviruses designated AdA151R, AdB119L, AdB602L, AdEP402RΔPRR, AdB438L, and AdK205R-A104R. The immunogenicity of K205R and A104R was evaluated as a chimera since both proteins are relatively small (~20 kDa and ~10 kDa) and delivering them in vivo as a chimera would reduce the number of adenoviruses to be inoculated. Evaluation of protein expression by immunocytochemistry of adenovirus-infected HEK-293A cells, using an anti-HA mAb and the ASFV-specific convalescent serum, showed that the assembled recombinant adenoviruses expressed full-length authentic ASFV antigens (Fig 1A and 1B). The synthetic ASFV genes were also used to generate recombinant baculoviruses for the production of affinity-purified recombinant proteins needed for in vitro evaluation of antigen-specific antibody and cell responses. However, despite several attempts, we were unsuccessful in generating a recombinant baculovirus expressing B119L; we therefore used affinity-purified antigen from AdB119L-infected HEK-293A cells for in vitro readouts. The authenticity of the affinity-purified recombinant proteins was validated by western blot using the ASFV-specific convalescent serum (Fig 1C). Strong bands were detected for all antigens except B438L. The predicted molecular weight of B438L (with the FLAG and HA tags) is ~56 kDa. A very faint, diffuse band (depicted by an arrow) slightly below 75 kDa was observed for antigen B438L. The antigen loads had been optimized for signal detection; however, for antigen B438L a strong signal was not detected despite increasing the antigen load to microgram quantities (see S1 Fig). This could be a result of low levels of B438L-specific antibodies in the ASFV-specific convalescent serum. To confirm this observation, we performed western blots of antigens A151R (as a positive control) and B438L, probed with either an anti-HA mAb or the ASFV-specific convalescent serum (S2 Fig). The band at slightly below 75 kDa was detected with the anti-HA mAb for antigen B438L; however, no signal was seen with the convalescent serum. This validates that the absence of a strong signal for B438L in the western blot in Fig 1C was indeed due to low levels of B438L-specific antibodies in the serum (also confirmed by ELISA, discussed later in the manuscript) and not an insufficient antigen load.
Ad5-ASFV cocktail primed ASFV antigen-specific antibodies
Twenty commercial swine were randomly divided into two groups (n = 10). Animals in the treatment group were immunized with a cocktail of six recombinant adenoviruses expressing the A151R, B119L, B602L, EP402RΔPRR, B438L, and K205R-A104R ASFV antigens, whereas the negative control group received the Ad-Luc sham treatment. After priming, antigen-specific IgG responses were detected in a majority of swine in the treatment group, but not in the control group. Data from sera analyzed four weeks post-priming are shown (Fig 2A). The mean response of the treatment group was significantly higher than that of the control group for antigens A151R (p<0.001), B119L (p<0.01), B602L (p<0.001), B438L (p<0.05), and K205R-A104R (p<0.001). The mean antibody response against the EP402RΔPRR antigen in the treatment group was slightly higher than in the controls but not significantly so. The strong mean responses observed against antigens B602L and K205R-A104R are consistent with previous studies in which these antigens were shown to be strongly recognized by domestic pig and bush pig hyper-immune sera [20,21]. Following boosting at 8 weeks post-priming, antigen-specific recall IgG responses against all antigens were detected in the animals of the treatment group (Fig 2B). The mean response of the treatment group was significantly higher than that of the control group for antigens A151R (p<0.01), B119L (p<0.001), B602L (p<0.05), EP402RΔPRR (p<0.05), and K205R-A104R (p<0.01), but not for antigen B438L. It is important to note that the responses at week 2 post-boost were evaluated at a 1:8,000 serum dilution, whereas the responses post-prime were evaluated at a 1:100 serum dilution (Fig 2). This eliminated the background responses observed against some antigens post-prime in the control group. However, for antigen B119L, the control group still had a low level of background reactivity after boosting. This background response could be attributed to vector- and host-cell-line (HEK-293A)-specific antibodies, since the affinity-purified B119L antigen was derived from lysates of AdB119L-infected HEK-293A cells. The response seen in the treatment group is therefore also likely to include a low level of vector- and host-cell-line-specific antibodies. Evaluation of antigen-specific end-point titers post-boost in the immunized pigs showed that a majority of the vaccinees had titers ≥1:256 × 10^3 against antigens A151R, B119L, B602L, and K205R-A104R (Fig 3). The highest titer was 1:2 × 10^6, against B602L, in one of the vaccinees (Fig 3). A comparison of the antigen-specific titers in sera from the vaccinees with the titer of the ASFV-specific convalescent serum revealed that the Ad-ASFV cocktail was able to induce titers higher than or equivalent to the convalescent serum in a majority of animals for antigens B119L (90% of vaccinees), B438L (90% of vaccinees), B602L (80% of vaccinees), and EP402RΔPRR (80% of vaccinees). This is a noteworthy result, since these animals received only two immunizations of the Ad-ASFV cocktail, whereas the positive control convalescent serum came from an animal that received multiple inoculations of live ASFV [11]. However, for antigen K205R-A104R only 3 of 10 vaccinees had titers that matched the convalescent serum, whereas for antigen A151R the titers induced in the vaccinees did not match the convalescent serum.
The role of antibodies in ASFV protection is not yet completely understood; however, strong evidence in favor of antibodies (reviewed in [29]) and, importantly, the protection conferred by passively acquired anti-ASFV antibodies support the evaluation of humoral responses in immunogenicity studies focused on the identification of novel targets for subunit vaccine development [30,31]. In the current study, a cocktail of replication-incompetent adenovirus constructs expressing multiple ASFV antigens primed strong antibody responses against all antigens in a majority of the animals.
Antibodies triggered by the Ad5-ASFV cocktail recognized ASF virus
Indirect Immunofluorescence Antibody Assay (IFA) performed with sera from blood collected from the vaccinees two weeks post-boost confirmed that the antibodies triggered by the Ad5-ASFV cocktail recognized VERO cells infected with the actual ASF virus (Georgia 2007/1 isolate) but not mock-infected cells (Fig 4A). Sera from 8 out of 10 swine in the treatment group, but none from the controls, recognized the ASFV-infected cells (Table 2). Sera from 2 animals (swine 89 and swine 91) were the most reactive and reacted with the plasma membrane, a virus factory-like structure, and the general cytoplasm. Western blot analysis of ASFV-infected Vero cell lysates probed with the post-boost sera also validated the above results (Fig 4B). This outcome showed that synthetic genes encoding antigens of ASFV (a Risk Group 3 pathogen that requires BSL3 biocontainment) can safely be used at BSL2 level to develop and test the immunogenicity and tolerability of prototype ASFV vaccines. These results, however, do not directly demonstrate that the ASFV-specific antibodies have functional activity. In the case of ASFV, it is generally accepted in the scientific community that the conventional plaque reduction assay to measure ASFV antibody neutralization activity is technically difficult, since low-passage (virulent) ASFV strains show no or significantly delayed plaque formation, and the assay is especially difficult to conduct in primary swine macrophage cells. A highly attenuated ASFV Georgia strain adapted to a suitable cell line (e.g., VERO cells), or a genetically modified ASF virus expressing a chromogenic marker gene, for use in testing study samples for virus neutralization activity, was not available at the time the study was conducted.
Ad5-ASFV cocktail primed IFN-γ-secreting cells
Low frequencies of antigen-specific IFN-γ responses were detected in a few animals by IFN-γ ELISPOT analysis of PBMCs collected one week post-priming (Fig 5A). Specifically, a significant difference (p<0.05) between the mean responses of the treatment group and the negative control group was detected only for antigen A151R (Fig 5A). However, after boosting, strong recall IFN-γ+ responses were detected in a majority of animals for all the antigens (Fig 5B). The mean response of the treatment group was significantly higher than that of the control group for all antigens (p<0.05 for antigens B119L, B602L, and EP402RΔPRR, and p<0.01 for antigens A151R, B438L, and K205R-A104R). The IFN-γ ELISPOT data clearly showed that the homologous booster dose was able to sufficiently amplify the primary response to give strong recall responses against all antigens in a majority of the vaccinees (Fig 5). The high frequencies of antigen-specific IFN-γ+ cellular responses induced are promising in light of the results reported from other subunit vaccine studies. Notably, immunization with a ubiquitin-tagged chimera of antigens p30, p54, and CD2v using DNA plasmids conferred protection against lethal challenge in some of the vaccinees [7]. In addition, in another study by the same authors, immunizing animals with BacMams expressing the same antigen chimera (p30, p54, and CD2v) conferred partial protection upon sub-lethal challenge, and a direct correlation between protection and the ASFV-specific IFN-γ+ response was observed [6]. Interestingly, in both studies the IFN-γ response against the extracellular domain of EP402R was negligible. We have shown that the adenovirus-vectored EP402RΔPRR induced strong antigen-specific IFN-γ+ responses in 70% of the vaccinees post-boost.
Ad5-ASFV cocktail was well tolerated
Following inoculation of the Ad-ASFV cocktail, three swine in the treatment group were observed to be depressed and one had a mild fever on the first day. However, all the swine were normal on all subsequent days. After boosting, one pig in the treatment group was observed to be depressed and had a fever that required treatment with Banamine. All the swine in the negative control group were normal post-priming and post-boosting. Overall, the Ad-ASFV cocktail was well tolerated, with no serious adverse effects.
The overall outcome is evidence that a vaccine formulated using a cocktail of replication-incompetent adenoviruses expressing protective ASFV antigens is likely to be well tolerated by commercial swine at doses as high as 10^11 IFU used in a homologous prime-boost immunization regimen. This scenario is anticipated since effective ASFV subunit vaccines will likely require delivery of multiple antigens, given that studies conducted so far have shown that a combination of one or a few antigens does not confer complete protection.
Conclusion
The African Swine Fever Virus (ASFV) continues to pose a high risk to the swine industry and it is still causing economic losses in endemic areas.Since there is no vaccine or treatment available yet, it is important to identify viral proteins that can elicit strong immune responses and therefore be considered viable candidates for subunit vaccine development.We have optimized an adenovirus-vector based ASFV antigen delivery system which allows for immunization of swine with multiple ASFV antigens and the subsequent evaluation of their immunogenicity.The robust antigen-specific IFN-γ + responses induced by the adenovirus vector against all the antigens tested in this study as well as other ASFV antigens evaluated in our previous study make it a promising delivery platform for testing vaccine candidates for protection against ASFV [11].Upon investigation of antigen-specific responses of individual animals, we observed a significant (p<0.05)positive correlation between the antigen-specific IFN-γ response and the antigen-specific end-point antibody titers for 4 of the 6 antigens (see S3 The inability of this antigen to induce strong antibody responses was corroborated by the fact that the ASFV-specific convalescent serum also had a comparatively low B438L-specific titer (1:4,000).Thus, even though B438L does not induce a high antibody response, it still is an attractive in coloration are due to actual band intensities; darker color is higher concentration of antibody bound to antigen (antigen concentration is constant).https://doi.org/10.1371/journal.pone.0177007.g004The number of '+' signs represents the comparison between the intensity of a positive signal from the sera of the animals and that from the ASFV-specific convalescent serum (positive control).
'++++': signal as strong as the positive control; '+': weakest but positive signal; '-': no signal detected. https://doi.org/10.1371/journal.pone.0177007.t002

This study also showed that an adenovirus-based ASFV vaccine can be used successfully for homologous prime-boost vaccination. If this approach is shown to confer protection, it will cut the costs incurred by use of a heterologous prime-boost immunization strategy. Thus, these findings support use of the replication-incompetent adenovirus as a vector for the development of a commercial vaccine for protection of pigs against African swine fever virus. The next logical step is to test whether these multiple ASFV antigens delivered in a cocktail format can confer protection in a challenge study.
Fig 1 .
Fig 1. Protein expression by ASFV constructs. The expression and authenticity of the ASFV antigens encoded by the generated recombinant constructs were evaluated by immunocytochemistry and western blot analysis. Panels: A) HEK-293A cells infected with recombinant adenoviruses and probed with anti-HA mAb; and B) HEK-293A cells infected with recombinant adenoviruses and probed with gamma-irradiated ASFV-specific convalescent serum. Negative controls are mock-infected HEK-293A cells. C) A western blot of the affinity-purified ASFV proteins probed with the convalescent serum. The molecular weights are expressed in kDa. The arrow points to the faint band detected for B438L (the signal intensity of the band increased with longer exposure times). https://doi.org/10.1371/journal.pone.0177007.g001
Fig 2 .
Fig 2. Mean antigen-specific IgG responses post-priming and post-boost. Antigen-specific IgG response was evaluated post-prime and post-boost by ELISA. A) Sera from week 4 post-prime were evaluated at a 1:100 dilution. B) Sera from week 2 post-boost were evaluated at a 1:8,000 dilution (to prevent the absorbance values from going out of range). The error bars represent the SEM. The asterisks denote a significant difference between the mean response of the treatment and control animals. *p<0.05, **p<0.01, ***p<0.001. https://doi.org/10.1371/journal.pone.0177007.g002
Fig 3 .
Fig 3. Antigen-specific end-point IgG titers. Antigen-specific antibody titers, determined by ELISA, in sera from treatment group animals (T) collected two weeks post-boost. The dilution of the sera at which the absorbance reading was higher than that of the cognate pre-bleed + 3 standard deviations is reported as the end-point titer. The ASFV-specific convalescent serum was titrated similarly and is represented by the red star symbol (S). Data is represented as the reciprocal of the end-point sera dilution × 10^6. For antigen B119L, the sera from control group animals were also titrated to gauge background reactivity to host-cell and vector-derived antigens. An average of the titers of the control group animals was then subtracted from the titer of each treatment group animal to give B119L-specific titers. For the remaining antigens, the post-boost sera from the control group animals showed no reactivity, as seen in Fig 2B.
Fig 4 .
Fig 4. Antibodies primed by the Ad-ASFV cocktail recognized ASF virus. Analysis of sera from two weeks post-boost by Indirect Fluorescence Antibody assay (IFA) and western blot showed that the antibodies primed by the Ad-ASFV cocktail recognized the parental ASFV-infected cells and ASFV-derived antigens. Panel A) Vero cells infected with ASFV Georgia 2007/1 probed with sera from treatment and negative control animals. ASFV-specific convalescent serum was used as the positive control and normal swine serum served as the negative control. Data for three animals (that gave the strongest reaction) from the treatment group and one animal from the control group are shown. A summary of IFA results for all animals is presented in Table 2. B) Lysates from Vero cells infected with the ASFV Georgia 2007/1 isolate were blotted and probed with sera from all animals. Normal swine serum was the negative control and ASFV-specific convalescent serum was the positive control. Differences in coloration are due to actual band intensities; a darker color indicates a higher concentration of antibody bound to antigen (antigen concentration is constant). https://doi.org/10.1371/journal.pone.0177007.g004
Fig 5 .
Fig 5. ASFV antigen-specific IFN-γ response post-prime and post-boost. The frequency of antigen-specific IFN-γ-secreting cells in PBMCs collected post-prime and post-boost was evaluated by IFN-γ
Table 1. Antigens selected for evaluation of immunogenicity. Columns: Gene/Antigen; Functional Characteristics/Immune Relevance; Reference.
A151R: Essential for virus replication and morphogenesis. May play a role in viral transcription. | 7,190.4 | 2017-05-08T00:00:00.000 | [
"Biology"
] |
Unsupervised Single-Scene Semantic Segmentation for Earth Observation
Earth observation data have huge potential to enrich our knowledge about our planet. An important step in many Earth observation tasks is semantic segmentation. Generally, a large number of pixelwise labeled images are required to train deep models for supervised semantic segmentation. On the contrary, strong intersensor and geographic variations impede the availability of annotated training data in Earth observation. In practice, most Earth observation tasks use only the target scene without assuming availability of any additional scene, labeled or unlabeled. Keeping in mind such constraints, we propose a semantic segmentation method that learns to segment from a single scene, without using any annotation. Earth observation scenes are generally larger than those encountered in typical computer vision datasets. Exploiting this, the proposed method samples smaller unlabeled patches from the scene. For each patch, an alternate view is generated by simple transformations, e.g., addition of noise. Both views are then processed through a two-stream network and weights are iteratively refined using deep clustering, spatial consistency, and contrastive learning in the pixel space. The proposed model automatically segregates the major classes present in the scene and produces the segmentation map. Extensive experiments on four Earth observation datasets collected by different sensors show the effectiveness of the proposed method. Implementation is available at https://gitlab.lrz.de/ai4eo/cd/-/tree/main/unsupContrastiveSemanticSeg.
I. INTRODUCTION
Rapid development of remote sensing technologies has drastically increased the quantity of Earth observation sensors acquiring images with different spatial, spectral, and temporal resolution [1], [2]. A large volume of unlabeled images is currently available for characterizing various objects on the Earth's surface. Automatic analysis of such images is useful to study various anthropogenic and natural factors, including urban monitoring [3], disaster management [4], [5], agricultural monitoring [6], and monitoring natural resources' exploitation [7].
An important step in understanding images is semantic segmentation, which assigns each pixel in an image/scene to a meaningful category or class. This is true for both computer vision and Earth observation images [8]. Research toward supervised image segmentation methods has received significant attention in the era of deep learning, which has outperformed previous methods [9]-[11]. The superior performance of deep learning, especially convolutional neural networks (CNNs), for semantic segmentation can be attributed to their capability to learn spatial features from a large volume of labeled data. Most computer vision problems can use crowdsourcing [12] to collect a large volume of labeled data. However, collecting labeled data in Earth observation is significantly challenging due to several factors that require domain expertise, including variation among different Earth observation sensors and disparity among different applications. Moreover, active (e.g., synthetic aperture radar) and lower resolution optical images are visually unintelligible, thus making them difficult to be labeled by a volunteer in a crowdsourcing platform. Thus, the applicability of supervised segmentation has been limited on Earth observation images due to the lack of labeled data [13]. Moreover, many Earth observation applications assume the presence of only the target scene and no additional scene [14], [15]. Analysis using only the target scene can be especially useful for quick disaster mapping when there is little time to collect additional unlabeled images.
Recently, unsupervised and self-supervised learning have gained significant attention in machine learning. Such approaches have been devised for different problems, e.g., image clustering [16], video analysis [17], and change detection in Earth observation images [18].

Fig. 1. A batch of patches is extracted from the training scene. The model is trained from this batch using deep clustering. Furthermore, this batch is simply transformed and shuffled to form two other batches, the first of which must be similar to the original batch in the feature space and the other dissimilar to it.

While most
deep-learning-based semantic segmentation methods are supervised [19], [20], unsupervised semantic segmentation methods have been proposed in the literature exploiting deep clustering [21]. Deep-clustering-based approaches have also been extended for Earth observation bitemporal image analysis [3]. As such, self-supervised learning can be potentially used to learn from a single unlabeled scene.
Earth observation scenes generally capture a geographic area and are significantly large in comparison to images in a typical computer vision dataset. As an example, scenes in the International Society for Photogrammetry and Remote Sensing (ISPRS) semantic labeling dataset [22] are up to 6000 × 6000 pixels. Due to the repetitive nature of geographic objects, an Earth observation scene generally captures many instances of the same objects in a single scene. Based on this, we propose to sample smaller patches from a large scene. When randomly sampled, many such patches essentially represent the same object category (e.g., buildings). By taking a batch of patches, an augmented version can be conveniently obtained by data transformation, e.g., noise addition. This allows us to process the patches using a two-stream network similar to contrastive learning [23] and other multiaugmentation methods [24]. By jointly using concepts such as pixelwise deep clustering [25], similarity between multiple augmentations of the same input [24], and contrastive learning [23], we propose a self-supervised method to simultaneously train a network and assign pixelwise labels to an Earth observation scene. The conceptualization behind the proposed method is shown in Fig. 1. The key contributions of our work are as follows.
1) We propose a self-supervised segmentation method that does not require any annotated data and can be trained using a single unlabeled Earth observation scene, without requiring any additional pool of unlabeled data.
2) We use the concept of pixelwise deep clustering [25] to automatically discern different classes from a single remote sensing scene. We further use multiple augmentations of the same input [24] to ensure that similar inputs produce similar segmentation maps, and the concept of contrastive learning [23] to ensure that dissimilar inputs produce dissimilar outputs.
3) By performing a set of experiments using inputs of different sensors and resolutions, we show that the proposed method is able to automatically discern important Earth observation classes. This implies that, irrespective of the exact application, our method can be a precursor to further analysis in most such applications.
A. Deep Segmentation for Earth Observation
Popular deep-learning-based segmentation architectures include fully convolutional networks (FCNs) [19], U-Net [26], SegNet [27], and dilated convolutional models including DeepLab [28]. For Earth observation images, several supervised segmentation algorithms have been proposed using these architectures [8], [29]- [35]. However, these methods necessitate a large amount of training data for supervised learning. To deal with the lack of training data, Hua et al. [13] proposed a semantic segmentation approach that uses spatially sparse annotations to train the model. In [3], an unsupervised deep clustering algorithm is introduced for the problem of multitemporal Earth observation segmentation. To effectively capture the domain knowledge, Li et al. [36] combine the deep learning module and knowledge-guided ontology reasoning.
Compared with optical images, SAR image segmentation is more challenging due to the sensitivity to noise [37]. The traditional SAR segmentation methods rely on superpixel merging [38], [39]. There are very few methods using deep learning for SAR image segmentation [40]. Wang et al. [40] noted that to train an effective deep model for SAR semantic segmentation, it is important to have high-quality ground-truth data that are not always available.
B. Unsupervised and Self-Supervised Learning
Practicality of supervised methods is limited due to difficulty in acquiring labeled data. Unsupervised learning focuses on alleviating these limitations by learning semantic representations from unlabeled images without relying on predefined annotations. Clustering is an extensively studied unsupervised learning topic. Extending this, deep clustering [16] jointly optimizes the parameters of a deep network and the cluster assignments of the data in feature space. Deep clustering and its variants [41]- [44] divide a set of unlabeled training inputs into groups in terms of inherent latent semantics. Some self-supervised approaches use pretext tasks for learning semantic features [45], [46]. Popular pretext tasks include image rotation [45], jigsaw transformation [47], and rearranging of time-series [48]. Capitalizing on the availability of positive and negative pairs, contrastive methods aim to spread the representations of negative pairs apart while bringing closer the representations of the positive pairs [23], [49]. Bootstrap Your Own Latent (BYOL) [24] further eliminates the necessity of negative pairs using augmented instances of the input. Several works have shown that self-supervised learning can produce good representation even when available data are scarce [50]. Weakly supervised [48], [51], [52], unsupervised [53], and self-supervised learning [54], [55] have also been used in many remote sensing applications, e.g., cloud detection [52], change detection [53], and scene classification [54]. Tao et al. [54] used self-supervised learning for classification using limited label. Yue et al. [56] used self-supervised learning for hyperspectral scene classification.
C. Unsupervised Deep Segmentation
Aligned with the increased interest in unsupervised methods, efforts toward reducing supervision have gained traction in semantic segmentation [21], [57]. A simple yet effective approach toward this is using deep clustering in the pixel space [21], [58]. In [21], a lightweight architecture is used for single-image segmentation and output/label is obtained by arg-max classification of the final layer. Predicted pixel labels and network representation are adjusted in iterations. Pixel-level feature clustering using invariance and equivariance (PiCIE) [25] further exploits geometric consistency in addition to deep clustering for unsupervised segmentation.
Our work is closely related to the above-mentioned unsupervised methods. Like [21] and [25], it exploits pixelwise deep clustering. Our method relies on multiple augmentations of the same input, similar to BYOL [24]. Similar to [23], the method uses contrastive learning. The method focuses on single scene, thus further showing potential of deep self-supervised learning in data-constrained situation, similar to [50]. While works on self-supervised remote sensing classification [54] or self-supervised hyperspectral scene classification [56] still use some labeled samples, our method does not use any labeled sample.
III. METHODOLOGY
To describe the proposed idea, let us denote the available unlabeled scene/image as X and its transformed version as X̃, both having the same spatial dimensions R × C. Although any transformation T(X) could be used, we use simple transformations such as the addition of Gaussian noise. The transformed version can be taken as an alternative view of the same scene. This allows us to formulate the task of semantic segmentation at hand as a self-supervised problem, which typically exploits the idea of reducing the gap among feature representations of multiple views of the same image in an iterative manner without using any labeled data. Both X and X̃ are then processed through a two-stream network, and the weights are iteratively refined using pseudo labels generated via deep clustering [16] and a contrastive learning strategy [23] in the pixel space to automatically segregate the major classes present in the scene. The proposed method produces a segmentation map without using any explicit labels, as detailed in Sections III-A to III-F.
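As a small illustration of this step, the alternate view could be generated as follows. This is a minimal sketch assuming the scene is a NumPy array scaled to [0, 1] and additive Gaussian noise is the chosen transformation; the noise level is an arbitrary placeholder, not a value from this work.

```python
import numpy as np

def alternate_view(scene: np.ndarray, sigma: float = 0.05, seed: int = 0) -> np.ndarray:
    """Return a transformed view of the scene by adding Gaussian noise.

    `scene` is expected to be an (R, C, bands) array scaled to [0, 1];
    `sigma` is an illustrative noise level, not a value from the paper.
    """
    rng = np.random.default_rng(seed)
    noisy = scene + rng.normal(0.0, sigma, size=scene.shape)
    return np.clip(noisy, 0.0, 1.0)
```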
A. Proposed Network Architecture
To enable feature learning, a Siamese-like two-stream network architecture is proposed that takes as input the patches of size R' × C' (R' < R and C' < C) extracted from X and X̃. Each training batch is formed by drawing B patches from X, denoted as X = {x^1, . . . , x^B}, and the spatially corresponding patches from X̃, symbolized as X̃. Since the bispatial patches can be seen as multiple views of the same location, the semantic information can be inferred from them using the proposed Siamese-like architecture [59]. The two branches have the projection modules f_X and f_X̃ to obtain learned feature representations for the original and transformed images, respectively. These learned feature representations are then fed to the subsequent prediction modules h_X and h_X̃ to obtain the respective activation volumes. It is important to note that the projection modules do not share weights (hence, two-stream), while the prediction modules do share weights (i.e., h_X = h_X̃) and are therefore denoted using h only. The projection and prediction modules consist of L_1 and L_2 (in our case L_2 = 1) convolutional layers, respectively, where the total number of layers L is the sum of L_1 and L_2. Each convolution layer is followed by an activation function (rectified linear unit, ReLU) and a batch normalization layer. The input size is preserved in the output as pooling or stride is not used. The projection module uses convolution filters of size 3 × 3, whereas the prediction module is formed with 1 × 1 filters. The K kernels in the final layer group or cluster the input pixels into K classes.
The simplified network architecture for a five-layer network (L = 5) is shown in Table I. The reasoning behind using such an ad hoc lightweight architecture is as follows. 1) Given that the training mechanism is unsupervised and the training patches are sampled from a single scene, we have a limited number of patches. Thus, using a large network is ineffective in such a case. This is further supported by previous works on single-scene segmentation [21] that also used such a lightweight network. 2) Given that most Earth observation images have much coarser resolution compared with those in computer vision, small networks using only a few convolution layers can still capture the required spatial context.
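For concreteness, a minimal PyTorch sketch of such a two-stream network is given below. It follows the description above (non-shared 3 × 3 projection modules, a shared 1 × 1 prediction module, ReLU and batch normalization, no pooling or stride), but the number of input bands, filter width, and default depth are illustrative placeholders rather than the exact configuration of Table I.

```python
import torch
import torch.nn as nn

class ProjectionModule(nn.Module):
    """L1 conv layers (3x3, stride 1, padding 1) so the spatial size is preserved."""
    def __init__(self, in_bands: int, width: int = 64, l1: int = 4):
        super().__init__()
        layers, c = [], in_bands
        for _ in range(l1):
            layers += [nn.Conv2d(c, width, 3, padding=1), nn.ReLU(), nn.BatchNorm2d(width)]
            c = width
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class TwoStreamSegmenter(nn.Module):
    """Two projection modules (not weight-shared) and one shared 1x1 prediction module."""
    def __init__(self, in_bands: int = 3, width: int = 64, n_clusters: int = 8):
        super().__init__()
        self.f_x = ProjectionModule(in_bands, width)    # stream for X
        self.f_xt = ProjectionModule(in_bands, width)   # stream for the transformed view
        self.h = nn.Sequential(nn.Conv2d(width, n_clusters, 1),
                               nn.ReLU(), nn.BatchNorm2d(n_clusters))  # shared prediction module

    def forward(self, x, x_tilde):
        return self.h(self.f_x(x)), self.h(self.f_xt(x_tilde))
```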
B. Pseudo Label Activations
The patches x^b and x̃^b refer to patches extracted from the same location in X and X̃, respectively. The outcomes of the network for x^b and x̃^b can be represented as y^b = h(f_X(x^b)) and ỹ^b = h(f_X̃(x̃^b)), activation volumes of spatial size R' × C' with K channels. Each pixel in such a tensor can be viewed as a K-dimensional vector of activations. If we denote a generic i-th pixel in y^b as y_i^b, then we can obtain the prediction of the semantic label by simply selecting the kernel in y_i^b that has the maximum value. Based on this simple intuition, we formulate the pseudo label assignment as the process of computing c_i^b by finding the feature having the highest value in the K-dimensional pixel activation vector y_i^b.
C. Pseudo Label Loss Objective
The computed pseudo label c_i^b is thus considered as the label of the prediction y_i^b. This enables us to quantify a per-pixel cross-entropy loss ℓ_i^b between y_i^b and c_i^b. ℓ_i^b is aggregated (by computing the mean) over the pixels in x^b and the patches in the batch to obtain the loss L_p. L_p is used to adjust the weights of h and f_X. Similarly, L̃_p is computed from x̃^b (b = 1, . . . , B) and used to adjust the weights of h and f_X̃. Using L_p and L̃_p to iteratively adjust the weights of the network, the proposed method simulates deep clustering in the pixel space.
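A compact sketch of this pseudo-label loss, assuming activation volumes shaped (B, K, H, W) as produced by the network sketch above:

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(y: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between the activation volume and its own arg-max pseudo labels.

    y: (B, K, H, W) raw activations. The arg-max over the K channels acts as the
    pseudo label of each pixel, mimicking deep clustering in pixel space.
    """
    pseudo = y.argmax(dim=1).detach()          # (B, H, W) pseudo labels, no gradient
    return F.cross_entropy(y, pseudo)          # mean over pixels and patches
```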
D. Spatial Consistency
The bispatial patches x^b and x̃^b refer to the same location and hence to the same objects; therefore, the features computed for such a bispatial patch pair should be similar. To ensure this, we compute a per-pixel absolute error loss ℓ_i^b as the absolute difference between y_i^b and ỹ_i^b. The mean of ℓ_i^b over all pixels of all the patches in the batch gives the loss term L_s, which encourages the pixels in the bispatial patches x^b and x̃^b to have the same label. We note that the spatial consistency criterion is conceptually similar to bringing closer the multiple views of an input, as in some self-supervised learning methods [24]. However, differently from them, the spatial consistency loss aims to reduce the representation gap at the pixel level instead of the image level.
A pitfall of the spatial consistency loss is that merely trying to reduce the representation gap between x^b and x̃^b may yield a trivial solution that simply produces the same output for all pixels.
E. Representation Learning From Disparity
The spatial consistency loss encourages the features computed for a paired bispatial patch to be similar. To balance the overall training procedure, we also use a strategy similar to contrastive learning to ensure that the network also learns different feature representations for dissimilar patches. To create dissimilar patch pairs, we randomly shuffle the batch of patches X̃ to produce X̃'. This ensures that the paired patches in X and X̃' are indeed dissimilar. These dissimilar bispatial patches are then used to enable the model to learn disparate features for x^b and x̃'^b. Specifically, a (negative) absolute error loss is computed per pixel between y_i^b and the prediction for the corresponding shuffled patch, and its mean over all pixels and patches gives the contrastive loss term L_c.
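Under the same tensor conventions, the two remaining loss terms could be sketched as follows; the shuffling is a random permutation of the batch dimension. This is an illustrative reading of the description above, not the authors' code.

```python
import torch

def spatial_consistency_loss(y: torch.Tensor, y_tilde: torch.Tensor) -> torch.Tensor:
    """L_s: mean absolute difference between predictions for the two views of the same patches."""
    return (y - y_tilde).abs().mean()

def contrastive_loss(y: torch.Tensor, y_tilde: torch.Tensor) -> torch.Tensor:
    """L_c: negative mean absolute difference against a shuffled (hence dissimilar) batch."""
    perm = torch.randperm(y_tilde.size(0))
    return -(y - y_tilde[perm]).abs().mean()
```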
F. Progressive Network Training
The proposed mechanism for network training is shown in Fig. 2 and Algorithm 1. Initially, all the trainable weights W 1 , . . . , W L corresponding to all L layers in the network are initialized using He initialization strategy proposed in [60]. Instead, a pretrained network could have been used to initialize weights. However, we note that Earth observation deals with a variety of sensors with different specifics, and suitable pretrained network is not always available. This motivates us to exclude importing weights from pretrained networks.
For each batch of data, training is performed for J iterations in which the weights are iteratively optimized using stochastic gradient descent with momentum [61]. Sampling all possible patches from the training scene is equivalent to one epoch, and the training process is performed for a total of I epochs. Since the pseudo label losses (L_p and L̃_p) and the other two losses (L_s and L_c) have values in different ranges, the first epoch is optimized with the sum of L_p and L̃_p, while from the second epoch onward the sum of all four losses (L_p, L̃_p, L_s, and L_c) is used, which yields a balanced training process taking into account coherent cluster formation, spatial feature consistency, and feature dissimilarity for unpaired patches.
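A condensed sketch of this training schedule, reusing the loss functions sketched in the previous subsections (the paired-view patch loader and the epoch-dependent loss switch follow the description above; I, J, and the learning rate are the values reported in the Settings subsection, while the momentum value is an assumption):

```python
import torch

def train(model, patch_loader, epochs: int = 2, inner_iters: int = 50, lr: float = 1e-3):
    """Self-supervised training: pseudo-label losses in epoch 1, all four losses afterwards."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for epoch in range(epochs):
        for x, x_tilde in patch_loader:            # paired views of the same patch locations
            for _ in range(inner_iters):           # J iterations per batch
                y, y_t = model(x, x_tilde)
                loss = pseudo_label_loss(y) + pseudo_label_loss(y_t)
                if epoch > 0:                      # add L_s and L_c from the 2nd epoch onward
                    loss = loss + spatial_consistency_loss(y, y_t) + contrastive_loss(y, y_t)
                opt.zero_grad()
                loss.backward()
                opt.step()
```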
A. Dataset
We use the following datasets for experimental validation. 1) The Vaihingen dataset, an urban semantic segmentation benchmark [22], [62] acquired over Vaihingen, Germany, with a 9 cm/pixel resolution. The images in the dataset are composed of three bands-near infrared (NIR), red (R), and green (G)-and each image covers approximately 1.38 km². The images show six landcover classes: building, impervious surface, low vegetation, tree, car, and background. Following previous works [13], for testing we use the image IDs 11, 15, 28, 30, and 34, i.e., a total of five test scenes. We train our unsupervised model on a single scene, image ID 1.
2) The Zurich Summer dataset [63], acquired using the QuickBird sensor over Zurich, Switzerland. The images have a spatial resolution of 0.62 m/pixel. Following previous works [13], we use the NIR, R, and G bands in our experiments. Eight different urban classes are present: roads, buildings, trees, grass, bare soil, water, railways, and swimming pools. Image IDs 16-20 (i.e., a total of five test scenes) are used for testing, while we train our unsupervised single-scene model on image ID 1. 3) A polarimetric synthetic aperture radar (PolSAR) [64] scene showing an area in Germany comprising four classes [65]. Being characterized by speckle noise and a complex backscattering mechanism at the junction of different landcovers, PolSAR images are significantly different from optical images. Thus, the experiment on this dataset illustrates the application of the proposed method beyond typical optical images. Furthermore, due to their lower visual saliency, PolSAR scenes are challenging to label, and there are not many labeled PolSAR datasets. This further demonstrates the application of the proposed single-scene unsupervised method in a case where labels are actually scarce. This dataset [65] was acquired by the ESAR L-band sensor. ESAR is an airborne SAR system of the German Aerospace Center (DLR). It captures a semiurban area in Germany (Oberpfaffenhofen, Bavaria). The scene covers an area of 1300 × 1200 pixels. The reference information for the area was obtained by manual labeling based on the aerial images of the same area in Google Earth. The entire image is classified into four categories: built-up areas (in blue), wood land (in green), open areas (in yellow), and others (in dark blue). The classes are unbalanced, with many more open areas than others. Besides, there are some similarities between the built-up areas and the wood land in the PolSAR image. Thus, segmentation of this scene is a challenging task for unsupervised methods. 4) Fire disturbance is recognized as an essential climate variable (ECV), and burned area is its primary descriptive variable [66]. Here, we show the segmentation result produced by the proposed method on a burned area in an Alpine region in northern Italy [67]. The fire event took place on February 27, 2019. We applied our segmentation method on the post-fire image acquired on March 3, 2019, using the Sentinel-2 sensor (10 m/pixel spatial resolution and 13 spectral bands), part of the Copernicus programme of the European Space Agency. The goal of this study is to investigate whether the proposed method can identify the burned area as a separate cluster from the post-event image. The proposed unsupervised training can be performed either on a different scene from the test scenes (as in the first two cases above) or on the same scene as the test scene (as in the third and fourth cases above).
B. Compared Methods
Our work is one of the first attempts toward obtaining multiclass segmentation in an unsupervised way by training on a single-scene Earth observation image. Thus, we exclude entirely supervised methods from the compared methods and choose the following unsupervised/weakly supervised methods for comparison.
1) FEature and Spatial relaTional regulArization (FESTA) [13] is a weakly supervised method proposed in the context of semantic segmentation of high-resolution Earth observation images. The same training scene is used for training FESTA as for our method; however, our method assumes no annotated points, while FESTA assumes the presence of some annotated points. We design two variants of FESTA, "FESTA 5 points" by considering five labeled points in the training scene, and similarly "FESTA 10 points." 2) An unsupervised deep-clustering-based approach obtained by adapting [16] to the pixel space. The same training scene is used as for the proposed method, and this method assumes no annotated data, as in the proposed approach. 3) Deep clustering combined with image reconstruction as an additional pretext task. This model uses two outputs: one output is optimized for clustering and the other is optimized to reconstruct the input image [68]. 4) Online deep clustering (ODC), derived from [41]. 5) An unsupervised method obtained by simply extracting pixelwise features from the second convolutional layer of VGG16 [69] and applying k-means clustering on the extracted features. This particular layer is chosen since beyond this layer the spatial size reduces, and thus pixelwise feature extraction is not possible.

TABLE II. Performance variation in the proposed method on the Vaihingen dataset with respect to epoch. TABLE III. Performance variation in the proposed method on the Vaihingen dataset with respect to K.

Since FESTA assumes the presence of labeled pixels in the training scene and we use the same scene for training/testing in the case of the PolSAR scene, we exclude comparison to FESTA for that scene. The burned area scene is evaluated for change detection and hence compared with the relevant method in [67].
C. Settings
The training process of the proposed method is performed using I = 2 and J = 50. The number of kernels in the final layer (K) is set slightly larger than the number of target classes in the dataset, e.g., K = 8 for the Vaihingen dataset and K = 12 for the Zurich dataset. R' = C' = 224 is used to sample patches from the training scene. A learning rate of 0.001 is used for training.
In an unsupervised clustering setting, it is not possible to automatically discern the names of the classes. Hence, each class in the obtained segmentation is assigned to the class with which it has the most overlap in the reference map. This procedure is further shown in Algorithm 2: for each reference class S_j, the cluster T_j^D in the remaining set T̃_j having the highest intersection/overlap with S_j is found and assigned as its match, the matched cluster is removed (T̃_{j+1} = T̃_j \ T_j^D), and any cluster remaining in T̃_{N+1} is assigned to background.
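A sketch of this greedy matching, assuming integer-labeled prediction and reference maps stored as NumPy arrays (the background label value is an arbitrary placeholder):

```python
import numpy as np

def match_clusters(pred: np.ndarray, ref: np.ndarray, background: int = -1) -> dict:
    """Greedy matching: each reference class grabs the unassigned predicted cluster
    that overlaps it most; predicted clusters left over are mapped to background."""
    mapping = {}
    remaining = set(np.unique(pred).tolist())            # all predicted clusters
    for s in np.unique(ref):                             # reference classes S_j
        overlaps = {t: np.logical_and(pred == t, ref == s).sum() for t in remaining}
        if overlaps:
            best = max(overlaps, key=overlaps.get)       # best-overlapping cluster
            mapping[int(best)] = int(s)
            remaining.discard(best)
    for t in remaining:                                  # leftover clusters -> background
        mapping[int(t)] = background
    return mapping
```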
The results are shown as F1 score and intersection over union (IoU). The indices are computed for each target class and the mean is computed over all the classes. We also show accuracy; however, note that accuracy may be misleading as constituent classes are imbalanced and merely learning a single class can lead to seemingly good accuracy.
D. Result on Vaihingen Dataset
1) Result Variation With Respect to Parameters: For the proposed method, the segmentation results are shown as an average of ten runs. Table II shows the performance variation in the proposed method as the number of epochs I is varied while fixing the other parameters. We observe that the performance improvement beyond I = 2 is not significant. Hence, we used I = 2 in our subsequent experiments. To further understand this, we visualize the evolution of the losses in Fig. 3. L_s + L_c keeps decreasing slightly beyond I = 2; however, it shows an oscillatory behavior, which provides a further indication of why the optimum result is already reached by I = 2. Table III shows the performance variation in the proposed method as the number of kernels in the final layer (K) is varied. We recall that the value of K implies the number of classes into which we want to cluster the data. The best performance is obtained for K = 8, which is slightly larger than the actual number of classes in the Vaihingen dataset (six classes). Table IV shows the performance variation in the proposed method as the number of layers (L) is varied. The result confirms that only a few layers are sufficient for the proposed method, and further increasing the number of layers may not improve the performance.
2) Ablation Study of Loss Function: Table V tabulates the results of the loss-function ablation study. 3) Comparison to Existing Methods: The quantitative result is shown in Table VI. The proposed method outperforms FESTA 5 points, deep clustering, deep clustering with image reconstruction, ODC, and VGG16 + kMeans with respect to all three indices, and outperforms FESTA 10 points with respect to two out of three indices. We recall that FESTA is a semisupervised method that uses a few annotated points. The proposed method still outperforms it, which shows the efficacy of the proposed method. The segmentation map corresponding to image ID 11 is visualized in Fig. 4. The three columns show the input image, the reference segmentation, and the obtained segmentation, in that order. We observe that dominant classes like buildings (blue) and impervious surfaces (white) are clearly detected by the proposed method. However, it identifies the spectrally similar low vegetation and trees in the same cluster. The classwise F1 score is 0.66, 0.48, 0.40, 0.64, and 0.08 for impervious surface, buildings, low vegetation, trees, and cars, respectively. This shows that the proposed unsupervised method is capable of identifying the major classes while its scope is limited for visually inconspicuous classes like cars.
E. Result on Zurich Dataset
The quantitative results of the proposed method versus the compared methods are shown in Table VII. The proposed method outperforms all the compared methods in terms of mean F1 and mean IoU, showing again its superiority even against the semisupervised FESTA. The segmentation map for image ID 17 is visualized in Fig. 5. Similar to the observation for Vaihingen, we observe that the dominant classes are clearly detected by the proposed method. However, the performance deteriorates for the nondominant classes.
F. Result on PolSAR Scene
Pauli-color-coded input, reference segmentation map, and the segmentation produced by the proposed method are visualized in Fig. 6. Despite different nature of PolSAR data, the proposed method is able to identify the major classes from the target scene. The quantitative result is tabulated in Table VIII which shows the superiority of the proposed method against other unsupervised methods.
G. Result on Sentinel-2 Burned Area Scene
Our segmentation method is applied on the post-change image (acquired on March 3, 2019). The target area is significantly complex, showing mountains, some snow, and forest in addition to the burned area. Showing the cluster that has the best match to the burned area as the positive class and the rest as the negative class, we obtain a binary segmentation map, as visualized in Fig. 7. It is evident that the proposed method can segregate the target burned area as one class with few false alarms. The method obtains an accuracy of 97.19%.
The result obtained by the proposed method is superior to or comparable to the change detection methods compared in [67] (worst accuracy: 76.16%, best accuracy 99.0%), though the change detection methods use both pre/postchange images, while the proposed method uses only the postchange image.
H. Comments on Computation Time
The proposed unsupervised training on a single scene can be achieved in a reasonable time; e.g., it takes approximately 195 s for training on Vaihingen image ID 1 using a machine equipped with a GeForce RTX 3090. Using the same hardware and the same scene, deep clustering [16] takes 280 s and ODC [41] takes 295 s. VGG + kMeans does not involve a training phase. FESTA takes considerably more time than the proposed method (approximately 10 min).
I. Summary of Observations
The proposed method is inexpensive, both in terms of annotation (none is needed) and computation time. In addition to clustering in the pixel space, the proposed method effectively exploits spatial consistency and the contrastive loss, which is evident from the fact that it outperforms deep clustering. While the proposed method's effectiveness in automatically segmenting small classes is limited, it can effectively segregate the major classes, as seen on all the datasets. This suits most Earth observation applications where the task is to quickly find one or two classes of interest, e.g., buildings during earthquake disaster management or burned areas during post-fire operations.
V. CONCLUSION
We proposed an unsupervised single-scene segmentation method that combines different recently popular topics from unsupervised and self-supervised learning, e.g., deep clustering in pixel space, different views/augmentations, and contrastive learning. The experimental results on four different Earth observation datasets show that the method can effectively learn dominant classes, e.g., buildings in the Vaihingen dataset. On the other hand, the effectiveness of the method is limited for classes that are inconspicuous. However, given the strong constraints under which the method works (only a single unlabeled scene for training), learning such classes is certainly challenging. A potential direction of extension of this work is training a weakly supervised model given a few labeled pixels from only such inconspicuous classes. The proposed method complements supervised models by providing a quick unsupervised way of creating a reasonable segmentation map. In the future, we will experiment on images acquired by other popular sensors in Earth observation, e.g., light detection and ranging (LiDAR).

He was an Engineer with TSMC Limited, Hsinchu, Taiwan, from 2015 to 2016. In 2019, he was a Guest Researcher with the Technical University of Munich (TUM), Munich, Germany, where he has been a Post-Doctoral Researcher since 2020. His research interests include multitemporal remote sensing image analysis, domain adaptation, time-series analysis, image segmentation, deep learning, image processing, and pattern recognition.
Dr. Saha was a recipient of the Fondazione Bruno Kessler Best Student Award 2020. He is a reviewer for several international journals. He served as a Guest Editor at Remote Sensing (MDPI) special issue on "Advanced Artificial Intelligence for Remote Sensing: Methodology and Application" and Frontiers In Remote Sensing Research Topic on "Learning with Limited Label." | 7,320.6 | 2022-01-01T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Quantum Approximate Optimization for Hard Problems in Linear Algebra
The Quantum Approximate Optimization Algorithm (QAOA) by Farhi et al. is a framework for hybrid quantum/classical optimization. In this paper, we explore using QAOA for binary linear least squares; a problem that can serve as a building block of several other hard problems in linear algebra. Most of the previous efforts in quantum computing for solving these problems were done using the quantum annealing paradigm. For the scope of this work, our experiments were done on the QISKIT simulator and an IBM Q 5 qubit machine. We highlight the possibilities of using QAOA and QAOA-like variational algorithms for solving such problems, where the result outputs produced are classical. We find promising numerical results, and point out some of the challenges involved in current-day experimental implementations of this technique on a cloud-based quantum computer.
I. INTRODUCTION
The application of quantum computing to hard optimization problems is one candidate area in which quantum computing may eventually outperform classical computation [1]-[6]. At the time of writing this paper, Noisy Intermediate-Scale Quantum (NISQ) computers [5] are being developed by several firms and research groups [7]-[13]. The two main approaches to quantum optimization are (i) the Quantum Annealing (QA) physical heuristic [6] and (ii) the Quantum Approximate Optimization Algorithm (QAOA) [4] on the gate-model quantum computer [1].
In this paper, we are going to explore and propose the use of QAOA for hard problems in linear algebra. In particular, we are going to focus on the problem of binary linear least squares (BLLS). The reason for choosing BLLS is that it can be a building block for other hard problems in linear algebra (explained in Section II-A1). Previous works in quantum computing for such problems were done with quantum annealing [14]-[17]. We hope that our work provides insights to fellow researchers to further explore the use of NISQ-era methods [4], [18] for problems in linear algebra and numerical computation. In Section 2, we cover the necessary background and related work for our paper. Section 3 is about formulating the BLLS problem for the QAOA ansatz. The experiments, results, and discussion are detailed in Section 4. We finally conclude our paper in Section 5. We also have Appendices to complement and support the information in the main paper as needed.
II. BACKGROUND AND RELATED WORK
A. Background

1) The binary linear least squares problem: Given a matrix A ∈ R^{m×n}, a column vector of variables x ∈ {0, 1}^n, and a column vector b ∈ R^m (where m > n), the linear BLLS problem is to find the x that minimizes ‖Ax − b‖ the most. In other words, it can be described as

x* = arg min_{x ∈ {0,1}^n} ‖Ax − b‖.    (1)

The motivation behind choosing the BLLS problem to be applied to QAOA is twofold: firstly, it is an NP-hard problem [19], which makes it a suitable candidate for QAOA; secondly, it can act as a building block for other hard problems in linear algebra, such as Non-negative Binary Matrix Factorization [15]. Another reason why one may view BLLS as a building block for other problems is that multiple binary variables can be clubbed together for a fixed-point approximation of a real variable [14], [16], [20]-[23]. Amongst these, there are some problems that are NP-hard and for which an approximate solution would be acceptable [16], [23]. In these cases, QAOA may be able to provide an improvement in the approximation ratio (compared to classical solvers) and even increase the probability of sampling the best solution.
2) Non-negative Binary Matrix Factorization (NBMF): NBMF is a specialized version of the Non-negative Matrix Factorization (NMF) problem. Given a matrix V ∈ R_{≥0}^{m×n}, the problem is to factorize it into matrices W ∈ R_{≥0}^{m×r} and H ∈ {0, 1}^{r×n} (H would have non-negative real entries in NMF).
NMF and its variants are used in multiple disciplines such as computer vision [24], astronomy [25] and data mining [26], just to name a few. BLLS can be used in order to solve the NBMF variant by using the alternating least squares method [27].
In Algorithm 1, line 5 is solved classically since efficient algorithms exist for it [28]; it is line 7 that is solved using BLLS (and where QAOA would be applied).

Algorithm 1: Alternating least squares for NBMF
1: procedure MAIN(V)                 ▷ V is the matrix to be factorized
2:   Randomly initialize the matrix H ∈ {0, 1}^{r×n}
3:   while not converged do
4:     for row i from 1 to n do
5:       [update W by a classical least squares solve]
6:     for column j from 1 to m do
7:       [update H by solving a BLLS instance]

In the past, quantum annealing was used as a subroutine within this algorithm to solve NBMF and other NMF-related problems [15], [16]. Based on our work in this paper, QAOA can be an alternative to quantum annealing for NBMF, which can be explored in the future.
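A minimal Python/NumPy sketch of this alternating scheme is given below. A brute-force binary least squares routine stands in for the quantum (QAOA or annealing) subroutine, the non-negativity of W is enforced by a simple clipping step, and all function names are illustrative rather than taken from the paper's code.

```python
import itertools
import numpy as np

def blls_bruteforce(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Exact binary linear least squares by enumeration (placeholder for QAOA/annealing)."""
    n = A.shape[1]
    candidates = (np.array(bits) for bits in itertools.product([0, 1], repeat=n))
    return min(candidates, key=lambda x: float(np.linalg.norm(A @ x - b)))

def nbmf_als(V: np.ndarray, r: int, iters: int = 20, seed: int = 0):
    """Alternating least squares for V ~ W H with W real non-negative and H binary."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    H = rng.integers(0, 2, size=(r, n)).astype(float)
    for _ in range(iters):
        W, *_ = np.linalg.lstsq(H.T, V.T, rcond=None)   # classical step for W
        W = np.clip(W.T, 0, None)                       # crude projection onto non-negativity
        for j in range(n):                              # binary step for each column of H
            H[:, j] = blls_bruteforce(W, V[:, j])
    return W, H
```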
3) Quadratic Unconstrained Binary Optimization (QUBO): The QUBO objective function is

O(q) = Σ_a v_a q_a + Σ_{a<b} w_ab q_a q_b,    (2)

where q_a ∈ {0, 1}, and v_a and w_ab are real coefficients for the linear and quadratic parts of the function, respectively. The QUBO objective function is NP-hard in nature [29]. The advantage of this objective function is that many application-domain problems map naturally to QUBO [14]-[16], [30], [31]. In the process of applying BLLS to gate-model quantum devices, we use the QUBO formulation as an intermediate stage of expressing the problem.

4) The Quantum Approximate Optimization Algorithm (QAOA): In 2014, Farhi et al. proposed an algorithm that uses both quantum and classical computation for solving optimization problems [4]. The potential advantage of using this algorithm is that it can be implemented using low-depth quantum circuits [32], making it suitable for NISQ devices. We here briefly summarize the QAOA formalism applied to binary optimization problems. For the required preliminaries of quantum computing, the authors recommend the textbook by Nielsen and Chuang [1].
One popular method of encoding an optimization problem to be solved using QAOA is to first formulate the problem as an Ising objective function

E(σ) = Σ_a h_a σ_a + Σ_{a<b} J_ab σ_a σ_b,    (3)
where

σ_a = 2q_a − 1,    (4)

σ_a ∈ {−1, 1}, and h and J are the coefficients associated with individual and coupled binary variables, respectively. The Ising model is a popular statistical mechanics model, associated primarily with ferromagnetism [33]. Because it has been shown to be NP-complete in nature [34], the objective function associated with it can be used to represent hard problems [35]. Moreover, if any NP-complete problem has a polynomial-time algorithm, all problems in NP do, which makes this a tempting target to solve on a quantum computer in polynomial time (although there are no formal proofs that this is generally possible). The problem then would be to maximize or minimize Eqn (3), depending on how it is set up. The quantum Ising Hamiltonian, which naturally maps the Ising objective Eqn (3) to qubits, can be expressed as

Ĉ = Σ_a h_a σ̂_a^(z) + Σ_{a<b} J_ab σ̂_a^(z) σ̂_b^(z).    (6)

Here, the indices a, b, i label the qubits, n is the total number of qubits, σ̂^(z) is the Pauli Z operator, and I is the identity operator. The other type of Hamiltonian in the QAOA process is a summation of individual Pauli X operators for each qubit involved in the process, which intuitively represents a transverse field in the Ising model:

B̂ = Σ_{i=1}^{n} σ̂_i^(x),

where σ̂^(z) = (1 0; 0 −1) and σ̂^(x) = (0 1; 1 0).

In QAOA, the qubits are first put in a uniform superposition over the computational basis states by applying a Hadamard gate, which maps |0⟩ → (|0⟩ + |1⟩)/√2, on every qubit. Then, the Hamiltonian pair Ĉ and B̂ is applied p times using a set of angles γ and β, where, for 1 ≤ l ≤ p, each γ_l ∈ [0, 2π] and β_l ∈ [0, π] [4]. The expectation of the resultant state |ψ_{p,γ,β}⟩ is calculated with respect to the Hamiltonian Ĉ as ⟨ψ_{p,γ,β}|Ĉ|ψ_{p,γ,β}⟩. A classical black-box optimizer then uses the expectation as its input and suggests new γ and β sets (of length p each). The hope is that, as the number of qubits (more specifically, variables) n involved in the optimization increases, if for a circuit depth p ≪ n we are able to efficiently sample the best solution, we would have an advantage in using QAOA over classical methods. Algorithm 2 is a summary of the QAOA method; we recommend the original paper [4] for further details.

Algorithm 2: Summary of the QAOA method
  while (β, γ) can be further optimized, or a limit is reached, do
    initialize res_set ← {∅}
    for a fixed number of shots do
      res_set ← res_set ∪ QAOA(B̂, Ĉ, β, γ, p)
    from res_set, calculate the expected value and store it in expt_val
    based on expt_val, pick new 2p angles (β, γ) by classical optimization
  from the final res_set, take the result with the lowest energy: best_res ← min(res_set)
  return best_res                          ▷ this is the approximate solution
  procedure QAOA(B̂, Ĉ, β, γ, t)            ▷ the quantum procedure
    initialize n qubits, |ψ⟩ ← |0⟩; j ← 1
    while j ≤ t do
      |ψ⟩ ← e^{−iγ_j Ĉ}|ψ⟩
      |ψ⟩ ← e^{−iβ_j B̂}|ψ⟩
      j ← j + 1
    measure |ψ⟩ in the standard basis and store the outcome in a classical register o
    return o
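To make the construction concrete, the following sketch converts QUBO coefficients to Ising coefficients via q_a = (σ_a + 1)/2 and builds a depth-p QAOA circuit with Qiskit. It mirrors the alternation of e^{−iγĈ} and e^{−iβB̂} described above, but the helper names and angle conventions are illustrative and are not the paper's implementation.

```python
import numpy as np
from qiskit import QuantumCircuit

def qubo_to_ising(v: np.ndarray, w: np.ndarray):
    """Convert QUBO coefficients (v_a, w_ab for a < b) to Ising (h_a, J_ab) plus an offset,
    using the substitution q_a = (sigma_a + 1) / 2, so that O_QUBO = E_Ising + offset."""
    n = len(v)
    h = v / 2.0
    J = np.zeros((n, n))
    offset = v.sum() / 2.0
    for a in range(n):
        for b in range(a + 1, n):
            J[a, b] = w[a, b] / 4.0
            h[a] += w[a, b] / 4.0
            h[b] += w[a, b] / 4.0
            offset += w[a, b] / 4.0
    return h, J, offset

def qaoa_circuit(h: np.ndarray, J: np.ndarray, gammas, betas) -> QuantumCircuit:
    """Depth-p QAOA circuit for the Ising Hamiltonian defined by (h, J)."""
    n = len(h)
    qc = QuantumCircuit(n, n)
    qc.h(range(n))                                      # uniform superposition
    for gamma, beta in zip(gammas, betas):              # p alternating layers
        for a in range(n):
            if h[a] != 0:
                qc.rz(2 * gamma * h[a], a)              # single-qubit part of exp(-i*gamma*C)
        for a in range(n):
            for b in range(a + 1, n):
                if J[a, b] != 0:
                    qc.rzz(2 * gamma * J[a, b], a, b)   # ZZ interaction (CNOT-Rz-CNOT template)
        for q in range(n):
            qc.rx(2 * beta, q)                          # mixer exp(-i*beta*B)
    qc.measure(range(n), range(n))
    return qc
```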
5) Implicit filtering optimization: As mentioned before, QAOA requires us to give it the sets of angles γ and β in order to change the state of the quantum system. The most common way to do this is to use classical black-box optimization techniques that do not need derivative information about the problem [9], [36], [37]. Since the expected value of the objective function cost (or energy) is approximate in nature, we need an optimization technique that can handle noisy data. The technique of our choice for this work is the Implicit Filtering algorithm [38]. In essence, Implicit Filtering or ImFil is a derivative-free, bounded, black-box optimization technique that accommodates noise when it tries to suggest the best parameters to minimize the objective function. Various other techniques for noisy optimization exist, such as Bayesian Optimization [39], COMPASS [40], SPSA [41], etc. However, we found Implicit Filtering the best for our current efforts. For further details, we recommend the book by C. T. Kelley on the topic [38].
B. Related work
One of the first applications of quantum computing to problems in the field of linear algebra is the HHL algorithm for solving a system of linear equations [42]. This was followed by works for solving linear least squares [43], preconditioned systems of linear equations [44], recommendation systems [45], and many others [46]-[48]. Although the classical counterparts of the above-mentioned algorithms run in polynomial time, the quantum algorithms mentioned above run in polylogarithmic time.
However, there are some caveats with such algorithms [49]. Among the many caveats, we would like to emphasize the two that affect the practicality of their utility in the near future. Firstly, they require fault-tolerant quantum computers whereas, at the time of writing this paper, we have just entered the NISQ era [5]. Secondly, for the algorithms focused on linear systems of equations [42], [44], [46] and least squares [43], [47], the output data is encoded as a normalized vector of a quantum state |x⟩ (which means that the probability amplitudes of the basis states encode the data). This means that we need an efficient method to prepare the input data as a quantum state; and the output will be a quantum state as well, which means it will not be available to us in the classical world directly by performing measurement in the standard computational basis. This can be mitigated either by measuring the final state in a basis of our choice if our goal is to know some statistical information about x [42], [50], or by learning certain values in x (though that will eliminate the exponential speedup [49]).
With respect to quantum annealing, O'Malley and Vesselinov's paper in 2016 [14] was one of the first that proposed to solve linear least squares. Other works in this domain were for solving specific NMF problems [15], [16], polynomial systems of equations [20], underdetermined binary linear systems [17], and polynomial least squares [22]. It is hard to speculate about speedups analytically with (i) D-Wave's noisy implementation of quantum annealing [51] and (ii) the problem of exponential gap-closing between the problem Hamiltonian's ground state and its excited states [52]. The above-mentioned quantum annealing techniques use the Ising objective function for problem formulation. This means that measuring the post-annealing quantum state in the computational basis gives us a classical x, unlike most gate-model algorithms so far, including the ones mentioned above [42]-[44], [46], [47], which encode the solution in the amplitudes of |x⟩.
NISQ-compatible algorithms for efficiently solving linear algebra problems are highly desirable at the time of writing this paper. The work by Chen et al. [53] proposes a hybrid algorithm that uses quantum random walks for solving a particular type of linear system, producing a classical result in O(n log n). However, the closest related works to ours are the recent papers that employ variational algorithms [54], [55]. The major difference, however, is that in those papers: (i) the output is encoded in the vector of probability amplitudes of the quantum state |x⟩, and (ii) the problems explored thus far are convex in nature and solved in polynomial time classically.
In this paper, we implement QAOA on problems similar to those previously implemented on D-Wave's quantum annealer, and therefore briefly mention a comparison here. While QAOA can behave like a discretized version of the annealing process [56], it need not do so in order to be effective; that is, adherence to adiabatic evolution is not a necessity [3], [4]. One of the promises of QAOA is that it can variationally find optimal paths through the complicated cost Hamiltonian spectrum in a shorter time/depth than annealing. The associated set of challenges and opportunities for QAOA is different from quantum annealing; in this work, we make a first attempt at facing some of those challenges.
III. QAOA FOR BLLS

A. Problem formulation
O'Malley and Vesselinov first gave a QUBO formulation for the BLLS problem [14]. The details of how that is done are in Appendix A. Referring back to Eqn (1), if A ∈ R^{m×n}, b ∈ R^m, and x ∈ {0, 1}^n, we can refine Eqn (2) to be

‖Ax − b‖² = Σ_a v_a x_a + Σ_{a<b} w_ab x_a x_b + ‖b‖²,

where

v_a = Σ_i A_ia (A_ia − 2b_i)   and   w_ab = 2 Σ_i A_ia A_ib.

This means that the number of qubits depends only upon the size of the column vector x. All the rows in the matrix A and the vector b are preprocessed classically in order to produce the coefficients of the QUBO problem. By the equivalence stated in Eqn (4), we can then convert the problem into an Ising objective function (plus an offset value, irrelevant for optimization).

B. Mapping to quantum gates

Using the h, J coefficients from Eqn (14) along with the mapping to a quantum Ising Hamiltonian given in Eqn (6), we get the problem Hamiltonian Ĉ of Eqn (15). Because the individual components of Eqn (15) commute [4], we can express the Hamiltonian simulation of Ĉ with an angle γ_l as

e^{−iγ_l Ĉ} = Π_a e^{−iγ_l h_a σ̂_a^(z)} · Π_{a<b} e^{−iγ_l J_ab σ̂_a^(z) σ̂_b^(z)}.    (16)

Similarly, the exponential of the Hamiltonian B̂ can be broken down as

e^{−iβ_l B̂} = Π_i e^{−iβ_l σ̂_i^(x)}.    (17)

In order to realize Eqn (16) and Eqn (17), we use the single-qubit rotation gates

R_x(θ) = e^{−iθ σ̂^(x)/2},    (18)
R_z(θ) = e^{−iθ σ̂^(z)/2}.    (19)

While Eqn (18) is the only gate needed to realize Eqn (17), Eqn (19) alone can merely help with the single-qubit components of Eqn (16). For the components that require a two-qubit interaction, the following gate combination is used as a template: a CNOT between the pair of qubits, an R_z rotation on the target qubit, followed by another CNOT (Eqn (21)). While Eqn (21) shows the ZZ interaction for adjacent qubits, this strategy can be generalized to any pair of qubits in the system. Appendix B provides an example of a QAOA circuit for BLLS.

1) For IBM Q specific gates: Our experiments were done on an IBM Q device (ibmq_london) available to us through the IBM Q Network. This machine has the following basis gates: {U1, U2, U3, CNOT, I}. The first three gates in the set can be described as U1(λ) = diag(1, e^{iλ}), U2(φ, λ) = (1/√2) [[1, −e^{iλ}], [e^{iφ}, e^{i(λ+φ)}]], and U3(θ, φ, λ) = [[cos(θ/2), −e^{iλ} sin(θ/2)], [e^{iφ} sin(θ/2), e^{i(λ+φ)} cos(θ/2)]]. Up to global phases, we can implement Eqn (19) as U1(θ) and Eqn (18) as U3(θ, −π/2, π/2) [57]. Another practical consideration to be taken into account is the qubit connectivity of a real quantum computer. As the number of qubits increases, it is safe to assume that full connectivity between physical qubits is not feasible to engineer. This means that for distant qubits to interact with each other, we would need logical qubit replacement using SWAP operations. Appendix C elaborates on this with a demonstration with IBM Q gates.
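A small sketch of the classical preprocessing step that produces the QUBO coefficients from A and b, following the expansion of ‖Ax − b‖² with x_a² = x_a for binary x (the function name is illustrative):

```python
import numpy as np

def blls_to_qubo(A: np.ndarray, b: np.ndarray):
    """Return (v, w, offset) such that ||Ax - b||^2 = sum_a v_a x_a + sum_{a<b} w_ab x_a x_b + offset
    for binary x (using x_a^2 = x_a)."""
    n = A.shape[1]
    v = np.array([A[:, a] @ (A[:, a] - 2 * b) for a in range(n)])
    w = np.zeros((n, n))
    for a in range(n):
        for c in range(a + 1, n):
            w[a, c] = 2 * A[:, a] @ A[:, c]
    return v, w, float(b @ b)
```

The resulting (v, w) pair can then be passed through a QUBO-to-Ising conversion, such as the one sketched in Section II, to obtain the h, J coefficients used when building the circuit.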
IV. EXPERIMENTS

A. Experiment methods
The dataset used in our experiments was randomly generated (seeded for reproducibility), consisting of A ∈ R^{40×n}, b ∈ R^{40}, and x ∈ {0, 1}^n, where n ∈ {3, 4, 5, 9, 10} is the size of the problem. All values for A in the dataset are generated by uniformly sampling floating-point approximations of real values in the interval [−1.0, 1.0) and then rounding the values to 3 decimal places. For each value of n, we generate 100 test cases, with 40 cases in which Ax* = b, where the best solution x* is sampled randomly and b ← Ax*. The other 60 cases have Ax* ≠ b, where b is generated similarly to A and the best solution x* is found by going through all 2^n possible values of x. This is done to cover both scenarios of the least squares problem. The matrix A is a sparse matrix with a density of 0.2; this was done because sparse matrices have many applications in numerical computation and machine learning [58]-[60].
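A sketch of how one such test case could be generated (seed, density, value range, and rounding as described above; the exact sampling code is an assumption and is not necessarily what was used for the paper's dataset):

```python
import itertools
import numpy as np

def make_case(n: int, m: int = 40, consistent: bool = True, seed: int = 0):
    """Generate one BLLS test case: sparse A (~20% density), entries in [-1, 1) rounded to 3 decimals."""
    rng = np.random.default_rng(seed)
    A = np.round(rng.uniform(-1.0, 1.0, size=(m, n)), 3)
    A *= rng.random(size=(m, n)) < 0.2                    # zero out ~80% of the entries
    if consistent:                                        # case with Ax* = b
        x_star = rng.integers(0, 2, size=n)
        b = A @ x_star
    else:                                                 # case with Ax* != b; brute-force x*
        b = np.round(rng.uniform(-1.0, 1.0, size=m), 3)
        x_star = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n)),
                     key=lambda x: float(np.linalg.norm(A @ x - b)))
    return A, b, x_star
```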
We use the QISKIT [61] SDK to write our own implementation of the QAOA algorithm. As mentioned before, ImFil [38] is our black-box optimizer of choice. The only parameter of ImFil we access is the budget, which governs the maximum iteration limit. The rest of the ImFil parameters for our experiments use their default values. Similarly, unless explicitly stated, all qiskit parameters values taken are default as well. All classical simulations were conducted on standard x86-64 based laptops. Following is a list of the experiments we conducted.
1) Experiments with no noise: Our first set of experiments on the dataset were done on a simulator with the statevector backend, giving us the exact waveform. This means that we are able to compute the exact expectation ψ p,γ,β |Ĉ |ψ p,γ,β for the set of angles γ and β. These experiments help us assess the performance of QAOA in a perfectly noiseless environment for a large dataset.
This set of experiments was done for p = 1, 2 and 3 with random starting points: 20 for p = 1, 40 for p = 2 and 60 for p = 3 (seeded for reproducibility). Our preliminary study suggested a budget of 200 iterations for p = 1, 2 and 400 iterations for p = 3. This ensured that at least 70% of our tests converged within the budget while remaining computationally feasible. At the end of the process, the best result from all the starting points is chosen and recorded.
2) Experiments to compare no noise and shot-noise performance: For our next set of experiments, we use measurement based results on the simulator. Each circuit is run a number of times, specified by the 'shots' parameter. This means that the expectation we get for a given γ and β is approximate in nature. Thus, while quantum circuit simulation itself is noiseless and deterministic in producing the same wavefunction before taking each shot, a finite number of shots is sampled from the resulting wavefunction output probability distribution, introducing a stochastic component. In a real quantum device, one is always limited to this finite tomography, as one has no direct access to the qubit register's quantum wavefunction.
Because a real quantum device only ever provides such finite sampling, simulations with shot-noise are important to conduct. Each experiment was repeated 10 times per shot value. The shot values chosen for these experiments are in the set {2^i | n − 2 ≤ i ≤ n + 2, i ∈ Z}. We chose this range in order to observe how the performance behaves as we approach the limit of faithfully reproducing the wavefunction statistics.
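With a finite number of shots, the same quantity is estimated from a counts dictionary instead; a small sketch (our own, reusing ising_energy from the earlier sketch and assuming qiskit's little-endian bitstring keys):

```python
def estimated_expectation(counts, h, J, offset=0.0):
    """Approximate <C> from a qiskit counts dict, e.g. {'01011': 137, ...}.
    Keys are reversed so that qubit 0 is the rightmost character."""
    shots = sum(counts.values())
    total = 0.0
    for bitstring, k in counts.items():
        bits = [int(c) for c in bitstring[::-1]]
        total += k * ising_energy(bits, h, J, offset)
    return total / shots

# the shot values scanned in the paper, for a problem of size n:
# shots_list = [2**i for i in range(n - 2, n + 3)]
```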
The problem instances chosen for this set of experiments are a random subset of the original dataset. For each problem size n ∈ {3, 4, 5, 9, 10}, we randomly choose 5 of the 100 problems (while maintaining the 2 : 3 ratio of the two problem types). This is done because running the shot-noise experiments on the full dataset would be computationally infeasible with the limited resources at our disposal, since each shot-noise experiment is at least 50 times slower than its statevector counterpart.
The parameters of these experiments have been modified accordingly. They were done for p = 1, 2 and 3, with a budget of 200 iterations and random starting points (5 for p = 1). For comparison, this subset of problem instances was also run with the statevector backend for the same parameters.
3) Experiments on an IBM Q device: Based on the results of the first two sets of experiments, we designed our experiments for the 5-qubit IBM Q device 'ibmq_london'. In a real device like this one, the qubits face decoherence, coherent gate errors, control errors, incoherent gate errors, leakage, cross-talk, readout noise and more. The first set of IBM Q experiments was to run QAOA for problems with n = 5, with p = 1, a budget of 200 iterations and a shot value of 1024. These parameters were chosen to take into account the gate-depth limitation and the noisy computation: we use the minimal number of qubits that still covers a non-trivial problem graph structure, which can also be easily verified with classical methods at this size. The next set of experiments was to take the γ and β from the results of the statevector experiments of Section IV-A2 (where n = 5) and to try to recreate the distribution and expectation values using the quantum computer.
B. Results
Our two main metrics to assess the performance of our method are (i) the probability of sampling the best possible solution (the ground state of the Hamiltonian Ĉ) and (ii) the relative error of the expectation value with respect to the ground state energy,

relative error = (expectation energy − ground state energy) / ground state energy.   (27)

1) Optimization trajectory for QAOA: In Figure 1, we see an example of how QAOA with ImFil performs on a BLLS problem. As the iterations progress, the fluctuations in the energy expectation reduce. This continues either until the black-box optimizer converges to a solution (depending, in our case, on its default internal parameters) or until the iterations reach the maximum threshold (governed by the budget).
Here the experiments done with the statevector backend, which has access to the exact energy expectation, set the baseline for the other modes of experiments. While our experiments containing shot-noise due to measurement do relatively well against the statevector results, the experiments on a real quantum device are mixed. At the time of writing, the IBM Q device we tested on did not approximate the theoretically-optimal QAOA result distribution very well, but it still found the best solution every time. We discuss this further in Section IV-B6.
2) QAOA results with no noise: Before we study the results of QAOA for BLLS with shot-noise, it is important to evaluate the theoretical performance without any noise at all. Figure 2 shows the growth of the relative error with respect to the problem size n for the experiments described in Section IV-A1. We use the median as the measure of central tendency and the median absolute deviation (MAD) for our error bars. Simulations beyond p = 3 take substantially more time for the complete dataset and were computationally infeasible for this project.
The line graphs in the figure suggest a non-exponential growth of the relative error with problem size. Going from p = 1 to p = 2 decreases the relative error moderately, while the difference in performance between p = 2 and p = 3 is more modest, particularly for the larger problem sizes n. There may be room for further improvement if we allow a larger simulation time budget, for example by tightening the classical optimizer's convergence parameters and increasing the number of initial starting points for the optimizer. Further rigorous experimentation would be required to draw definite conclusions about the scaling from such numerics.

Fig. 3: Comparison of converged final relative error (median), for p ∈ {1, 2, 3}, as a function of problem size n ∈ {3, 4, 5, 9, 10}, done on 5 problem instances per n with a budget of 200 iterations. (TOP) shows the results from the optimization having access to the exact statevector simulator, while (BOTTOM) shows the results using a shot-noise simulator with 2^{n+2} shots. For the bottom plot, we take the best angles found using shot-based optimization and run the statevector backend once more at those angles in order to compute the exact expectation value and corresponding relative error. These results are from the experiments described in Section IV-A2.

3) No noise vs Shot-noise optimization: Figure 3 shows how QAOA with ImFil performs for the parameters described in Section IV-A2, for statevector and measurement-based results with 2^{n+2} shots. We have 5 different problem instances per n ∈ {3, 4, 5, 9, 10}. The reason for choosing 2^{n+2} shots for this comparison was to see whether the optimizer could replicate the statevector results given plentiful shots. In subsequent figures, we show the performance of the optimizer with fewer shots.

We can see the similarities between the top and bottom plots in Figure 3. The main difference, however, is the overlap between the results and error bars for p = 1 and p = 2. While the two lines are close to each other in the statevector results up to a problem size of 9, the measurement-based results for the two parameters are extremely close to each other (when considering median and MAD). This could be attributed to the noise from approximate expectation values, or to the small number of experiments we average over, as detailed in Section IV-A2. Another effect of the smaller dataset here is that the growth of the relative error does not appear fully monotonic in the problem size, unlike in Section IV-B2; however, it still shows a general upward trajectory. Simulations for p = 4 and upwards become computationally infeasible due to the time required, even for the smaller dataset we worked with in Figure 3. This can be attributed to the exponential growth in runtime as a function of the circuit depth p [4].

In Figure 4, we use the outputs of the statevector experiments described in Section IV-A1 and look at the success probabilities of finding the ground state for each of the problems in our dataset. We use box plots to represent the errors for this figure. We can see that even with p = 1, QAOA performs better than standard random sampling.
As the data suggests, the probability of finding the ground state goes down rapidly as the problem size is increased. One reason for this is the exponentially-increasing state space with problem size n. While for uniformly random sampling this would imply exponentially decreasing success probabilities, the ground state probability amplitudes in QAOA can be polynomially or exponentially amplified, and this would still show as a decreasing trend as a function of system size. Larger-scale simulations would be required to extrapolate the expected performance scaling as a function of n. Another important factor is that the problem graph for BLLS requires all-to-all connectivity [14], [15] (also see Appendix A). Recent research has shown how having a high problem density can be a challenge for QAOA to optimize over [62]. In our discussion section, we discuss how we can mitigate this (along with other strategies for potential future work).

Fig. 5: In this bar graph, we collect the number of experiment instances in which we observe the exact ground state bitstring at least once (on the Y axis), for n ∈ {3, 4, 5, 9, 10}. Each n has 5 problem instances, each repeated 10 times for a given shot value. We compare the QAOA shot-noise simulator experiments (colour-coded with the number of binary variables, n) with the results one would expect from randomly sampling a uniform distribution the same number of times (black bars, labeled rand, with x-axis positioning corresponding to their colour-labeled partner). The QAOA data in this figure comes from sampling the circuit with optimized angle sets γ* and β*, acquired by optimizing for a circuit depth of p = 3 and a budget of 200 iterations, as described in Section IV-A2.
It is one thing to calculate the probability of the ground state; it is another to actually sample the best solution (or ground state) from a quantum state after QAOA. Figure 5 displays the number of experimental instances where we sample the ground state at least once, for a particular set of parameters, across various problem sizes and instances (for the shot-noise experiments of Section IV-A2). We contrast this with the analytical probability of obtaining the ground state by uniform random sampling. Here, we see that for optimization done with up to 2^n shots, QAOA has a clear advantage over random sampling. This can be explained by the mechanism of QAOA, which selectively amplifies the sampling probabilities of the bitstrings with the lowest energy while suppressing those with higher energy. In this way, the success probabilities may be greatly enhanced over naive random sampling from the uniform probability distribution.
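A small sketch of the comparison underlying Figure 5 (our own illustration; it assumes the reference bitstring is written in the same qubit ordering as the counts keys):

```python
def seen_ground_state(counts, ground_bitstring):
    """True if the known best bitstring appears at least once in the counts dict."""
    return counts.get(ground_bitstring, 0) > 0

def uniform_baseline(n, shots, degeneracy=1):
    """Probability that `shots` uniform samples over the 2^n bitstrings contain a
    ground state at least once (the black 'rand' bars in Figure 5)."""
    return 1.0 - (1.0 - degeneracy / 2**n) ** shots
```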
5) Effect of shot number on optimization:
For QAOA to become practical, the shot number chosen for the computation has to be far less than the number of eigenstates of our cost Hamiltonian (2^n in our case). For this work, we chose not to guess a single shot number but rather to study the optimization performance for a set of shot numbers in {2^i | n − 2 ≤ i ≤ n + 2, i ∈ Z}. We hope this helps future research in finding better estimates of the minimum number of shots required for QAOA, especially for this type of application. In Figure 6 we see the optimization result for a problem instance with n = 5, comparing the shot-based optimization with the statevector optimization. It is important to point out that we optimized using the stochastic black box (with a given shot number) and then calculated the exact expectation value with the wavefunction method (the statevector backend) in order to assess the true relative error. For the most part, our experiments show that even as the problem size increases, the optimizer does well with as few as 2^{n−2} shots. This seems to indicate that the number of shots required for a good optimization may not be exponential in the problem size, or at least grows with a smaller exponent than random sampling from a uniform distribution would require. Further research is needed.
6) IBM Q device performance: We briefly mentioned our real-device results in Section IV-B1. The good news is that IBM Q was always able to find the best solution in our optimization experiments. But that came at the price of taking 1024 shots for each QAOA iteration, which is relatively expensive for a problem size of 5 qubits. When we lowered the shot number, the optimal bitstring was not always sampled and the convergence deteriorated further. The immediate cause of the gap between the optimization process on the device and the simulation results is the inability of the device to approximate the distribution of the measured bitstrings (for a given circuit). We provide an example of this in Appendix D.
It is crucial to consider the entire context here. Firstly, the problem graph of our use case is fully connected. Due to the sparse connectivity of the on-chip qubits in the device, logical qubits have to be swapped around a number of times for them to be able to entangle with each other (for the ZZ interactions of our problem). This brings the average gate depth of the final (transpiled) circuits that run on the ibmq_london machine to about 35. Since two-qubit gate fidelity is still low (at the time of writing), the error propagates across the circuit. Secondly, due to the large circuit depth on the real device, we need to take decoherence into account. Thirdly, readout errors were not considered here, and they have a significant impact on the noise in the qubit measurement results.
It should also be emphasized that in this work, we primarily focused on how to model the BLLS problem using QAOA. Thus, the experiments on the real devices were done "as is", in order to demonstrate the near-term implementability, without any error mitigation [63]. This could be looked at for future work.
C. Discussion
We can see the various possibilities and potential advantages QAOA may provide in solving BLLS and similar problems. However, there are challenges that need to be addressed. These are both theoretical and practical in nature.
One theoretical challenge is the proper pre-processing of the problem Hamiltonian by scaling and shifting the coefficients of the objective function, such that we optimally make use of the parameter space β l ∈ [0, π], γ l ∈ [0, 2π] (most of the problems in the dataset did not suffer from this issue, as we found the default scaling to work well already). However, scaling the problem way beyond necessity also creates issues as the energy landscape is periodic in nature [4]. Thus, one possible way is to use scaling as a heuristic within the QAOA process, and treat it as a hyperparameter to optimize over.
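As a minimal illustration of such a preprocessing heuristic (our own sketch, not the authors' procedure), one could normalise the largest Ising coefficient before scanning γ; multiplying the cost Hamiltonian by a positive constant leaves the ground state unchanged while fixing the useful range of γ:

```python
import numpy as np

def rescale_ising(h, J, target=1.0):
    """Rescale the Ising coefficients so the largest magnitude equals `target`.
    The scaling factor itself can be treated as a hyperparameter."""
    scale = max(np.max(np.abs(h)), np.max(np.abs(J)))
    factor = target / scale if scale > 0 else 1.0
    return h * factor, J * factor, factor
```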
Another challenge, which is both theoretical and practical in nature, is the full connectivity of our problem, shared by most hard optimization problems in general [62]. Computationally non-trivial problems typically require a high degree of graph connectivity (for instance, planar graphs are easy to solve classically [64]). At the same time, high connectivity poses a challenge for quantum chip implementations because not all gate sets implement non-nearest-neighbour interactions natively; those then need to be implemented effectively by means of a swap-network approach [65]. For future work, we suggest modifying the problem formulation by dropping the ZZ interaction of a pair of qubits if the magnitude of its coefficient falls below a user-defined threshold. This can potentially make it easier for QAOA to run, but its effectiveness in finding the ground state would come under scrutiny; nonetheless, it can make for interesting future work. Error mitigation techniques and readout error correction will also help in improving the results [65]. It would take a combination of the above-mentioned approaches to improve performance on a real device.
Challenges aside, one of the next steps would be to explore if QAOA can be valuable in applications that require BLLS as a subroutine, such as NBMF [15]. Another step can be to try the BLLS problem on other types of quantum computers [9]- [13] to see how different hardware implementations fare.
V. CONCLUSION
In this work, we described the implementation of a binary optimization problem, relevant to hard problems in linear algebra, on a gate-based quantum computer via a QAOA approach suitable for NISQ devices. We discussed practical implementation considerations and showed the expected performance on some particular examples. We compared the solutions found using QISKIT on an exact quantum wavefunction simulator, on a shot-based simulator, and on an IBM Q cloud-based quantum processor based on superconducting qubits. In this first implementation, we showed a promising mapping of the problems to the QAOA solver and good theoretical performance compared to random sampling, but found that it is still challenging to implement linear-depth, high-connectivity circuits on the latest hardware available today. In future work, it would be interesting to compare directly the performance and scaling of this method against the most competitive classical alternatives, even though such a quantum-classical comparison is sometimes hard to realize in practice. We also expect a future experimental implementation to benefit greatly from gate-error mitigation techniques and post-processing of readout errors. It would furthermore be very interesting to see what other hard problems in linear algebra may be implemented using QAOA and what their expected performance would be.
APPENDIX A DETAILED QUBO FORMULATION FOR BINARY LINEAR LEAST SQUARES
In this section of the Appendix, we describe the method by which O'Malley and Vesselinov [14] formulated the binary linear least squares (BLLS) problem as a QUBO. This QUBO formulation is converted into its equivalent Ising objective function and used in QAOA. We begin by writing out Ax − b, the vector whose norm we minimize over x in order to solve Eqn (1), and then take the square of its 2-norm. Because we are dealing with real numbers for A and b, |·|² = (·)². The coefficients in Eqn (2) are then found by expanding this norm and using x_i² = x_i, which gives, schematically,
\[
\|Ax-b\|_2^2=\sum_{i}\Big(\sum_m A_{mi}^2-2\sum_m A_{mi}b_m\Big)x_i+2\sum_{i<j}\Big(\sum_m A_{mi}A_{mj}\Big)x_i x_j+\|b\|_2^2 ,
\]
with the linear brackets corresponding to Eqn (32) and the pairwise brackets to Eqn (33). Note that there is a constant term in the expansion that we leave out of Eqn (32) and Eqn (33): since it is not a coefficient of any of the variables, we cannot optimize over it, and it is simply ‖b‖₂². Consequently, the QUBO ground-state energy in the case ‖Ax* − b‖₂ = 0, where x* is the best solution, is −‖b‖₂².

APPENDIX B EXAMPLE QAOA CIRCUIT FOR BLLS

Let us consider a simple problem, without loss of generality, to demonstrate how quantum circuits for BLLS can be designed: find the binary x minimizing ‖Ax − b‖₂ for a given 3 × 3 matrix A and vector b. This particular problem is more appropriately categorized as a linear system of equations (A ∈ R^{n×n}) and has a solution x* = (1, 1, 0)^T, such that Ax* = b, i.e. ‖Ax* − b‖₂ = 0. However, our problem formulation does not change.
In order to solve this problem using QAOA, we require 3 qubits. Using the formulation process detailed in Appendix A, we obtain the QUBO form of the problem. The constant value that does not enter the QUBO here is ‖b‖₂² = 18. Converting the QUBO into Ising form using Eqn (4), the offset picked up in going from QUBO to Ising is −11. Therefore, the Ising ground-state energy for this problem is −18 − (−11) = −7. Now let us assume that we are designing a circuit for QAOA with p = 1. For a given pair of angles (β, γ), the circuit looks like the one shown below in Eqn (38).
The results of this circuit will be used to calculate the expectation. Based on the expectation, a new pair of β and γ will be calculated using a classical black-box optimization algorithm (such as ImFil). These new angles are fed into another such circuit until the optimization loop converges.
[Eqn (38): p = 1 QAOA circuit diagram; each qubit starts in |0⟩ and passes through a Hadamard gate before the cost and mixer layers.]

The results from Eqn (38), when measured in the standard basis, are classical bitstrings. In order to calculate the energy or cost of a particular bitstring with respect to the Ising cost function Eqn (3), we first substitute +1 for each 0 and −1 for each 1 in the bitstring. In short, this is because σ̂^{(z)} assigns an energy of +1 to |0⟩ and −1 to |1⟩ (in arbitrary units). For our example, if we measure a bitstring ξ with {ξ₁ = 0, ξ₂ = 0, ξ₃ = 1}, the equivalent Ising configuration is {σ₁ = 1, σ₂ = 1, σ₃ = −1}.
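A hedged QISKIT sketch of such a circuit (our own code, not the authors' implementation; the gate ordering within a layer is a choice) that can be sampled and scored with the energy convention just described:

```python
import numpy as np
from qiskit import QuantumCircuit

def qaoa_circuit(h, J, gammas, betas):
    """QAOA circuit for the Ising cost sum_i h_i Z_i + sum_{i<j} J_ij Z_i Z_j.
    Each ZZ term uses the CNOT-RZ-CNOT template of Eqn (21); RZ(2*theta)
    implements exp(-i*theta*Z) up to a global phase."""
    n = len(h)
    qc = QuantumCircuit(n, n)
    qc.h(range(n))                              # start in the uniform superposition
    for gamma, beta in zip(gammas, betas):
        for i in range(n):                      # single-qubit cost terms
            if h[i] != 0.0:
                qc.rz(2.0 * gamma * h[i], i)
        for i in range(n):                      # two-qubit cost terms
            for j in range(i + 1, n):
                if J[i][j] != 0.0:
                    qc.cx(i, j)
                    qc.rz(2.0 * gamma * J[i][j], j)
                    qc.cx(i, j)
        for q in range(n):                      # mixer layer exp(-i*beta*X_q)
            qc.rx(2.0 * beta, q)
    qc.measure(range(n), range(n))
    return qc
```

For the 3-qubit example above, qaoa_circuit(h, J, [gamma], [beta]) with a single angle pair gives the p = 1 circuit of Eqn (38); running it on a shot-based backend and feeding the counts to the energy estimator sketched earlier closes the classical optimization loop.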
APPENDIX C IMPLEMENTING TWO-QUBIT INTERACTIONS ON A QPU
As mentioned in Section III-B1, most practical quantum computers do not have all-to-all qubit connectivity. But if the problem that we need to solve on a quantum device requires dense connectivity (as BLLS does), we need SWAP gates to allow distant qubits to interact with one another. Let us first describe the SWAP gate in its matrix form,
\[
\mathrm{SWAP}=\begin{pmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{pmatrix},
\]
which can be decomposed diagrammatically into three CNOT gates with alternating control and target. To illustrate how this takes place, consider a hypothetical device where every qubit is only connected to the adjacent qubits along a line. In order to realize the gates described on the LHS of Eqn (41), one option is to SWAP the top and the middle qubit so that it can then interact with the bottom qubit.
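A small QISKIT sketch (our own) of this routing step; the rotation angle and the line connectivity are arbitrary illustrative choices, and the transpiler inserts the SWAP (three CNOTs) automatically:

```python
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

# Hypothetical 3-qubit device where only neighbours on a line can interact.
line = CouplingMap([(0, 1), (1, 2)])

qc = QuantumCircuit(3)
qc.rzz(0.7, 0, 2)          # ZZ interaction between the two non-adjacent qubits

# optimization_level=0 keeps the trivial layout, so the router must insert SWAPs.
routed = transpile(qc, coupling_map=line,
                   basis_gates=['u1', 'u2', 'u3', 'cx'],
                   optimization_level=0)
print(routed)              # the printed circuit shows the SWAP routing as CNOTs
```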
After our desired two-qubit interaction takes place, the top and middle qubits are swapped again, returning the logical qubits to their original places. Of course, there are other routes (also involving SWAPs) that the compiler may take to realize the original unitary operations. In Eqn (41), we also show the decomposition of the R_z gate as defined in Eqn (26).

Fig. 7: Comparison of the results for the IBM Q device (TOP) versus the qiskit shot-based simulator (BOTTOM), for the same problem instance, for a total of 10240 shots. This figure represents data from an experiment done in Section IV-A3.
APPENDIX D PROBABILITY DISTRIBUTION OF RUNNING
At the time of writing, our experiments on the quantum processor (ibmq_london) are unable to produce results similar to what a simulator produces. Figure 7 shows an example of the distribution we get from running a QAOA circuit with optimized (and fixed) β and γ angles. The simulation suggests a few bitstrings with a high probability of being measured (with the ground state having the highest), whereas the distribution from the quantum processor is more evenly spread out, with the ground state not having a significantly high probability of being measured.
"Computer Science",
"Physics"
] |
Surface operators in the 6d $\mathcal{N} = (2,0)$ theory
The 6d $\mathcal{N}=(2,0)$ theory has natural surface operator observables, which are akin in many ways to Wilson loops in gauge theories. We propose a definition of a "locally BPS" surface operator and study its conformal anomalies, the analog of the conformal dimension of local operators. We study the abelian theory and the holographic dual of the large $N$ theory, refining previously used techniques. Introducing non-constant couplings to the scalar fields allows for an extra anomaly coefficient, which we find in both cases to be related to one of the geometrical anomaly coefficients, suggesting a general relation due to supersymmetry. We also comment on surfaces with conical singularities.
Introduction
Understanding the six dimensional N = (2, 0) superconformal field theory is one of the most intriguing problems in theoretical physics. In this paper we revisit the most natural observables in this theory, surface operators [1]. If we define the theory as arising from N coincident M5-branes, the simplest surface operators correspond to the endpoints of M2-branes [2].
In some ways the surface operators in six dimensions are analogous to Wilson loops in lower dimensional gauge theories. Wilson loops are the boundaries of fundamental strings, which are the dimensional reduction of M2-branes, and indeed one obtains Wilson loops in compactifications of the 6d theory with surface operators. Wilson loops are not only interesting due to their physical importance, they are also accessible to many perturbative and non-perturbative calculational tools in supersymmetric field theories: Feynman diagrams, holographic descriptions [3][4][5], localization [6], the defect CFT framework and associated OPE techniques [7,8], integrability [9,10], duality to scattering amplitudes [11] and more. See for instance a recent survey of these techniques, as applied to supersymmetric Wilson loops in ABJM theory [12].
We do not expect all these techniques to extend to surface operators in six dimensions, but it is worthwhile to examine which of them may work, and we hope that some calculations may lead to exact results applicable for all N . Here we take the first step in such an examination, defining the notion of a "locally BPS surface operator" and studying basic properties of their anomalies. This is mainly based on previous work [13][14][15][16][17], which we modify and refine in several ways.
As reviewed in the next section, the evaluation of generic surface operators leads to logarithmic divergences. The anomaly depends on the geometry of the surface, as well as intrinsic properties of the operator which are captured by three numbers, known as anomaly coefficients [18].
The "locally BPS" operator couples to the scalar fields via a unit 5-vector n i . This can be viewed as a coupling to an R-symmetry background, and for non-constant n i we find a new anomaly, proportional to (∂n) 2 , with its own anomaly coefficient.
We perform explicit calculations of the three geometrical and one background coefficients in both the free theory at N = 1 and the holographic description valid at large N . An examination of our results reveals that the new anomaly coefficient matches (up to a sign) one of the geometric ones in both regimes. We present here a simple argument, relying on supersymmetry, why we expect this relation to hold for all N . A more rigorous proof of this relation based on the application of defect CFT techniques to surface operators will be presented in [19].
Beyond the study of N = (2, 0) superconformal symmetry, surface operators in conformal field theories have drawn interest within a number of different contexts. Recent work on entangling surfaces in 4d [20][21][22][23] and theories with boundaries [24][25][26] uses some techniques which apply in our case as well. In particular, the classification of local conformal invariants of surfaces is independent of the codimension and translates to the 6d case [27].
Surface operators in the N = (2, 0) theory have been studied both from a field theory perspective [14][15][16][17] and using holography [28,13]. Corresponding soliton solutions of the M5-brane equations of motion have been discussed in the literature under the moniker of self-dual strings [1].
The resemblance to Wilson loops is evident in both the field theoretic and the holographic approach. In the former, for N = 1 as is studied in Section 3, we define the surface operator in analogy to the Maldacena-Wilson loops [4] as where B + is the pullback of the chiral 2-form to the surface Σ and Φ i are the scalar fields. Since for N > 1 there is no realisation of the theory in terms of fundamental fields, we cannot give an analogous definition of the surface operator. However, by analogy with Wilson loops [3][4][5], in the large N limit, these operators in the fundamental representation have a nice holographic dual as M2-branes ending on the surface and extending into the AdS 7 × S 4 bulk, as discussed in Section 4. In the absence of a scalar coupling breaking the so(5) R-symmetry, these would be delocalised on the S 4 [29,30]. At leading order, we need only consider minimal 3-volumes [4,13] (similar to the minimal surfaces of interest in the Wilson loop case [3][4][5]), and to find the anomaly, which is a local quantity, it is enough to understand the volume close to the AdS boundary. High-rank (anti-)symmetric representations are dual to configurations involving M5 branes shrinking to the surface on the boundary of AdS 7 and have been considered in [31][32][33][34].
The definition in (1.1) includes BPS operators. Simple examples are the plane or sphere with constant unit n i . Other examples are briefly discussed in Section 3 and will be explored in more detail elsewhere [35]. We call operators with generic Σ and unit length n i "locally BPS", and show that they possess some nice properties, in particular that all power law divergences cancel.
In the next section we recall the structure of surface operator anomalies and introduce the anomaly coefficients. We evaluate these anomaly coefficients for the two known realisations of the N = (2, 0) theory; first as the theory of a single M5-brane (N = 1) [36], for which the equations of motion are known [37], and second, using holography (for the large N limit) from M-theory on the AdS 7 × S 4 background [38] found in [39]. The resulting anomaly coefficients are presented in equations (3.24) and (4.18). After performing the free field and holographic calculations, we address in Section 5 surfaces with singularities. We discuss our results in Section 6 and offer a simple argument for the relation between two of the anomaly coefficients. We collect some technical tools in appendices. Our conventions can be found in Appendix A. Details of the geometry of submanifolds are compiled in Appendix B. Appendix C contains an alternative, more geometric derivation of the field theory results in Section 3.
Surface anomalies
The most natural quantities associated to surface operators in conformal field theories are their anomaly coefficients. To understand their origin, note that, unlike line operators, the expectation values of surface operators typically suffer from ultraviolet divergences, which cannot be removed by the addition of local counterterms. The regularised expectation value has a logarithmic dependence on the regulator ε, where A_Σ is known as the anomaly density, and we suppressed possible power-law divergences. A_Σ is scheme independent and indicates an anomalous Weyl symmetry, since for a constant rescaling g → e^{2ω}g the expectation value varies, where the subscript g denotes the background metric. The anomaly is constrained by the Wess-Zumino consistency condition [18, 40] to be conformally invariant. In dimensions d ≥ 3, the local geometric conformal invariants for a 2d submanifold, which have been classified in [27], are

R_Σ: The Ricci scalar of the induced metric h_{ab} on Σ.
H 2 + 4 tr P : H µ is the mean curvature, P ab the pullback of the Schouten tensor (B.2).
tr W : W abcd is the pullback of the Weyl tensor.
Under conformal transformations, the first two change by a total derivative (type A anomalies) and the last is itself conformally invariant (type B).
As we allow for variable couplings to the scalars, parametrised by a unit 5-vector n i , we find an extra potential type B Weyl anomaly associated to it: (∂n) 2 ≡ ∂ a n i ∂ a n i . This is (up to total derivatives) the only quantity of the correct dimension that can be constructed using only n.
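Putting the four invariants together, the anomaly density referred to below as (2.3) is, schematically (our reconstruction; the overall normalisation, written here as 1/4π, is an assumption):
\[
\mathcal{A}_\Sigma \;\sim\; \frac{1}{4\pi}\left[\,a_1\, R_\Sigma \;+\; a_2\left(H^2 + 4\,\mathrm{tr}\,P\right) \;+\; b\,\mathrm{tr}\,W \;+\; c\,(\partial n)^2\,\right].
\]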
The anomaly of a surface operator in any 6d N = (2, 0) theory then takes this form, (2.3). The anomaly coefficients a₁, a₂, b and c depend on the theory (that is, on N) and the type of surface operator (which, at least at large N, is specified by the representation of the A_{N−1} algebra [41, 42]), but not on its geometry or n. They are the focus of this paper. Let us mention that there exists another commonly used basis, written in terms of ĨI^μ_{ab}, the traceless part of the second fundamental form (see (B.8)). These bases are related through the Gauss-Codazzi equation (B.7), which also relates the coefficients. Some results about these anomaly coefficients are known for surface defects in generic CFTs. The bound b₁ < 0 was derived in [22] by showing that b₁ captures the 2-point function of the displacement operator, which is positive by unitarity. Similarly, it was shown in [43, 22] that b₂ is calculated by the one-point function of the stress tensor in the presence of the surface defect (this was also conjectured in [44]). Assuming that the average null energy condition holds in the presence of defects also leads to a bound b₂ > 0 [23].
For the surface operators at hand, these anomaly coefficients were also calculated previously. At large N , the first such result was a calculation of the 1/2-BPS sphere [28], with total anomaly −4N , implying a = b (N ) = 0 [13]. More recently, it was conjectured that N = (2, 0) supersymmetry imposes b = 0 (or b 1 = −b 2 ) for any N [45]. a and b 2 were calculated at any N > 1 (and for any representation) by studying the holographic entanglement entropy in the presence of surface operators [46,34,23,47]. This result is also supported by a recent calculation based on the superconformal index [48], which suggests that it is exact.
The anomaly coefficient c has previously not been discussed, to our knowledge.
Abelian theory with N = 1
In this section we study the anomaly coefficients of the surface operator in the abelian (2, 0) theory. This is the theory of a single M5-brane, and the degrees of freedom form the tensor supermultiplet of the osp(8*|4) symmetry algebra. It consists of three fields [49] (see also [50, 51]):

• A real closed self-dual 3-form H = dB⁺.
• Five real scalar fields Φ i .
Surface operators and BPS condition
We define the surface operators V Σ of the abelian theory as in (1.1). To avoid complications arising for null surfaces (which could be interesting, but lie beyond the scope of this work), we restrict to space-like surfaces in flat 6d Minkowski space (with mostly positive signature).
A surface operator is BPS provided that its variation under the supersymmetry transformations (3.1) vanishes. Since this variation is an integral of an operator insertion ψ along the surface, it vanishes only when the integrand vanishes at every point along the surface, leading to the projector equation (3.3). If we impose that n² ≡ n_i n^i = 1, then Π₋ is a half-rank projector, and otherwise it is a full-rank matrix. In the case of a planar surface with constant unit n^i, this is a single condition, so the surface preserves 16 supercharges, i.e. is 1/2-BPS.¹ In analogy to Wilson loops in 4d theories, it is natural to discuss "locally BPS operators" [5], where the equations (3.3) are satisfied at every point along the surface, but without a global solution. This amounts to the requirement n² = 1 and, as shown below, leads to the cancellation of all power-like divergences in the evaluation of the surface operator.
One can also look for surfaces, other than planes, that preserve some smaller fraction of the supersymmetry by relating n^i(σ) to x^μ(σ) and its derivatives. One simple way to realise this is for surfaces with the geometry R × S, for some curve S ⊂ R^{1,4}. Upon dimensional reduction this becomes a Wilson loop in 5d maximally supersymmetric Yang-Mills (or in 4d upon further dimensional reduction). Then one can choose n^i to follow the construction of globally BPS Wilson loops of [52] or [53] to find globally BPS surface operators. Indeed this was realised recently in [54] (see also [55]).

¹ The BPS condition for a surface operator extended in the time-like direction can be obtained by Wick rotation.
There are further examples of globally BPS surface operators, which do not follow this construction. The simplest is the spherical surface, but there are several other classes of such operators, which will be explored elsewhere [35].
Propagators
Since the abelian theory is non-interacting, the expectation value of V_Σ reduces to a Gaussian expression, (3.4), where h is the determinant of the induced metric on Σ. Evaluating this requires expressions for the propagators of the tensor and scalar fields. While one would preferably derive the propagators from an action, none is readily available. Many actions for the abelian N = (2, 0) theory have been proposed over the years, but they all suffer from some pathologies regarding the self-dual 2-form (see [56, 57, 36, 58, 59] for examples of available actions, and [60][61][62] and references therein for recent accounts of the various approaches in the abelian theory). In any case, gauge fixing and inverting the kinetic operator is not straightforward.
Tensor structure
We sidestep these obstacles by determining the propagators in other ways. The scalar propagator in flat 6d is fixed by conformal symmetry up to an overall constant. The proportionality constant depends on the normalisation of the fields; it could be determined from an action, but in its absence it is fixed by supersymmetry below. The more complicated question is the self-dual 2-form propagator. Let us start by considering an unconstrained 2-form field B with a free Maxwell-type action, where α is a gauge-fixing parameter. In Feynman gauge, α = 1, this gives the propagator (3.7). Now we decompose the field into its self-dual and anti-self-dual parts, B_{μν} = B⁺_{μν} + B⁻_{μν}, and try to deduce the propagators for each component.
Since there is no covariant 4-tensor satisfying the self-duality properties of a mixed correlator ⟨B⁺B⁻⟩, we can decompose the propagator into its self-dual and anti-self-dual pieces. The two terms on the right-hand side need not be identical, but the difference between them should be parity-odd.² The only such term of the right scaling dimension that we can write down does not contribute to (3.4), since the integration is symmetric in x and y. Therefore, for the purpose of our calculation we can take ⟨B⁺B⁺⟩ = ⟨BB⟩/2. Note that in curved space we can add to the right-hand side a term proportional to the Weyl tensor with all the required symmetries.
Normalisation
The normalisation of the tensor field propagator is fixed by the assumption that the surface operator defined in (1.1) corresponds to a single unit of quantised charge. First, for any closed surface Σ, we can rewrite the surface operator (without scalars) in terms of the field strength integrated over a 3-manifold V with ∂V = Σ. In order for this to be well-defined, any two such V with the same boundary must yield the same result; equivalently, the integral of H over every closed 3-manifold must be appropriately quantised, and similarly for *H.

² The two-dimensional analogue is instructive: the propagator of a free boson in complex coordinate z is proportional to −log|z|², while for a (anti-)chiral boson one finds −log z (respectively −log z̄). Indeed the sum reproduces the free boson propagator, but the two differ by a parity-violating imaginary part.
Now consider a flat surface operator in the (x₁, x₂) plane, which we view as a source for the unconstrained B field. The solution to the equations of motion is given by convolving the propagator with this source, using the expression in (3.7). Again, because we do not know the self-dual propagator, the field strength we obtain is not self-dual, but the quantisation condition should still be satisfied. Imposing that the charge enclosed in a transverse sphere is quantised fixes the normalisation of the tensor propagator. The normalisation of the scalar propagator is then fixed by supersymmetry; a simple way to implement that is to compare with the classical BPS solution of the self-dual string [1].³

³ We emphasise that this normalisation is obtained by imposing a quantisation condition on the unconstrained B-field, where we treat B⁺ as the self-dual subsector of a general 2-form. This follows the discussion in [56]; however, some caution is warranted. The interplay between the quantisation and self-duality conditions could lead to obstructions, resulting in halving the self-dual source on the right-hand side of (3.12). In that case, the overall normalisation of both propagators would increase by a factor of two.
With the flat-space propagators we are able to determine the anomaly coefficients a₁, a₂, and c. The calculation of b, however, requires the curved-space propagator, where the right-hand side of (3.9) could pick up contributions whose integral does not vanish. Since we do not know how to fix these terms, we cannot determine b.
Note though that we can calculate the contribution of the scalars to the anomaly coefficient b. The propagator of a conformal scalar in a curved background can be expanded in powers of the geodesic distance [17], and the contribution to the anomaly coefficient b is read off as −2/3.
If we give up the requirement of self-duality, we can use the short-distance expansion of an unconstrained 2-form propagator on curved space, which has been computed in [14,16], and again, the Weyl tensor of the background explicitly contributes to the curvature corrections.
Halving that to try to account for self-duality and adding to it the contribution from the scalars, one obtains b = −4/3 [17]. This is in disagreement with the conjecture b = 0 [45] and therefore one may not trust it.
Evaluation of the anomaly
With the propagators at hand, we can compute the expectation value of the surface operator by evaluating the integrals in (3.4). Generically, these integrals are divergent and must be regularised.
In this section we take a rather naive approach of placing a hard UV cutoff on the double integral (3.4), so as to restrict |σ − τ| > ε (where the distance is measured with the induced metric); this is the same regularisation as used in [14]. A different regularisation is employed in [17], where the surface is assumed to be contained within a 5d linear subspace of R⁶ and the two copies of the surface are displaced by a distance ε in the 6th direction. This restriction to R⁵ must still yield the correct answer, since even for surfaces in 4d the geometric invariants in the anomaly (2.3) are independent of each other. Still, in Appendix C we redo the calculation without this assumption, by displacing the two copies of the surface along geodesics in the direction of an arbitrary normal vector field. That approach could be important for the calculation of surface operators in four dimensions, where the restriction to a 3d linear subspace does not allow one to resolve all the anomaly coefficients.
To find the anomalies we only need the short-distance behaviour of the propagators, so we use normal coordinates η a about a point σ on Σ. The notations and required geometry are presented in Appendix B.
Starting from the scalar contribution to (3.4), we use n_i n^i = 1 together with (B.14) and (B.12) to expand the integrand; the integral computing the density of the scalar contribution to log⟨V_Σ⟩ is then (3.17). Using polar coordinates η^a = η e^a(φ), where e is a 2d unit vector, and the angular identities (3.18), which involve the combination δ_{ab}δ_{cd} + δ_{ac}δ_{bd} + δ_{ad}δ_{bc}, we are left with the radial integral, for which we introduce the cutoff ε. To get the expression in the second line we also used the Gauss-Codazzi equation (B.7). The calculation of the contribution of the 2-form field is very similar. Expanding the tensor structure gives (3.20), and in terms of η^a the differential forms read as in (3.21) (see (B.9)). Collecting terms and introducing a radial cutoff as above, we find the 2-form contribution to the anomaly. As discussed above, since we do not know the contribution of the Weyl tensor to the B-field propagator, we cannot determine b^{(1)}. According to the conjecture of [45], however, it should vanish; this relation is the subject of work in progress [19]. Equation (3.23) differs from (2.3) by the absence of the tr P term, which vanishes in flat space. Since H² does not vanish in flat space, it determines a₂ unambiguously, and in curved space H² is necessarily accompanied by 4 tr P, based on the general argument for the form of the anomaly reviewed in Section 2.
Finally, we reiterate that, depending on the form of the quantisation condition, the result for the anomaly coefficients may be multiplied by 2 (see the discussion following (3.14)). In any case, the abelian theory should have surface operators with an integer multiple of iB⁺ − n_i Φ^i in (1.1), and for all of them it is still true that a₂ = −c.
Generalising the scalar coupling
Note that the preceding calculation is applicable regardless of whether the operator is locally BPS or not, so we may relax the condition n² = 1. The anomaly coefficients then acquire an explicit n² dependence, given in (3.25). If we replace n^i → i n^i, we recover the expressions for the surface operator studied in [17]. An operator with n² = 0 was studied in [14], but assuming a non-self-dual 2-form; the anomaly coefficients computed in [14] are half of the ones we obtain by setting n² = 0 in (3.25), due to a difference in the overall normalisation of the propagator. It would be interesting to study this system in the large-n² limit. This is similar to the "ladder" limit of the cusped Wilson loop in N = 4 SYM in 4d first suggested in [63], which is related to a special scaling limit of that theory, dubbed the "fishnet" model, which also has a 6d version [64].
Holographic description at large N
The holographic calculation of the Weyl anomaly for surface operators was pioneered by Graham and Witten in [13]. Here we present a rewriting of their argument, which we also generalise slightly to include operators extended on the S 4 .
Surface operators
The N = (2, 0) theory is described at large N by 11d supergravity on an asymptotically AdS₇ × S⁴ geometry [38],
\[
ds^2 = \frac{L^2}{y^2}\left(dy^2 + g^{(0)} + g^{(1)} y^2 + \dots\right) + \dots ,
\]
such that g^{(0)} is the metric of the dual field theory and g_{S⁴} is the metric of S⁴. The background also includes N units of 4-form flux through the S⁴. The full form of the metric is determined by the supergravity equations of motion in the presence of fluxes and by requiring the geometry to close smoothly in the interior. While the latter requires nonlocal information, the near-boundary expansion is fixed to the required order by local information about the boundary. Following [65, 66], the first term in this expansion was found in [13]. At this order the S⁴ is round, so to leading order the solution to (4.2) takes its simplest form. The holographic description of the surface operators (1.1) is in terms of M2-branes anchored along Σ on the boundary of AdS [4]. Writing Σ̃ for the world-volume of the M2-brane, it has a boundary at y = 0 with ∂Σ̃ = Σ. The expectation value of the surface operators is then given by the minimum of the M2-brane action (4.5) (in Euclidean signature and with all fermionic terms suppressed) [67], where T_{M2} is the tension of the brane, proportional to N, vol_{Σ̃} is the volume form calculated from the induced metric, and A₃ is the pullback of the 3-form potential.
Local supersymmetry
Before studying the M2-brane embeddings, let us note that the M2-brane minimizing (4.5) is also locally supersymmetric. The supergravity fields appearing there sit in the supergravity multiplet, whose supersymmetry transformations involve E_M^M, Ψ_M and A₃, respectively the vielbein, the gravitino and the 3-form potential of F₄ (M = 1, . . . , 11 is the frame index). Using these transformations, the variation of (4.5) can be computed; we denote the coordinates on the world-volume by σ̂^â. The resulting projector is again half-rank, so that the M2-brane locally preserves half of the supersymmetries (16 supercharges). These supercharges can be shown to agree with the field theory BPS condition (3.3) on Σ once we decompose x^M into coordinates on the boundary of AdS, x^μ, and the S⁴ coordinates n^i.
Holographic calculation
To find the saddle points of the action (4.5), we parametrise the M2-brane by y, σ a where σ a are coordinates for Σ. We then use the static gauge to describe the embedding by {u a (y, σ), n i (y, σ)}, where u a are the normal directions to the surface Σ at y = 0. In this setup, the boundary conditions are u a (y = 0, σ) = 0 and n i (y = 0, σ) = n i (σ) (where the right hand side has the n i from (1.1)). Because the metric (4.1) diverges at the boundary of AdS, the volume element on the M2-brane diverges as y −3 , which leads to divergences in the action. Finding the shape of the embedding requires knowledge of the full surface and is generally a hard problem. But since we are only interested in the logarithmically divergent part of the action, it is sufficient to solve the equations of motion for small y. We do this perturbatively following [13], mirroring the solution of the background supergravity equations above.
Using (4.3), we can write the lowest-order terms in the metric in our coordinates normal and tangent to the surface. Here h_{ab} = g^{(0)}_{ab}|_{u=0} is the metric on Σ. Note that away from y = 0 this metric depends on u^a (for y ≠ 0, generically u^a ≠ 0), as in the first line.
To write down the M2-brane action we need the induced metric ĥ_{ab} = ∂_a X^M ∂_b X^N g_{MN} (including also the S⁴ directions). We expand the embedding coordinates as
\[
u^{a'}(y, \sigma) = O(y^2)\,, \qquad n^i(y, \sigma) = n^i(\sigma) + O(y^2)\,. \quad (4.10)
\]
It is easy to check that higher-order terms are not required. Then the S⁴ metric can be replaced with g^{(0)}_{S⁴} = δ_{ij} dn^i dn^j, and the second fundamental form is II^{a'}_{ab} = −½ g^{a'b'} ∂_{b'} g_{ab}. Dropping the explicit O(y), the subscript |_{u=0} and the superscript (0), since all quantities are evaluated on the surface, we find the induced metric (4.11). Its determinant is then
\[
\det\hat h \simeq \frac{L^6}{y^6}\left(1 + \partial_y u^{a'}\partial_y u^{b'} g_{a'b'} - 2 H^{a'} u^{b'} g_{a'b'} + \Big(-\operatorname{tr}P + \tfrac14 (\partial n)^2\Big) y^2\right)\det h\,, \quad (4.12)
\]
while the pullback of the 3-form does not contribute to the divergences. We thus find the action to the required order. At order O(y²), we need only solve for u^{a'}(y), using its equation of motion. The action evaluated at the classical solution is then (4.16), and we see that the anomaly indeed takes the form (2.3). The result is (4.17), where we discarded an irrelevant term proportional to ε⁻² (see the discussion below). This result agrees with the original calculation of [13] and adds to it the coupling to (∂n)². It is also consistent with the explicit calculation of the 1/2-BPS sphere [28], for which the anomaly is −4N. The anomaly coefficients at leading order in N are then given in (4.18). As in the case of Wilson loops in N = 4 SYM in 4d, we expect this holographic description to be correct in the locally BPS case when the scalar couplings satisfy n² = 1. Following [29, 30], the case of n² = 0 should be described by the same surface inside AdS₇, but completely smeared over the S⁴. In this case we find the same result for the geometric anomaly coefficients as above, and, since the corresponding anomaly term vanishes, c^{(N)} does not apply.
Power-law divergence
Note that in addition to the log divergence in (4.17), (4.16) also produces a power-law divergence. While such divergences can be removed by the addition of local counter-terms, in the field theory result (3.23) they cancelled without extra counter-terms (for the locally BPS operator). A more elegant way of eliminating the power-law divergences in this holographic calculation as well follows the example of the locally BPS Wilson loops [5]. A careful treatment of the boundary conditions suggests that the natural action is a Legendre transform of (4.5), which differs from the action we used by a total derivative. This modification does not change the equations of motion, but gives a contribution on the boundary, where it precisely cancels the divergence above.
By looking at the M5-brane metric before the decoupling limit, we can identify the coordinate to use in the transform as r^i = L³ n^i / 2y². We define its conjugate momentum by differentiating with respect to the boundary value of the coordinate (at y = ε), as in (4.20). In the last equality we used the value of the classical action (4.16), undoing the y integration to obtain the classical Lagrangian density. The Legendre-transformed action S̃ then follows, and its last term exactly cancels the power-law divergence in (4.19).
Surfaces with singularities
An interesting class of surface operators that has received some attention recently is surfaces with conical singularities. For these surfaces, it was found that the regularised expectation value typically diverges as [68, 21, 69, 70]
\[
\log\langle V_{\Sigma_c}\rangle \sim A \log^2\epsilon + O(\log\epsilon)\,.
\]
Let us consider a conical defect (in flat space). We allow here also a "conical singularity" in the scalar couplings, which retain s dependence even as r → 0. It is possible to also allow x^μ and n^i to have higher-order terms in r, but since those lead to subleading divergences, they are unimportant. We can try to use the usual formula for the anomaly (2.3) by plugging in the geometric invariants (5.3): R_Σ = Ω δ(r), where Ω is the deficit angle, and κ = γ̈²/|γ̇|² is the curvature of γ. Plugging into (2.3), the Ricci scalar gives a finite contribution, but H² and (∂n)² diverge as r → 0. Introducing a cutoff ε̂ on the r integration gives (5.4). This expression is a bit naive, as we should treat all divergences on the same footing and identify ε̂ = ε. But then we should not use (2.3); rather, we go back one step and regularise the divergences that gave rise to the original log divergence, while also applying the regulator to the r integration. As we show below, this leads to the expression in (5.4) with log ε log ε̂ → ½ log² ε. In both the free-field case and the holographic realisation, this factor of 1/2 is a simple consequence of the usual coefficient of the quadratic term in the Taylor expansion, or in other words of an integral of the form log r d log r.
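Concretely, the factor of 1/2 arises from an elementary integral (our own illustrative step, with r₀ an outer scale of order the size of the surface):
\[
\int_\epsilon^{r_0} \frac{\mathrm{d}r}{r}\,\log\frac{r}{\epsilon} \;=\; \frac{1}{2}\log^2\frac{r_0}{\epsilon} \;=\; \frac{1}{2}\log^2\epsilon + \mathcal{O}(\log\epsilon)\,,
\]
which is precisely an integral of the form log r d log r.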
This factor of 1/2 was noticed already in the calculations of [68,21] and justified in [70] by a careful treatment of the holographic calculation, which is repeated below. We think that the comparison of this to the free-field calculation and the universal nature of our result further elucidates this mismatch from the naive expectation. Our calculation is also more generic, for allowing arbitrary conical singularities and incorporating the scalar singularities too.
We should note, as already observed in [21], that surfaces with "creases", i.e. co-dimension one singularities, do not lead to additional log² divergences, and the expression (2.3) can be immediately applied to them.
Field theory
Here we do not rely on (3.23), but go back to the point where the log arises from an integral of the form (3.19),
\[
\int_\epsilon^{\rho}\frac{\mathrm{d}\eta}{\eta} = -\log\epsilon + \text{finite}\,, \quad (5.5)
\]
where η is a radial coordinate around the point x, and ρ is an IR cutoff related to the overall size of the surface, or at least of a large smooth patch where we defined our local coordinate. Near the cone the smooth patch is bounded by the distance from x to the apex, which we denote by r, and the integral instead gives a logarithm of r/ε. With this careful treatment of the log, we can go back to (2.3), plug in the expressions from (5.3), and integrate over r and s with the same UV cutoff ε to find (5.7).
Holography
The derivation in holography is similar. We first note that conformal symmetry fixes the form of the solution as y(r, s) = r u(s) (5.8). To get to (4.17), we integrate over y, but the conformal ansatz suggests imposing the range ε ≤ y ≤ r u_max. Plugging the curvatures from (5.3) into equation (4.17), we again arrive at the log² divergence with the same 1/2 prefactor as in the field theory (5.7).
Example: circular cone
As a simple example of a singular surface we compute explicitly the anomaly of a cone. Denoting the deficit angle by φ (see figure 1) and including an internal angle θ for the scalar coupling n^i, we parametrise the cone accordingly; the conformal invariants and the resulting divergence then follow explicitly. Notice that as long as the anomaly coefficients satisfy the relation a₂ = −c, which we have shown to hold in the abelian and large-N cases, the anomaly vanishes for configurations with θ = ±φ, which correspond generically to 1/8-BPS configurations.
Figure 1: On the left, the surface wraps a (circular) cone with a deficit angle φ. On the right, the scalar coupling follows a circle at angle θ on S². For a fixed r, we have a curve that simultaneously traces the circles γ(s) and n^i(s).
Conclusion
In this paper we calculated the anomaly coefficients of locally supersymmetric surface operators in the N = (2, 0) theory in 6d, refining and generalising the calculations of [13,17]. We first introduced a new anomaly coefficient c (2.3) arising from non-constant dependence on the internal R-symmetry directions. These are explicit scalar couplings in the abelian theory and motion on S 4 in the holographic realisation. We then presented an explicit calculation for the abelian theory and for the large N limit (using holography). The results are in equations (3.24) and (4.18). Although we are not able to compute the anomaly coefficient b at N = 1 because we do not know the general curved space propagator for the self-dual 2-form, we found the others in both cases.
Making conjectures for all N based on the asymptotics is a fool's errand, which we tread carefully. This is especially true given that the abelian theory is not the same as the A_{N−1} theory at N = 1, since the latter is the empty theory. Nevertheless, in both cases we see that a₂ = −c, and we expect this to hold generally. The argument is based on the BPS Wilson loops of [52], where n^i is parallel to ẋ^μ and which have trivial expectation values. If we uplift them to the 6d theory, we expect to find surface operators with no anomaly (and no finite part either). These operators satisfy H² = (∂n)², and indeed they do not contribute to the anomaly⁵ for a₂ = −c. A proof of this relation, as well as properties of b, based on defect CFT techniques, will be presented elsewhere [19].
Two more results are the formalism for regularising surface operators presented in Appendix C and the expression for the divergences due to conical singularities over arbitrary curves in Section 5.
All our calculations are for a surface operator in the fundamental representation. It is expected that 1/2-BPS surface operators are classified by representations of the A_{N−1} algebra of the theory.⁵ At large N this is proven, since the asymptotically AdS_7 × S^4 solutions of 11d supergravity preserving the symmetry algebra of 1/2-BPS surface operators can be classified in terms of Young diagrams [41,42].
⁵ In the uplift we find only surfaces with trivial topology, so the anomaly vanishes regardless of a_1.
A calculation of anomalies of surface operators in arbitrary representations, based on the bubbling geometries and holographic entanglement entropy, was undertaken in [47]. If we assume b = 0, then for the fundamental representation their result reads as follows, and it is supported by an independent calculation using the superconformal index [48]. In the large N limit, our result [13] indeed agrees with theirs. These calculations do not determine the remaining anomaly coefficients in generic representations. But if we believe the b = 0 conjecture of [45] and our argument above for c = −a_2, this fixes the remaining ones. It would be interesting to reproduce these finite N corrections using other methods, as well as to perform direct holographic calculations for higher-dimensional surface operators.
The anomalies studied here are the most basic properties of surface operators, but finding them is only a first step in understanding these observables and the mysterious theory they belong to. Planar/spherical surface operators preserve part of the conformal group (and with the scalar coupling also half the supersymmetries) and their deformations behave like operators in a defect CFT. A natural next step is to study the defect CFT data: spectrum and structure constants.
Another natural question is the classification of globally BPS surface operators (and local operators within the surface operators) beyond the case of the plane/sphere.
We hope to report progress on these questions in the near future.
A.1 d = 11 Clifford algebra
The 11d Clifford algebra is generated by the set of matrices (Γ_M)_A{}^B satisfying
$$\{\Gamma_M, \Gamma_N\} = 2\,\eta_{MN}\,.$$
Here, for readability, M is used for flat spacetime, unlike (4.6) where it denotes curved spacetime.
The matrices may be chosen such that Γ_0 is antihermitian, Γ_0† = −Γ_0, while the others are hermitian, Γ_M† = Γ_M (M ≠ 0). In addition, there is an orthogonal, real antisymmetric matrix C_{AB} such that Γ_M C = −(Γ_M C)^T. C naturally defines a real structure by relating Ψ and Ψ† as Ψ̄ = Ψ^T C; this is the Majorana condition.
A.2 d = 6 Clifford algebra
An easy way to construct the 6d Clifford algebra is to decompose Γ_M = {Γ_µ, Γ_i} by introducing a chirality matrix Γ_* = Γ_0 Γ_1 Γ_2 Γ_3 Γ_4 Γ_5. In the chiral basis the matrices take a block form in which the 6d algebra splits into γ_µ and γ̄_i. Since γ_µ and γ̄_i commute, they define independent spinor representations. Explicitly, we decompose the spinor index as A = (α ⊕ α̇) ⊗ α̂, so that the indices are carried as (γ_µ)_α{}^β̇, (γ_µ)_α̇{}^β and (γ̄_i)_α̂{}^β̂. The chiral and antichiral representations are related to each other, and the chirality operator gives two additional constraints, involving γ^{[µ} ⋯ γ^{ρ]}, the antisymmetrised product of γ-matrices.⁶ The charge conjugation matrix is used to lower (or raise) spinor indices. The matrix Ω_{α̂β̂} is the real, antisymmetric symplectic metric of sp(4), and c is unitary. A representation of this algebra is given in (A.12).
A.3 Symplectic Majorana condition
In 6d the spinor Ψ decomposes into a chiral and an antichiral 6d spinor. The Majorana condition on Ψ then translates into (A.14), where in the second equality we use the properties of our representation. The inclusion of the symplectic form Ω in (A.14) is the reason these equations are known as the symplectic Majorana condition. The spinors ε_1, ε_2, and ψ in (3.1) are of this type.
B Geometry of submanifolds
In this appendix we assemble the geometry results used throughout the main text and in Appendix C. Sections B.1 and B.2 contain our conventions for Riemann curvature and the definition of the second fundamental form of an embedded submanifold as well as some standard results relating the two. In Section B.3 the second fundamental form is related to the coefficients of the normal coordinate expansion of the embedding.
B.1 Riemann curvature
We adopt the standard convention for the Riemann tensor. It is convenient to split it into a conformally invariant Weyl tensor W_{µνρσ} and the Schouten tensor P_{µν}.
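For reference, the standard splitting (up to sign and normalisation conventions, which may differ from those used in this paper) reads:

```latex
% Standard Weyl--Schouten decomposition of the Riemann tensor in d dimensions.
\begin{align}
R_{\mu\nu\rho\sigma} &= W_{\mu\nu\rho\sigma}
  + g_{\mu\rho}P_{\nu\sigma} - g_{\nu\rho}P_{\mu\sigma}
  - g_{\mu\sigma}P_{\nu\rho} + g_{\nu\sigma}P_{\mu\rho}, \\
P_{\mu\nu} &= \frac{1}{d-2}\left(R_{\mu\nu}
  - \frac{R}{2(d-1)}\,g_{\mu\nu}\right).
\end{align}
```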
B.2 Extrinsic curvature
We define the second fundamental form as a product of two factors. The second factor is the projector onto the components orthogonal to the surface (defined by its embedding x^µ(σ)), while the first factor is the action of the covariant derivative on the (pullback of) ∂x^λ(σ).
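Schematically, and with index placement and conventions that are assumptions rather than a quote of the paper's equation, the definition has the structure:

```latex
% Second fundamental form of the embedding x^mu(sigma): derivative of the pullback,
% projected onto the directions orthogonal to the surface (h_{ab} is the induced metric).
\begin{equation}
\mathrm{II}^{\mu}_{ab} =
\left(\partial_a\partial_b x^{\lambda}
  + \Gamma^{\lambda}_{\rho\sigma}\,\partial_a x^{\rho}\,\partial_b x^{\sigma}\right)
\left(\delta_{\lambda}^{\;\mu}
  - g_{\lambda\nu}\,\partial_c x^{\nu}\,h^{cd}\,\partial_d x^{\mu}\right).
\end{equation}
```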
B.3 Embedding in normal coordinates
Using these standard geometry results, we now derive the expressions needed for (3.16) and (3.21). Unlike in Section 3, we state here the result for a generic curved spacetime M. This allows us to perform the calculation in Appendix C on curved space. Let x^µ and η^a be Riemann normal coordinates on M and Σ about the same point. In terms of these, the embedding Σ → M may be expanded as in (B.9). These coefficients are constrained by the condition that straight lines in normal coordinates correspond to geodesics. In particular, a curve on Σ given by a straight line in η has constant speed, and its curvature in M is normal to Σ at every point, which gives the constraints (B.10). Using (B.4) one easily checks that the second-order coefficient equals the second fundamental form. The geodesic distance between ξ(η) and the origin of the normal frame is found from (B.9). Furthermore, in normal coordinates, the metrics take a standard form, which yields an expansion for the volume factor (B.14).
C Geodesic point-splitting
In this appendix we present an alternative regularisation of (3.4), essentially point splitting, displacing one copy of the surface operator by a distance ε in an arbitrary normal direction ν. This regularisation is used in [16,17], but there the vector ν is taken to be a constant, and therefore the method is only applicable if the operators are restricted to a codimension-one subspace.
The technology used to define this regularisation scheme applies for generic smooth embedded surfaces in a Riemannian manifold, and we present here a curved space calculation, as opposed to Section 3.3, where for brevity we restricted ourselves to flat space. However, we still have to restrict to conformally flat backgrounds, since otherwise we do not have a short-distance expansion for the propagator and therefore still cannot infer the anomaly coefficient b.
As expected, we recover the result (3.23) exactly, and thus verify scheme-independence.
C.1 Displacement map
We can regularise the integral (3.4) by displacing a copy of the surface a distance ε along a unit normal vector field ν. Under that map, which we denote by T, the geodesic distance admits an expansion of the form
$$|T(x^{\mu}(\sigma)) - x^{\mu}(\sigma+\eta)|^{2} = \epsilon^{2} + \eta^{2} + \dots \qquad (C.1)$$
We calculate the higher-order terms in (C.1) explicitly in (C.7), but first we note that the only terms contributing to the divergent part are f^{(3)} and f^{(4)}. To see that, note that the integrals computing the expectation value take the form
$$\int_{0}^{\rho} \frac{\eta^{m+1}\,d\eta}{|T(x^{\mu}(\sigma)) - x^{\mu}(\sigma+\eta)|^{4}}\,, \qquad (C.3)$$
where ρ is an arbitrary but fixed IR cutoff. We can evaluate (C.3) by expanding the integrand in ε. Writing s ≡ η/ε, we obtain an expansion whose terms can be analysed order by order. By application of Faà di Bruno's formula one checks that the terms in brackets of order n contribute to the divergence only if m + n ≤ 2. We can therefore safely ignore higher orders in ε. Only a finite number of terms remains to be computed, and we find that the only divergent integrals (C.3) are those collected in (C.5). The relevant coefficients can be read off the expansion of the geodesic distance up to combined order 4 in η and ε. The second term on the left-hand side of (C.1) can be expanded simply using the embedding (B.9). For the first term, we solve the geodesic equation order by order in the displacement ε to obtain
$$T(x^{\mu}) = x^{\mu} + \epsilon\,\nu^{\mu} - \frac{\epsilon^{2}}{2}\,\Gamma^{\mu}_{\kappa\lambda}\nu^{\kappa}\nu^{\lambda} + \frac{\epsilon^{3}}{6}\left(-\partial_{\nu}\Gamma^{\mu}_{\rho\sigma} + 2\Gamma^{\mu}_{\nu\lambda}\Gamma^{\lambda}_{\rho\sigma}\right)\nu^{\nu}\nu^{\rho}\nu^{\sigma} + O(\epsilon^{4})\,. \qquad (C.6)$$
Combining these expressions, and writing η^a = η e^a(ϕ) as in (3.18) and onwards, the only two non-vanishing relevant coefficients are given in (C.7). The first contributes to a scheme-dependent divergence ε^{−1}, while the second contributes to the anomaly.
C.2 Evaluation of the anomaly
With the displacement map (C.6) in hand, we can evaluate (3.4). The propagators on a conformally flat background can be obtained by considering curved space actions for a conformal scalar and a Maxwell-type 2-form and inverting the kinetic operators order by order, following [17] and [14]; the resulting propagators are given in (C.8). To apply our regularisation, we should replace ξ by (C.1) in the denominator of the propagators before performing the integral over η. A priori, we should also perform the displacement in the numerator, since a term of order O(ε) can contribute to the ε^{−1} divergence by multiplying (C.5a). However, one easily checks that the only terms of that order are accompanied by nonzero powers of η, and therefore do not contribute to the divergence of (3.4). We therefore drop the ε in the numerators of the propagators. The expansion of the numerators is then assembled, as before, from (3.16) and (3.21); in addition, since we are working on curved space, we obtain an extra term at O(η²) explicitly involving tr P from the propagators (C.8). Collecting terms in analogy to Section 3.3, and integrating out the angular coordinate using (3.18), we obtain the scalar contribution
$$\frac{1}{2\pi\,\epsilon^{2}} + \frac{H\cdot\nu}{4\pi\,\epsilon} + \frac{1}{16\pi}\left(2R_{\Sigma} - H^{2} + 4\operatorname{tr}P + 4(\partial n)^{2}\right)\log\epsilon + \text{finite}, \qquad (C.10)$$
while the tensor field yields a contribution involving (−2R_Σ + 3H² + 4 tr P) log ε plus finite terms (C.11). Combining these terms, we find
$$\log V_{\Sigma} = \frac{1}{4\pi}\,\log\epsilon \int_{\Sigma}\operatorname{vol}_{\Sigma}\left(R_{\Sigma} - H^{2} + 4\operatorname{tr}P + (\partial n)^{2}\right) + \text{finite}, \qquad (C.12)$$
which agrees exactly with (3.23). Note that the scheme dependence, which is present in the simple pole of both (C.10) and (C.11), cancels in the final result, and the terms H² and tr P combine into an anomaly term as in (2.3), as required. | 11,246.2 | 2020-03-27T00:00:00.000 | [
"Physics"
] |
On Retargeting the AI Programming Framework to New Hardwares
. Nowadays, a large number of accelerators are being proposed to increase the performance of AI applications, making it a major challenge to enhance existing AI programming frameworks to support these new accelerators. In this paper, we select TensorFlow to demonstrate how to port an AI programming framework to new hardware, namely FPGA and Sunway TaihuLight. FPGA and Sunway TaihuLight represent two distinct and significant hardware architectures for studying the retargeting process. We introduce our retargeting processes and experiences for these two platforms, from the source code to the compilation process. We compare the two retargeting approaches and present some preliminary experimental results.
Introduction
In recent years, AI has moved from research labs to production, due to the encouraging results obtained when applying it to a variety of applications, such as speech recognition and computer vision. With the widespread deployment of AI algorithms, a number of AI processors [1,2] and FPGA accelerators [3,4] have been proposed to accelerate AI applications while reducing power consumption, including DianNao [1], EIE [2], ESE [4], etc. Therefore, retargeting AI programming frameworks to different hardware platforms has become a significant issue. Some popular AI programming frameworks, e.g., TensorFlow/MXNet, have enhanced their fundamental infrastructure for retargetability using compiler technologies. In particular, TensorFlow introduces XLA [5] to make it relatively easy to write a new backend for novel hardware. It translates computation graphs into an IR called "HLO IR", then applies high-level target-independent optimizations, and generates optimized HLO IR. Finally, the optimized HLO IR is compiled into a compiler IR, i.e., LLVM IR, which is further translated to machine instructions of various architectures using the compiler of the platform. Similarly, MXNet introduces the NNVM compiler as an end-to-end compiler [6].
The evolving compiler approach significantly enhances the retargetability of AI programming frameworks. However, it still faces a number of challenges. First, the non-compiler version remains essential, since it guarantees performance by directly invoking the underlying high-performance libraries. Therefore, maintaining the TensorFlow non-XLA and MXNet non-NNVM versions is necessary when retargeting the frameworks to a new platform. Second, the existing compiler approaches rely on an LLVM backend for the AI processors, since the final binary code generation is implemented by the backend compiler. But for emerging AI processors, especially those designed for inference, vendors typically provide only library APIs without compiler toolchains. Therefore, we need to consider the retargetability of non-compiler approaches for AI programming frameworks.
In this paper, we select one representative AI programming framework, TensorFlow, to present our experience of retargeting it to FPGA and Sunway TaihuLight. For FPGA, the architecture is an X86 CPU equipped with an FPGA as an accelerator, so we discuss how to add a new accelerator to TensorFlow. Meanwhile, we also design a set of software APIs for controlling the FPGA from high-level C/C++ code. For Sunway TaihuLight, the processor is a many-core architecture with 260 heterogeneous cores. All these cores are divided into 4 core groups (CGs), with each CG including one big core and 64 little cores. Sunway can be regarded as a chip integrating CPUs (big cores) and accelerators (little cores), so we discuss how to change the CPU type in TensorFlow. We discuss how to retarget TensorFlow to these two distinct architectures and present some preliminary experimental results on FPGA and Sunway TaihuLight. We hope this paper can help programming framework developers retarget TensorFlow to other newly designed hardware.
The rest of this paper is organized as follows: Sections 2 and 3 discuss how to retarget TensorFlow to FPGA and Sunway TaihuLight, respectively. Section 4 demonstrates experimental results. Section 5 discusses the differences between retargeting to FPGA and to Sunway. Section 6 discusses related work. Section 7 concludes.
FPGA Execution Model
A representative approach to utilizing FPGAs is Amazon EC2 F1 [3], which is a compute instance with FPGAs that users can program to create custom accelerators for their applications. The user-designed FPGA can further be registered as an Amazon FPGA Image (AFI) and deployed to an F1 instance.
We also follow this design rule for leveraging FPGAs in AI programming frameworks. In particular, we create an abstract execution model for the FPGA and provide a set of APIs for developers to register an FPGA into the system. In this paper, we use a naive first-in-first-out (FIFO) policy to model the FPGA execution, shown in Figure 1a. Furthermore, task execution on our FPGA is non-preemptive. Our current execution model is similar to GPU kernel execution (without streams). Certainly, designers can create different execution models for FPGAs, and the TensorFlow runtime would have to be adjusted correspondingly.
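As a purely illustrative sketch (class and method names are our own, not part of the actual driver framework described below), a non-preemptive FIFO software task queue could look as follows:

```cpp
// Illustrative non-preemptive FIFO task queue for the FPGA execution model.
#include <mutex>
#include <queue>

template <typename Task>
class FifoTaskQueue {
 public:
  // Tasks are enqueued strictly in arrival order.
  void Submit(const Task& t) {
    std::lock_guard<std::mutex> lock(mu_);
    pending_.push(t);
  }
  // The device drains tasks one at a time; a running task is never preempted.
  bool PopNext(Task* out) {
    std::lock_guard<std::mutex> lock(mu_);
    if (pending_.empty()) return false;
    *out = pending_.front();
    pending_.pop();
    return true;
  }

 private:
  std::mutex mu_;
  std::queue<Task> pending_;
};
```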
FPGA APIs and Implementation
Furthermore, we also provide a set of abstract APIs for accessing FPGA accelerators. The abstract APIs are designed to be standard C functions and data structures, as shown in the top part of Figure 1b (a header sketch is given after the list below). The APIs are:
- FPGA_InitConfig. FPGA resource initialization and configuration.
- FPGA_CopyBufH2D. Copy data from host to device, using DMA.
- FPGA_CopyBufD2H. Copy data from device to host, using DMA.
- FPGA_TaskDesc_t. Data structure for FPGA task description.
- FPGA_CommitTask. Commit a task to the FPGA.
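A minimal header along these lines might look as follows; the exact argument lists, field names, and return codes are illustrative assumptions, not the actual vendor definitions.

```cpp
/* fpga_api.h -- hypothetical sketch of the abstract FPGA API layer. */
#include <stddef.h>
#include <stdint.h>

/* Task descriptor handed to the accelerator (fields are illustrative). */
typedef struct {
    uint32_t op_code;      /* which accelerator function to run  */
    uint64_t input_addr;   /* device address of input buffer     */
    uint64_t output_addr;  /* device address of output buffer    */
    size_t   input_size;   /* bytes of input data                */
    size_t   output_size;  /* bytes of output data               */
} FPGA_TaskDesc_t;

/* Initialize and configure FPGA resources (queues, DMA, monitoring). */
int FPGA_InitConfig(int device_id);

/* Copy data between host and device memory over DMA. */
int FPGA_CopyBufH2D(int device_id, const void* host_src, uint64_t dev_dst, size_t bytes);
int FPGA_CopyBufD2H(int device_id, uint64_t dev_src, void* host_dst, size_t bytes);

/* Commit a task to the FIFO software task queue; blocks until completion
 * in the synchronous execution model described above. */
int FPGA_CommitTask(int device_id, const FPGA_TaskDesc_t* task);
```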
The APIs are implemented coordinately in the operating system (middle part of Figure 1b) and in user-space libraries (top part of Figure 1b). The user-space libraries encapsulate the FPGA accelerators behind the APIs, based on interfaces provided by the FPGA driver framework. The FPGA driver framework interacts with the FPGA hardware via the PCIe bus and consists of four functional components: "PCIe Driver" for handling PCIe device registration and interrupts, "DMA Configure" for DMA memory transfer requests, "Software Task Queue" for the FIFO execution model, and "FPGA Monitor & Management" for monitoring and managing FPGA devices, such as querying FPGA states and task status.
TensorFlow Architecture for Supporting Retargetability
Figure 2a illustrates how TensorFlow executes user-defined dataflow graphs. When the session manager receives a session.run() message, it starts the "computational graph optimization and execution" module, which automatically partitions the dataflow graph into a set of subgraphs and then assigns the subgraphs to a set of worker nodes.
The execution of subgraphs is managed by the "dataflow executor", which is local to the worker node to which the subgraphs are assigned. The dataflow executor schedules operations in the subgraphs to the underlying devices.
(b) Architecture of TensorFlow [7].
Fig. 2: TensorFlow architecture and its execution of user-defined dataflow graphs.
The dataflow executor prepares the input and output data for each kernel invocation and launches the specific kernel via a device executor (e.g., the CPU/GPU Executor in Figure 2a). Figure 2b further depicts the overall architecture of the TensorFlow framework. The modules related to retargeting are the "Device Layer", "Dataflow Executor", and "Kernel Implementation". The "Device Layer" aims to provide a proper abstraction of FPGA resources and to launch FPGA tasks. The "Dataflow Executor" should be aware of the FPGA devices and able to assign operations to them, and "Kernel Implementation" refers to the fundamental operation kernels on the FPGA.
Supporting FPGA in TensorFlow
Step 1. FPGA Device Abstraction. First, we add the FPGA device into the device layer. Two important issues are addressed here. Memory Management: FPGA accelerators are commonly equipped with DDR memory to hold input/output features and/or weights. This memory is treated as a memory pool in our work, and a C-style memory management scheme is provided. Thus, four critical routines (memcpyDeviceToHost, memcpyHostToDevice, malloc, and free) are implemented using the APIs provided in Section 2.2.
Execution Model: The execution model determines how the TensorFlow runtime interacts with underlying devices and must match the nature of the corresponding device. The abstracted FPGA in this paper is a synchronous FIFO device. An FPGA executor is implemented using the APIs defined in Section 2.2.
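As a sketch of how this device-layer glue might look (the class and method names here are illustrative assumptions, since the exact TensorFlow device interfaces differ across versions, and "fpga_api.h" refers to the header sketched in Section 2.2):

```cpp
// Hypothetical sketch: mapping TensorFlow-style device hooks onto the FPGA APIs.
#include <cstddef>
#include <cstdint>
#include "fpga_api.h"  // the abstract API sketched in Section 2.2

class FPGADeviceContext {
 public:
  explicit FPGADeviceContext(int device_id) : device_id_(device_id) {
    FPGA_InitConfig(device_id_);
  }
  // C-style memory management over the on-card DDR memory pool.
  uint64_t Malloc(size_t bytes) { return pool_.Allocate(bytes); }
  void Free(uint64_t dev_ptr) { pool_.Release(dev_ptr); }

  // The critical copy routines reduce to the DMA APIs.
  void MemcpyHostToDevice(uint64_t dst, const void* src, size_t n) {
    FPGA_CopyBufH2D(device_id_, src, dst, n);
  }
  void MemcpyDeviceToHost(void* dst, uint64_t src, size_t n) {
    FPGA_CopyBufD2H(device_id_, src, dst, n);
  }

  // Synchronous FIFO execution model: committing a task blocks until done.
  int Launch(const FPGA_TaskDesc_t& task) {
    return FPGA_CommitTask(device_id_, &task);
  }

 private:
  struct SimplePool {            // placeholder bump allocator for illustration
    uint64_t next = 0;
    uint64_t Allocate(size_t n) { uint64_t p = next; next += n; return p; }
    void Release(uint64_t) {}    // a real pool would track free blocks
  };
  int device_id_;
  SimplePool pool_;
};
```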
Step 2. FPGA Device Runtime. Second, runtime support for the new FPGA device is implemented, including kernel launching and a high-level memory management wrapper.
Kernel Launching. In TensorFlow, the dataflow executor assigns operations to a specific device by invoking the Compute method of the corresponding device, which is set to launch the Compute function of the given kernel.
Besides, a factory class, namely "FPGADeviceFactory", is provided to create and instantiate instances of "FPGADevice".
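A factory in this spirit might be sketched as below, building on the context class sketched above; the base classes and the registration macro are stand-ins for whatever the targeted TensorFlow version actually provides.

```cpp
// Hypothetical sketch of device creation; the commented base classes and the
// registration macro stand in for the corresponding TensorFlow internals.
#include <vector>

class FPGADevice /* : public Device */ {
 public:
  explicit FPGADevice(int id) : ctx_(id) {}
  // The dataflow executor calls Compute(), which launches the kernel's Compute().
  void Compute(/* OpKernel* op, OpKernelContext* context */);

 private:
  FPGADeviceContext ctx_;
};

class FPGADeviceFactory /* : public DeviceFactory */ {
 public:
  // Create one FPGADevice instance per discovered card.
  std::vector<FPGADevice*> CreateDevices(int num_cards) {
    std::vector<FPGADevice*> devices;
    for (int i = 0; i < num_cards; ++i) devices.push_back(new FPGADevice(i));
    return devices;
  }
};
// REGISTER_LOCAL_DEVICE_FACTORY("FPGA", FPGADeviceFactory);  // illustrative
```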
Step 3. FPGA Kernel Implementation. Figure 3 shows an example operation in TensorFlow, where the Compute function takes the input tensor parameters, the target device, and the context. All the input parameters are encapsulated in the OpKernelContext data structure.
When defining an operation, its specific implementation on a device is called a kernel, which is typically implemented on top of libraries. For example, most CPU kernels are implemented via the Eigen library [8], and most GPU kernels are implemented via the cuBLAS or cuDNN libraries. Therefore, when we introduce the FPGA for acceleration, we first define the implementation of operations on the FPGA, which translates into function calls to FPGA_CommitTask defined in the FPGA APIs. After implementing an operation on a new device, we register the new implementation into TensorFlow using REGISTER_OP and REGISTER_KERNEL_BUILDER. Figure 3 shows an example of registering a new operation ZeroOut, which has two input tensor parameters a and b and generates one output tensor c. We specify this information in REGISTER_OP and implement the operation in an OpKernel. Finally, REGISTER_KERNEL_BUILDER is used for registering the kernel.
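The registration might look roughly as follows. REGISTER_OP and REGISTER_KERNEL_BUILDER are TensorFlow's standard custom-op macros, while the kernel body and the FPGA task construction inside Compute are illustrative assumptions rather than the paper's actual code.

```cpp
// Sketch of registering the ZeroOut operation and its FPGA kernel.
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

REGISTER_OP("ZeroOut")
    .Input("a: float")
    .Input("b: float")
    .Output("c: float");

class ZeroOutFpgaOp : public OpKernel {
 public:
  explicit ZeroOutFpgaOp(OpKernelConstruction* ctx) : OpKernel(ctx) {}

  void Compute(OpKernelContext* ctx) override {
    const Tensor& a = ctx->input(0);
    const Tensor& b = ctx->input(1);
    Tensor* c = nullptr;
    OP_REQUIRES_OK(ctx, ctx->allocate_output(0, a.shape(), &c));

    // Translate the operation into an FPGA task and hand it to the accelerator
    // via the abstract API of Section 2.2 (details are illustrative).
    // FPGA_TaskDesc_t task = BuildZeroOutTask(a, b, c);   // hypothetical helper
    // FPGA_CommitTask(/*device_id=*/0, &task);
  }
};

// Bind the kernel to the new device type.
REGISTER_KERNEL_BUILDER(Name("ZeroOut").Device("FPGA"), ZeroOutFpgaOp);
```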
Retargeting TensorFlow to Sunway
In this section, we first briefly introduce the architecture of Sunway processor, and then present our retargeting process.
Sunway Architecture
The Sunway 26010 processor [9] is composed of 4 core groups (CGs) connected via a NoC. Each CG includes one Management Processing Element (MPE) and 64 Computing Processing Elements (CPEs) arranged in an 8-by-8 grid. The MPE and the CPE cluster in one CG share the same memory space. All the MPEs and CPEs run at a frequency of 1.45 GHz.
On the software side, Sunway uses a customized 64-bit Linux with a set of compilation tools, including a native C/C++ compiler and a cross compiler.
For the Sunway processor, we regard the MPEs as CPUs and leverage the CPEs for acceleration. However, the MPEs and CPEs share the same memory space, making explicit data transfers between them unnecessary. Thus, we first retarget the TensorFlow framework, which runs on CPUs, to the Sunway MPEs, and then use the CPEs for acceleration in the retargeted TensorFlow.
Compiling TensorFlow for Sunway MPEs
We have two ways to compile TensorFlow for Sunway. The first is to use the native compiler of the Sunway nodes by submitting the compilation process as a job to Sunway. The second is to cross-compile TensorFlow on an X86 server. We select cross-compilation, since the native compiler is too restricted to compile the large and complex TensorFlow source code. We met a series of obstacles during the retargeting process, and we discuss them here to share our experience of porting a large-scale software package to Sunway TaihuLight.
Static linked library. First, Sunway TaihuLight does not support dynamically linked libraries when the CPEs are to be used. Therefore, we choose to cross-compile TensorFlow into a statically linked library, i.e., libtensorflow.a.
The Bazel compilation tool. TensorFlow is configured to use Bazel as its default build tool, which can generate dynamically linked libraries but does not work well for generating statically linked libraries. Meanwhile, a number of unexpected problems arose when using the cross compiler swgcc within Bazel. Therefore, we switched to Makefiles as our build tool.
The Python support. TensorFlow is tightly coupled with the Python language, which is not supported on Sunway TaihuLight. A number of modules utilize Python-based tools, such as tf.train and tf.timeline. Therefore, we decouple these modules from the TensorFlow framework. As a result, our retargeted TensorFlow on Sunway TaihuLight only supports the C++ programming interface, without support for the Python binding.
Processing protobuf. The Protobuf tool protoc is used both during the compilation of TensorFlow (on the X86 platform) and during the execution of TensorFlow (on the Sunway TaihuLight platform). For this purpose, protoc must be compiled on the X86 platform with both the native X86 gcc and the cross compiler swgcc.
Two-phase compilation. The compilation of TensorFlow is a two-phase process. In the first phase, the X86 gcc compiler is used to generate some tools for the X86 platform, e.g., the X86 protoc, which reads the *.pb files in the TensorFlow source code and generates the corresponding C++ files. In the second phase, the cross compiler swgcc is used to generate the final libtensorflow.a. During this phase, all dependent libraries should be switched to their statically linked versions, e.g., protobuf, libstdc++, libm, etc.
After TensorFlow is cross-compiled successfully, it can run on the MPEs of Sunway TaihuLight. Since Python modules like tf.train are disabled, the ported TensorFlow does not support training. We now have a baseline TensorFlow that runs entirely on the MPEs of Sunway. Operations can be implemented following the steps in Section 2.4. Next, we add CPEs for acceleration. Specifically, the MPEs are responsible for graph creation and optimization, together with task creation and scheduling, while the CPEs execute the computation-intensive kernels, e.g., convolutions.
Using CPEs for Acceleration
We have two approaches for using the CPEs. First, we can make the CPU kernel implementation invoke CPE libraries, which means the MPEs and CPEs are considered together as one device. Alternatively, we can consider the CPEs as individual accelerators, similar to GPUs and FPGAs. In this paper, we select the first approach, as the second approach has already been discussed in Section 2.
To use the CPEs in an operation, consider the steps described in Figure 3. Taking matmul as an example, the original implementation uses Eigen as the math library in the Compute part. We change the math library from Eigen to the SWCBLAS library, i.e., from the Eigen MatMul<CPUDevice> call to an sgemm/dgemm call in SWCBLAS. As the SWDNN library is still being developed, we only use SWCBLAS for implementing the operations in this work. When SWDNN is released, we can use the same approach to change the library from SWCBLAS to SWDNN.
At present, Conv2D, Softmax, and MatMul are implemented this way; other operations can easily be supported once SWDNN is deployed.
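For instance, the core of a MatMul kernel might be rewired from Eigen to a BLAS-style call along the following lines. The exact SWCBLAS entry points are not given in this paper, so a generic cblas_sgemm-style signature and header are assumptions for illustration.

```cpp
// Sketch: replacing the Eigen contraction in MatMul's Compute with a BLAS-style
// sgemm call (SWCBLAS is assumed here to follow the CBLAS convention).
#include <cblas.h>  // the SWCBLAS header name may differ on Sunway

void MatMulOnCpes(const float* A, const float* B, float* C,
                  int M, int N, int K) {
  // C[M x N] = A[M x K] * B[K x N], row-major, alpha = 1, beta = 0.
  cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
              M, N, K,
              1.0f, A, K,
              B, N,
              0.0f, C, N);
}
```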
Hardware Platforms
FPGA Implementation: We implement a custom PCIe-attached acceleration card based on a Xilinx Virtex-7 690T FPGA chip, as shown in Figure 4. The card communicates with the host CPU via a standard PCIe Gen3 x8 interconnect. We use dual off-chip DDR3-1600 SODIMMs with a total capacity of 8 GB as device memory. The Xilinx Vivado 2016.4 toolset is used, and the synthesized core accelerator logic and DMA engine operate at frequencies of up to 200 MHz.
Figure 4 further illustrates the design of our FPGA accelerator. In detail, we implement a unified hardware template of a DNN accelerator with a configurable number of processing elements (PEs) for per-layer-specific operations, such as convolution and fully connected layers. Each processing element is composed of a 1-D array of multiply-and-accumulate (MACC) units; loop tiling and unrolling are leveraged to partition the computation onto specific PEs. An on-chip buffer is also implemented to hold tiled input feature maps. To reduce the external memory bandwidth, temporary results are pushed into the PE buffer. Data movement between the PE array and the on-chip buffer is carefully controlled by the PE controller according to the loop unrolling and tiling strategies.
Sunway TaihuLight: The Sunway TaihuLight system is described in Section 3, and we use one node for evaluation. As we focus on inference, the number of nodes does not matter.
Baseline Platforms: For comparison, we also run these models on a CPU and an NVIDIA GPU. In particular, the CPU is an Intel Xeon E5-2620, which runs at 2.0 GHz and has a main memory of 32 GB. The NVIDIA GPU is a Tesla K40c, which runs at 745 MHz and has 12 GB of global memory.
Results on FPGA Platform
With our retargeted TensorFlow, programmers can use "with tf.device("fpga:0")" statements to use the FPGA, with no other modifications to their source code. Figure 5 shows the overall execution time (data transfer time included) of Cifarnet and Lenet on FPGA, CPU, and GPU. In this paper, we focus on the retargeting process, so the underlying FPGA implementation is not optimized.
Results on Sunway TaihuLight Platform
As we treat the Sunway MPE and CPEs as a CPU, the source code needs no modification and the models can be directly executed on the ported TensorFlow.
Figure 6 shows the overall execution time when using only the MPE, in comparison with the CPU and GPU. Note that the vertical axis is in log scale. Besides, we use only one core of Sunway and of the CPU, whose frequencies are 1.45 GHz and 2.0 GHz, respectively. Therefore, the Sunway MPE performs worse than the CPU.
Figure 7 demonstrates the execution time of one convolution operation (with a filter size of 3x3) on Sunway MPEs and CPEs, in comparison with the CPU. We do not evaluate the overall execution, as some operations are not yet supported on the CPEs. The horizontal axis marks different scales of input feature sizes and input/output channel numbers; e.g., 224*224*(3-16) means the input feature size is 224*224 while the input channel count is 3 and the output channel count is 16. The vertical axis is execution time in log scale. The results show that the CPEs can obtain significant performance improvements, up to 45 times faster than the MPE. Furthermore, in our experiments, only one core group is leveraged (because the SWCBLAS interface is designed for one core group). The performance is expected to improve further when all core groups are utilized and SWDNN is released. Fig. 7: Performance of convolution with 3x3 filter size.
Discussion and Future Work
We have discussed two types of TensorFlow retargeting processes, i.e., to FPGA and to Sunway TaihuLight. In particular, FPGA represents the approach of introducing a new accelerator into TensorFlow, while Sunway TaihuLight represents the approach of changing the CPU architecture in TensorFlow.
Retargeting to a new AI accelerator. Most emerging AI processors will be deployed as accelerators. Thus, our experience of retargeting to FPGA applies to such scenarios. The modification of the device layer is the same as the process for FPGA. The runtime support should be designed by the vendors of AI processors, in correspondence with their execution models. Furthermore, a substantial amount of work is needed to implement hundreds of operation kernels. Even if most AI processors provide machine learning libraries, porting these operation kernels is still time-consuming. We will further explore automatic kernel generation.
Exploiting the computation ability of Sunway TaihuLight. Sunway TaihuLight exhibits performance potential for machine learning; e.g., some preliminary work on SWDNN [14] has been released. To enable more machine learning programs, especially model training, to run on Sunway TaihuLight, a more robust TensorFlow port is necessary. Thus, we will further consider the following issues: the Bazel compilation tool, Python support, and a stable SWDNN library.
Data layout issue. Moreover, the data layout is a significant issue for framework developers. For example, TensorFlow stores tensors in the default NHWC format, but NCHW is the default format for GPU libraries, e.g., cuDNN [15], making it the framework's burden to transform between them. Sunway TaihuLight has not yet finalized the data layout used in SWDNN. When TensorFlow is retargeted to a new platform, the data format should be designed with the hardware and/or library in mind.
Related Work
In recent years, AI has drawn much interest from both researchers and industry, especially DNNs (Deep Neural Networks) [16,17,12,13]. Despite the enormous advances in AI algorithms, researchers have also done extensive work to meet the performance, energy, and programming requirements of DNN applications.
First, from the software perspective, a large number of software tools have been proposed to enable flexible programming of DNN applications, such as TensorFlow [18], Caffe [19], and MXNet [20]. All these tools support general-purpose CPUs and high-performance NVIDIA GPUs, both of which have mature compiler toolchains [21] and highly optimized libraries [15].
Second, from the hardware perspective, a series of domain-specific accelerators [1,2,22,23] have been explored. DianNao [1] leverages loop tiling to efficiently reuse data and supports both DNNs and CNNs. EIE [2] focuses on inference for compressed DNN models. Furthermore, researchers have also explored FPGAs as accelerators [4,24,25] for DNN applications. To the best of our knowledge, all these accelerators lack mature compiler toolchains, for example, a C compiler.
Finally, it is becoming a big challenge to utilize these diverse hardware accelerators in software tools. TensorFlow proposes XLA [5], which leverages compiler technology to transform the high-level dataflow graph into a compiler intermediate representation, i.e., LLVM IR, and relies on hardware-specific backends to generate binary code, e.g., NVPTX for NVIDIA GPUs. Similarly, MXNet introduces NNVM [6], which also makes use of a compiler backend. However, these compiler-based approaches require a mature compiler backend, which is rarely available for AI processors. Thus, this work explores a non-compiler approach to retargeting software frameworks to diverse AI hardware. Besides, [26] proposes an NN compiler that transforms a trained NN model into an equivalent network that can run on specific hardware, which sheds some light on automatic retargeting of AI frameworks.
Conclusion
We have presented our experience of retargeting TensorFlow to different hardware, i.e., FPGA and Sunway, together with some preliminary evaluation results using popular DNN models. We have also investigated the differences between FPGA and Sunway with respect to retargeting.
Fig. 1: (a) Execution model of FPGA. (b) APIs of FPGA.
Fig. 3: An example of implementing an operation in TensorFlow. | 4,674.6 | 2018-11-29T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Contributions to Integral Nuclear Data in ICSBEP and IRPhEP since ND 2019
. The status of the two neutronics international benchmark projects sanctioned by the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP), was last directly discussed with the international nuclear data community at the 14th International Conference on Nuclear Data for Science and Technology (ND2019) in Beijing, China. Since ND2019, the quantity of available integral benchmark experiment data has increased. The primary purpose of the ICSBEP and IRPhEP is to provide extensively peer-reviewed benchmark data to the international nuclear community in support of validation and testing of nuclear data and models. A total of 28 countries have contributed to the past and continued success of these projects as benchmark evaluations, technical reviews, and experimental data using their own time and resources: 26 to the ICSBEP and 25 to the IRPhEP. Key contributions to the handbooks over the past three years can only be highlighted within this paper. Full technical details and benchmark experiment descriptions can be located within the benchmark reports distributed within recent editions of the handbooks.
Introduction
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) [1] and the International Reactor Physics Experiment Evaluation Project (IRPhEP) [2] continue to represent the gold standard for neutronics benchmark data as sanctioned activities of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA). The status of these two international benchmark projects was last provided to the international nuclear data community at the 14 th International Conference on Nuclear Data for Science and Technology (ND2019) in Beijing, China [3] along with a summary of best practices observed during the benchmark evaluation process [4]. These projects enable international success in criticality and reactor physics safety, neutronics code and nuclear data validation, methods development, experiment and reactor design, licensing, training, and education [5]. The contributions towards each of these projects are concatenated within their respective handbooks: the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) [6] and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [7]. The handbook covers are shown in Fig. 1 and Fig. 2, respectively.
A total of 28 countries have contributed to the past and continued success of these projects: 26 to the ICSBEP and 25 to the IRPhEP. This paper only briefly summarizes the latest benchmark contributions over the past three years. The full technical details and benchmark experiment descriptions, including sample calculations, can be found within the benchmark reports in the most recent editions of each of these handbooks. Many evaluations are also periodically updated to correct errors, incorporate additional evaluated data, or further clarify the content based upon handbook-user feedback. Summaries of various revisions incorporated with the annual releases of the handbooks were previously published [8,9,10,11,12]. It should also be noted that some benchmark evaluations are included within both handbooks because of their applicability to both criticality safety and reactor physics applications.
Latest ICSBEP Contributions
The 2021 edition of the ICSBEP Handbook now includes 587 evaluations with benchmark specifications for 5,098 critical, near-critical, or subcritical configurations; 45 criticality-alarm-placement/shielding configurations with multiple dose points apiece (contained within 7 evaluations); and 237 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications (contained within 10 evaluations). An additional 838 configurations were deemed unacceptable for supporting high-quality criticality safety validation needs; however, the experimental data have been evaluated and preserved to prevent duplication of effort and to identify gaps in the existing data to be resolved with improved experimental measurements. A summary of the handbook contents is provided in Table 1, with a breakdown of contributions by country shown in Fig. 3.
Access to the ICSBEP Handbook is available to OECD NEA member countries and recent handbook contributors from non-member countries; requests can be made using the following website: https://oe.cd/ICSBEP.
The Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments (DICE) tool was developed to facilitate use and searching of the extensive ICSBEP database [13]; it can also be accessed online: https://oe.cd/nea-dice.
Latest IRPhEP Contributions
The 2021 edition of the IRPhEP Handbook now contains data from 57 unique nuclear facilities, with evaluations containing benchmark specifications for 169 experimental series. Four of the 169 evaluations are draft benchmark specifications yet to be formally adopted into the handbook. Draft evaluations typically represent incompletely evaluated experiments, where the data have been preserved and made available for public use. A summary of the handbook contents is provided in Table 2, with a breakdown of contributions by country shown in Fig. 4. Access to the IRPhEP Handbook is available to OECD NEA member countries and recent handbook contributors from non-member countries; requests can be made using the following website: https://oe.cd/IRPHE. Similarly, the IRPhEP Database and Analysis Tool (IDAT) was prepared for use with the IRPhEP Handbook [14] and is also available online: https://oe.cd/idat. Since ND2019 there have been a total of 10 new benchmark evaluations contributed to the IRPhEP Handbook.
The new contributions include one pressurized water reactor (PWR), one gas cooled fast reactor (GCFR), two light water moderated reactor (LWR), one molten salt reactor (MSR), one space reactor (SPACE), and four fundamental physics reactor (FUND) benchmarks.
Gas Cooled Fast Reactors (GCFR)
ZPR-GCFR-EXP-001, ZPR-9/29: Gas Cooled Fast Reactor Critical Experiments - Phase II [15] and the Spent Fuel Composition (SFCOMPO) database [16]. There is no shortage of international opportunities for involvement via the OECD NEA. Contributions include the provision of additional experimental data to evaluate, evaluation of experimental data to prepare new benchmark reports, service as technical reviewers, and programmatic or financial support. Additionally, as users of these handbooks have questions or identify possible errors in the benchmark reports, they are encouraged to inform the leadership of these projects so that improvements can be made in future revisions of the handbooks.
Conclusions
Hundreds of contributors from 28 different countries have combined their efforts to enable the continued success of the ICSBEP and IRPhEP. The handbooks from these two projects continue to grow and are utilized worldwide to support research activities in government, industrial, commercial, and educational environments. The high-quality integral benchmark data within these handbooks support nuclear and criticality safety, as well as enable testing of nuclear data for contemporary and future needs. | 1,458.8 | 2020-01-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
A Novel Analytical LRPSM for Solving Nonlinear Systems of FPDEs
: This article employs the Laplace residual power series approach to study nonlinear systems of time-fractional partial differential equations with the time-fractional Caputo derivative. The proposed technique is based on a new fractional expansion of the Maclaurin series, which provides a rapidly convergent series solution where the coefficients of the proposed fractional expansion are computed with the limit concept. The nonlinear systems studied in this work are the Broer-Kaup system, the Burgers' system of two variables, and the Burgers' system of three variables, which are used in modeling various nonlinear physical applications such as shock waves, wave processes, vorticity transport, dispersion in porous media, and hydrodynamic turbulence. The results obtained are reliable, efficient, and accurate with minimal computations. The proposed technique is analyzed by applying it to three attractive problems where the approximate analytical solutions are formulated as rapidly convergent fractional Maclaurin formulas. The results are studied numerically and graphically to show the performance and validity of the technique, as well as the impact of the fractional order on the behavior of the solutions. Moreover, numerical comparisons are made with other well-known methods, showing that the results obtained by the proposed technique are more accurate. Finally, the obtained outcomes and simulation data show that the present method provides a sound methodology and a suitable tool for solving such nonlinear systems of time-fractional partial differential equations.
Introduction
Fractional-order systems have acquired a lot of attention and interest in various engineering and scientific fields as popular mathematical models used to describe real-world physical phenomena [1][2][3][4][5]. Fractional calculus provides a valuable instrument for describing the development of complicated dynamical systems with long-term memory effects. In contrast to ordinary derivatives, defining the fractional-order derivatives of a specific function necessitates knowledge of its complete history. Such a non-local feature, i.e., the memory effect, has made it much more practical to describe various real-world physical systems using fractional differential equations. Investigating the dynamics, including complexity, chaos, stability, bifurcation, and synchronization, of these fractional-order systems has recently become an interesting research field in nonlinear sciences [6][7][8][9][10][11][12][13]. In order to study the dynamic behavior of real-world physical systems, it is essential to determine how their solution trajectories change under slight perturbations. Therefore, developing various numerical techniques to analyze and simulate the systems' nonlinear dynamics is important. Concerning fractional derivatives, analytic-numeric approaches to fractional calculus frequently depend on versions of the Riemann-Liouville, Caputo, Grunwald-Letnikov, Riesz, or other definitions, which were discussed in previous studies during the past few years [14][15][16]. This study, however, uses Caputo's approach to fractional differentiation, benefiting from the fact that the initial conditions of fractional partial differential equations (FPDEs) with Caputo derivatives take the same conventional form as in the integer-order case.
Differential equations (DEs) can be used for modeling many chemical, biological, and physical phenomena. Because FPDEs have a significant impact on many applied disciplines, particularly nonlinear ones such as fluid flow, biological diffusion of populations, dynamical systems, control theory, electromagnetic waves, etc., there has been a growing interest in them in recent years [17][18][19][20][21]. Most scientific phenomena in various disciplines such as physics, biological systems, and engineering are nonlinear problems; therefore, it might be challenging to find their exact solutions, e.g., physical problems are typically modeled by utilizing higher nonlinear FPDEs, thereby finding exact solutions for these problems is quite challenging. Thus, numerical as well as approximate methods must be employed. Numerous useful techniques were used for solving linear and nonlinear FPDEs, including the variational iteration technique, the Adomian decomposition technique, the homotopy analysis technique, the homotopy perturbation technique, and the fractional residual power series technique [22][23][24][25][26][27][28].
The fractional power series method (FPSM) has been employed to solve several classes of differential and integral equations of the fractional order if the solution of the equation can be extended into a fractional power series [29]. Moreover, FPSM is a fast and easy method utilized to determine the fractional power series solution coefficients because if we compare the computational effort required to compute the solutions of the FPDEs in FPSM with other methods, it becomes clear that it is much less. Moreover, the results are much better, as the speed of implementation on mathematical packages helps to obtain the results in less time and with more accuracy, especially in non-linear problems [30][31][32]. Recently, the FPSM has received the attention of many researchers, whereby various fractional integral and differential equations were investigated successfully by using FPSM, involving fractional Fokker-Planck equations [33], Sawada-Kotera-Ito, Lax, and Kaup-Kupershmidt equations [34], fractional Fredholm integrodifferential equation of order 2β arising in natural sciences [35]. The Laplace transform (LT) technique represents a simple technique for solving several kinds of linear differential integral and integrodifferential equations, as well as a specific class of linear FPDEs [5]. Solving linear DEs by LT technique involves three steps. Transforming the main DEs into the Laplace space represents the first step of this process. Solve the new equation algebraically in the Laplace space in the second step. The last step involves transforming back the obtained solution in the previous step into the initial space, which solves the problem at hand [36].
Overall, there are no semi-approximate or conventional analytical methods that can produce accurate closed-form or approximate solutions for nonlinear FPDE systems. Accordingly, there is a pressing need for efficient numerical methods so that accurate approximate solution can be found for these models for extended periods. Motivated by the above-mentioned discussion, designing an innovative iterative algorithm to produce analytical solutions to the nonlinear FPDE systems is the main aim of our study. The motivation of this study is to present an analytical method called LFPSM to solve a nonlinear system of FPDEs. To specify the efficacy and accuracy of this method, we apply it to solve three nonlinear systems of FPDEs and compare the results obtained with the exact solutions and solutions obtained by other methods. According to our best knowledge, the proposed method has not been applied to find analytical solutions to Broer-Kaup and Burgers' systems of fractional orders in the literature, which intensely motivated this work.
This study primarily aims to generate accurate approximate solutions to nonlinear FPDE systems in the Caputo sense, which are subject to proper initial conditions by using an innovative analytical algorithm. This algorithm is called Laplace FPSM, which has been suggested and proved in [37]. It is worth mentioning that this newly introduced method relies on transforming the considered equation into the LT space so that a sequence of Laplace series solutions to the new equation form is established, and then the solution to the considered equation can be established by utilizing the inverse LT. Without perturbation, linearization, or discretization, this innovative method can be applied to generate the FPS expansion solutions for both linear and nonlinear FPDEs [38,39]. Furthermore, this technique, unlike the conventional FPSM, does not necessitate matching the corresponding coefficients terms nor the utilization of a relation of recursion. The technique offered is based on the limit concept for finding the variable coefficients. Unlike FPSM, which needs numerous times to compute different fractional derivatives in the steps of the solution, only a few computations are needed to determine the coefficients specified. Therefore, this proposed method has the capability of yielding closed-form solutions, in addition to accurate approximate solutions, by involving a fast convergence series.
The rest of the article is organized as follows. A review of some necessary definitions, properties, and theorems concerning fractional calculus, the Laplace transform, and the Laplace fractional expansion is presented in Section 2. The methodology for solving a system of nonlinear time-FPDEs by the Laplace FPSM is investigated in depth in Section 3. In Section 4, the Broer-Kaup (BK) system of nonlinear time-FPDEs and two Burgers' systems of nonlinear time-FPDEs are solved to show that our approach is accurate and applicable. The results are discussed graphically and numerically in Section 5. Finally, Section 6 presents the conclusions.
Preliminary Concepts
This section is devoted to overviewing the essential definitions and theorems of fractional differentiation, in addition, to giving a brief for some preliminary definitions and necessary theorems regarding LT, which will be used in sections three and four.
Definition 1. For n ∈ N and α ∈ R⁺ with n − 1 < α ≤ n, the time-fractional derivative in the Caputo sense of the real-valued function U(η, t) is defined as [3]
$$D_t^{\alpha}U(\eta,t) = I_t^{\,n-\alpha}D_t^{n}U(\eta,t),$$
where D_t^n = ∂^n/∂t^n and I_t^{α} is the Riemann-Liouville fractional integral operator, which is given by
$$I_t^{\alpha}U(\eta,t) = \frac{1}{\Gamma(\alpha)}\int_0^t (t-\tau)^{\alpha-1}\,U(\eta,\tau)\,d\tau,\qquad \alpha>0.$$
Proof. The proof is in [38].
Proof.
The proof is in [38].
If the corresponding condition holds for all k ∈ N and 0 < t < T < 1, then the series of numerical solutions converges to the exact solution [39].
The Methodology of Laplace RPSM
In this part, we present the fundamental idea of the Laplace RPSM for solving systems of time-FPDEs with initial conditions. Our strategy is to couple the Laplace transform with the RPS approach. More precisely, consider the system of FPDEs with initial conditions of the form (1), where A_1, A_2 are two linear or nonlinear operators, U(η, t) = (U_1(η, t), U_2(η, t), . . . , U_n(η, t)) is the unknown vector function to be determined, and η = (η_1, η_2, . . . , η_m) ∈ R^m, n, m ∈ N. Here, D_t^α refers to the time-fractional derivative of order α ∈ (0, 1], in the Caputo sense. To build the approximate solution of (1) by using the Laplace RPSM, one can follow this procedure: Step 1: Take the LT of both sides of (1) and employ the initial data of (1); relying on Lemma 2, part (2), we obtain (2). Step 2: Based on Theorem 1, we suppose that the approximate solution of the Laplace equation (2) has the Laplace fractional expansion (3), and the k-th Laplace series solution takes the form (4). Step 3: Define the k-th Laplace fractional residual function of (2) as in (5), and the Laplace fractional residual function of (2) as in (6). As in [37][38][39], one useful fact about the Laplace residual function, which is fundamental in constructing the approximate solution, is that lim_{k→∞} LRes_{U,k}(η, s) = LRes_U(η, s), for η ∈ I and s > δ ≥ 0.
Step 4: Substitute the k-th Laplace series solution (4) into the k-th Laplace fractional residual function (5).
Step 5: Solve the relation lim_{s→∞} s^{kα+1} LRes_k(η, s) = 0 for the unknown k-th coefficient functions.
Step 6: The approximate solution U_{j,k}(η, t) of the main Equation (1) is obtained by applying the inverse Laplace transform operator to both sides of the obtained Laplace series solution.
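Schematically, and with index placement that is an assumption for readability rather than a quote of equations (3)-(5), the expansion and the coefficient-extraction rule used throughout the examples can be written as:

```latex
% Laplace fractional expansion of the transformed solution and the limit rule
% that fixes its coefficients (alpha is the Caputo order, s the Laplace variable).
\begin{align}
\mathcal{U}_j(\eta,s) &= \frac{u_{j,0}(\eta)}{s}
  + \sum_{k=1}^{\infty}\frac{f_{j,k}(\eta)}{s^{k\alpha+1}},
  \qquad s>\delta\ge 0, \\
0 &= \lim_{s\to\infty} s^{\,k\alpha+1}\,
  \mathrm{LRes}_{j,k}(\eta,s), \qquad k=1,2,\dots
\end{align}
```

so that each coefficient f_{j,k} is obtained from a single algebraic limit rather than repeated fractional differentiation.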
Numerical Examples
In this section, we show that the Laplace RPSM is efficient and applicable by testing it on three nonlinear time-FPDE systems. It should be noted that all numerical and symbolic calculations are performed using the Mathematica 12 software package.
Example 1. Consider the following Broer-Kaup system of nonlinear time-FPDEs: where ∈ (0, 1] and (x, ) ∈ R × [0, 1]. The exact solutions when By applying the LT operator on (7) and using the second part of Lemma 2 and the ICs of (7), the Laplace fractional equations are: where According to the last discussion of the proposed method, the k − th Laplace series solutions, U k (x, s) and k (x, s) for (8) are expressed as: Hence, the k − th Laplace fractional residual functions of (8) is defined as: The 1 − st Laplace fractional residual functions can be carried out by letting k = 1, in (10): To find the 1 − st Laplace series solution of (8), we simply take the next process lim s→∞ s x+1 L Res U 1 (x, s) , L Res 1 (x, s) = (0, 0), which yields that 1 (x) = −2sec h 2 (x) and ℊ 1 (x) = 4tanh(x)sec h 2 (x). So, the 1 − st Laplace series solutions of (8) are: For k = 2, in (10) the 2 − nd Laplace residual functions can be written as: To find the 2 − nd Laplace series solution of (8), we simply find out the next process lim s→∞ s 2 +1 L Res U 2 (x, s) , L(Res x 2 (x, s)) = (0, 0), and by solving limits, we get . So, the 2 − nd Laplace series solution of (8) could be expressed as: Similarly, for k = 3, we have: By solving lim . It yields that: . So, the 3 − rd Laplace series solution of (8) could be written as: Using Mathematica, we can perform the aforesaid steps for an arbitrary k, and using the fact lim . Thus, the k − th Laplace series solution of (8) could be reformulated by the following fractional expansions: Finally, by applying the inverse Laplace transform for the obtained expansions (17), we conclude that the k − th approximate solution of the time-fractional nonlinear system (7) can be formulated as: When k → ∞ and = 1 in (18), we obtain the Maclaurin series expansions of the closed form: and which is totally in agreement with the exact solution. Example 2. Consider the Burgers' system of nonlinear time fractional IVP: where By taking the Laplace transform operator on both sides of (20) and using the second part of Lemma 2 and the initial conditions of (20), the Laplace fractional equations will be: where According to the last discussion of the proposed method, the k − th Laplace series solutions, U k (x, s) and k (x, s) for (21) are expressed as: As well we define the k − th Laplace residual functions of (21) are: By letting k = 1, in (23), the 1 − st Laplace residual functions are: To find the 1 − st Laplace series solution of (21), we simply take the next process lim s→∞ s +1 L Res U 1 (x, s) , L Res 1 (x, s) = (0, 0), which yields that 1 (x) = − sin(x) and ℊ 1 (x) = − sin(x). Hence, the 1 − st Laplace series solutions of (21) are: By letting k = 2, in (23), the 2 − nd Laplace residual functions are: To find the 2 − nd Laplace series solution of (21), we simply find out the next process lim s→∞ s 2 +1 L Res U 2 (x, s) , L Res V 2 (x, s) = (0, 0), and by solving limits, we obtain 2 (x) = sin(x) and ℊ 2 (x) = sin(x). Hence, the 2 − nd Laplace series solutions of (21) are: Similarly, for k = 3, we have: By solving lim s→∞ s 3 +1 L Res U 3 (x, s) , L Res V 3 (x, s) = (0, 0). It yields that: 3 (x) = − sin(x) and ℊ 3 (x) = − sin(x). Hence, the 3 − rd Laplace series solutions of (21) are: Using Mathematica, we can process the above steps for any k, and by the fact that lim s→∞ s k +1 L Res U k (x, s) , L Res V k (x, s) = (0, 0), one can obtain that k (x) = (−1) k sin(x) and ℊ k (x) = (−1) k sin(x). 
Thus, the k − th Laplace series solutions of (21) could be formulated on the fractional expansion: In the end, we take the inverse LT for the obtained expansions (30) to get that the k − th approximate solutions of the nonlinear system of time-FPDEs (20) have the form: When k → ∞ and = 1 in (31), the Maclaurin series expansions of the closed forms are: U and which is totally in agreement with the exact solution. Example 3. Consider the Burgers' system of nonlinear time-FPDEs: By taking the LT operator on both sides of (33) and using the second part of Lemma 2 and the ICs of (33), the Laplace fractional equations will be: where According to the last discussion of the proposed method, the k − th Laplace series solutions U k (x, y, s), V k (x, , s) and W k (x, , s) for (34) are expressed as: As well, the k − th Laplace fractional residual functions of (34) are defined as: For k = 1, in (36), the 1 − st Laplace residual functions are expressed as: To find the 1 − st Laplace series solution of (34), we simply take the next process lim s→∞ s +1 L Res U 1 (x, , s) , L Res V 1 (x, , s) , L Res W 1 (x, , s) = (0, 0, 0), which yields that 1 (x, ) = −e x+ , ℊ 1 (x, ) = e x− and f 1 (x, ) = e −x+ . Hence, the 1 − st Laplace series solutions of (34) are: For k = 2, in (36), the 2 − nd Laplace residual functions are: To find the 2 − nd Laplace series solution of (34), we simply find out the next process lim s→∞ s 2 +1 L Res U 2 (x, , s) , L Res V 2 (x, , s) , L Res W 2 (x, , s) = (0, 0, 0), and by solving limits, we get 2 (x, ) = e x+ , ℊ 2 (x, ) = e x− and f 2 (x, ) = e −x+ . Hence, the 2 − nd Laplace series solutions of (34) are: Similarly, for k = 3, we have: By solving lim it yields that: x 3 (x, y) = −e x+y , x 3 (x, x) = e x−x and f 3 (x, x) = e −x+x . Hence, the 3 − rd Laplace series solutions of (34) are: Using Mathematica, we can process the above steps for any k, and by the fact that lim s→∞ s k +1 L Res U k ( , , s) , L Res ∨ k ( , , s) , L Res W k ( , , s) = (0, 0, 0), one can obtain that k ( , ) = (−1) k e + , ℊ k ( , ) = e − and f k ( , ) = e − + . Thus, the kth-Laplace series solutions of (34) could be formulated by the fractional expansions: In the end, we take the inverse LT for the obtained expansions (43) to conclude that the k − th approximate solutions of the nonlinear systems of time-FPDEs (33) have the form: and which is totally in agreement with the exact solution.
Graphical and Numerical Results
This section deals with the validity and efficiency of the Laplace RPSM for the systems of time-FPDEs discussed in Examples 1-3, using graphical representations and tabulated data for the obtained approximate and exact solutions.
The calculated absolute error functions demonstrate the accuracy of the Laplace RPSM. Tables 1-3 list several values of the approximate and exact solutions, together with the absolute errors, for the systems of time-FPDEs (7), (20), and (44) at selected grid points of the domain. The tables show that the approximate solutions are in close agreement with the exact solutions, which confirms the performance and accuracy of the Laplace RPSM; this accuracy is reached using only a few Laplace RPS iterations. Further, numerical simulations of the results for the problems studied are carried out at various values of α, as illustrated in Tables 4-6. Numerical comparisons are then established to validate the obtained approximate solutions. Table 7 compares the absolute errors of the obtained approximate solutions for the system of time-FPDEs (7) at α = 1 with the absolute errors of the approximate solutions generated by the MGMLFM [40], while Tables 8-10 compare the obtained approximate solutions for the systems of time-FPDEs (7), (20), and (44), respectively, with previous results generated by the existing MGMLFM [40] and FNDM [41] methods at various values of α. As is evident from these comparisons, the results obtained by the Laplace RPSM approach the exact solutions faster than those of the aforementioned methods. The 3D behavior of the approximate solutions of the time-FPDEs (7), (20), and (44) obtained by the Laplace RPSM is shown in Figures 1-3, respectively, at various values of α and is compared with the exact solutions on their domains. From these figures it can be seen that the geometric behaviors agree and match each other closely, particularly when the integer-order derivative is considered. Moreover, Figures 4 and 5 demonstrate the behavior of the obtained Laplace RPS solutions for the systems of time-FPDEs (7) and (20) at various values of α. It is observed from these figures that the Laplace RPS approximate solutions approach the solutions at α = 1, which reinforces the effectiveness of the proposed method.
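As a concrete illustration of how such absolute errors can be computed, the following Python sketch evaluates the truncated Laplace RPS series for the system of Example 2, using the coefficient pattern stated there (h_k(x) = g_k(x) = (−1)^k sin(x)) and the standard fractional power-series form in time. The reference value sin(x)·e^{−t} used at α = 1 is the Maclaurin closed form implied by that pattern; the grid points are illustrative and not necessarily those of the corresponding tables.

```python
# Illustrative sketch (not the authors' code): K-th truncated Laplace RPS series for
# Example 2, u_K(x,t) = sum_{n=0}^{K} (-1)^n sin(x) t^(n*alpha) / Gamma(n*alpha + 1),
# compared at alpha = 1 with the implied closed form sin(x) * exp(-t).
import math

def u_approx(x, t, alpha, K):
    """K-th truncated Laplace RPS approximation of u (and v) in Example 2."""
    return sum(((-1) ** n) * math.sin(x) * t ** (n * alpha) / math.gamma(n * alpha + 1)
               for n in range(K + 1))

def u_closed_form_integer_order(x, t):
    """Closed form obtained when alpha = 1 (Maclaurin limit of the series)."""
    return math.sin(x) * math.exp(-t)

if __name__ == "__main__":
    alpha, K = 1.0, 3           # fractional order and truncation level
    for x in (0.2, 0.4, 0.6):   # illustrative grid points
        for t in (0.2, 0.5):
            approx = u_approx(x, t, alpha, K)
            exact = u_closed_form_integer_order(x, t)
            print(f"x={x:.1f} t={t:.1f}  u_3={approx:.8f}  exact={exact:.8f}  "
                  f"abs.err={abs(exact - approx):.2e}")
```

The same routine can be evaluated for 0 < α < 1, which is how analogous entries at non-integer orders could in principle be generated.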
Conclusions
This investigation of time-FPDEs with initial conditions constructs a proper framework for the mathematical modeling of several fractional problems that appear in physical and engineering applications. The current work has introduced analytical and approximate solutions for known systems of nonlinear time-FPDEs by applying the Laplace RPSM. Three nonlinear time-FPDE systems, including the Broer-Kaup and Burgers' systems, have been investigated using Caputo time-fractional derivatives. The exact and the Laplace RPS solutions have been displayed numerically and graphically at various values of the fractional order α over (0, 1]. The analysis of the simulation results revealed that the Laplace RPS solutions are in close agreement with each other, as well as with the exact solutions at the integer order α = 1, which confirms the performance of the proposed method. Numerical comparisons of the obtained results with results previously calculated by other numerical methods, such as the modified generalized Mittag-Leffler function method (MGMLFM) [40] and the fractional natural decomposition method (FNDM) [41], have been carried out and indicate the high accuracy and effectiveness of the Laplace RPSM. Consequently, the analysis of the attained results and their simulations confirms that the Laplace RPSM is a simple, systematic, robust, and efficient instrument for generating analytical and approximate solutions of several fractional physical and engineering problems with fewer computations and iteration steps.
Data Availability Statement: Not applicable.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,297.6 | 2022-11-04T00:00:00.000 | [
"Mathematics",
"Engineering",
"Physics"
] |
EML4-ALK fusion protein in Lung cancer cells enhances venous thrombogenicity through the pERK1/2-AP-1-tissue factor axis
Background Accumulating evidence links the echinoderm microtubule-associated protein-like 4 (EML4)-anaplastic lymphoma kinase (ALK) rearrangement to venous thromboembolism (VTE) in non-small cell lung cancer (NSCLC) patients. However, the corresponding mechanisms remain unclear. Methods High-throughput sequencing analysis of H3122 human ALK-positive NSCLC cells treated with an ALK inhibitor or dimethyl sulfoxide (DMSO) was performed to identify coagulation-associated differential genes between EML4-ALK fusion protein-inhibited cells and control cells. Subsequently, we confirmed its expression in NSCLC patients' tissues and in the plasma of a subcutaneous xenograft mouse model. An inferior vena cava (IVC) ligation model was used to assess clot formation potential. Additionally, pathways involved in tissue factor (TF) regulation were explored in the ALK-positive cell lines H3122 and H2228. Statistical significance was determined by Student's t-test and one-way ANOVA using SPSS. Results Sequencing analysis identified a significant downregulation of TF after inhibiting EML4-ALK fusion protein activity in H3122 cells. In clinical NSCLC cases, TF expression was increased, especially in ALK-positive NSCLC tissues. Meanwhile, H3122 and H2228 cells with high TF expression exhibited shorter plasma clotting times and higher TF activity in cell culture supernatant than ALK-negative H1299 and A549 cells. Mice bearing H2228 tumors showed a higher concentration of tumor-derived TF and TF activity in plasma and the highest adjusted IVC clot weights. Limiting EML4-ALK protein phosphorylation downregulated the extracellular signal-regulated kinase 1/2 (ERK1/2)-activator protein-1 (AP-1) signaling pathway and thus attenuated TF expression. Conclusion EML4-ALK fusion protein may enhance venous thrombogenicity by regulating coagulation factor TF expression, with potential involvement of the pERK1/2-AP-1 pathway in this process. Supplementary Information The online version contains supplementary material available at 10.1007/s11239-023-02916-5.
Introduction
In recent years, accumulating studies have indicated a close link between genetic alterations and the occurrence of venous thromboembolism (VTE) in patients with malignant tumors [1-3]. Our team also explored the relationship between driver oncogene alterations and the occurrence of VTE events in non-small cell lung cancer (NSCLC) patients through a prospective cohort study. The results showed that anaplastic lymphoma kinase (ALK) gene rearrangement in NSCLC conferred a significant increase in VTE risk [4,5], which was consistent with previous retrospective studies. Subsequently, this was verified again by Hanny et al. in a cohort study [6]. However, little is known about the specific mechanism by which ALK rearrangement in NSCLC cells regulates VTE occurrence. ALK rearrangement is the first somatic oncogene translocation discovered in lung cancer. In 2007, a Japanese team identified a fusion gene caused by an inversion within chromosome 2p, comprising portions of the echinoderm microtubule-associated protein-like 4 (EML4) gene located in p21 and the ALK gene located in p23 in NSCLC cells [7,8]. The resulting fusion protein consists of the N-terminal half of EML4, which contains a coiled-coil trimerization domain, and the C-terminal portion of the ALK protein, which contains the kinase domain. These domains are able to dimerize without ligand binding, leading to activation of the ALK kinase [9,10]. The EML4-ALK fusion appears to be unique to NSCLC [11,12]. Subsequently, a transgenic mouse model specifically expressing EML4-ALK in lung alveolar epithelial cells was established to confirm its potent oncogenic activity [13]. Upon expression of EML4-ALK, the mitogen-activated protein kinase (MAPK), Janus kinase-signal transducer and activator of transcription (JAK-STAT) and phosphoinositide 3-kinase-v-akt murine thymoma viral oncogene homolog (PI3K-AKT) pathways are constitutively activated. There is evidence that these signaling pathways enhance proliferation, survival and angiogenesis in cancer cells [9,14].
Besides, the corresponding targeted small molecule tyrosine kinase inhibitors (TKIs) have also led to unprecedented survival benefits in NSCLC patients with ALK rearrangement [15,16].In 2011, the first-generation ALK-TKI, crizotinib was approved for treatment of advanced ALK-positive NSCLC, which is a small molecule ATP-competitive ALK inhibitor [17].In order to overcome crizotinib resistance, the second-generation ALK-TKIs, including ceritinib, alectinib and brigatinib were developed [18].ALK TKIs have the potential to inhibit ALK phosphorylation and downstream signalling, leading to cell cycle arrest in the G1-S phase and apoptosis of cancer cells [19].
Typically, VTE occurs as a result of blood stasis, endothelial or vessel wall injury, and hypercoagulability.While cancer patients' blood stasis and endothelial injury could be shared with non-cancer patients, the hypercoagulability driven by malignancy-specific pathways is probably unique to cancer.Previous studies have reported various mechanisms of hypercoagulation in malignancy, including indirect regulatory mechanisms such as expressing proteins by tumor cells that could alter circulating cells [20][21][22][23], and direct regulatory mechanisms such as expressing procoagulant proteins (TF, podoplanin (PDPN)) which directly activate the coagulation cascade/platelets [24][25][26][27].
Whether EML4-ALK fusion protein in NSCLC cells also enhances venous thrombogenicity through the mechanisms mentioned above is still unclear.We performed the following experiments to address this issue further and explore possible new targets for anticoagulation therapy in EML4-ALK-rearranged NSCLC patients.
High-throughput sequencing
High-throughput sequencing was performed to detect differential mRNA expression between H3122 cells treated with Alectinib (100 nmol/L, 6 h) and cells treated with DMSO. CapitalBio Technology (Beijing, China) sequenced the mRNA on an Illumina NovaSeq 6000 sequencer with a paired-end 150-bp read length (Illumina, San Diego, USA) and performed the final data analysis. The screening criteria for differentially expressed genes were |log2FC| ≥ 1 and p-value ≤ 0.05. Cluster software was used to depict a heatmap for gene clustering.
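As an illustration of this screening step, the sketch below applies the stated thresholds (|log2FC| ≥ 1 and p-value ≤ 0.05) to a differential-expression table. The file name and column labels are hypothetical placeholders, not those of the actual CapitalBio output.

```python
# Hypothetical sketch of the differential-gene screening criteria described above.
# Column names ("gene", "log2FC", "pvalue") and the input file are placeholders.
import pandas as pd

def screen_differential_genes(path="de_results.csv"):
    df = pd.read_csv(path)
    # Keep genes with |log2FC| >= 1 and p-value <= 0.05, as stated in the Methods.
    hits = df[(df["log2FC"].abs() >= 1) & (df["pvalue"] <= 0.05)]
    # Split into up- and down-regulated genes for the heatmap / follow-up (e.g., F3).
    up = hits[hits["log2FC"] > 0].sort_values("log2FC", ascending=False)
    down = hits[hits["log2FC"] < 0].sort_values("log2FC")
    return up, down

if __name__ == "__main__":
    up, down = screen_differential_genes()
    print(f"{len(up)} up-regulated and {len(down)} down-regulated genes pass the cut-off")
```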
Immunohistochemical analysis of human specimens
Human NSCLC tissues were obtained from patients at the Beijing Chao-Yang Hospital, Capital Medical University, with the approval of the institutional review board.For all patients involved in this study, amplification refractory mutation system polymerase chain reaction (ARMS-PCR) and Ventana immunohistochemistry were performed to detect ALK rearrangement.And the patients with ALK-rearrangement were confirmed by fluorescence in situ hybridization (FISH).The patients' demographic and clinical data are presented in Table 1.The cancer tissues were formalinfixed and paraffin-embedded.Anti-Tissue Factor antibody (catalog #228,968, 1:500; Abcam, Cambridge, USA) was used to detect TF.Images were obtained using a Leica TCS SP5 (Wetzlar, Hesse, Germany), and immunohistochemical staining of TF was determined using Images-Pro Plus version 6.0 software (IPP6) to assess the integrated optical density (IOD) and mean density of the immunohistochemical staining section.Finally, the mean IOD and density of cancer tissue immunohistochemical staining from five randomly selected fields (magnification, ×400) were recorded and analyzed.
Flow cytometry and immunofluorescence
Cells (H3122, H2228, H1299 and A549) were surface-stained with BD Pharmingen™ PE mouse anti-human CD142 (catalog #550,312, BD Bioscience, Franklin Lakes, USA) for 20 min at 4 °C and washed with PBS. The cells were then resuspended in cell buffer solution at a concentration of 1 × 10^6/mL for flow cytometric analysis (FACSCanto II; BD Bioscience) and analyzed using BD FACSDiva software and FCS Express 5 software (De Novo Software, Los Angeles, USA).
H3122 cells in 6-well plates were fixed in 4% paraformaldehyde and incubated with immunofluorescence mAbs against TF (Affinity Biosciences, Cincinnati, USA) overnight at 4 °C, followed by incubation with Alexa Fluor® 488-labeled goat anti-rabbit IgG for 1 h at room temperature and with DAPI solution for 7 min. Images were obtained using a Leica TCS SP5 (Wetzlar, Hesse, Germany).
Plasma clotting assay
A plasma clotting assay was performed to measure the plasma clotting time induced by lung cancer cell supernatant. Lung cancer cells (1 × 10^6) were cultured in complete medium for 24 h. The cell culture supernatant was then collected in an Eppendorf tube and centrifuged (3000 rpm for 10 min at 4 °C) to remove cell debris. 200 µL of cell culture supernatant and 200 µL of 25 mmol/L CaCl2 were added to 200 µL of citrated human plasma (healthy volunteers) at 37 °C to initiate the plasma clotting process. The samples contained TF, procoagulant-bearing extracellular vesicles, and tumor-secreted soluble procoagulants. Clotting time was recorded visually by noting when the liquid formed a semisolid gel that no longer flowed when the tube was inverted [24].
Mouse model
The animal experiments were approved by the Institutional Animal Care and Use Committee of the Beijing Chaoyang Hospital, Capital Medical University.
Targeting phosphorylated EML4-ALK fusion protein activity inhibits coagulation factor TF expression
A schematic figure of the study is shown in Fig. 1.A. H3122 cells, a validated EML4-ALK-positive NSCLC cell line, were treated with Alectinib (100 nmol/L, 6 h) [30]. Alectinib is a highly selective ALK inhibitor with strong antitumor activity that achieves targeted inhibition of EML4-ALK fusion protein autophosphorylation; in high-throughput mRNA sequencing analysis, it substantially changed the mRNA expression profile compared with DMSO (Fig. 1.B). Moreover, among the genes involved in the aforementioned tumor-related coagulation [23,31,32], the F3 gene was significantly downregulated in the Alectinib-treated group (logFC = −1.54, P < 0.001) (Fig. 1.B). Subsequently, using RT-PCR and immunoblotting, we confirmed that F3 mRNA expression and TF protein were much lower in H3122 cells after treatment with Alectinib (Fig. 1.C and Fig. 1.D).
To further investigate the expression and localization of TF in H3122 cells, immunofluorescence and flow cytometry were performed, and the results indicated that TF was mainly presented in the cell membrane (Fig. 1.E), and the mean fluorescence intensity (MFI) of anti-TF antibody in H3122 cells was also significantly decreased in Alectinibtreated group (P = 0.038) (Fig. 1.F and Fig. 1.G).
ALK-positive lung cancer cell with high TF expression enhances clot formation
Given the effect of Alectinib on regulating TF expression, we further sought to explore the expression of TF in ALK-positive and ALK-negative NSCLC patients and cell lines.First, we determined the expression of TF in NSCLC patients, and the characteristics of the patients were listed in Table 1.Ten ALK-positive NSCLC patients and 17 ALK-negative NSCLC patients were included, and the immunohistochemical staining results showed that TF was predominantly expressed at the tumor site in ALK-positive NSCLC (Fig. 2.A).Furthermore, quantifying the immunochemical staining results with IPP6, we also observed that ALK-positive NSCLC presented a higher IOD value than ALK-negative NSCLC (Fig. 2.B, P = 0.001), which was proportional to the total amount of expression of TF.However, there was no significant difference in the mean optical density value (Fig. 2.B, P = 0.096), which reflects the intensity of TF protein.
Four-week-old male athymic nude mice (BALB/c Nude, Vital River Laboratory Animal Technology, Beijing, China) were used to prepare the xenograft model. Human lung cancer cells (H2228, H1299) at a concentration of 1 × 10^7 cells in 200 µL of suspension were injected with a 1-mL syringe into the subcutaneous tissue on the backs of mice (n = 6). The control group was injected with PBS (n = 6). The tumor size was measured weekly until the volume reached 200 mm³ (about 3-4 weeks). When the tumor volume reached 200 mm³, the IVC model was developed. The inferior vena cava (IVC) ligation model was performed as described in previous studies [28,29]. The IVC was separated from the aorta after laparotomy and was ligated distal to the renal veins using a 6-0 silk suture. To induce thrombus formation within the IVC, the tributaries surrounding the IVC were also ligated to create a total stasis environment. Forty-eight hours later, clots were collected from the IVC and weighed. The clot weight was adjusted by body weight (clot weight/body weight).
Blood collected from the orbital sinus of mice was pipetted into sodium citrate tubes and placed in a centrifuge at 3000 rpm for 10 min at 4℃ to separate the plasma.Blood samples were not performed to remove pro-coagulantsbearing extracellular vesicles and tumor-secreted soluble pro-coagulants.
The experiments above were repeated three times.
TF activity assay
TF procoagulant activity was assessed using a Tissue Factor Chromogenic Activity Kit (catalog #CT1002b, ASSAY-PRO, Missouri, USA) as the manufacturer's protocol.
Statistics
Summary statistics are presented as mean ± SEM (standard error of the mean).
Next, the expression of tumor-derived TF in circulation was further examined. As soon as the xenograft model tumor volume reached 200 mm³, plasma was collected for ELISA detection of the tumor-derived TF concentration. Meanwhile, TF activity (tumor-derived and mouse-derived) in the plasma of mice bearing H2228 tumors was the highest (H2228 vs. H1299, P = 0.021; H2228 vs. blank control, P = 0.006), followed by H1299 and the blank control (H1299 vs. blank control, P < 0.001) (Fig. 2.F). After that, the IVC ligation model was also developed in blank mice and in mice bearing H1299 and H2228 tumors. The results indicated that clot weight increased in mice with NSCLC compared to blank mice after adjusting for body weight (clot weight/body weight) (blank control vs. H1299, P = 0.025; blank control vs. H2228, P < 0.001). Also, consistent with the level of TF concentration and activity of the NSCLC cell lines, the clot weight in mice bearing H2228 tumors was significantly higher than in those bearing H1299 tumors (H2228 vs. H1299, P = 0.005) (Fig. 2.G).
Next, TF expression was evaluated in ALK-positive NSCLC cell lines (H3122 and H2228) and ALK-negative NSCLC cell lines (H1299 and A549). RT-PCR showed that ALK-positive NSCLC cell lines exhibited a higher level of F3 mRNA expression, especially the H2228 cell line, whose expression level was about 625-fold higher than that of the H1299 cell line (Fig. 2.C).
Consistent with the RT-PCR results, flow cytometry also confirmed that the H2228 cell line expressed the highest TF protein (Fig. 2.D).Similarly, the TF activity of culture supernatant obtained from H2228 cultures was the highest (79.2 ± 5.0pM), followed by H3122 (22.9 ± 3.5pM), A549 (12.4 ± 1.1pM), and H1299 (10.5 ± 0.4pM) (Fig. 2.E).In addition, we also performed a plasma clotting assay using the culture supernatant of each NSCLC cell line to assess their prothrombotic potent, as TF in the culture supernatant can activate coagulation factors in platelet-poor plasma and finally lead to insoluble fibrin formation.Consistent with the level of TF expression in NSCLC cell lines, the plasma clotting time of culture supernatant obtained from H2228 cultures was the shortest (45.3 ± 1.9s), followed by H3122 (121.7 ± 9.6s), A549 (602 ± 2s) and H1299 (621.7 ± 26.3s) (Supplementary Fig. 1.A).Besides, the upper regulatory pathway of AP-1 overlapped with the downstream pathway of EML4-ALK at the node of ERK1/2 [37,38].
The results verified that cfos gene expression was downregulated with F3 in Alectinib-treated H3122 cells through RT-PCR (Fig. 3.A, left panel).Further, pERK1/2 and cfos protein expression were also decreased in Alectinib-treated H3122 using Western Blot (Fig. 3.A, right panel).
Next, we determined TF and cfos expression in H3122 cells after SCH772984 treatment.SCH772984 is a selective inhibitor of ERK1/2, which adopts a unique kinase binding mode in ERK1/2 [39].Pretreated H3122 cells with SCH772984 (2 μm/L, 24 h) to downregulate ERK1/2 phosphorylation, and compared cfos and TF protein expression
EML4-ALK fusion protein regulates TF expression through the pERK1/2-AP-1 pathway in NSCLC cells
The promoter region of the F3 gene contains multiple elements for diverse transcription factors binding [33], and phosphorylated EML4-ALK fusion protein in NSCLC cells could activate multiple downstream pathways [15].We further analyzed the results of high-throughput mRNA sequencing for transcription factors that may regulate F3 gene expression and KEGG pathways in which EML4-ALK fusion protein is involved.The results implied that cfos mRNA expression, whose protein product was one of the subunits comprising F3 gene transcription factor AP-1 [34][35][36], was also down-regulated along with the F3 gene.indicated that pERK1/2 and cFOS proteins were decreased with TF downregulation (Fig. 4A, middle panel).Also, mRNA expression of cFOS and F3 genes was both reduced after Alectinib treatment (Fig. 4A, right panel).In addition, ERK1/2 and AP-1 inhibitors also reduced the expression of TF mRNA and protein in H2228 cells (Fig. 4B and C).
Discussion
In recent decades, an increasing number of studies have found that ALK-rearranged NSCLC patients have a higher risk of VTE occurrence [41,42].Besides, a large proportion of these VTE events are developed in newly diagnosed with DMSO control.The results showed that cfos and TF proteins were downregulated when ERK1/2 phosphorylation was inhibited (Fig. 3B).Also, in line with the ERK1/2 inhibitor, pretreated H3122 cells with T-5224 (400nmol/L, 24 h), a small molecule selective inhibitor that selectively inhibits c-Fos/AP-1 binding to DNA [40], TF expression was downregulated compared to DMSO control (Fig. 3C).
After evaluating the mechanism of EML4-ALK regulating TF through the ERK1/2 /AP-1 axis in H3122 cells, we further verified these observations in H2228 cells.When H2228 was pretreated with the same concentration of Alectinib, flow cytometry analysis showed that TF expression was significantly down-regulated at 18 h and even more evident at 24 h (Fig. 4.A, left two panels).Western Blot results demonstrated that TF might be the key regulator of thrombus formation in EML4-ALK rearranged NSCLC in both in vitro and vivo experiments.Meanwhile, we also observed in vitro that EML4-ALK fusion protein in NSCLC cells NSCLC patients, supporting the underlying cancer-specific biology as a causal factor.According to previous clinical studies, the association between EML4-ALK rearrangement and VTE occurrence is verified.Thus, in this study, we firstly These mainly included cancer cells expressing proteins that could alter the number of platelets and leukocytes in circulation (such as IL6, MUC1, MUC16, and MUC5AC) [46] and the expression of procoagulant proteins such as TF and PDPN, which could directly activate the coagulation cascade and upregulation of antifibrinolytic/anticoagulation proteins (PAI1 and HPSE) [47][48][49][50].
TF protein is a 47 kDa transmembrane protein that is highly expressed in many human cancers, including glioma, pancreatic, head, neck, lung, cervical, and prostate cancers, as well as leukemia [51].Besides, an alternative spliced (as) form of TF that lacks the transmembrane domain can be released from cells.But studies about procoagulant activity of asTF are inconsistent [52][53][54].So, the researches mostly focus on the full-length TF (TF).Studies have shown that tumor growth and angiogenesis are mediated by TF expression, and that TF expression correlates directly with oncogenic status, while circulating TF-positive extracellular vesicle level correlates with oncogenic status as well [55].
And tumor TF expression level is proven to influence cancer enhanced venous thrombogenicity via pERK1/2 mediating AP-1-TF signaling.Consequentially, tumor-derived TF in the circulation triggered a coagulation cascade (Fig. 5).
This research indicated that the EML4-ALK-pERK1/2-AP-1-TF axis might be a potential mechanism of VTE in ALK-rearranged NSCLC patients.The nodes of this axis also play essential roles in cancer initiation and progression [43][44][45], and both of them are therapeutically targetable proteins, which implies an important translational potential for the treatment of cancer-associated VTE.
Cancer-specific mechanisms of VTE in EML4-ALK fusion NSCLC cells
VTE occurrence in malignant patients could be due to multiple factors, including patient characteristics (like age, sex, and co-morbidities), cancer treatment (like chemotherapy, radiotherapy, and anti-vascular therapy), and cancer itself.Recent studies have discovered various cancer-specific mechanisms of VTE in brain, pancreatic, and colon cancer.above suggest TF as the specific mechanism of EML4-ALK fusion in NSCLC-associated VTE.
EML4-ALK-pERK1/2-AP-1-TF axis in EML4-ALK fusion NSCLC cells
The EML4-ALK fusion protein is the product of the EML4-ALK fusion gene caused by chromosome translocation.The EML4 locus located in the short arm of chromosome 2 is broken, inversed, and fused with the ALK locus located on the same chromosome in somatic cells [7].Thus, the fusion protein is composed of the amino-terminal half of EML4 protein ligating to the intracellular region of the receptortype protein tyrosine kinase ALK.This action leads to dimerization and autophosphorylation of the ALK kinase domain and thus abnormally activates downstream signaling pathways, such as PI3K/AKT, JAK/STAT3, and RAS/ ERK [67,68], finally acquiring tumor-formation activity [13].
The regulatory mechanism of TF expression has been reported in many cancer models containing brain, breast, and colorectal cancer [2,24,[69][70][71][72].These studies revealed that multiple signaling pathways, transcription factors, and microRNAs, such as the Raf-MEK-ERK signaling pathway, transcription factor AP-1, and nuclear factor κB (NF-κB) could regulate TF expression in cancer cells [70].And the mammalian target of rapamycin (mTOR) kinase pathway was identified in human pancreatic neuroendocrine tumor cell lines [73].
In the current study, targeted inhibiting EML4-ALK fusion protein in NSCLC cells showed downstream suppression of pERK1/2 and reduction of cFOS, a subunit of AP-1, accompanied by downregulation of TF.It is similar to a previous study within breast cancer cells, in which ERK1/2 kinase activity was measured in nuclear extracts and shown to be upregulated in MDA-MB-231 breast cancer cells with higher expression of TF mRNA [70].Also, both AP-1 and NF-κB are important transcription factors for TF expression.However, compared to NF-κb, MDA-MB-231 nuclear extracts contain a molar excess of AP-1 [70].And in our study, targeting ERK1/2, we also observed the downregulation of the subunit of AP-1 and TF.Finally, after pretreating with an AP-1 inhibitor, there is a significant reduction of TF expression both in H3122 and H2228 cell lines.Nevertheless, the former study only investigated the regulation of TF expression in non-specific breast cancer, and in this study, we linked EML4-ALK oncogenic alteration, the activation of downstream signaling pathways, and the coagulation cascade.Overall, the current study implies that EML4-ALK fusion protein in NSCLC cells may regulate the expression of TF through the pERK1/2-AP-1 axis.
prognosis [55].TF initiates the extrinsic coagulation cascade and can be released into circulation through cell-derived extracellular vesicles [56].When TF-positive extracellular vesicles shed from cancer cells are associated with coagulation factor VII (FVII), this can trigger the blood coagulation cascade, leading to cancer-associated VTE [55].There is no evidence that cancer cells-derived TF is activated in the circulation.
In the current research, as cohort studies indicated a heightened risk of VTE occurrence in patients with ALKrearranged NSCLC [4,6], we targeted inhibition of EML4-ALK fusion protein in NSCLC cells and observed that TF expression was significantly decreased.And this finding is similar to previous studies that increased cell surface expression of TF was associated with higher procoagulant activity in malignancies and higher circulating levels in vivo [57,58], which was also shown in breast and ovarian cancer patients [59,60].Thus, inhibiting EML4-ALK fusion protein in cancer cells might decrease coagulability.Previous studies detected tumor-derived TF-positive extracellular vesicles (micropaticles or microvesicles) in the plasma of human pancreatic/colorectal tumor cells xenografted mice and examined venous thrombogenicity in a mouse model [61][62][63][64].In addition, we demonstrated the function of TF by testing the activity of TF in human ALKpositive/ALK-negative NSCLC cell lines and also in mice bearing tumors derived from the corresponding cell lines.Furthermore, EML4-ALK fusion NSCLC cell lines with higher TF expression showed shorter clotting time in their culture supernatant involving plasma clotting assay.As a result, the plasma clotting assay might be influenced by samples containing TF-positive extracellular vesicles and tumor-secreted soluble procoagulants.And further experiments should be performed to verify this result.
Also, several studies within different cancer types indicated that increased tumor-derived TF-positive microvesicles upregulated the VTE incidence and venous clot weight in IVC stenosis or ligation mouse models [28,29,63].In addition, Zwicker et al. proved that tumor-derived TF-positive microvesicles are associated with VTE in cancer patients [65].Our findings are consistent with these results.Additionally, the present study has intensively considered oncogenic mutation and demonstrated that oncogenic mutation is a significant factor influencing cancer hypercoagulation.
We further identified TF expression in ALK-positive and ALK-negative NSCLC tissues and observed that ALKpositive NSCLC tissues exhibited higher TF expression.This finding was also indicated in a previous retrospective study, and the ALK-positive NSCLC patients in this cohort with higher TF expression showed a greater possibility of developing VTE compared to patients with EGFR mutation and both negative [66].Collectively, the findings mentioned patients harboring EML4-ALK rearrangement.A non-anticoagulant target for VTE treatment might offer a new reference for thromboprophylaxis and anticoagulation therapy for cancer patients, and it might also provide more theoretical support for a combination treatment strategy of ALK and MEK inhibitors.
Limitations
This study has the following limitations. First, the mouse model used in this study is a subcutaneous ectopic lung cancer model, which cannot fully represent the real pulmonary environment with its rich blood circulation. This limitation can be addressed by building an orthotopic model in the future. However, due to differences in the immune response and tumor microenvironment between mice and humans, animal models may not fully recapitulate human disease physiology. Additionally, the complex heterogeneity and microenvironmental factors present in clinical settings may not be fully reflected in the results obtained from the limited cell line experiments. Thus, further in vivo experiments and clinical studies should be performed to confirm these results, and the results should be verified in more types of cell lines. Also, the current study lacks an evaluation of the TF concentration in the peripheral blood of lung cancer patients, limited by the fact that the number of enrolled ALK-rearranged NSCLC patients was too small. A comprehensive analysis will be carried out after further expanding the sample size in the future.
Conclusion
In summary, we have uncovered that EML4-ALK fusion protein in lung cancer cells enhances venous thrombogenicity through the pERK1/2-AP-1-tissue factor axis.This finding provides more theoretical support for a combination treatment strategy of ALK and MEK inhibitors and offers a new reference for thromboprophylaxis and anticoagulation therapy for cancer patients.
The translational potential of the EML4-ALK-pERK1/2-AP-1-TF axis in the treatment of cancerassociated VTE
Thromboprophylaxis and anticoagulation therapy for cancer patients need to consider the risk of bleeding, the impact of the anti-tumor treatment process, and the additional sense of futility and burden caused by such treatment because it seems this action does not extend survival.Despite anticoagulation therapy, the incidence of recurrent pulmonary embolism (PE) remains relatively high [32,74].Hence, identifying patients at high risk of VTE is a current research focus.Notably, many studies suggested that high levels of TF expression were observed in different types of cancer, and the level of TF expression was associated with tumor progression and hypercoagulability [55].Thus, TF expression might be a marker of cancer prognosis.However, little research on individualized anticoagulation therapy based on specific cancer types.This study aims to identify a nonanticoagulant target for VTE treatment in NSCLC with a specific oncogenic mutation.
This regulatory axis may be an appealing target for cancer-associated VTE in EML4-ALK fusion NSCLC patients.The nodes of this axis have essential roles in the occurrence and development of malignancy, and they all have corresponding specific targeted inhibitors.For example, small molecular ATP-competitive ALK inhibitors, which can effectively inhibit the autophosphorylation of ALK protein and suppress downstream signal activation [75,76], have led to unprecedented survival benefits in EML4-ALK fusion NSCLC patients [77][78][79].Although targeted ERK1/2 inhibitors are still in preclinical/clinical research [80,81], inhibitors targeting upstream signaling protein MEK (RAS-RAF-MEK-ERK1/2 pathway) have been approved by FDA in the USA for the treatment of various solid tumors and have achieved excellent results in clinical [82][83][84].Hrustanovic, Gorjan, and Tanizaki, J et al. [85,86] found that EML4-ALK fusion NSCLC cells specifically depend on the MAPK pathway, and their sensitivity to MEK inhibitors is similar to that of KRAS or BRAF-positive lung adenocarcinoma cells.However, ALK inhibitors are not entirely effective, and single-drug treatment for long-term use can induce the reactivation of the downstream MAPK pathway and lead to drug resistance.Thus, a combined treatment strategy of initial ALK and MEK inhibitors is recommended to improve the survival rate of patients [87].Now that the current data support the role of the axis in the pathogenesis of cancer-associated VTE in EML4-ALK fusion NSCLC cells, the node inhibitors may fulfill a dual purpose in patients with EML4-ALK fusion NSCLC.And the results suggest that clinicians should give careful consideration to providing thromboprophylaxis to NSCLC 1 3
Yanping
Su and Jiawen Yi contributed equally to this work,
Fig. 1
Fig. 1 Targeting phosphorylated EML4-ALK fusion protein activity inhibits coagulation factor TF expression.(A) Schematic figure of the study.(B) Next-generation mRNA sequencing showed Treatment of ALK rearrangement human lung cancer cell line H3122 with Alectinib (100nmol/L, 6 h) substantially change the mRNA expression profile compared with DMSO-treated (vehicle-treated) H3122 cells; Among the genes involved in tumor-related coagulation, F3 gene was significantly downregulated in Alectinib-treated group (logFC =-1.54,P < 0.001).(C) mRNA expression of F3 gene in H3122 cells treated
Fig. 2
Fig. 2 ALK-positive lung cancer cell with high TF expression enhances clot formation.(A) Primary lung cancer tissues obtained from ALK positive and ALK negative lung cancer patients.Tissue sections were stained immunohistochemically with anti-TF mAb.ALK positive NSCLC cancer cells express TF protein mainly on the cell membrane (upper panel); TF protein were negatively expressed in most ALK negative NSCLC cancer cells (lower panel).(B) Quantifying the immunochemical staining results with IPP6.ALK positive NSCLC tissues presented higher IOD value (left panel, P = 0.001).There was no significant difference in mean optical density value (right panel, P = 0.096).(C) The mRNA expression of F3 gene in ALK posi-
Fig. 4
Fig.4EML4-ALK fusion protein regulates TF expression through pERK1/2/AP-1 pathway in H2228 cells.(A1-2) Pretreating H2228 with the same concentration of Alectinib in H3122.Flow cytometry analysis showed that TF expression was significantly down regulated at 18 h and even more obvious at 24 h (n = 3).(A3) Western Blot was observed decreased pERK1/2 and cFOS along with TF down regulation (n = 3).(A4) F3 gene and cFOS mRNA expression were similarly reduced compared with DMSO control which quantified by quanti-
Fig. 5
Fig. 5 Summary scheme graphic showing an association between EML4-ALK fusion protein mediated cellular pathway and thrombosis.Phosphorylated EML4-ALK fusion protein activated downstream the ERK1/2 pathway, inducing cFOS expression which was one of subunit
Table 1
Information of patients for immunohistochemistry | 6,605.2 | 2023-11-08T00:00:00.000 | [
"Biology",
"Medicine"
] |
Numerical Solution of the Multigroup Neutron Diffusion Equation by the Meshless RBF Collocation Method
The multigroup neutron diffusion criticality problem is studied by the radial basis function collocation method. The multiquadric is chosen as the radial basis function. To investigate the effectiveness of the method, one, two and three-group problems are considered. It is found that the radial basis function collocation method produces highly accurate multiplication factors and it is also efficient in the calculation of group fluxes.
INTRODUCTION
Numerical solution of the neutron diffusion equation has been done by many numerical methods such as the finite difference, finite element and boundary element methods.These are all mesh-based methods in which the nodes that discretize the problem domain are related in a predefined manner.In this paper we apply a novel, meshless technique, the radial basis function (RBF) collocation method, for the numerical solution of the neutron diffusion equation.
Meshless methods have emerged in late 70's and became an alternative class of numerical tools for the solution of differential and integral equations.As their name implies, the nodes do not have to satisfy any relation.The analyst can distribute them uniformly or randomly.There are many meshless methods in literature with different mathematical properties.For details, we refer the readers to Liu [1].
The RBF collocation method was proposed by Kansa [2] and has found a wide range of applications [3-5] over the past decades. It is a strong-form meshless method and, unlike the weak-form meshless methods, it is a truly meshless one since there is no integration in the solution procedure. There is numerical evidence that the method has an exponential convergence rate [6], and it has been shown to be more accurate than the finite difference method, the finite element method with linear shape functions, and spectral methods [7,8]. On the other hand, the method is less stable than weak-form meshless or mesh-based methods, but the high level of accuracy that can be obtained with fewer nodes motivates the use of this approximation scheme.
FORMULATION OF THE PROBLEM
This study deals with the numerical solution of the multigroup neutron diffusion criticality problem. For a square homogeneous system with reflective boundary conditions at its bottom and left sides and vacuum boundary conditions at its right and top sides, the problem can be written mathematically as Eq. (1), with g = 1, …, G. Here g and n denote the energy group and the iteration index, respectively, G is the total number of energy groups, D_g is the diffusion constant, φ_g is the neutron flux, k is the multiplication factor, χ_g is the fission spectrum function, ν is the number of neutrons per fission, Σ_R, Σ_s and Σ_f are the macroscopic removal, scattering and fission cross sections, respectively, and L is the size of the domain. F is the fission source. The system of equations, Eq. (1), is solved by fission source iteration, which starts by guessing the fission source and the multiplication factor, F^0 and k^0. Next, the neutron flux of the first group, φ_1^1, is calculated. Then, by using this flux, φ_2^1 and the neutron fluxes of the following groups can be found. After that, a new fission source and a new multiplication factor are determined from the group fluxes, where dΩ is the area element. This iterative strategy continues until a predetermined convergence criterion is satisfied. For the formulation we first introduce a set of internal nodes X_I with N_I members such that X_I = {(x_i, y_i) : 0 < x_i < L, 0 < y_i < L, 1 ≤ i ≤ N_I} (6). Then we introduce a set of reflective boundary nodes X_R = X_RB ∪ X_RL (7), where X_RB is the set of reflective boundary nodes on the bottom side and X_RL is the set of reflective boundary nodes on the left side. Similarly, a set of vacuum boundary nodes X_V = X_VR ∪ X_VT is introduced, where X_VR is the set of vacuum boundary nodes on the right side and X_VT is the set of vacuum boundary nodes on the top side. The set of boundary nodes, with N_B members, is then simply X_B = X_R ∪ X_V = X_RB ∪ X_RL ∪ X_VR ∪ X_VT (11). The set of domain nodes is defined as X_D = X_I ∪ X_B (12), which is a set with N = N_I + N_B members. Secondly, we introduce a set of external nodes X_E. For the purpose of preserving the nonsingularity of the coefficient matrix, the number of members of X_E has to be equal to N_B; that is, X_E = {(x_j, y_j) : (x_j < 0 ∨ x_j > L) ∧ (y_j < 0 ∨ y_j > L), N < j ≤ N + N_B} (13). The neutron flux is approximated by an expansion in radial basis functions centred at the domain and external nodes. For the first part of the collocation process, the neutron diffusion equation is required to hold at the nodes (x_i, y_i), 1 ≤ i ≤ N. There are many RBFs encountered in the literature with different properties. In this study we employ the multiquadric, which was proposed by Hardy [9] to approximate geographical surfaces, φ_j(x, y) = √((x − x_j)² + (y − y_j)² + c²). Here, c is called the shape parameter. It determines the shape of the RBF and has an important role in numerical applications. Theoretically, the approximation error goes to zero as c → ∞ [10], but this property would only be achieved if infinite-precision computation could be performed.
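A minimal sketch of how such a multiquadric collocation system can be assembled for one energy group is given below. It uses the analytic Laplacian of the multiquadric and a simplified treatment of the boundary rows (zero-flux rows standing in for the vacuum sides and zero normal-derivative rows for the reflective sides), and it keeps the system square by taking one centre per collocation row, whereas the paper enforces the PDE at all domain nodes and adds external centres; all symbols (D, Σ_r, c) are generic placeholders, so this illustrates the structure of the collocation equations rather than reproducing the paper's exact matrices.

```python
# Illustrative sketch (assumptions flagged in comments): multiquadric RBF collocation
# for one energy group of the diffusion equation
#     -D * laplacian(phi) + Sigma_r * phi = q(x, y)
# on a square domain, with simplified boundary rows.
import numpy as np

def mq(r2, c2):         # multiquadric: sqrt(r^2 + c^2)
    return np.sqrt(r2 + c2)

def mq_lap(r2, c2):     # 2-D Laplacian of the multiquadric: (r^2 + 2c^2) / (r^2 + c^2)^(3/2)
    return (r2 + 2.0 * c2) / (r2 + c2) ** 1.5

def mq_deriv(d, r2, c2):  # derivative w.r.t. one coordinate; d is that coordinate's offset
    return d / np.sqrt(r2 + c2)

def assemble(nodes, centers, kinds, D, sig_r, c2):
    """kinds[i] in {'int', 'refl_x', 'refl_y', 'vac'} selects the collocation row."""
    X, Y = nodes[:, 0:1], nodes[:, 1:2]
    dx, dy = X - centers[:, 0], Y - centers[:, 1]      # shape (n_rows, n_centers)
    r2 = dx ** 2 + dy ** 2
    A = np.empty((len(nodes), len(centers)))
    for i, kind in enumerate(kinds):
        if kind == "int":        # PDE row: -D * lap(phi) + Sigma_r * phi
            A[i] = -D * mq_lap(r2[i], c2) + sig_r * mq(r2[i], c2)
        elif kind == "refl_x":   # reflective (left) side: d(phi)/dx = 0
            A[i] = mq_deriv(dx[i], r2[i], c2)
        elif kind == "refl_y":   # reflective (bottom) side: d(phi)/dy = 0
            A[i] = mq_deriv(dy[i], r2[i], c2)
        else:                    # vacuum side approximated here by zero flux
            A[i] = mq(r2[i], c2)
    return A

def solve_group(nodes, centers, kinds, source, D, sig_r, c2):
    """source: values of q at the collocation nodes (used only for PDE rows)."""
    A = assemble(nodes, centers, kinds, D, sig_r, c2)
    rhs = np.where(np.array(kinds) == "int", source, 0.0)
    coeffs = np.linalg.solve(A, rhs)
    return coeffs   # flux at a point p = sum_j coeffs[j] * mq(|p - center_j|^2, c2)
```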
NUMERICAL RESULTS
To investigate the performance of the RBF collocation method we considered one, two and three group criticality eigenvalue problems.The analytical solutions of these problems can be found in [11].A program is written in FORTRAN and calculations are performed with double precision.In all tests uniformly scattered nodes are used.The power is assumed to be 16 and the convergence criterion is chosen as 10 −6 for all problems.Accuracy of the method is examined via calculating the error in and maximum errors in group fluxes where subscripts and denote analytical and numerical, respectively.RBF collocation method is invariant under uniform scaling, hence computations are made on a domain scaled to 0,1 2 by defining the variables = and = .
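The error measures quoted above (the error in k relative to the analytical value and the maximum pointwise errors in the group fluxes) and the 10^-6 convergence test can be reproduced along the lines of the sketch below. The k-update shown is the conventional ratio of successive integrated fission sources, used here as an assumption because the paper's own expressions (Eqs. (4)-(5)) are not reproduced in this extract; `sweep_groups` is a placeholder for one solve of the group fluxes (for example, by the collocation routine sketched earlier), and 1.46657782 is the analytical multiplication factor quoted for the one-group problem.

```python
# Illustrative sketch: fission-source (power) iteration bookkeeping and error measures.
import numpy as np

def power_iteration(sweep_groups, F0, k0, weights, tol=1e-6, max_it=200):
    """weights: quadrature weights approximating the area integral over the domain."""
    F, k = F0.copy(), k0
    for n in range(1, max_it + 1):
        fluxes, F_new = sweep_groups(F, k)        # placeholder: new group fluxes and source
        k_new = k * np.sum(weights * F_new) / np.sum(weights * F)   # assumed k-update
        if abs(k_new - k) < tol:                  # convergence criterion of 1e-6
            return k_new, fluxes, n
        F, k = F_new, k_new
    raise RuntimeError("fission source iteration did not converge")

def k_percent_error(k_num, k_ana=1.46657782):     # analytical k of the one-group test case
    return 100.0 * abs(k_ana - k_num) / k_ana

def max_flux_error(phi_ana, phi_num):             # maximum pointwise error per group
    return np.max(np.abs(np.asarray(phi_ana) - np.asarray(phi_num)))
```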
In the first problem we studied the one-group case.The length of the square domain is taken as = 50 , while = 1.77764 , Σ = 0.0104869 −1 , = 2.5, Σ = 0.0143676 −1 and = 1.The analytical value of is 1.46657782.Fig. 1 shows the variation of and with respect to the reciprocal of the fill distance (distance between adjacent nodes), where 2 = 0.06.It is observed from this figure that decreases continuously with decreasing value of the fill distance.It has its minimum value of 5.642 × 10 −3 when −1 = 36.Highly accurate values are obtained above −1 = 22 and, the percent error has decreased to its minimum of 4.091 × 10 −6 when −1 = 32.In the second problem the number of energy groups is two and = 25 .The nuclear data is given in Table 1.Diffusion constants are given in units of centimeters and all cross sections have units of inverse centimeters in Table 1 and later on in Table 3.For this problem = 1.96293774.The numerical results of the two-group problem are summarized in Table 2 where 2 = 0.06.We see that the maximum errors in group fluxes are similar and decrease with decreasing fill distance value.For the multiplication factor, a very high level of accuracy is obtained above −1 = 16.It is also observed from this table that the number of iterations increases by one when −1 = 32.In Fig. 2 the variation of ,1 and ,2 with the shape parameter of the multiquadric is illustrated where a fill distance of = 0.04 is chosen.The maximum pointwise errors in flux for both groups decrease continuously with increasing shape parameter up to 2 ≅ 0.12.Beyond this value the errors start to oscillate and the numerical solution breaks down except for 2 = 0.149.This is expected since as the shape parameter increases the collocation matrix becomes more and more ill-conditioned.
(a) (b) Figure 2. Variation of (a) ,1 , (b) ,2 with respect to the shape parameter The error in multiplication factor is shown in Fig. 3 where, again = 0.04 .It is seen that the error increases with the shape parameter at first up to 2 = 0.015 and then starts to decrease until 2 = 0.072 where the analytical solution is reproduced.Above this value it increases again, and similar to the pointwise errors in group fluxes the numerical solution oscillates and breaks down above 2 ≅ 0.12.It should be noted that the change of errors in flux and multiplication factor with the shape parameter is found to be similar for the one and three-group problems also.
The third problem deals with the solution of the three-group neutron diffusion equation where is assumed to be 25 .The nuclear data characterizing a three-group structure is given in Table 3 and the analytical value of is 0.75024241 for this problem.The number of iterations, maximum pointwise errors for the three group fluxes and the error in is given in Table 4 for different fill distance values where 2 = 0.06.Once again, it is found that the errors in group fluxes and multiplication factor decrease with decreasing value of the fill distance.Highly accurate values are obtained when the fill distance is 0.05 or less.It is also observed that the number of iterations does not depend on the choice of .
CONCLUSIONS
In this study we have solved the multigroup neutron diffusion criticality problem numerically by the meshless RBF collocation method.We used the multiquadric as the RBF and worked on three problems.
We have found that, for all the problems considered, the RBF collocation yields highly accurate results for the multiplication factor and it also works well in the computation of group fluxes.It was seen that both the maximum pointwise errors in group fluxes and the error in the multiplication factor decreases with decreasing fill distance value.For the two-group problem, it was shown that, by fine-tuning of the shape parameter, the analytical result can be reproduced.
The dependence of the errors to the shape parameter has been investigated for all problems and illustrated graphically for the two-group problem.It was observed that for group fluxes the error decreases with increasing shape parameter up to a certain point.Then starts to oscillate and the numerical solution breaks down at higher values of because of the ill-conditioning of the collocation matrix.Unlike the group fluxes, the multiplication factor has a maximum error value and it starts to increase before going into the ill-conditioned region where it does not converge to any value.
Figure 1 .
Variation of (a) and (b) with respect to the fill distance
Figure 3 .
Figure 3. Variation of with respect to the shape parameter Also a set of vacuum boundary nodes with 2 members are introduced such that , + Σ , , , 1 ≤ ≤ , 1 ≤ ≤ The collocation is completed by requiring the reflective and vacuum boundary conditions to hold for points , which are members of and , respectively.In Eq. (17), the elements of the global system matrix are block matrices themselves.For every energy group an + × + system of equations have to be solved.As an example for the first group one has to solve Here and are square matrices of dimension and respectively.The matrix is rectangular with dimensions × , while is again rectangular with dimensions × . , and , − vectors are dimensional while the vector , is dimensional.Solution of Eq. (19) yields , and hence the numerical result.
Table 2 .
and for the two-group problem
Table 3 .
Three-group nuclear data
Table 4 .
and for the three-group problem | 2,622.2 | 2013-12-01T00:00:00.000 | [
"Mathematics"
] |
Epistemic Complexity of the Mathematical Object “Integral”
The literature in mathematics education identifies a traditional formal mechanistic-type paradigm in Integral Calculus teaching which is focused on the content to be taught but not on how to teach it. Resorting to the history of the genesis of knowledge makes it possible to identify variables in the mathematical content of the curriculum that have a positive influence on the appropriation of the notions and procedures of calculus, enabling a particularised way of teaching. Objective: The objective of this research was to characterise the ontology of the integral, seen from the epistemic complexity that composes it, based on historiography. Design: The modelling of epistemic complexity for the definite integral was considered, based on the theoretical construct "epistemic configuration". Analysis and results: Formalising this complexity revealed logical keys and epistemological elements in the process of the theoretical constitution that reflected epistemological ruptures which, in the organisation of the information, gave rise to three periods for the integral. The characterisation of this complexity and the connection of its components were used to design a process of teaching the integral that was applied to three groups of university students. The implementation showed that a paradigm shift in the teaching process is possible, allowing students to develop mathematical competencies.
Introduction
Integral Calculus teaching should be developed as a solid mathematical culture in which university students can qualitatively and quantitatively analyse different phenomena of the everyday environment to increase their abstraction and reasoning capacities. However, the literature on mathematics education shows that Integral Calculus teaching focuses on a formal mechanistic approach, which emphasises the content to be taught rather than how to teach it. It has identified students' difficulties in establishing connections for the integral that would enable them to be competent in its use and handling. For example, to resolve a precise problem situation, the integral may be used as an area, although in another context, it may be an antiderivative or a measure. These aspects make the integral a mathematically complex object. They favour presenting it from the articulation of the complexity which constitutes it, understood as a plurality of valuable meanings for the design and implementation of instructional processes, so as to improve them permanently.
The authors of [1,2] focused on the integral as a mathematical object of teaching, exposing the need for studies of the ontology of mathematical objects that allow us to characterise their complexity, given that they offer elements for the design and execution of instructional processes which are different from the traditional ones. Hence, this study maintains interest in this line of research by proposing two objectives: (1) to characterise the anthology of the integral analysed from the epistemic complexity that composes it, based on a historical-epistemological-hermeneutical study; and (2) to consider this complexity when guiding the instructional process with university students and determining whether the articulation of this complexity in some way allows us to overcome the learning problems identified and to achieve some change in the current teaching paradigm.
The onto-semiotic approach to mathematical cognition and instruction (hereafter, OSA [3]) was used as a theoretical support because it offers tools that enable the identification of the complexity of mathematical entities and the connection of the units in which this complexity erupts, following the model outlined in [4,5], through multiple meanings (partial meanings) described in terms of practices and epistemic configurations of the primary objects activated in these practices [6].
This manuscript is focused on responding to the first objective, related to formalising the complexity of the integral, sharing the position put forward in [4,7] and considering that, in studies on the ontology of a mathematical object, logical keys and epistemological elements are evident in the process of theoretical constitution, which not only allow us to better understand the concept, but also reveal characteristic aspects of the mathematical construction activity that must be taken into account for its comprehension. This component enabled the identification of epistemological breaks in the evolution of the integral which, in the organisation of the information, gave rise to three periods, each of which generated a global epistemic configuration, described in detail in the Results section.
The development of the second objective aimed to demonstrate whether the complexity identified for the integral is the origin of the various difficulties manifested in the teaching and learning processes of Integral Calculus referenced in the mathematics education literature, and whether knowing it allows for some kind of change in the current teaching paradigm. The Methodology section of this work describes how the proposal was developed, the results of which, due to their length and detail, can be consulted in [8,9].
Theoretical Background
The research interest is focused on the complexity of the integral. We cite some relevant works: for the definite integral [10,11]; for the integral in general [1,2,12-16]. Providing elements that allowed a first classification of the epistemic complexity of the definite integral to be identified, ref. [10] describes four particular epistemic configurations of reference: 1.
The geometric is used to determine the area under a curve and the abscissa axis and to calculate lengths, areas, and volumes in a static geometric context. Leibniz is considered as its main driver; 2.
The result of the process of change frames all those cases in which the integral is necessary to solve situations in other sciences associated with non-static processes. Newton is presented as the promoter of this meaning; 3.
The inverse of the derivative arises from the original relationship between the derivative and the integral. It is associated with the works of Newton in 1711 and Leibniz in 1710; 4.
The approach to the limit is related to the formalisation initiated by Cauchy in 1825, which gave rise to a new definition of definite integral.
The author of [7] complemented the above four configurations by adding another two.
1.
The algebraic is generated from the formalisation of the concepts taught by teachers, considering that they spend a large part of their time practicing the integration rules; 2.
The generalized is framed by the need to expand the set of integrable functions after the foundation built by Cauchy in 1831.
It should be noted that the historical changes found are characterized by the solutions that are presented for the problem existing in a certain epistemic configuration at a certain moment in history. These changes may imply both the rupture of the epistemic configuration and its evolution to an inclusive or complementary one. From these parameters, [12] raises eight partial meanings, also presented as epistemic reference configurations, which complement or modify those exposed by the authors mentioned above:
1.
Intuitive, related to the work of the Greeks in geometry; 2.
Primitive, related to Newton and Leibniz's works to find the primitive of a function. Establishes the inverse relationship between derivatives and integrals from the Fundamental Theorem of Calculus; 3.
Geometric, related to the resolution of clearly geometric problems in the Middle Ages;
4.
Summation, related to the problem of the foundation of the calculus which began at the end of the 19th century; 5.
Approximate, related to the intra-and extra-mathematical application of the definite integral; 6.
Extra mathematics, related to the breadth of application possibilities for the integral to problem situation in different areas of knowledge; 7.
Accumulated, related to the intuitive processes of "integration" produced from the medieval period through Newton's work linked to dynamics in 1687 (change and movement); 8.
Technological, related to the use of mathematical software to perform calculations on computers and the ability to use the appropriate software tools.
These eight configurations show the extension of the study: it addresses not only the definite integral but also, tangentially, the indefinite integral, without clarifying its origin, disentangling it, or defining it as an entity independent of the definite integral. Hence, certain limitations are evident in each partial meaning, which later became motivating situations for other, more consistent "meanings", here called "secondary". For example, improper integrals are extensions of definite integrals and were not contemplated in any of the previous configurations.
Theoretical Framework
In this research, we use the OSA as theoretical support because it offers tools that help us reflect on the complexity of mathematical objects and on the possibility of articulating them in search of an explanation of how they arise [3]. In the OSA, the analysis of mathematical activity involves determining the types of entities involved in that activity. Six main types are distinguished: situations/problems, actions, language, concepts, properties, and arguments [6]. These objects relate to each other, forming "configurations" that can be epistemic, if seen from the perspective of the mathematical institution, or cognitive (personal), if seen from the perspective of the subject who carries out the practices [3]. A configuration is defined as a network of intervening and emergent objects of the systems of practices activated to solve problems. Systems of practices and configurations are proposed as theoretical instruments to represent mathematical activity and thereby infer mathematical knowledge [5]. The notion of epistemic configuration is a useful instrument for the analysis of mathematical writings and for historical-epistemological studies of mathematical objects [3].
The notion of language use plays a significant role in OSA, alongside the concept of institution; it is considered the contextual mechanism that relativises the ways of being and of existing of mathematical entities [3]. According to their use, the mathematical objects that intervene in mathematical practices, and those that emerge from them, can be considered from the perspective of being/existing and grouped into facets or dual dimensions (see [5]). For this work, we will use one of those dual dimensions: the unitary-systemic. According to this duality, mathematical objects can participate either as unitary objects or as a system. When a mathematical entity is considered as an object, a unitary perspective on it is being adopted. However, there are times when it is possible to adopt a systemic perspective on the same object, for example, when considering its component parts.
The Emergence of Mathematical Objects in OSA
Ref. [5] showed that the route by which mathematical entities emerge from practices is complex and involves at least two levels. At the first level, representations, descriptions, propositions, procedures, problems, and arguments (primary objects) appear, organised in "epistemic configurations". Mathematical practice can be thought of metaphorically as "climbing a ladder": the rung on which the practice rests is a configuration of previously known primary entities, whereas the higher rung reached as a consequence of the practice generates a new configuration of primary entities in which one (or some) of those entities was not previously known; thus, new primary entities emerge as a consequence of mathematical practice [4]. At a second level, there is the emergence of a mathematical object (the integral, in our case) that can be characterised by different representations: antiderivatives, areas under curves, accumulation functions, among others, and that can have equivalent definitions, properties, theorems, etc. This second emergence is a consequence of interactions that implicitly or explicitly generate in the classroom a descriptive-realistic vision of mathematics which considers that: (1) mathematical propositions and statements refer to properties of mathematical objects; (2) these objects attain a certain autonomous existence, independent of the subjects who know them and of the language used to know them.
From this point of view, the integral object is located at the second level. It emerges as a global reference associated with diverse configurations of primary objects, which allows mathematical practices to be carried out in the different contexts in which the integral has been interpreted: as the limit of a Riemann sum, as the inverse of the derivative, as the result of a process of change, as a summation, as the result of a process of accumulation, or as an operator that obtains, from a given function, another function (its primitive). This allows us to understand that the integral can be delimited and embodied in various ways. According to OSA [4,5], the result is that it is considered a single object, named the integral, which plays the role of global reference for all the configurations of primary objects. In mathematical activity, this global reference takes the form of a specific configuration of primary objects; therefore, what can be done with this second-level object is determined by that configuration. Ref. [4] notes that, in OSA, the entity that plays the role of global reference can be seen as single, to simplify the discourse, and, at the same time, as multiple, since metaphorically it breaks down into a combination of primary objects grouped in various configurations.
OSA's idea of the complexity of mathematical objects makes it possible to identify a diverse system of problem-solving practices in which the (secondary) mathematical object does not appear directly. What do appear are representations of the (secondary) object and the diverse meanings, propositions, properties, actions, procedures, and arguments applied to it (epistemic configurations of primary objects). In other words, throughout history, different epistemic configurations of primary objects have been generated for the study of the (secondary) mathematical object, some of which have served to generalise pre-existing ones.
Methodology
To determine the complexity of the integral, we carried out a study based on the interpretative-hermeneutic paradigm, starting from the recognition of the difference between social and natural phenomena and acknowledging the greater complexity and unfinished character of the former, which are always conditioned by human participation. Under this premise, we sought to find, interpret, and clarify the development of the different notions of the integral.
Sample
The sample consisted of the epistemic configurations proposed by two authors, here called tertiary sources [11,12], who performed the first classification for the definite integral using the theoretical tools provided by the OSA.
Instruments
A template to build an epistemic configuration; primary, secondary, and tertiary bibliographic sources. Specialised software to systematise the information collected and matrices for data triangulation.
Procedure and Data Analysis
The modelling of the complexity of the definite integral proposed in [11,12] was taken as a starting point. In the epistemic configurations proposed by those authors, improper integrals were neither shown as an extension of definite integrals nor clearly delimited. Since the origin of these integrals is not considered in any configuration, we were motivated to delve into secondary sources on the history of mathematics [17-30] to identify, characterise, and categorise their evolution and to determine in which configuration they fit, or whether they complete or modify them. In this phase, the analysis instrument used was the format for the construction of an epistemic configuration; each of the six elements that comprise it was broken down, analysed, and modified as the selected and systematised data were collected and analysed.
The classification of the systematised information showed the need to go deeper into the study on the basis of a historiography of the integral, for which it was necessary to resort to secondary bibliographic sources, i.e., the texts on the history of mathematics referenced in the theoretical background of this work. The triangulation of this information allowed us to understand the different notions of the mathematical object integral, considering the context in which each one arose. Interpreted in a modern context, they allow us to distinguish between definite, indefinite, and improper integrals, which leads us to conclude that the tertiary sources, by focusing their research on the definite integral, did not consider historiographical aspects of the evolution of the concept of the integral from its origins. Among the components that modify the epistemic configurations proposed by [11,12] are, for example, Archimedes' theorem for the quadrature of the spiral and Fermat's two squares theorem, which influenced the works of Newton and Leibniz in the constitution of the concept of the integral; these theoretical aspects made it necessary to consult primary sources, also referenced in the theoretical background of this work.
Once these limitations had been detected, the next step was to check, in both the secondary and primary sources, whether the type of problem addressed in each historical period involved the use of a clearly indefinite integral, or whether the result found was the one we know today as a consequence of the calculation of a definite integral, or an improper one, bearing in mind that, in each period, these distinctions were not yet known. The idea was to complete the modelling of the complexity of the integral already identified in the elements that make up the epistemic configurations created from the primary and secondary sources, rethinking what those mathematicians did, this time from a modern perspective, which would allow us to identify the type of integral that was used and whether it corresponded to finding a solution to the problematic situation identified for each period. In this phase of the research, given the amount of information collected and aiming to refine it, it was necessary to follow the same method used in [31] for optimisation and in [6] for the derivative, since those works studied the epistemic complexity of these mathematical objects also using the tools provided by the OSA. The deductions obtained were subjected to a triangulation process, preserving the structure of the epistemic configuration tool (abbreviated model in Table 1), which allowed the complexity of the integral to be characterised by identifying three global periods, each of which generated a global epistemic configuration, described in detail in Section 5 of this paper.
Validation of the Proposed Characterisation
Once this epistemic complexity was identified, we proposed to determine how it contributes to the instructional processes from the current programmes offered in three colleges of a university in the city of Bogotá: Finance, Cadastral Engineering, and Administration. To this end, we analysed the syllabus of these programmes, focusing on the subject Calculus 2, where integral calculus is taught. The purpose was to determine what part of this complexity was present and, if any, how it is articulated for the teaching of integral calculus and its applications. After analysing the three Calculus 2 programmes in detail, a restructuring of the part corresponding to the teaching of the integral was proposed, articulating the three global epistemic configurations proposed, redesigning activities that would allow the integral to be shown from different concepts, allowing reflection on its logical structure of production, construction and application. A representative sample of partial meanings was formulated, connected to each other, taking as a reference the complexity elaborated and proposed in this work. A sequence of tasks was designed and implemented with three groups of students from these colleges, with follow-up during three academic semesters, with the aim of observing some evidence in the students of the connection between partial meanings and their use when solving problems in different contexts, thus analysing their performance. In other words, to evidence the development of mathematical competencies in students when using integrals. To verify the benefits of this restructuring, the work we carried out with these groups was compared with the results obtained with a fourth group that continued to be taught the integral in traditional classes. Due to their length and details, the results found can be consulted in [8] and were expanded in [9].
Results and Implications
This section presents the results of the work in two senses: (1) the complexity of the integral, the object of this paper; and (2) some details related to the characterisation of the complexity of the integral and its articulation when planning and implementing a sequence of tasks with university students, since the full details and results of that characterisation can be found in [8].
Results in Relation to the Complexity of the Integral
This study allowed us to identify the existence of certain limitations in each of the partial meanings put forward by the tertiary authors, given that they focused their interest only on the definite integral, leaving aside elements that became motivating situations for other, more consistent "meanings" involving indefinite and improper integrals, here called secondary, which are detailed in the three global epistemic configurations that we propose. Table 1 shows the synthesis of this globality, articulating the primary epistemic configurations within the global ones and allowing us to visualise the complexity of the integral.
From this classification, we modelled the complexity of the integral in three global epistemic configurations: (1) origins of the integral (GEC1); (2) the operation of integration as support for the nascent Integral Calculus (GEC2); and (3) the formalisation of Integral Calculus (GEC3). It should be clarified that, within these global configurations, we distinguish in detail the appearance of definite, indefinite, and improper integrals, which allowed us to locate the configurations proposed by the tertiary sources within one of the three global epistemic configurations established here.
In the validation with the pilot group, we found that making this complexity explicit allows students to focus their attention not only on algorithms and techniques but on the complexity of the integral itself, helping them to identify its different meanings and to access fundamental concepts of Integral Calculus, such as: the calculation of the area between curves (with definite integrals); the application of convergence criteria for improper integrals; the calculation of the centre of gravity of a body and of the force of gravitational attraction; and the calculation of the area of a plane region, the length of a curve, and the volume and surface area of a solid of revolution, among others. This enables them to establish different relationships between these concepts and to apply and extrapolate them to other situations that require the solution of new problems.
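For reference, and purely as our own illustration (not taken from the syllabi analysed), the standard modern formulations behind several of these applications are:
$$A=\int_{a}^{b}\big[f(x)-g(x)\big]\,dx\quad(f\ge g),\qquad L=\int_{a}^{b}\sqrt{1+\big[f'(x)\big]^{2}}\,dx,\qquad V=\pi\int_{a}^{b}\big[f(x)\big]^{2}\,dx,$$
for the area between two curves, the length of the curve y = f(x), and the volume of the solid of revolution generated about the x-axis, respectively. For improper integrals, a typical comparison criterion states that if 0 ≤ f(x) ≤ g(x) for x ≥ a and $\int_{a}^{+\infty}g(x)\,dx$ converges, then $\int_{a}^{+\infty}f(x)\,dx$ also converges.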
Origins of the Integral
Around 340-194 B.C., the Athenian school tackled three problems related to measurement: doubling the cube, trisecting an angle, and squaring the circle, all of them in a clearly intra-mathematical context. For reasons of space, we present the position of three mathematicians representing this school who worked to find a solution to those problems. Eudoxus, around 340-330 B.C., created the method of exhaustion, inscribing a succession of polygons in the non-rectilinear figure to be squared and choosing the sequence in such a way that the differences between the measure of the figure to be squared and the measure of each polygon form a sequence that satisfies the hypothesis of the previous proposition.
Euclid, about 300 B.C., using the method proposed by Eudoxus, carried out measurements in which he compared known and unknown magnitudes, respecting the principle of homogeneity (one-dimensional: comparing one segment with another taken as the reference unit; two-dimensional: finding a square equivalent to any plane figure; three-dimensional: finding a cube equivalent to any solid). In the first two propositions of Book XII, Euclid sets out the idea of decomposing and recomposing rectilinear plane figures to obtain their quadrature. Archimedes considered Democritus to be the first who, following Euclidean approaches, correctly established the formula for the volume of a cone or a pyramid, "considering these solids as if they were formed by innumerable parallel layers" [32] (p. 23).
At the end of the third century B.C., Archimedes, retaining this form of reasoning, used strict proofs to find areas, volumes, and centres of gravity of curves, surfaces, circles, spheres, conics, and spirals, perfecting the method of exhaustion. He combined geometry with the laws of mechanics and the method of exhaustion, a process that gave rise to the indivisible and the infinitesimal, respectively. He positioned the method of exhaustion as an approximation between inscribed and circumscribed geometric figures of a given measure that delimit the figure sought, so that the difference between them becomes so small that they can be considered equivalent.
In this type of work we find that the ancient Greeks, through purely geometric processes, implicitly used the integral as an operation whose result was impossible to determine because of the theoretical limitations of the time [33]. That is, viewed in current terms, their work reflects the implicit use of the "indefinite integral", yet the results found were a measure, a number, which, seen in current terms, is equivalent to the application of a "definite integral", and sometimes of an "improper" one. For example, in the case of the quadrature of Archimedes' spiral (a curve described by a material point that moves with uniform speed along a ray that rotates with uniform angular speed around its endpoint), one starts from a succession of infinitely many layers that cover the area and, although the result is a number (which was considered a measure), such a succession of layers can, from the current viewpoint, be understood as a succession of functions that converge to another function. Thus, the application of this implicit notion of the integral is what is known today as the indefinite integral, but the ancient Greeks used it to obtain a measurement (today we know that a measurement is obtained as the result of calculating a definite integral). This procedure, translated into current notation, is practically the same as the Riemann integral; the spiral's polar equation has the form ρ = aθ, where a > 0 is a constant. As an example, we illustrate the way Archimedes used the following theorem: the area of the first turn of the spiral is equal to one third of the area of the circumscribed circle (Figure 1).
Demonstration 1.
Let us consider a spiral with polar equation ρ = aθ. Let us calculate the area swept when the polar angle varies from 0 to 2π, i.e., over the first turn of the spiral. The radius of the circumscribed circle is 2πa. To do this, we divide this circle into sectors of amplitude 2π/n, from θ = 2πk/n to θ = 2π(k+1)/n for k = 0, 1, ..., n − 1. In each sector we examine the spiral arc that remains within it and bound the area corresponding to that spiral arc between the areas of two circular sectors: the largest circular sector inscribed in each spiral arc has area $\frac{1}{2}\left(\frac{2\pi a k}{n}\right)^{2}\frac{2\pi}{n}$, and the smallest circular sector circumscribed about each spiral arc has area $\frac{1}{2}\left(\frac{2\pi a (k+1)}{n}\right)^{2}\frac{2\pi}{n}$ [30] (p. 140).
In modern notation, the area S of the spiral satisfies
$$\sum_{k=0}^{n-1}\frac{1}{2}\left(\frac{2\pi a k}{n}\right)^{2}\frac{2\pi}{n}\;<\;S\;<\;\sum_{k=0}^{n-1}\frac{1}{2}\left(\frac{2\pi a (k+1)}{n}\right)^{2}\frac{2\pi}{n}.$$
Archimedes knew that $\sum_{k=1}^{n}k^{2}=\frac{1}{6}n(n+1)(2n+1)$. Using this result, he compared both sides of the inequality with $K=\frac{4\pi^{3}a^{2}}{3}=\frac{1}{3}\pi(2\pi a)^{2}$, a third of the area of the circumscribed circle; subtracting $K$ from the previous inequality and carrying out simple operations, he obtained bounds that can be made arbitrarily small. Using the Archimedean axiom, the conclusion is that S = K. We observe that the proposition starts from an implicit, "indefinite" operation whose result is a number (implicitly, the result which, in current terms, corresponds to calculating a definite integral). The global epistemic configuration 1 (GEC1) associated with this period is summarised in Table 2.
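For comparison, and only as our own modern reconstruction of the same result, the area of the first turn can be written as a Riemann integral in polar coordinates:
$$S=\int_{0}^{2\pi}\frac{1}{2}\,\rho(\theta)^{2}\,d\theta=\int_{0}^{2\pi}\frac{1}{2}\,a^{2}\theta^{2}\,d\theta=\frac{a^{2}}{6}(2\pi)^{3}=\frac{4\pi^{3}a^{2}}{3}=\frac{1}{3}\,\pi(2\pi a)^{2},$$
that is, one third of the area of the circle of radius 2πa that circumscribes the first turn of the spiral.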
Problem situations
Relative measurement problem, in an intra-mathematical context, from three situations: (a) doubling the cube; (b) trisecting an angle; and (c) squaring the circle. Problem of measuring long distances (for example, the distance from the Moon to the Earth). Problems in which areas, volumes, centres of gravity of curves, surfaces, circles, spheres, conics, and spirals must be found.
Definitions
Commensurable and incommensurable magnitudes. Different types of curves. Elements of the curves.
Procedures
Basic processes for measuring: direct measurement; decomposing, recomposing, and superimposing, respecting the principle of homogeneity. Method of exhaustion (a succession of polygons is inscribed in the non-rectilinear figure to be squared, approximating it between inscribed and circumscribed geometric figures of known measure that delimit the figure to be determined; this value is taken as the "limit").
Method of reductio ad absurdum (proof by contradiction).
Propositions
Principle of homogeneity: only magnitudes of the same dimension can be compared.
Results obtained for specific cases of surface measurement and volumetric measurement (for example: the area of the first cycle of a spiral is equal to one-third of the area of the circumscribed circle).
Arguments
The argument consists of the correct application of the method of exhaustion to solve the problem, using integration as the fundamental operation.
Source: own creation.
The ancient Greeks conceived tangency as static and geometric, adequate for the circumference but not for a spiral. Hence, Archimedes established two ways of operating with infinity: the mechanical method, incorporating indivisibles; and the method of exhaustion, with the infinitely small, due to the presumption of the existence of a "limit". In this configuration, one of the basic principles is that of homogeneity. However, this principle presents problems because it is not always possible to fit one figure within another a whole number of times.
Integration as a Support for Nascent Integral Calculus
With respect to the GEC1 configuration, we highlight the overcoming of the principle of homogeneity. Here we quote a primary source whose contributions generated a rupture in the way the integration operation was conceived, influencing other mathematicians of the time and establishing the concept of the integral in a more general and abstract way, that is, as a new discipline. The example is in [34], located within the geometric algebra of the Greeks (which explains how arithmetic operations can be carried out with ruler and compass); the author broke with tradition by considering that any algebraic expression, for example a² and b³, represents a segment (for the ancient Greeks, a² and b³ were an area and a volume, respectively).
On the use of letters in geometry . . . Frequently, it is not essential to draw the lines on paper; it is enough to designate each of them by a letter. Therefore, to add lines BD and GH, I name one a and the other b and write a + b. Likewise, I will write a − b to indicate the subtraction of b from a. Additionally, I will write ab to indicate the multiplication of one by the other; a/b to divide a by b; aa or a² to multiply a by itself; and a³ to multiply this result once more by a, and so on to infinity; in addition, √(a² + b²) is used to obtain the square root of a² + b²; finally, √C.(a³ − b³ + abb) obtains the cube root of a³ − b³ + abb, and similarly for others. It should be noted that with a² and b³, and comparable terminologies, I conceive in general nothing but simple lines, although I name them squares or cubes, expressions used in algebra. Likewise, we must consider that all the parts of each line are articulated by an equal number of dimensions when the unit has not been determined in the statement of the problem. Thus, a³ contains the same dimensions as abb or b³, these being the components of the line that I have named √C.(a³ − b³ + abb). The same does not happen, however, when the unit is determined, because it can always be assumed whatever the dimensions, etc. [34] (p. 66).
Thanks to Descartes' analytic geometry, a bridge was created between geometry and analysis, expanding the domain of geometric curves and allowing new methods for calculating tangents and areas to be developed. From these extensions, Kepler modified the method of exhaustion, indicating that "any figure or body is represented in the form of a figure by a set of infinitely small parts" [24] (p. 170), and introduced concepts such as the indivisible and the infinitesimal, which allow techniques for calculating tangents or performing quadratures to be developed heuristically; contrary to Cavalieri, who kept integration as an operation, reasoning in the Greek style and considering a plane figure to be made up of a set of lines and a solid of an indefinite number of parallel plane fragments.
During the 17th century, the use of infinitesimal quantities became established in the solution of problems involving the calculation of tangents, areas, and volumes. We highlight Fermat, Wallis, Pascal, and Barrow as representatives of the era, because they present a conceptual and methodological break with the strictly geometric approach of Cavalieri, originating a progressive arithmetisation that led to the implicit use of the limit. As an example, we show how Fermat calculated the quadrature of the hyperbola y = x⁻² for x ≥ a, since essential elements of the current definite integral appear in it. To facilitate understanding, we use modern terminology and notation (Figure 2). Let us choose a number r > 1 and consider the abscissa points a, ar, ar², ... The area of the inscribed rectangles (Figure 2) is given by
$$\sum_{k=0}^{\infty}\frac{ar^{k}(r-1)}{a^{2}r^{2k+2}}=\frac{r-1}{ar^{2}}\cdot\frac{r}{r-1}=\frac{1}{ar},$$
and the area of the circumscribed rectangles is given by
$$\sum_{k=0}^{\infty}\frac{ar^{k}(r-1)}{a^{2}r^{2k}}=\frac{r-1}{a}\cdot\frac{r}{r-1}=\frac{r}{a}.$$
Therefore, calling S the area under the curve, we have that 1/(ar) < S < r/a. Since this inequality is valid for every r > 1, we conclude that S = 1/a. We note that this value is precisely the area of the rectangle OABa. In these quadratures of Fermat there are, from a current perspective, three essential aspects of the definite integral: (a) the division of the area under the curve into infinitely small area elements; (b) the approximation of the sum of those area elements by infinitesimal rectangles whose height is given by the analytical equation of the curve; and (c) an attempt to express something similar to the limit of that sum when the number of elements increases indefinitely as they become infinitely small. In modern terms, this corresponds to the rule $\int x^{n}dx=\frac{x^{n+1}}{n+1}$, valid for all rational n with n ≠ −1. In the same direction, we find Wallis arithmetising Cavalieri's indivisibles, transforming the calculation of quadratures into the problem of finding the area below a curve given by a Cartesian equation. Let us see how the area below the curve y = xᵏ, with k = 1, 2, ..., on the interval [0, a] was calculated (Figure 3), since this process influenced the works of Newton between 1666 and 1676 and of Leibniz between 1675 and 1695, which would later formalise the nascent infinitesimal calculus. Wallis considered the region PQR to be made up of infinitely many parallel vertical lines, each of length xᵏ. He divided the segment PQ = AB = a into n parts of length h = a/n, where n tends to infinity. The sum of these infinitely many lines is proportional to 0ᵏ + hᵏ + (2h)ᵏ + ... + (nh)ᵏ. Likewise, the area of the rectangle ABCD is aᵏ + aᵏ + ... + aᵏ = (nh)ᵏ + (nh)ᵏ + ... + (nh)ᵏ, and the ratio between the areas PQR and ABCD is
$$\frac{\text{area}\,PQR}{\text{area}\,ABCD}=\frac{0^{k}+1^{k}+2^{k}+\cdots+n^{k}}{n^{k}+n^{k}+n^{k}+\cdots+n^{k}}.$$
In current terms, this can be summarised as $\lim_{n\to\infty}\frac{0^{k}+1^{k}+\cdots+n^{k}}{(n+1)\,n^{k}}=\frac{1}{k+1}$, assuming that the limit exists. This process is known as Wallis's interpolation method. The technique shows that the sums needed to calculate quadratures can be carried out arithmetically better than in terms of geometric ratios, which evidences the definitive break with the rigor of Greek geometry and with the Aristotelian tradition of avoiding infinity.
Using his method of incomplete induction, Wallis generalised these results to finite sums and infinite series (known today as the intuitive use of passage to the limit). Wallis realised that what was considered static could become dynamic, thereby defining four important elements in the conceptualisation of the definite integral: (a) the determination of the area of a rectangle as the product of its base and height; (b) the division of the area under the curve into infinitely small area elements (infinitesimal rectangles whose height is determined by the equation of the curve); (c) the approximation of the numerical determination of the sum of those elements; and (d) an attempt to express the equivalent of what would become the limit of this sum when the number of elements increases indefinitely as they become infinitely small.
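Read with modern eyes, and only as our own illustration of the two preceding quadratures, Fermat's result corresponds to an improper integral of the first kind and Wallis's to a definite integral:
$$\int_{a}^{+\infty}x^{-2}\,dx=\lim_{b\to+\infty}\left(\frac{1}{a}-\frac{1}{b}\right)=\frac{1}{a},\qquad \int_{0}^{a}x^{k}\,dx=a^{k+1}\lim_{n\to\infty}\frac{0^{k}+1^{k}+\cdots+n^{k}}{(n+1)\,n^{k}}=\frac{a^{k+1}}{k+1}.$$
Neither author, of course, had these notions or this notation available; the restatement is ours.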
In 1647, Saint-Vincent, considering these four elements, derived an extension of the definite integral: he studied a generalisation of the notion of integral as contemplated until then. Analysing the area under the hyperbola y = 1/x, he showed that, if the ratio of successive abscissae is constant (Figure 4), then the areas I, II, and III are equal. He demonstrated that if points are arranged according to a geometric progression on one of the asymptotes of a hyperbola, the areas cut off under the curve by parallels to the other asymptote are equal, since the areas of the curvilinear trapeziums are equal when the lengths AA₁, AA₂, AA₃, AA₄, ... are in geometric progression. Therefore, [35] studied, in terms of areas, the values of what in current terms is F(x) = ∫ₐˣ f(t) dt, which can represent an improper integral; he identified that the function F has restrictions: it must be defined and bounded on a finite interval [a, b].
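A minimal modern restatement of Saint-Vincent's observation (our reconstruction, not his formulation) makes the link with the logarithm explicit: for any c > 0 and any ratio r > 1,
$$\int_{c}^{cr}\frac{dx}{x}=\ln(cr)-\ln(c)=\ln r,$$
so the area under the hyperbola depends only on the ratio of the abscissae and not on their position; abscissae in geometric progression therefore cut off equal areas.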
Subsequently Barrow, following the direction outlined by Wallis, called the inverse relationship between problems of tangents and quadratures a "fundamental theorem", basing himself on geometric methods. Ref. [36] mentions that Barrow inferred the use of elements that were later key to the precise statement of the fundamental theorem of calculus, breaking with the integration operation and turning it into a new field of work, with definitions, properties, and theorems that need to be considered. Barrow did this by representing two curves, y = f(x) and y = g(x), in Figure 5.
Demonstration 2.
We draw a straight line FT through F that intersects line AD at T and is such that DF/TD = DE = f(D); we want to prove that FT is tangent to y = g(x) at the point F. It suffices to see that the horizontal distance KL, from any point L on the line EF to the line FT, is less than the distance IL from that point L to the curve y = g(x); this proves that the line FT always remains on one side of y = g(x). We have that FL/KL = DF/TD = DE. On the other hand, area ADEZ = FD and area APGZ = PI = LD, so area PDEG = FD − LD = FL; since area PDEG < rectangle PD·DE, it follows that FL < PD·DE, hence DE > FL/PD and, therefore, FL/KL > FL/PD, so KL < PD = IL. We deduce that the point K lies below the curve y = g(x); thus, the line FT lies on one side of the curve. To complete the demonstration, it is necessary to repeat the reasoning taking points to the right of EF. This proves that TF is tangent to y = g(x) at F and that its slope is DE = f(D). In current terms, what Barrow proved is that $\frac{d}{dx}\int_{a}^{x}f(t)\,dt=f(x)$. Following this line of reasoning, Newton (developing absolute theses) and Leibniz (developing relative theses) positioned the integration operation as a generalisation of the calculation of quadratures in the field of dynamic physics, establishing the inverse relationship between problems of tangents and quadratures. Both adhered to the physical-mathematical model for the intellection of the natural world; they synthesised and established a systematic algorithmic instrument known as the Infinitesimal Calculus, the Newtonian equivalent of the Leibnizian differential and integral calculus. The three main characteristics of this new calculus are: (1) it unified, in two general concepts, the integral and the derivative, the great variety of techniques and problems that had been addressed with particular methods; (2) it developed a symbolism and formal calculation rules that could be applied to algebraic and transcendental functions, independently of any geometric meaning, which made the use of these general concepts almost automatic; (3) it recognised the fundamental inverse relationship between derivation and integration.
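For reference, and only as our own modern restatement, Barrow's result is the first part of the Fundamental Theorem of Calculus: if f is continuous on [a, b] and $F(x)=\int_{a}^{x}f(t)\,dt$, then F is differentiable on (a, b) and
$$F'(x)=\lim_{h\to 0}\frac{1}{h}\int_{x}^{x+h}f(t)\,dt=f(x),$$
since, by continuity, the mean value of f over [x, x + h] tends to f(x) as h → 0.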
Newton and Leibniz understood this new calculus differently: Newton used a mathematical calculus, while Leibniz developed a logical calculus. On the one hand, Newton elaborated a "purely mathematical reduction" of the quantifiable relationships from entity to entity; on the other, Leibniz articulated a "strictly logical construction" from minimal (primitive) concepts of expression. The Leibnizian doctrine has a more coherent cut than the Newtonian philosophy, since it provides universal logical tools, independent of the object of analysis, and thus achieves "absolutely necessary" legitimacy.
The dynamic physics of these two thinkers makes it possible to trace the roots of a conceptual opposition that ended up becoming the confrontation of two divergent and representative worldviews. We believe there were two archetypal ways of conceiving "reality". For Leibniz, logical calculation is the possible construction of complex concepts from primitive ones by virtue of reason. Leibniz never neglected consistency in the rational construction of his system, which always respects the demands of his own logical principles; his doctrine is crossed by a total commitment to the principle of sufficient reason. Leibniz elaborated a dynamics that facilitates communication between metaphysical and physical considerations, while Newton's new analysis is founded on the free use of infinitesimal magnitudes ("moments": "indefinitely" and "infinitely" small magnitudes created from a steady flow in a given time), together with the graphs of Cartesian curves (incorporated through equalities).
The history of mathematics evidences Newton's imprint on calculus and mathematical physics in the eighteenth century, generally judged negatively in comparison with Leibniz's achievement. The paradox according to which Leibnizian calculus made progress in the mathematisation of the scheme of gravitation is frequently cited as a clear symbol of crisis in the Newtonian field. Ref. [37] (p. 292) states: "The Principia were to remain a fossilized classic, on the wrong side of the border between past and future in the application of mathematics to physics", since, when Newton used algorithms, he was accused of having developed crude notation and of preferring less general geometric proofs compared with the Leibnizian calculus. Ref. [37] (p. 285) mentions: "The Newtonian version of calculus, the fluxions and series method, was crude in notation and inelegant in methodology".
However, we find that Newton soon disapproved of Descartes's canon of problem examination and construction, synthesising it in the inverse method of fluxions (more information in: Galuzzi, M. "I marginalia di Newton alla seconda edizione latina della Geometria di Descartes ei problemi ad essi collegati", in: Belgioioso, G., Climino, G., Costabel, P. and Papuli, G. (eds.) Descartes: il Metodo e i Saggi. Istituto dell'Encyclopedia Italiana, Rome 1990, 387-417). Through this method he was able to tackle the problem of "squaring curves". When considering a surface t as generated by the flux of the ordinate sliding at right angles along the abscissa z, he stated that the rate of flux of the surface area is equal to the ordinate (he declared $\dot{t}/\dot{z} = y/1$). In this way, Newton devised integration as anti-differentiation. His approach was to apply the technique to "equations that define the relation of t to z", where "there will be two equations, the last, that will define the curve, and the first, that will define the area" [38] (p. 197). In Leibnizian terms, he constructed the first integral tables in the history of mathematics, giving importance to the inverse method.
Newton established methods corresponding to integration by parts and by substitution. He called "the method of series and fluxions" the set of techniques for the expansion of series, the determination of tangents, and the quadrature of curves. This method was a "new analysis" that, through the use of infinite series, extended to topics that Descartes had excluded from his "common analysis", for example, mechanical curves. Newton's "new analysis" marks a definitive break of the integration operation, making it a new branch of mathematics. Newton, knowing this according to the Pappusian canon, considered this "new analysis" secondary to creation and held that it should be carried out in terms independent of algebraic criteria. Hence, Newton's legacy to his followers was complex. Newton devoted his efforts to developing an elaborate algorithm collected in the mutual examination between the Arithmetica universalis and the new analysis. He conveyed to his followers the idea that the Greek classics were greater than modern mathematicians, and that the ancients had concealed heuristic geometric tools that could be recovered by patiently examining the surviving texts.
From the early to the mid-eighteenth century, Newtonian mathematics grew with thinkers such as Taylor, Stirling, Cotes, De Moivre, and Maclaurin, who were devoted to Newton's mathematics. Taylor supplemented Newtonian mathematics because he thought that the new method of fluxions was only a particular case of a larger theory, the method of increments, which we now call the calculus of finite differences, as Newton had envisioned in Methodus differentialis (1711) (a critical edition can be found in Newton, I. The Mathematical Papers. Op. cit., 8, 244-255).
Taylor worked characteristically with finite differences and recovered results of the new analysis using limit arguments, letting finite increments tend to zero. The outcome of these methods was the expansion in Taylor series. In studying infinite series, the aim was to address quadrature which, in Leibniz's terms, was related to integration. Newton, working on De quadratura, set up a line of research related to the definite integral which was completed by Roger Cotes in 1714 and recorded in his works "Logometria" and "Harmonia mensurarum, sive analysis & synthesis per rationum & angulorum mensuras promotae" of 1722. Following the Newtonian legacy, Maclaurin explicitly refers to the synthetic method of fluxions as the "precise and elegant" Newtonian method. His goal was to present this method in relation to Archimedes' method of exhaustion. He defined a fluxion as "the speed with which a quantity flows, at any limit of time, while it is supposed to be generated" (Ibid., [38], (p. 57)). Maclaurin agrees with Berkeley in identifying in Newton's synthetic method of fluxions an ontological basis absent from Leibniz's differential calculus, since infinitesimals do not have "a real existence".
It was even said that Newton's "new analysis" was only a generalisation of the "Archimedean method". Maclaurin, in his "Treatise", ruled out doctrines entrenched among the Newtonians; he satisfied the need he felt to provide a firmly anchored target for the estimates in the method of fluxions; he argued that the theorems of the calculus were not at all about "fictions" or "ghosts of departed quantities", as Berkeley held, but were to be considered kinetic in the sense that Newton held when operating with fluxions and fluents existing in nature (Op. cit., [38] (pp. 122-123)).
Eighteenth-century European mathematics evolved with the approaches of Newton, generalised by Maclaurin, as evidenced in a change of language, new lines of research, and new values underlying mathematical practice. This period constituted a significant change for the nascent infinitesimal calculus. Newton had to face a German competitor (Leibniz), who arrived at results similar to his own while promoting a different view of mathematics (the situations involved in the polemic between Newton and Leibniz are pointed out by [26,30]; in general, Newton developed the system of series and fluxions between 1665 and 1669, whereas Leibniz developed his differential and integral calculus around 1675 and published articles from 1684). Leibniz left his disciples free to hold different positions on the ontological question of the existence of infinitesimals; he wanted to defend their usefulness as symbols in mathematical calculations. The global epistemic configuration 2 (GEC2) associated with this period is presented in Table 3.
Criticisms of Cavalieri's work involve aspects related to the continuum, to infinity, and to its rhetorical exposition and extensive, intricate geometric reasoning, which make it difficult to read and understand. The conceptual and methodological break with Cavalieri's geometric approach produced an arithmetisation that led to the implicit use of the limit. Analysis became the appropriate method to replace geometric intuition in counting and measuring processes.
Wallis:
- Broke definitively with the rigor of Greek geometry and the Aristotelian tradition of avoiding infinity;
- Transformed the quadrature calculation problem into the problem of finding the area under a curve;
- Identified four elements that were important in the conceptualisation of the definite integral.
Newton:
- Positioned the integration operation as a generalisation of the calculation of quadratures in the field of dynamic physics;
- Adhered to physical-mathematical models for the intellection of the natural world;
- Elaborated procedures corresponding to integration by parts and substitution;
- Provided the first known integral tables in the history of mathematics.
Leibniz:
- Postulated logical calculation as the possible construction of complex concepts from primitives by virtue of reason;
- Developed logical calculations;
- For the new calculus, provided universal logical tools that are independent of the object of analysis, thus achieving "absolutely necessary" legitimacy.
Newton and Leibniz synthesised and established a systematic algorithmic instrument known as the infinitesimal calculus (the Newtonian method of fluxions and the Leibnizian differential and integral calculus).
Components Description
Relations
- Lack of rigor of the techniques used heuristically;
- Implementation of general algorithms, in algebraic rather than geometric terms, giving rise to the new infinitesimal calculus;
- Dominant, implicit idea of the "indefinite integral" as an operator, whose development focused on problems that gave rise to "definite integrals". These two initiated others, discovered but not formalised in this period: the "improper integrals", later formalised as improper integrals of the first and second kind, which arose by extending the notion of integral to unbounded intervals and to unbounded functions on a bounded interval;
- Newton (developing absolute theses) and Leibniz (developing relative theses) positioned the integration operation as a generalisation of the calculus of quadratures in the field of dynamic physics, establishing the inverse relationship between problems of tangents and quadratures;
- Newton and Leibniz adhered to the physical-mathematical model for the intellection of the natural world, synthesising and establishing a systematic algorithmic instrument known as the Infinitesimal Calculus, with the following characteristics: the unification, in two general concepts (the integral and the derivative), of the great variety of techniques and problems that had been approached with specific methods; the development of a symbolism and formal rules of calculus that could be applied to algebraic and transcendental functions, regardless of any geometric meaning; and the recognition of the fundamental inverse relationship between derivation and integration.
Source: own creation.
Integral Calculus Foundation
Ref. [39] mentions that Jacques Bernoulli suggested the name "integral" to Leibniz. This is an epistemologically significant fact, because "with the incorporation of a name to designate a specific operation, a notion that merits special treatment is being identified" (p. 38). The integral was no longer just a tool to solve the general problem of the calculus of quadratures; it became a new concept with its own problems and methods.
The revolutionary changes generated by the "new analysis" proposed by Newton and Leibniz in eighteenth-century mathematics have been presented in three periods [39]: a geometrical period, in which geometrical situations and modes of thought predominate; an analytical or "algebraic" period, started by Euler in 1740 and reaching the end of the century with Lagrange; and a third period starting in the nineteenth century with Cauchy's writings.
By this time, only a few mathematicians noticed the change from the geometrical to the algebraic period. Ref. [39] mentions that since 1740, Euler was probably the first to think of calculus not as an algorithm for the study of curves or other geometrical objects (as in the works of Leibniz and Newton), but as the study of functions understood as "reasoned expressions composed of variables and constants" (p. 340).
Ref. [40] called these changes the "degeometrisation" of 18th-century analysis, whose mathematical entities are now functions, possibly multivariate, of the form f(x, y, z, ...), a shift brought about by the study of orthogonal trajectories [40] and of continuum mechanics [41]. Approaches to analytical dynamics from the principle of least action or from virtual velocities led Euler to develop the calculus of variations [40,42]. Given Euler's importance in this process of de-geometrisation of the Leibnizian calculus, we share the position of [43] in giving this new representative theory the name "Eulerian calculus", to differentiate it from the Leibnizian calculus.
A characteristic of this stage was the lack of formalisation of the theory, owing to problems of rigor and of theoretical foundation, aspects that marked a new stage in the history of the integral, transforming it into the emerging Integral Calculus and leading to the development and formalisation of the concept of the definite integral and its extensions (improper integrals). Euler, in a letter to Goldbach (1744), explains that, before writing the manual on Infinitesimal Calculus, he considered that he had to develop a series of prior topics related to the infinities necessary for the understanding of calculus. He developed a progressive treatise on differential and integral calculus along the guidelines of his Introductio in analysin infinitorum (1748), Institutiones calculi differentialis (1755), and Institutiones calculi integralis (1768-70). He examined classes of functions, simple and multivariate, as symbolic expressions, the purpose being to establish their derivatives and integrals [43,44]. These treatises have a double nature: a taxonomic one, in which he proposes a classification of functions, and an instrumental one, in which he presents the decomposition of polynomials as products of simple factors (corresponding to real roots) or double factors (corresponding to imaginary roots). Ref. [45] mentions that Euler devised methods of elimination and of decomposition into simple fractions, proposing to eliminate any reference to geometry in the study of variable quantities through the concept of abstract or universal quantity. D'Alembert, Lagrange, and Laplace, following the guidelines proposed by Euler, worked on classes of functions instead of curves and surfaces, on partial differential equations, the calculus of variations, analytic mechanics, and the algebraic representation of differential and integral calculus. They arrived at procedures and requirements for operating with symbols rather than with geometrical properties, safeguarding the need to move calculus away from geometry (Op. cit., [34] (p. 319)). After Newton and Leibniz, mathematics advanced, in parallel with the analytical procedure applied in trigonometry, towards the discovery of "partial differences" or "partial fluxions" and of the "calculus variationum". Ref. [46] cites [47] regarding the advent of the principle of virtual velocities and its use in Lagrange's Méchanique analytique of 1788, and shows how Lagrange uses these tools in astronomy.
On the other hand, the mathematicians of the European continent acknowledged and adopted Leibniz's ideals, according to which mathematics could be understood as reasoning based on symbol manipulation, regardless of metaphysical concerns and specific interpretations. Leibniz allowed his followers to hold different positions on, for example, ontological questions related to the existence of infinitesimals or the roots of negative numbers; he wanted to preserve the usefulness of symbols when performing mathematical calculations and urged his followers to set aside interpretative metaphysical questions when working mathematically, guidelines applied by Euler, Lagrange, and Laplace. Ref. [47] (p. 250) mentions that "a feature of the French academy's growing commitment to analytical methods in physics in the course of the eighteenth century was to override the teleological metaphysics of rational mechanics".
Thus, the concept of function took centre stage and the problem of series representation became linked to the problem of integration, facts that radically transformed infinitesimal calculus. We perceive this change thanks to [43], regarding the work on the transformation of heat: Fourier extended the domain of functions beyond continuous ones and established the conditions that a function must fulfil to be representable by trigonometric series. One of those conditions was the integrability of the function over a given interval, which made it necessary to reconsider the concept of integral. At that time, the integral was considered a necessary solution tool, but it was not the main concept of study. Fourier provided the notation for the extremes of integration; in modern notation, $f(x)=\frac{d}{dx}\int_{a}^{x}f(t)\,dt$, and the main problem consisted in the asymptotic development of the function $\int_{a}^{x}f(t)\,dt$ (that is, $\int_{a}^{x}f(t)\,dt=x^{n}+k$), considered a variant of improper integration. The above refers to the good definition of the function and relates more to improper integrals of the second kind. Ref. [48] states that, with Fourier, the integral is seen as the area under the curve, raising the question "how discontinuous can a function be to make it integrable?" (p. 66); however, we found that the formalisation of the improper integral was discussed by De Morgan in 1830, with convergent series representing the integral $\int_{u}^{+\infty}x^{\alpha}e^{-x}\,dx$ for arbitrary u > 0. Poisson approached the resolution of an improper integral by extending to the complex plane, considering $dx=-i(\cos z+i\sin z)\,dz$ and deducing that $\int\frac{dx}{x}=\left[\log\left(-(\cos z+i\sin z)\right)\right]_{0}^{(2n+1)\pi}$ [48] (pp. 70-71). Cauchy adopted the rigorous methods still followed today, such as Cauchy's integral theorem, the Cauchy-Riemann conditions, and Cauchy sequences. Through the concepts of limit, function, and convergence, he managed to put forward an analytical definition of the definite integral for continuous functions, proposed the current notation $\int_{x_{0}}^{X}f(x)\,dx$ for this type of integral, replacing the more cumbersome notation $\int f(x)\,dx$ with the extremes of integration indicated separately, and formalised the properties of the integral expressed with the new notation. With these contributions, Cauchy definitively separated the integral from the differential calculus, demonstrated the inverse relation between the derivative and the integral through the fundamental theorem of calculus in its first historical version, and defined the integral as a limit of sums.
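As a point of reference, and only as our own modern restatement, Cauchy's analytical definition for a function f continuous on [x₀, X] takes the integral to be the common limit of the sums
$$S=\sum_{k=1}^{n}f(x_{k-1})\,(x_{k}-x_{k-1}),\qquad x_{0}<x_{1}<\cdots<x_{n}=X,$$
as the largest of the subinterval widths tends to zero; improper integrals of the first and second kind are then obtained as limits of such integrals, e.g., $\int_{a}^{+\infty}f=\lim_{b\to+\infty}\int_{a}^{b}f$ for an unbounded interval, or an analogous limit when f is unbounded near an endpoint.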
Dirichlet is credited with the modern "formal" definition of a function. With the characteristic function of the rationals, he reflected on the requirements that infinite sets of points of discontinuity must meet in order for a function to be integrable; he stated the false condition that, for a function to be integrable, it is sufficient that the points of discontinuity form a scattered set. However, Riemann, building on the conceptions of Cauchy and Dirichlet, introduced a definition of integral that admitted highly discontinuous arbitrary functions. He defined an integral that generalised Cauchy's (Cauchy's Integral Theorem, also known as the Cauchy-Goursat theorem in complex analysis, is a statement concerning line integrals of holomorphic functions on the complex plane) and gave a precise definition of the integral of a function defined on an interval, which must be bounded and closed. This new integral allowed Volterra to exhibit a bounded derivative that is not Riemann-integrable, thereby imposing a severe limitation on the Fundamental Theorem of Calculus for the Riemann integral, a fact that prompted a profound revision of the notion of integral. Hankel tried to generalise the Riemann integrability condition in terms of the concept of the jump of a function, classifying functions into integrable and non-integrable. Ref. [39] (p. 12) indicates that "Hankel's work initiates the conjunctivist approach to the integration theory that allows us to found modern integration theory". We cannot neglect, in reviewing this evolution of the integral, the analytical mechanics of the 18th century, and how remote from applications pure mathematics of that type was.
Ref. [49] mentions that Borel presented the results of his book "Leçons sur la théorie des fonctions" between 1896 and 1897 at the École Normale Supérieure in Paris, where [50] was his student; he added to the definition of measure the notion of countable additivity, extending the ordinary length of an interval to open sets on the basis of the property that every open set is the countable, disjoint union of open intervals, and he called these measurable sets, but did not study their properties. Lebesgue rigorously analysed those properties, obtaining a special collection of "measurable sets" that forms a σ-algebra. The new notion introduced by Borel is the ideal framework in which Lebesgue developed his integral. Lebesgue delved into the Riemann integral and found its limitations. In response, the Lebesgue integral emerged in 1901, broader than Riemann's, and its development rests on the notion of "measure", as with the ancient Greeks. In 1904, Lebesgue defined measurable functions as those that allow the development of a much broader and more satisfactory theory of integration than Riemann's. The path that Riemann, Darboux, and Lebesgue followed in the construction of a deeper and more rigorous calculus established the necessary and sufficient conditions for integrability, not only in the Riemann sense for bounded functions, but also for a significant generalisation of the Riemann integral.
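A minimal illustration of this change of viewpoint (our own, not taken from [49,50]): Lebesgue partitions the range of the function instead of its domain, so that, for a bounded measurable function f ≥ 0,
$$\int f\,d\mu=\lim\;\sum_{k}y_{k}\,\mu\big(\{x:\;y_{k}\le f(x)<y_{k+1}\}\big),$$
the limit being taken as the partition $\{y_{k}\}$ of the range is refined. For the characteristic function of the rationals on [0, 1], mentioned above in connection with Dirichlet, the Riemann integral does not exist, whereas the Lebesgue integral equals 0, since the rationals have measure zero.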
The sequential overcoming of these difficulties with the Cauchy and Riemann integrals encouraged the search for a more powerful concept of integral, which Jordan, Borel, and Baire began and which culminated in Lebesgue's definition, establishing a solid, strong, and structured theory of integration, based on the idea of replacing partitions of the domain of a function by partitions of its range. Denjoy in 1912 and Perron in 1914 constructed integrals that manage to integrate any derivative; these integrals later turned out to be equivalent and are today known as the Denjoy-Perron integral. In 1957, Kurzweil and, two years later, Henstock each defined an integral that solves the problem of inverting the derivative; these were later shown to be equivalent and are currently known as the Henstock-Kurzweil integral. Table 4 shows the structure of GEC3.
Components Description
Problem situations
- Lack of formalisation of the "new calculus" due to problems of rigor and theoretical foundation;
- The need to reconsider the integrability conditions of a function over a given interval;
- Problems of representing series, related to integration;
- Existence of a bounded derivative which is not Riemann-integrable (a limitation of the Riemann integral);
- The teleological metaphysics of rational mechanics is annulled by the works of Euler, Lagrange, and Laplace.
Arguments
Conception of calculus not as a system for studying curves and geometric objects, but as a tool for analysing functions composed of variables and constants. Work with multivariate functions by analysing their orthogonal trajectories and the continuum mechanics concept of integrals based on rigor and precision, reaching generalisation. Necessary and sufficient conditions are established for integrability, not only for limited functions.
Definitive separation of calculus from geometry. Definitive separation of the integral from differential calculus; Cauchy demonstrated the inverse relationship between the derivative and the integral through the fundamental theorem of calculus in its first historical version. The integral ceases to be a tool and becomes a new concept with its own specific problems and methods.
Rescue of the Leibnitzian legacy: mathematics can be understood as reasoning from the manipulation of symbols, regardless of metaphysical concerns. The usefulness of symbols for mathematical calculations is defended. The concept of "measurable sets" that names a σ-algebra is introduced, and the Lebesgue integral emerged.
Relations
Foundation of Integral Calculus in three periods: -Geometric, when problems and conceptions of geometry predominated, and which led to the "degeometrization" of eighteenth-century analysis; -Analytical or "algebraic", which began in 1740 with Euler and was developed at the end of the century with Lagrange; -Classic analysis, which began at the beginning of the 19th century with Cauchy's writings; -"Eulerian calculation", which was detached from the "Leibnitzian calculation".
In OSA, the appearance of the secondary object is treated as a global mention of one or several configurations of primary objects, which is described by the shared effect and produced by the processes associated with five different dualities [5]. The unitary-systemic duality allows considering a unitary configuration, for example GEC1-3 (Tables 2-4), or the set formed by the three configurations as a systemic entity (Figure 6), given that, when a new topic is studied, what is done is a systemic presentation of the topic: the socio-epistemic configurations are studied together with the practices that these configurations allow. However, when a new theme is initiated, the previously studied configuration and the practices that it enables are considered as a whole, as something known and, consequently, formed by unitary (elementary) entities. These same objects have to be considered systemically in order to be learned [5]. Hence, Figure 6 facilitates managing the complexity of the integral by identifying three large (unitary) meanings, which the teacher can use to present partial meanings for the integral, e.g., Barrow's rule for calculating definite integrals, calculating the area under a curve, properties of the integral, and methods of integration, among others; only from a well-structured articulation of these will it be possible to understand the global context in which integral calculus is applied in extra-mathematical situations. This makes it possible to duplicate the considered object, identifying its illustration and the represented object as different entities; it allows the teacher to show that the different configurations (GEC1-3) are partial displays and explanations of the emergent object, emerging from mathematical practices useful for presenting and formalizing specific situations such as types of integral, extensions of the concept of integral, and applications of improper integrals in complex analysis, to name a few. The ostensive-non-ostensive duality allows us to reflect that the symbolised object is an ideal object distinct from its material representations, while the extensional-intentional duality leads us to think of that object, in general, as a general "something", integral calculus, which achieves objectivity by considering the personal-institutional duality. The grouping of these dichotomies produces the appearance of a global reference not only for the integral but also for integral calculus, on which it is possible to carry out certain actions in order to improve the pedagogical practices implemented. In this process, the interaction and intersubjectivity of the subjects who construct and reconstruct their representations are fundamental to enable quality teaching and learning, which is essential in higher education.
Results Related to Experimentation with University Students
After analysing the curriculum of the subject and the textbooks proposed in the bibliography, the Calculus 2 syllabus was analysed, seeking to identify partial meanings for the integral. This classification was compared with the meanings identified in the three global epistemic configurations proposed in this study. It was found that the integral appeared through representations, different definitions, propositions, procedures, and arguments in the following order: first, indefinite integrals are presented; then definite integrals, integration methods, calculation of areas between curves, and improper integrals. In each of these situations there are some applications, which end up being exercises rather than problems that allow modelling. In the syllabuses analysed, students are expected to master integration techniques and to understand the integral basically as an operator, but there is no attempt to develop the mathematical competences needed to apply the integral to solve problems in different contexts.
Based on this situation, the programmes were adjusted by designing and implementing a sequence of activities aimed at presenting a representative sample of partial meanings for the integral, connected to each other, which would allow mathematical competences to be developed in different contexts. In particular, a balance was sought between the conceptual development of the basic ideas of integral calculus and the appropriate handling of its algorithms, thus offering a global meaning for the integral in which it is identified as a systemic entity, allowing students to develop specific skills such as abstracting, representing, conceptualising, generalising, and synthesising; in other words, developing competence in the use of the integral when solving a variety of problems proposed in different contexts. For reasons of space, the sequence is omitted here; it is explained in detail in [8]. The implementation provided evidence that students connected the interpretation of the integral as an operator with the generalisation of the sum of infinitely small quantities (as a continuous sum) and with the notion of function. They also made additional mathematical connections, applying the integral to other contexts: continuity equations present in physics problems, and the calculation of momentum, energy, and the total rate of change of a moving object. They recognised the integral as a useful tool for modelling problems in other sciences involving continuously varying quantities, as it facilitates the interpretation of different phenomena observed when performing experimental operations. They calculated continuously varying areas, volumes, velocities, and resistances. They accurately applied the integral operator iteratively when working with functions of two and three variables, which allowed them to understand phenomena that require numerical determination, whether calculating areas of plane regions with a double integral, volumes of bodies with a double or triple integral, areas of surfaces with a surface integral, or centres of gravity and moments of inertia, among others. From their productions, there was evidence that the students in these groups managed to use a representative sample of the global meaning of the integral that allowed them to solve a variety of problems in different contexts. It was also found that, during the following year, the students in the focus groups, by then taking Vector Calculus, verified for themselves that the line integral over a vector field coincides with a line integral of a scalar field. The teachers in charge highlighted these achievements in these students; these aspects are referenced in [8], with further extensions in [9].
In Relation to the Complexity of the Integral
In this work we have shown that a secondary object emerges, called the integral, which plays the role of global reference for all the configurations of primary objects that have allowed us to model the complexity of the integral. In mathematical activity, this global reference takes the form of a specific configuration of primary objects; therefore, what can be done with this second-level object is determined by this configuration of primary (first-level) objects. In OSA, the entity that assumes the role of global reference is seen as simultaneously single and multiple, since, metaphorically, it is interpreted as a multiplicity of options opening up from the primary objects associated with different configurations. Table 1 shows the characterisation of the complexity of the integral, considering the three established periods. We consider these useful in solving problems in intra- and extra-mathematical contexts that involve applying the different meanings of the integral. For this reason, we consider it pertinent to recognise that, although a single meaning is intended for this object, there is an epistemic complexity that requires an articulation of partial (primary) meanings, and only from a well-structured articulation will it be possible to understand the global context in which integral calculus is applied. Hence, we share the position of [51] when he indicates that this rethinking leads us to assume that mathematical knowledge is not an objective replica of a single reality external to the subject, but rather a personal and social construction of meanings, the result of a historical evolution, a cultural process in permanent development, located in a specific context. We consider that, in this process, the interaction and intersubjectivity of the subjects who build and reconstruct their representations are essential to enable quality teaching and learning, which is fundamental in higher education. Hence, one of the contributions of this work is the characterisation of the complexity of the integral object through partial meanings.
This scope leads us to study the complexity of the concept, since the current trend is to consider that mathematics should be applied to extra-mathematical contexts (which entails reflection on the complexity of the mathematical objects taught). By sharing this articulation of meanings for the integral with the student groups, we observed the development of advanced mathematical thinking skills such as abstracting, visualising, estimating, justifying, reasoning under hypotheses, categorising, conjecturing, generalising, synthesising, and defining, with significant advances in proving and formalising. This enables students to know how, that is, it supports an informed doing that implies enlightened action and performance, transversal use of knowledge, and the design of appropriate ways to formulate and solve problems, not only in the intra- and extra-school contexts of their mathematical knowledge, but also expanding their zones of proximal development by taking on cognitive and volitional challenges and "risks" in their subsequent professional work.
Some Suggestions for Teaching the Integral
The complex look applied to the mathematical object allows us to go deeper into the process of connecting its partial meanings. The complexity, structured in terms of a set of epistemic configurations, specifies which components are to be connected. We agree with [52] that the concept of the integral, considered across the different stages of its historical development, opens up a vast range of possibilities for applying it to problem situations in different areas of knowledge; this ratifies the importance of our classification (GEC1, GEC2, GEC3). Such a range leads us to study the complexity of the concept, given that the tendency is to consider that mathematics should be applied to extra-mathematical contexts, without realizing that this process involves reflection on the complexity of the mathematical objects.
It is about moving away from a naive and optimistic point of view, which presupposes that the student will easily transfer the mathematical knowledge generated in a single context to other, new and different contexts. A more prudent point of view is that, although creative transfer may occur, we assume that without work on a representative sample of the complexity of the mathematical object to be taught, involving the articulation and connection of the components of this complexity, students will hardly be able to apply the mathematical object to different contexts.
The above considerations allow us to share the proposal in [7] that a strategy to ensure students' competence in the use of the integral for problem-solving consists of designing sequences of tasks aimed at presenting different partial meanings of the integral connected to each other. We implemented this strategy with three groups of students, in which we observed the development of advanced mathematical thinking skills, as mentioned in [8]. Therefore, this work's second contribution is to briefly present a teaching and learning experience of the integral oriented towards a representative sample of well-connected partial meanings of the integral, which allowed us to find evidence of the development of students' competence in the use of the integral to solve problems in different contexts.
Limitations of the Study
There were some limitations in the development of the research, among the most important of which are the following:
• It was quite difficult to access primary bibliographic sources; therefore, resorting to them was onerous;
• Proposing a paradigm shift in the curricular structure that the faculties had in place was not easy. Persuading them to allow the project to be developed by making adjustments to the established curriculum structure, applying it, and looking at its benefits, advantages and difficulties, was hard work;
• Systematising the information collected, which covered more than 20 centuries, was a time-consuming task, requiring almost exclusive work and dedication of time for more than three years, in order to be able to organise and present the work exposed here.
The Prospective of the Research
A study such as the present one, which historiographically traces the evolution of a mathematical concept that those who study it consider difficult to learn, allows the teacher to: reflect on the complexity of the object to be taught; reorient the instructional process with a view to achieving changes in teaching paradigms traditionally centred on a formal, mechanistic approach, which does little to enable students to develop mathematical skills; and recognise that the difficulties linked to the epistemic complexity of mathematical objects are often the origin of the multiple errors, difficulties, and obstacles that students face when they fail to find connections between the concepts studied and the everyday problem situations they encounter, preventing them from being mathematically competent. It can also serve as a model for other researchers interested in improving their pedagogical practices by studying the intrinsic complexity of other mathematical objects that, due to their epistemological nature, are difficult for students to learn.
How to use the results of this study: This work has identified the epistemic complexity of the integral as a mathematical object, and its evolution and articulation until it became Integral Calculus. Metaphorically, it presents a thorough and detailed examination of the way in which this branch of mathematics was constituted and, step by step, progressed, giving answers to different everyday situations in each historical period in which it developed, considering its state and the factors that intervened in its progress. In spite of responding to some problematic situations, other unsolved situations remained open, which led to the emergence of other, more elaborate concepts that made it possible to respond satisfactorily to different situations and, at the same time, to receive an adequate foundation that consolidated Integral Calculus as another branch of mathematics. Awareness of this complexity allows teachers to identify different meanings for the integral which, when articulated and well connected in the planning of their classes, allow them to select specific problem situations that enable students to understand different meanings of the integral, to give them meaning, and to know how and when they can use them to find solutions to everyday situations specific to their professional work; in other words, to develop mathematical competences. Following the model proposed here for the integral can serve as a guide for developing similar studies of other mathematical objects.
Institutional Review Board Statement:
The authors collected anonymous, non-identifiable participant data. The authors kept the data secure to prevent exposure.
Informed Consent Statement:
The authors collected non-identifying data from participants for the extension phase, which is not part of this manuscript. These data, informed consents and results can be found in reference [8].
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
Accretion onto some well-known regular black holes
In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi–Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias–Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.
Introduction
At present, the type 1a supernova [1], cosmic microwave background (CMB) radiation [2], and the large scale structure [3,4] have shown that our universe is currently in a period of accelerating expansion. Dark energy is responsible for this acceleration and it has the strange property that it violates the null energy condition (NEC) and the weak energy condition (WEC) [5,6] and produces strong repulsive gravitational effects. Recent observations suggest that approximately 74 % of our universe is occupied by dark energy, with the remaining 22 % and 4 % consisting of dark matter and ordinary matter, respectively. Nowadays dark energy is the most challenging problem in astrophysics, and many theories have been proposed to handle this important problem in the last two decades. Dark energy is modeled by a perfect fluid relating pressure and energy density through the equation of state (EoS) p = ωρ. The candidates for dark energy are a phantom-like fluid (ω < −1), quintessence (−1 < ω < −1/3), and the cosmological constant (ω = −1) [7]. Other models have also been proposed as explanations of dark energy, like k-essence, DBI-essence, Hessence, dilaton, tachyons, Chaplygin gas, etc. [8][9][10][11][12][13][14][15][16].
On the other hand, the existence of essential singularities (which arise in various black holes (BHs)) is one of the major problems in general relativity (GR), and it seems to be a common property of most solutions of Einstein's field equations. To avoid these singularities, regular BHs (RBHs) have been developed. These BHs are solutions of Einstein's equations with no essential singularity; hence their metric is regular everywhere. The strong energy condition (SEC) is violated by these RBHs somewhere in space-time [17,18], while some of them satisfy the WEC; moreover, RBHs that satisfy the WEC necessarily have a de Sitter center. The study of RBH solutions is very important for understanding gravitational collapse. Since the Penrose cosmic censorship conjecture claims that the singularities predicted by GR [19,20] occur, they must be hidden behind event horizons. Bardeen [21] did pioneering work in this direction by presenting the RBH known as the "Bardeen black hole", which satisfies the WEC.
The discussion of the properties of BHs has led to many interesting phenomena, and accretion onto BHs is one of them. When a massive compact object (e.g. a black hole, neutron star, or ordinary star) captures particles of a fluid from its surroundings, the mass of the compact object is affected. This process is known as accretion of the fluid by the compact object. Through accretion, planets and stars form from inhomogeneous regions of dust and gas. Supermassive BHs exist at the centers of giant galaxies, which suggests that they could have formed through an accretion process. The mass of the BH does not necessarily increase due to the accretion process; sometimes in-falling matter is thrown away, for example as cosmic rays [22]. The problem of accretion onto a compact object was first investigated by Bondi using the Newtonian theory of gravity [23]. After that, many researchers, such as Michel [24], Babichev et al. [25,26], Jamil [27], and Debnath [31], discussed accretion onto Schwarzschild BHs under different aspects. Kim and Kang [29] and Jimenez Madrid and Gonzalez-Diaz [30] studied the accretion of dark energy onto a static BH and a Kerr-Newman BH. Sharif and Abbas [28] discussed accretion onto stringy charged BHs due to phantom energy.
Recently, a framework for accretion onto general static spherically symmetric BHs was presented by Bahamonde and Jamil [22]. We have extended this general formalism to some RBHs. We analyze the effect of accretion on the mass of an RBH by choosing different values of the EoS parameter. This paper is organized as follows: In Sect. 2, we derive a general formalism for a spherically symmetric static accretion process. In Sect. 3, we discuss some RBHs and, for each case, we examine the critical radius, critical points, speed of sound, radial velocity profile, energy density, and the rate of change of the RBH mass. In the end, we summarize our results.
General formalism for accretion
The generalized static spherical symmetry is characterized by the following line element: where X (r ) > 0, Y (r ) > 0, and Z (r ) > 0 are functions of r only. The energy-momentum tensor is taken to be that of a perfect fluid, which is isotropic and inhomogeneous and defined as follows: where p is the pressure, ρ is the energy density, and u μ is the four-velocity, which is given by where τ is the proper time. Both u θ and u φ are equal to zero due to the spherical symmetry restrictions. Here the pressure, the energy density, and the four-velocity components are functions of r only. The four-velocity must satisfy the normalization condition u μ u μ = −1, and we get where u = dr/dτ = u r [22]; u t can be negative or positive because of the square root, which represents the backward- or forward-in-time conditions. However, u < 0 is required for the accretion process, whereas u > 0 holds for any outward flow. Both inward and outward flows are very important in astrophysics. One can assume that the fluid is dark energy or some kind of dark matter. For a spherically symmetric BH, a proper dark energy model can be obtained by generalizing Michel's theory. For dark energy accretion, Babichev et al. [25] introduced this generalization for the Schwarzschild black hole. Similarly, some authors [22,31] have extended this procedure to a generalized static spherically symmetric BH. In these works, the equation of continuity plays an important role, which turns out to be where A 0 is the constant of integration. Using u μ T μν ;ν = 0, we obtain the continuity (or relativistic energy flux) equation Furthermore, we assume a certain EoS p = p(ρ). After some calculations, the above equation becomes where a prime represents the derivative with respect to r . By integrating the last equation, we obtain where A 1 is the constant of integration. By equating Eqs. (5) and (8), we get where A 3 is another constant, depending upon A 0 and A 1 . Moreover, the equation of mass flux yields where A 2 is the constant of integration. By using Eqs. (5) and (10), we obtain the following important relation: where A 4 is an arbitrary constant which depends on A 1 and A 2 .
Taking differentials of Eqs. (10) and (11) and performing some manipulations lead to In addition, we have introduced the variable If the bracketed terms in Eq. (12) vanish, we obtain the critical point (where the speed of sound equals the speed of the flow), which is located at r = r c . Hence at the critical point we get and Eq. (12) turns out to be Also, u c is the critical speed of the flow evaluated at r = r c . We can decouple the above two equations and obtain The speed of sound evaluated at r = r c is as follows: Obviously, u 2 c and V 2 c can never be negative and hence Moreover, the rate of change of the BH mass can be defined as follows [31]: Here a dot denotes a derivative with respect to time. We observe that the mass of the BH will increase for a fluid with ρ + p > 0, and hence accretion occurs outside the BH; otherwise, for a fluid with ρ + p < 0, the mass of the BH will decrease. The mass of the BH cannot remain fixed, because it decreases due to Hawking radiation while it increases due to accretion. If we consider the time dependence of the BH mass, we first assume that it does not change the geometry and symmetry of space-time; hence the space-time metric remains static and spherically symmetric [22].
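As a concrete, hedged illustration of the critical-point machinery (a sketch, not the paper's code): far from their regular cores, all the metrics considered below approach the Schwarzschild form f(r) = 1 − 2M/r, for which the classic Michel relations give u_c² = M/(2r_c) and V_c² = u_c²/(1 − 3u_c²) at the critical point. Identifying the squared sound speed with ω for the barotropic EoS p = ωρ is an additional assumption of this sketch, and the closed form below only applies for ω > 0.

```python
# Illustrative sketch (not taken from the paper): critical point of Michel-type
# accretion in the Schwarzschild limit f(r) = 1 - 2M/r, which the regular black
# holes considered here approach asymptotically. For a barotropic fluid p = w*rho
# the squared sound speed is taken to be w, so the standard critical-point
# conditions u_c^2 = M/(2 r_c) and V_c^2 = u_c^2/(1 - 3 u_c^2) solve in closed form.

def schwarzschild_critical_point(w, M=1.0):
    """Return (r_c, u_c) for EoS parameter w > 0 in units with G = c = 1."""
    if w <= 0:
        raise ValueError("closed-form Michel solution shown here only for w > 0")
    u_c_sq = w / (1.0 + 3.0 * w)   # from V_c^2 = w
    r_c = M / (2.0 * u_c_sq)       # from u_c^2 = M / (2 r_c)
    return r_c, u_c_sq ** 0.5

for w in (1.0, 0.5, 1.0 / 3.0):    # stiff fluid and two illustrative values
    r_c, u_c = schwarzschild_critical_point(w)
    print(f"w = {w:.3f}: r_c = {r_c:.3f} M, u_c = {u_c:.3f}")
```

For the stiff fluid (ω = 1), for instance, this sketch gives r_c = 2M and u_c = 1/2, the kind of baseline against which the critical radii quoted below for the regular black holes can be compared.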
Spherically symmetric metrics with charged RBHs
In this section, we discuss the spherically symmetric metrics of charged RBHs in which X (r ) = Y (r ). For this assumption, Eq. (16) gives Although our focus is on charged RBH metrics with event horizons, the present analysis is not carried out at the horizon itself. In all cases we are concerned with the critical values (critical radius), critical velocities, the speed of sound in the fluid, the behavior of the energy density of the fluid, the radial velocity, and the rate of change of the mass of the accreting object, so the horizon is not involved anywhere [22].
Charged RBH using Fermi-Dirac distribution
The said RBH solution has the following metric functions [32]: where the Fermi-Dirac distribution function is By replacing x = q 2 /(Mβr ), we can obtain the distribution function as with normalization factor ξ ∞ = 1/2. Also, the distribution function satisfies where r → ∞. Hence the metric functions turn out to be If we set β → 0 and β → ∞, we obtain In both equations, the difference of the factor 2 must be noted [32]. It is possible to integrate the conservation laws and obtain analytical expressions for the physical parameters. For simplicity, we will study the barotropic case where the fluid obeys p(r ) = ωρ(r ). Using (5) and (11), we obtain The velocity profile for different values of ω is shown in Fig. 1. Here ω = 1, 0, −1 refer to the stiff, dust, and cosmological constant cases, respectively, and −1 < ω < −1/3 corresponds to quintessence. It can be seen that for ω = −1.5, −2 the radial velocity of the fluid is negative, and it is positive for ω = −0.5, 0, 0.5, 1. If the flow is outward then u < 0 is not allowed, and vice versa. In the cases ω = −1.5, −0.5 the fluid is at rest at x = 10. Figure 2 represents the behavior of the energy density of the fluids in the surrounding area of the RBH. Obviously, the WEC and DEC are satisfied by the dust, stiff, and quintessence fluids. When the phantom fluid (ω = −1.5, −2) moves toward the RBH the energy density decreases, and the reverse happens for the dust, stiff, and quintessence fluids (ω = −0.5, 0, 0.5, 1). Asymptotically ρ → 0 at infinity for ω = −1.5, −0.5, while it approaches its maximum at x = 1.2, 1.3, 1.8 near the RBH.
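For reference, the standard Fermi-Dirac distribution function, which is presumably the one intended above (an assumption of this note, since the explicit expression is not reproduced here), matches the quoted normalization:

```latex
% Standard Fermi-Dirac distribution; with x = q^2/(M*beta*r) it reproduces
% the quoted normalization xi_infinity = 1/2 as r -> infinity (x -> 0).
\xi(x) = \frac{1}{e^{x} + 1}, \qquad
\lim_{r \to \infty} \xi\!\left(\frac{q^{2}}{M \beta r}\right) = \xi(0) = \frac{1}{2}.
```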
Using this metric and Eqs. (19) and (29), the rate of change of the mass of the RBH due to accretion becomes Figure 3 represents the change in the RBH mass for different values of ω. The mass of the RBH will increase near it, at x = 1.2, 1.3, 1.7 for ω = 1, 0.5, 0, respectively. On the other hand, the mass of the RBH decreases near it, at x = 1.7 for ω = −2. Hence the mass of the RBH increases due to the accretion of quintessence, dust, and stiff matter, while it decreases due to the accretion of phantom-like fluids.
The critical values, critical velocities, and speed of sound are obtained for different values of the EoS parameter in Table 1. The critical radius shifts to the left as ω ≥ 0 increases; thus, the in-falling fluid acquires supersonic speeds closer to the RBH. The same critical radius is obtained for ω = −2, 0 and for ω = −1.5, −0.5, with the same critical velocities but in opposite directions. We get a negative speed of sound at x = 7.5044 and a positive speed of sound for the remaining critical radii. Also, the speed of sound increases near the RBH. For this metric, we find that β Mr + Mr
Charged RBH using logistic distribution
The logistic distribution function is [32] ξ, in which we replace x = 2q 2 /(Mβr ); we then obtain the distribution function with normalization factor σ ∞ = 1/4. Also, the distribution function satisfies where r → ∞. The horizons can be obtained for β = 1, where q = 1.055M. The metric function can be written as If we set β → 0, we obtain the Schwarzschild BH, and if we set β → ∞ we get It is noteworthy that this metric function corresponds to an Ayón-Beato and García BH [32]. The radial velocity and energy density for the metric (37), using Eqs. (5) and (10), are given by The velocity profile for different values of ω is shown in Fig. 4. It can be observed that for ω = −1.5, −2 the radial velocity of the fluid is negative, and it is positive for ω = −0.5, 0, 1. If the flow is inward then u > 0 is not allowed, and vice versa. In the cases ω = −2, 0 the fluid is at rest at x ≈ 5. Figure 5 represents the behavior of the energy density of the fluids in the surrounding area of the RBH. Obviously, the WEC and DEC are satisfied by the dust, stiff, and quintessence fluids. When a phantom-like fluid (ω = −1.5, −2) moves toward the RBH the energy density decreases, and the reverse happens for the dust, stiff, and quintessence fluids (ω = −0.5, 0, 0.5, 1). The Ṁ of the RBH for distinct EoS parameters is obtained by using (19). Figure 6 represents the change in the RBH mass against x. It is evident that the mass of the RBH increases due to quintessence, dust, and stiff fluids, and it decreases due to phantom fluids.
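Analogously, the logistic distribution density quoted at the start of this subsection is presumably the standard one (again an assumption, recorded here only because it reproduces the normalization σ ∞ = 1/4 stated above):

```latex
% Standard logistic distribution density; with x = 2*q^2/(M*beta*r) it gives
% the quoted normalization sigma_infinity = 1/4 as r -> infinity (x -> 0).
\sigma(x) = \frac{e^{-x}}{\left(1 + e^{-x}\right)^{2}}, \qquad
\lim_{r \to \infty} \sigma\!\left(\frac{2 q^{2}}{M \beta r}\right) = \sigma(0) = \frac{1}{4}.
```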
The critical radius, the critical velocity, and the speed of sound are obtained for different values of the EoS parameter in Table 2. The critical radius shifts to the right as ω ≥ 0 increases; thus the in-falling fluid acquires supersonic speeds closer to the RBH. For a phantom-like fluid, quintessence, dust, and stiff matter the critical radii and critical velocities are given in Table 2. The same critical radius is obtained for ω = −2, 0 and for ω = −1.5, −0.5, with the same critical velocities but differing in sign. We obtained a negative speed of sound at x = 1.36375, 3.777412 and a positive speed of sound at x = 1.12974, 1.1850. Near the RBH the speed of sound increases. For this metric we find that Also, the condition (18) yields
Charged RBH from nonlinear electrodynamics
We use the line element Here the function and its associated electric field source is where q and M represent the electric charge and the mass, respectively [33]. The solution describes an RBH whose global structure is like that of the R-N BH. The asymptotic behavior of the solution is The radial velocity and energy density for this metric are given by The absolute value of the velocity profile for different values of ω is shown in Fig. 7. It can be observed that for ω = −2 the radial velocity of the fluid is negative, and it is positive for ω = 0.5, 0, 1. If the flow is inward then u > 0 is not allowed, and vice versa. In the cases ω = −2, 0 the fluid is at rest at x ≈ 5. Figure 8 represents the energy density of the fluids in the region of the RBH. It is apparent that the WEC and DEC are satisfied by the phantom fluids. When the phantom fluids move toward the RBH the energy density increases; on the other hand, it decreases for dust and stiff matter. The rate of change of the mass is given by The rate of change of the RBH mass against x is plotted in Fig. 9. Due to the accretion of dust and stiff matter the mass of the RBH increases for small values of x, and vice versa for phantom fluids. It is also noted that the maximum rate of increase of the RBH mass occurs for ω = 1, followed by ω = 0.5, 0, −2.
The critical values, critical velocities, and speed of sound are obtained for different values of the EoS parameter in Table 3. The critical radius shifts to the right as ω ≥ 0 increases. The speed of sound is negative at x = 3.685523529, and near the BH the speed of sound increases. For this RBH we find that 4r 2 +q 2 tan h 2 q 2 2Mr −1 +6+ Mr tan h q 2 2Mr −1 . (54)
Kehagias-Sfetsos asymptotically flat BH
KS studied the following BH metric: In the framework of Horava theory, where m is the mass and b is a positive constant related to the coupling constant of the theory. The metric asymptotically behaves like the usual Schwarzschild BH [34], with 2bM 2 ≥ 1 [34]. The radial velocity and energy density are given by The radial velocity for different values of ω is shown in Fig. 10. The radial velocity is negative for a phantom-like fluid and positive for quintessence, dust, and stiff matter. The evolution of the energy density of the fluids in the surrounding area of the RBH is plotted in Fig. 11. The energy density for the phantom fluids is negative, while the energy density for the stiff, dust, and quintessence fluids is positive.
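The Kehagias-Sfetsos metric function itself is not written out above; in the form I take to be the standard one (an assumption of this note, with the coupling constant written as b as in the text), it reads

```latex
% Assumed standard Kehagias-Sfetsos lapse function, its Schwarzschild limit,
% and its horizons.
f(r) = 1 + b r^{2} - \sqrt{\,b^{2} r^{4} + 4 b M r\,},
\qquad
f(r) \simeq 1 - \frac{2M}{r} + \frac{2M^{2}}{b r^{4}} + \cdots \quad (r \to \infty),
\qquad
r_{\pm} = M \left( 1 \pm \sqrt{1 - \frac{1}{2 b M^{2}}} \right).
```

This form asymptotically reproduces the Schwarzschild behaviour and yields real horizons only for 2bM 2 ≥ 1, consistent with the condition quoted above.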
For this RBH, the rate of change of the mass becomes Ṁ = 4π A 2 2 A 4 (ω + 1) Figure 12 represents the rate of change of the RBH mass against x. We see that the RBH mass increases for ω = −0.35, 0, 0.5, 1, and decreases for ω = −2.
The critical values, critical velocities, and speeds of sound for different values of ω are presented in Table 4. For quintessence matter, we obtain a very large critical radius. Similarly to the previous cases, we obtain the same critical radius for dust and phantom-like fluids, with the same critical velocities but differing in sign. If we increase the EoS parameter, the critical radius shifts closer to the RBH. It is evident that the critical velocity is negative for a phantom-like fluid and positive for quintessence, dust, and stiff matter. The speed of sound is negative at x = 30267.74 and positive for the remaining critical radii. For this metric, we find that The condition (18) becomes
Concluding remarks
In this work, we have investigated accretion onto various RBHs (an RBH using the Fermi-Dirac distribution, an RBH using the logistic distribution, an RBH from nonlinear electrodynamics, and the Kehagias-Sfetsos asymptotically flat RBH), which asymptotically approach the Schwarzschild and Reissner-Nordstrom BHs (and most of which satisfy the WEC). We have followed the procedure of Bahamonde and Jamil [22] and obtained the critical points, critical velocities, and the behavior of the speed of sound for the chosen RBHs. Moreover, we have analyzed the behavior of the radial velocity, the energy density, and the rate of change of the mass of the RBHs for various EoS parameters. For calculating these quantities, we have assumed a barotropic EoS and found the relationship between the conservation laws and the barotropic EoS. We have found that the radial velocity (u) of the fluid is positive for stiff, dust, and quintessence matter and negative for phantom-like fluids. If the flow is inward then u > 0 is not allowed, and u < 0 is not allowed for outward flow. Also, we have seen that the energy density remains positive for quintessence, dust, and stiff matter, while it becomes negative for a phantom-like fluid near the RBHs. In addition, the rate of change of the mass of the BH is a dynamical quantity, so the analysis of the behavior of its mass in the presence of various dark energy models is very interesting in the present scenario. The sensitivity (increase or decrease) of the BH mass depends upon the nature of the fluid which accretes onto it. Therefore, we have considered various possibilities for the accreting fluid, such as dust, stiff matter, quintessence, and phantom energy. We have found that the rate of change of the mass of all the RBHs increases for dust, stiff matter, and quintessence-like fluids, since these fluids do not have enough repulsive force. However, the mass of all the RBHs decreases in the presence of a phantom-like fluid (and the corresponding energy density and radial velocity become negative), because it has a strong negative pressure. This result is consistent with several works [22,31,[35][36][37][38][39][40][41][42][43][44][45][46][47]. Also, this result favors the scenario in which the universe undergoes a big rip singularity, where all gravitationally bound objects are dispersed due to the phantom dark energy.
Although we have assumed a static fluid, this work may be extended to a non-static fluid without assuming any EoS, from which more interesting results can be obtained. This is left for future considerations.
Epidemiologic, Postmortem Computed Tomography-Morphologic and Biomechanical Analysis of the Effects of Non-Invasive External Pelvic Stabilizers in Genuine Unstable Pelvic Injuries
Unstable pelvic injuries are rare (3–8% of all fractures) but are associated with a mortality of up to 30%. An effective way to treat venous and cancellous sources of bleeding prehospital is to reduce intrapelvic volume with external noninvasive pelvic stabilizers. Scientifically reliable data regarding pelvic volume reduction and applicable pressure are lacking. Epidemiologic data were collected, and multiple post-mortem CT scans and biomechanical measurements were performed on real, unstable pelvic injuries. Unstable pelvic injury was shown to be the leading source of bleeding in only 19%. All external non-invasive pelvic stabilizers achieved intrapelvic volume reduction; the T-POD® succeeded best on average (333 ± 234 cm3), but with higher average peak traction (110 N). The reduction results of the VBM® pneumatic pelvic sling consistently showed significantly better results at a pressure of 200 mmHg than at 100 mmHg at similar peak traction forces. All pelvic stabilizers exhibited the highest peak tensile force shortly after application. Unstable pelvic injuries must be considered as an indicator of serious concomitant injuries. Stabilization should be performed prehospital with specific pelvic stabilizers, such as the T-POD® or the VBM® pneumatic pelvic sling. We recommend adjusting the pressure recommendation of the VBM® pneumatic pelvic sling to 200 mmHg.
Introduction
With a proportion of 2-8%, pelvic fractures represent a rare injury. They occur most frequently in the 2nd and 3rd life decades [1] and are often the result of high-energy trauma, thus appearing in up to 20% of polytrauma patients [1,2]. Complex pelvic ring fractures are associated with a mortality rate of 5-42% [3][4][5]. In many cases, the high energy trauma causes extra and intrapelvic concomitant injuries, which can be life threatening [1,6]. Despite achieving improved survival rates in recent years, the mortality of open pelvic fractures is reported at up to 70% [1,[7][8][9]. The immediate risk to life is linked to the possible occurrence of refractory hemorrhagic shock, with associated major coagulation disorders [3,8,10].
The pelvic ring is anatomically connected to many blood vessels [10]. Three main sources of hemorrhage are described. These include arterial bleeding from the great arterial pelvic vessels, the venous vascular system, and exposed cancellous fracture surfaces of the posterior pelvic ring [8,[11][12][13]. Auto tamponade is unlikely due to torn retroperitoneal structures with potentially massive intrapelvic or retroperitoneal blood loss, which may lead to exsanguination [1,8]. It is important for modern priority-guided trauma management to detect the leading injury and source of immediate life threat ("treat first what kills first"). Therefore, in the first part of this study we analyzed the epidemiology of genuine pelvic injuries referring to the cause of death and primary bleeding source. The application of external non-invasive external pelvic stabilization is recommended in several guidelines and is nearly the only measure to treat unstable pelvic injuries in a prehospital setting [11,14,15].
A theoretical way to minimize bleeding, especially venous and spongy sources, is to reduce the intrapelvic volume with the approximate reduction of the fracture ends and closure of the anterior/posterior pelvic ring using external noninvasive pelvic stabilizers [13,16]. This hypothesis is based on clinical experience and two studies. Tan et al. revealed improved blood pressure in a small case series after the application of external non-invasive pelvic stabilization [17]. Grimm et al. showed increased retroperitoneal pressure in an artificial cadaver pelvic injury model via closed reduction with an external fixation of the pelvic ring and an infusion of the retroperitoneum [18]. Furthermore, several studies reveal the biomechanical impact of external non-invasive pelvic stabilization in artificial models of pelvic injuries [4,[19][20][21]; however, proof of reduced intrapelvic volume in genuine unstable pelvic injuries is still missing.
Therefore, in the second part of this study, we analyzed for the first time the effects of different external noninvasive pelvic stabilizers on intrapelvic volume in real pelvic injuries using a postmortem CT scan and compared their effects on pelvic biomechanics.
Materials and Methods
This study is divided into a retrospective case series to collect epidemiologic data and a prospective intervention study to analyze the effect and biomechanical properties of external noninvasive pelvic stabilizers (Ethics Vote EA1/250/11). Substantial elements of this script are based on the dissertation by Dr. Haussmann [22].
In the retrospective part, all cases of deceased patients with mechanically unstable pelvic injuries in the archives of the Institute of Legal Medicine and Forensic Sciences, Charité-Universitätsmedizin, Berlin, were analyzed (n = 91). The survey period was 3 January 2012 to 30 September 2013. In addition to age and sex, accident mechanism, preclinical measures, and the place of death were investigated. Furthermore, the autopsy protocols were used to ascertain the cause of death, the leading bleeding source, and vascular injury in the abdominal and pelvic regions. Peripelvic bleeding was defined as bleeding sources around the bony pelvis, including vessel branches of the internal/external iliac vessels, muscles, soft tissue, and skin.
The second prospective interventional part of the study with the application of external pelvic stabilization and CT-guided measurements were performed from 3 January 2012 to 30 September 2013 (n = 36).
The inclusion criteria for both subprojects were a legally authorized autopsy, death after traumatic injury, pelvic instability on physical examination, a minimum age of 18 years, preserved integrity of the peripelvic soft tissues, and the absence of osteosynthetic treatment. Exclusion criteria were emergency operations and invasive pelvic stabilization. The external pelvic stabilization devices tested were the VBM® pneumatic pelvic sling, the T-POD®, a cloth sling, and the SAM Sling®. All devices were applied according to the study of Bottlang et al. at the level of the greater trochanters [4].
The breaking of rigor mortis was followed by the placement of the three devices for provisional external pelvic stabilization for the respective computed tomographic documentation of the compression effect of the fractured pelvis. The cranial limit of the scan area was chosen for these scans at the level of the third lumbar vertebrae and caudally at the level of the middle of the femur. A native image of the selected scan area was taken immediately prior to application of the corresponding device in each case to ensure a direct before and after comparison. All CT scans of the pelvic region were taken with a slice thickness of 0.5 mm (Activion 16; Toshiba).
To measure the traction forces, the different devices were prepared and a tension spring (Kraftaufnehmer OCDZ 0-3000N; Wazau Mess-und Prüfsysteme GmbH) was integrated ( Figure 1).
The calibration of the equipment was performed by a biomechanist of the Julius-Wolff-Institute of the Charité-Universitätsmedizin.
The traction forces acting with the pelvic sling in place were documented during the CT scans at defined time points: 45 s (t1), 80 s (t2), and 120 s (t3) after application. The maximum achieved traction force (Fmax), independent of the time point, was also recorded.
Before and after application of the respective pelvic stabilization device, the volumetry of the pelvic ring, area of the pelvic entrance plane, distances between the centers of both femoral heads, the Köhler's tear figures, the sacroiliac joints ventrally and dorsally, and the distance/width of the symphysis were measured. The program OsiriX ® (vers. 4.1, Pixmeo, Bernex, Switzerland) was used for image analysis.
For the volumetrics, the pelvis was first standardized in all three planes, and the distances were defined as follows (Figure 2):
− cranial to caudal: from the junction between lumbar vertebrae 4 and 5 to the caudal end of the ischial tuberosity
− the area of the pelvic entrance plane: the transverse plane between the lower edge of sacral vertebra 1 and the upper edge of the symphysis
− the symphysis width: the point of greatest distance between the pubic bones
− the femoral head distance: the distance between the centers of the femoral heads (in the frontal plane)
− the distance between the Köhler's tear figures: the shortest distance between the most caudal poles (in the frontal plane)
− the distance between the sacroiliac joints (SIJ): the ventral portions of the SIJ space, and, for the dorsal distance, the most dorsal bony border of the os ilium.
Data processing was performed using IBM SPSS Statistics 22 ® (IBM Corporation, Armonk, NY, USA) and Microsoft Office Excel 2007 ® (Microsoft Corporation, Redmond, WA, USA). The Wilcoxon and Kolmogorov tests were applied for non-normally distributed variables and a paired t-test for normally distributed variables. Significance was assumed at p < 0.05.
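As an illustration of how such a paired before/after comparison could be carried out (a sketch, not the authors' analysis script; the data below are synthetic placeholders, and the Shapiro-Wilk test stands in for the normality check, whereas the paper names Kolmogorov-type tests):

```python
# Sketch of a paired before/after volume comparison for one pelvic stabilizer.
# Synthetic placeholder values; not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
volume_native = rng.normal(1500.0, 150.0, size=6)                     # cm^3, native scan
volume_stabilized = volume_native - rng.normal(300.0, 80.0, size=6)   # cm^3, with device

diff = volume_native - volume_stabilized

# Normality check on the paired differences (Shapiro-Wilk used here as a stand-in).
_, p_norm = stats.shapiro(diff)

if p_norm >= 0.05:
    stat, p_val = stats.ttest_rel(volume_native, volume_stabilized)   # paired t-test
    test = "paired t-test"
else:
    stat, p_val = stats.wilcoxon(volume_native, volume_stabilized)    # Wilcoxon signed-rank
    test = "Wilcoxon signed-rank test"

print(f"{test}: statistic = {stat:.3f}, p = {p_val:.4f} (significant if p < 0.05)")
```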
CT scans were again taken in the identical, standardized sequence on a total of six patients with mechanically unstable pelvic findings in the external cadaveric examination. In addition, the measurements were now supplemented by the application of the SAM Sling ® .
All four pelvic slings were prepared with tension springs for the CT scans to document the acting tension forces after proper application.
Epidemiological Data
Epidemiologic data was analyzed for 91 casualties, 36 from the prospective interventional study collective, and 55 patients after file review in the archives of the Institute of Legal and Forensic Medicine, Charité-Universitätsmedizin, Berlin.
The mean age of the 91 patients was 49 ± 19 years (range 18 to 92). 67 of the patients were male. 13% (n = 12) of deaths were caused by a traffic accident, in 23% (n = 21) by train rollover trauma, and in 64% (n = 58) by a fall from a substantial height. 79 patients (87%) died prehospital, and 12 patients (13%) died in the hospital ( Figure 3). The leading sources of bleeding were thoracic, followed by peripelvic bleeding, hemorrhage of the liver, aortic rupture, and destruction of the heart. External sources of bleeding-due to transfemoral amputation-occurred in only one case. In 34% of the patients, a clear assignment of the main source of bleeding was not possible during autopsy due to multiple injuries ( Figure 4).
As shown in Figure 4, thoracic concomitant injuries led the way, followed by peripelvic injuries and traumatic brain injury ( Figure 5).
The cause of death was mainly multiple trauma (93%, n = 85). In 6% of the cases (n = 5), death occurred by exsanguination. In one patient (1%), traumatic brain injury was the cause of death.
Based on our postmortem CT scans, pelvic ring injuries were classified according to AO. Type C pelvic injuries were the most common in the studied population, and their distribution is shown in Figure 6.
Type B pelvic injuries were not detected in the studied collective. One patient with a type A pelvic injury was included in the study because of the clinical impression of the unstable pelvis during clinical stability testing for Ala fracture. A postmortem CT analysis revealed that it was a type A injury.
Figure 4. Overview of the main sources of hemorrhage in the examined collective: thoracic (25%, n = 23), followed by peripelvic (19%, n = 17) bleeding, hemorrhage of the liver (12%, n = 11), aortic rupture (7%, n = 6), and destruction of the heart (2%, n = 2). External sources of bleeding-due to transfemoral amputation-occurred in only one case (1%, n = 1). In 34% (n = 31) of the patients, a clear assignment of the main source of bleeding was not possible during autopsy due to multiple injuries. (Source: Haussmann [22], modified).
Figure 6. Distribution of type C pelvic injuries of the studied collective. Leading were type-C1 injuries, followed by type-C3 pelvic injuries (source: Haussmann [22], modified).
Effects on Pelvic Bioarchitecture of External, Non-Invasive Pelvic Stabilizers
The following results refer to the CT scans of the study population. Table 1 summarizes the average intrapelvic volume reduction achieved by the external noninvasive pelvic stabilizers tested.
Table 1. Overview of the reduction results based on the mean values and standard deviations in direct comparison of the tested pelvic stabilizers to the native scans (PS1 = pneumatic pelvic sling VBM ® with applied pressure of 100 mmHg; PS2 = pneumatic pelvic sling VBM ® with applied pressure of 200 mmHg; TP = T-POD ® ; TS = cloth sling; SAM = SAM Sling ® ; SD = standard deviation; * p < 0.001; source: Haussmann [22], modified).
All applied external non-invasive pelvic stabilizers achieved a reduction of the area of the pelvic entrance plane, the symphysis width, the femoral head distance, the distance between the Köhler's tear figures, and the ventral and dorsal distance of the ileosacral joint.
With regard to the reduction of intrapelvic volume (PV), the pneumatic pelvic sling VBM ® with an applied pressure of 200 mmHg showed the best results (for detailed data please see Tables 1 and 2). This was followed, in descending order, by the VBM ® at 100 mmHg, the T-POD ® , and the cloth sling.
The results of the area reduction of the pelvic entrance level (PA) are, in descending order: pneumatic VBM ® pelvic sling with 200 mmHg pressure, followed by the same with 100 mmHg, T-POD ® and the cloth sling.
The comparison of the cloth sling with the pneumatic VBM ® pelvic sling at an applied pressure of 100 mmHg showed significantly lower reduction results, regarding the reduction of symphysis width (SW).
Regarding femoral head distance (FH), the VBM ® pneumatic pelvic sling at 200 mmHg also achieved the best reduction results. This was followed by VBM ® at 100 mmHg, T-POD ® (p = 0.001), and the cloth sling. The reduction of the distance between Köhler's tear figures (KT) was again significantly better achieved by the pneumatic pelvic sling VBM ® with 200 mmHg than with an applied pressure of 100 mmHg. The T-POD ® followed. The cloth sling achieved significantly lower results.
Reduction of ventral ileosacral joint distance (SIJv) with the VBM ® pneumatic pelvic sling at 200 mmHg also produced the best results, followed by 100 mmHg, T-POD ® , and the cloth sling.
A comparison of the reduction in dorsal ileosacral joint (SIJd) distance among the pelvic stabilizers showed no significant differences.
Biomechanical Force Measurement of the Acting Tensile Forces after Pelvic Stabilizer Device
The average peak tensile force achieved by the VBM ® pneumatic pelvic sling at a pressure of 100 mmHg was 73.4 ± 33.1 N. However, already 45 s after application the acting tensile force was reduced to 54.9 ± 28.6 N, after 80 s to 49.4 ± 27.6 N, and after 120 s to 46.4 ± 33.1 N. The average peak tensile force achieved at 200 mmHg was 81.5 ± 37.7 N. Here, too, the tensile forces decreased rapidly, as shown in Table 3.
Table 3. Mean values of the acting tensile forces of the five measurements with each of the four pelvic slings (PS1 = pneumatic sling VBM ® with pressure of 100 mmHg; PS2 = pneumatic sling VBM ® with pressure of 200 mmHg; TP = T-POD ® ; CS = cloth sling; SAM = SAM Sling ® ) at the defined time points (Fmax = maximum force; t1 = 45 s after application; t2 = 80 s after application; t3 = 120 s after application; source: Haussmann [22], modified).
The peak tensile force of 109.9 ± 40.5 N achieved by the T-POD ® was also reached immediately after application. Here, too, there was a decrease in the acting forces over the defined measurement times, which was more pronounced than with the VBM ® pneumatic pelvic sling (Table 3).
The peak tensile force of the cloth sling was on average 105.2 ± 72.6 N. It was noticeable that, although there was a rapid reduction in the tensile force after the cloth sling was applied, the other values measured over time remained relatively stable. The SAM Sling ® was able to achieve high peak tensile forces, which, analogous to the other pelvic stabilizers tested, became apparent shortly after application. The subsequent decrease in the acting tensile forces was disproportionately strong: after only 45 s, an average of 30.6 ± 25.5 N was recorded; after 80 s, the average was 29 ± 25.2 N, and after 120 s, 27.9 ± 24.9 N.
The peak force was reached immediately after application for all pelvic stabilizers. The measured tensile forces decreased rapidly over time for all external, non-invasive pelvic stabilizers. This was most evident with the cloth sling and the SAM Sling ® . The VBM ® pneumatic pelvic sling was able to demonstrate the lowest loss of traction over time at an applied pressure of 200 mmHg.
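A small worked example quantifies this loss of traction, using only the mean values reported above for the VBM ® pneumatic pelvic sling at 100 mmHg (a simple illustration, not an additional analysis):

```python
# Relative traction loss over time, computed from the reported mean forces
# for the VBM pneumatic sling at 100 mmHg (Fmax = 73.4 N; 54.9/49.4/46.4 N).
f_max = 73.4
forces = {"45 s": 54.9, "80 s": 49.4, "120 s": 46.4}

for t, f in forces.items():
    loss = 100.0 * (1.0 - f / f_max)
    print(f"t = {t}: {f:.1f} N retained, about {loss:.0f}% of the peak force lost")
```

Already 45 s after application roughly a quarter of the peak traction is lost, and by 120 s more than a third, which underlines why the peak values recorded immediately after application are not maintained.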
Discussion
To our knowledge, this work is the first to describe the biomechanical effects of noninvasive external pelvic stabilizers on the bony structures and intrapelvic volume of pelvic ring injuries. In addition, an epidemiologic analysis of concomitant injuries and autopsy results is performed.
Epidemiology of Pelvic Trauma
The epidemiologic results of this study are consistent with those reported by other authors. The majority of our collective was male (67%). In international study collectives, the polytrauma patient is male in approximately 70% of cases and has an average age of 38 to 47 years [23][24][25]. The determined mean age of 49 years may be due to demographic trends.
In contrast, data from recent years show that falls from great heights account for an average of only 20-25% of trauma patient deaths [26,27]. Traffic accidents, to which train rollover traumas are often added, are reported to cause death more frequently, in up to 72% of cases, especially in less densely populated areas [27,28].
Parreira et al. illustrated that, for a study population of 103 patients with unstable pelvic injuries, traffic accidents lead with 79% and falls from greater heights with only 17% [29]. These results were confirmed in further studies of patients with unstable pelvic fractures for both the Asian and Australian regions [30,31]. In a previous work, we were able to show already, in a 2010 study collective of trauma-related deaths that falls from heights appear as a frequent death-causing mechanism typical for Berlin [32].
However, our collective included only patients who died in the trauma setting; accordingly, a higher overall injury severity can be assumed. In a large proportion of our patient population (60%, n = 55), death occurred immediately as a result of the accident or shortly thereafter (before the arrival of the emergency medical services). Consequently, it can be assumed that the trauma mechanisms identified account for the higher overall injury severity in the collective that was evident at autopsy.
The majority of patients in the studied collective (87%) died in the prehospital period, which can also be explained by the trauma severity and the high proportion of patients already showing certain signs of death when found. In an international comparison, the data on prehospital mortality of trauma patients show a wide range of 41-85% [27,[32][33][34][35], which also seems to depend on the localization of the accident. While 72% of patients in rural areas died at the scene of the accident, only 41% did so in urban settings [35].
"Treat First What Kills First" in Pelvic Trauma
The patient collective with unstable pelvic injuries additionally showed very high incidences of thoracic injuries (96%), peripelvic soft tissue injuries (86%), and traumatic brain injury (84%). However, the unstable pelvic injury was the leading source of bleeding in only 19% of cases; thoracic injuries were the main source of bleeding in our collective (25%).
This represents essential information with enormous influence on the prioritization of emergency and surgical management in patients with unstable pelvic injury. Several retrospective studies are congruent with our findings: both Parreira et al. and Poole et al. postulated that although unstable pelvic injury carries a tremendous risk for the development of hemorrhagic shock, patient outcome depends essentially on the concomitant injuries [29,36]. Poole et al. showed that of the 236 patients studied, 18 died, seven of them from hemorrhagic shock [36]. However, only one of these patients had a major pelvic source of bleeding, whereas the remaining six died from extrapelvic major sources of bleeding [36].
For example, in our collective, 12% of patients showed injury to the liver alone as the leading major source of bleeding. Therefore, unstable pelvic injury should always be considered as an indicator of severe internal injury and bleeding until proven otherwise. Consequently, in the case of abdominal major sources of bleeding, for example, due to severe liver injury, clear preference should be given to laparotomy. These findings should be considered in ATLS/ETC concepts and applied to trauma management.
However, it should be kept in mind that if packing of the abdominal cavity becomes necessary because of surgically uncontrollable bleeding, e.g., from the liver, a surgically stabilized pelvis provides a better abutment.
Reproducibility of Genuine Pelvic Trauma with Artificial Pelvic Trauma Models
The case in which a type A fracture of the ala ossis ilii was misinterpreted as an unstable pelvic injury shows some limitations of the physical examination in determining pelvic ring instability. The definitive diagnosis of a type B or type C pelvic ring injury is only possible by radiological imaging [37]. With a sensitivity of up to 93% [38,39], the result of the physical examination should nevertheless be relied upon and, at the slightest suspicion of an unstable pelvic injury, stabilization of the pelvis with an external, non-invasive pelvic stabilizer should already be performed preclinically. The use of a pelvic stabilizer is indicated in cases of mechanically unstable pelvic ring fractures with simultaneous hemodynamic instability. If the pelvis is mechanically stable during the manual examination, pelvic instability is unlikely. If hemodynamics do not stabilize after application, arterial intrapelvic and extrapelvic sources of bleeding must be sought [40].
It is interesting to note that all other patients in our study collective had unstable pelvic fractures of Pennal and Tile type C; type B fractures were not observed in our collective. The high forces applied during the trauma are likely causative for this. This fact should be further investigated and, if necessary, lead to a reevaluation of the artificial models, which mostly use artificial type B injuries, and of their applicability.
Effect of External Pelvic Stabilization in Real Pelvic Trauma
The results obtained with this study represent the first quantitative data on the effectiveness of external pelvic stabilizers in reducing various parameters of the pelvic area in non-artificial unstable pelvic injuries.
Most of the published data come from studies of cadavers with artificially induced pelvic fractures and from single case reports [19,20,41-47]. For the first time, we can present data on quantitative changes in intrapelvic volume, pelvic entrance area, and acting traction forces after the application of external non-invasive pelvic stabilizers to unstable pelvic injuries in real injured patients, using computed tomographic imaging and biomechanical measurements.
All external non-invasive pelvic stabilizers achieved a reduction of intrapelvic volume. Therefore, if an unstable pelvic injury is suspected, an external, non-invasive pelvic stabilizer should be applied already in the prehospital setting. The T-POD® achieved the largest average volume reduction (333 ± 234 cm³), but at a higher average peak traction force (110 N) than the other devices tested in this study. The VBM® pneumatic pelvic sling consistently showed significantly better reduction results at an applied pressure of 200 mmHg than at 100 mmHg, with negligible differences in traction force (peak traction force 82 vs. 73 N). In terms of reduction of the area in the pelvic entrance plane, the VBM® pneumatic pelvic sling demonstrated the greatest reduction effect in our study at 200 mmHg.
Since the peak traction forces differed only minimally between the two pressures, the manufacturer, as a result of this study, raised the recommended pressure level on the manometer of the VBM® pneumatic pelvic sling to 200 mmHg.
The distances between the femoral heads and between Köhler's teardrop figures showed excellent reduction results. In particular, the VBM® pneumatic pelvic sling was able to reduce these parameters significantly.
All pelvic stabilizers succeeded in reducing the symphysis width.
The results for the femoral head distance consistently confirmed the results of the area reduction in the pelvic entrance plane. Thus, in clinical practice, the femoral head distance can be used as a simple surrogate parameter to verify sufficient reduction of the pelvis on pelvic overview imaging in pelvic ring fractures.
The comparison of the reduction of the pelvic inlet area with the distances between the femoral heads and between Köhler's teardrop figures showed almost congruent results, with mostly significantly better results for the VBM® pneumatic pelvic sling. These parameters are thus suitable as measures for assessing the reduction of the anterior pelvic ring.
The relatively small reduction of the dorsal SIJ distance, in combination with the predominantly good and congruent reduction results for the parameters of the ventral pelvic ring (femoral head distance, distance between Köhler's teardrop figures, and symphysis width), suggests that external, non-invasive pelvic stabilizers primarily influence the ventral pelvic ring.
The reduction of dorsal SIJ distances is therefore of only limited value for assessing the effectiveness of external non-invasive pelvic stabilizers. It can be speculated that instabilities in the area of the ventral pelvis in particular can be sufficiently reduced by the application of an external, non-invasive pelvic stabilizer.
Stabilization by means of a conventional cloth sling already has a reducing effect. The absence of a VBM® pneumatic sling, T-POD®, SAM Sling®, or other purpose-built device on emergency vehicles or rescue helicopters should not and must not be taken as an argument against prehospital pelvic stabilization.
Knops et al. demonstrated in their study that the T-POD® required the lowest traction force, an average of 43 N, for sufficient reduction of the symphysis width compared to the SAM Sling® and the Pelvic Binder [20]. In our study, the T-POD® (average peak traction force 110 N) also required significantly lower peak traction forces than the SAM Sling® (130 N). However, these were still well above the maximum forces determined for the VBM® pneumatic pelvic sling (73 and 82 N at 100 and 200 mmHg, respectively).
The characteristic of reaching the highest peak force immediately after application was shared by all four tested external non-invasive pelvic stabilizers. However, our study revealed that the measured tensile forces then decreased rapidly during the time course.
This effect was most evident with the cloth sling and the SAM Sling®. The SAM Sling®, for example, showed a reduction in tensile force of almost 80% after just two minutes. The main limitations of the cloth sling are the knotting over the ventral pelvis and the nature of the material, with a consequent loss of tension. Its results (high maximum force, rapid loss of force) are well explained by its narrow design: when applied correctly, the cloth sling is only a few centimeters wide in the anterior region because of the ventral knotting technique. Interestingly, the high maximum tensile force measured did not translate into better reduction of intrapelvic volume or of the pelvic entrance area. Our study showed that the reduction results of the cloth sling were significantly worse than those of the specific pelvic stabilizers. DeAngelis et al. demonstrated a significantly better reduction in symphysis width with the T-POD® compared to the cloth sling [19]. Our results correspond to the previously published international literature: Knops et al. showed in their cadaver study that the SAM Sling® required significantly higher traction forces than the T-POD® and the Pelvic Binder for sufficient fracture reduction in type B as well as type C pelvic injuries (average 112 N vs. 43 N and 60 N, respectively); the T-POD® achieved this reduction at about one third of the traction force [20].
Nevertheless, pelvic stabilization using a cloth sling should not be omitted, as it has proven beneficial effects on fracture reduction and presumably also on patient hemodynamics. If it is possible to use a specific device designed for pelvic stabilization, such as a pneumatic pelvic sling or the T-POD ® , this should be preferred in any case.
All pelvic stabilizers exhibited their highest peak tensile force shortly after application, with a rapid decrease in tensile force over time (within two minutes). This was most evident with the SAM Sling® and the cloth sling.
Causes of pressure loss are likely to be pressure distribution in soft tissue, redistribution of interstitial and lymphatic fluid and blood into venous capacity vessels, and reduction of the pelvic injury.
As can be seen from the data in Table 3 and Figure 7, the forces generated by the T-POD® and the SAM Sling® were only initially the greatest and then quickly decreased. The pneumatic sling VBM®, on the other hand, was able to maintain the initially generated forces over a longer period of time. This could be an explanation for the better radiological reduction results.
Figure 7. Kinetics of the pelvic stabilizers. Maximum and mean forces achieved at the predefined time points after application of the respective pelvic sling: the SAM Sling® achieved the highest peak tensile forces, but these fell rapidly and disproportionately. The most constant tensile force of all pelvic slings at the time points examined was demonstrated by the VBM® pneumatic pelvic sling at an applied pressure of 200 mmHg (source: Haussmann [22], modified).
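The decay visible in Figure 7 can be made concrete with a short calculation. A minimal Python sketch, assuming that the SAM Sling® peak of 130 N and the mean values of 30.6 N, 29 N, and 27.9 N reported at 45 s, 80 s, and 120 s belong to the same measurement series:

```python
# Illustrative only: relative tensile-force loss of the SAM Sling over time.
# Assumes the peak of 130 N and the means reported at 45 s, 80 s and 120 s
# belong to the same measurement series (values taken from the text above).
peak_force_n = 130.0                             # average peak tensile force (N)
mean_force_n = {45: 30.6, 80: 29.0, 120: 27.9}   # mean tensile force (N) at time (s)

for t, f in mean_force_n.items():
    loss_pct = (peak_force_n - f) / peak_force_n * 100
    print(f"t = {t:3d} s: {f:5.1f} N retained, {loss_pct:4.1f} % of the peak force lost")
# After 120 s roughly 79 % of the peak force is lost ("almost 80 % after two minutes").
```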
In terms of intrapelvic volume reduction, the VBM® pneumatic pelvic sling achieved a volume reduction similar to that of the T-POD® and predominantly achieved the best results in the comparison of all pelvic stabilizers, despite an initially lower force application. Its advantage could lie in the arrangement of the pneumatic pads, which, when correctly positioned, lie dorsolaterally on each side of the pelvis. This compresses both sides of the pelvis, which could lead to more effective reduction than a purely circumferential application of force.
The T-POD®, on the other hand, achieved comparable reduction results but required a significantly higher maximum force to do so. Over time, however, the reduction in intrapelvic volume is maintained even under lower forces. This could be explained by the vector of the force, but also by the magnitude of the initially applied force, which produces the initial reduction; the lower forces acting over time may be sufficient to maintain it.
Critically, the exact time points of application of the external pelvic stabilizers after fracture or until the CT scan were not part of the data collection. Also, the experimental design could influence the results in that the VBM ® pneumatic pelvic sling was applied prior to the T-POD ® . It remains unclear to what extent some residual retention is maintained on the cadavers due, for example, to rigor mortis.
No additional bony fractures were observed after application of the various devices, even in cases of possibly osteoporotic bone structure.
For clinical handling, a readjustment of the pelvic stabilizers may therefore be necessary. The pressure manometer of the VBM ® pneumatic pelvic sling has the advantage that the user can measure the pressure and easily readjust.
Conclusions
Unstable pelvic injuries must be seen predominantly as an indicator of serious, especially thoracic and abdominal, concomitant injuries. In only a fifth of the analyzed cases, the pelvic injury was the leading bleeding source. This has a direct impact on clinical management and prioritization of emergency surgery within multiple trauma management.
Stabilization of unstable pelvic injuries should be performed as soon as possible using specific pelvic stabilizers, such as a T-POD ® or pneumatic pelvic sling VBM ® . A cloth sling should be used only in the absence of specific external pelvic stabilizers. To achieve optimal reduction results using a pneumatic pelvic sling VBM ® , we advocate an adjustment of the recommended pressure application to 200 mmHg. The extent to which regular readjustment of the pelvic stabilizers is required should be further investigated in future studies.
The specific external, non-invasive pelvic stabilizers pneumatic pelvic sling VBM® and T-POD® could, for the most part, show significantly better reduction results compared to the conventional cloth sling. Therefore, the provision of these specific pelvic stabilizers should be demanded. The DIN standard for the equipment of rescue devices should be adapted.
Chemical identification of serine 181 at the ATP-binding site of myosin as a residue esterified selectively by the fluorescent reagent 9-anthroylnitrile.
The esterification reagent 9-anthroylnitrile (ANN) reacts with a serine residue in the NH2-terminal 23-kDa peptide segment of myosin subfragment-1 heavy chain to yield a fluorescent S1 derivative labeled by the anthroyl group (Hiratsuka, T. (1989) J. Biol. Chem. 264, 18188-18194). The labeling was highly selective and accelerated by nucleotides. In the present study, to determine the exact location of the labeled serine residue, the labeled 23-kDa peptide fragment was isolated. The subsequent extensive proteolytic digestion of the peptide fragment yielded two labeled peptides, a pentapeptide and its precursor nonapeptide. Amino acid sequence and composition analyses of both labeled peptides revealed that the anthroyl group is attached to Ser-181 involved in the phosphate binding loop for ATP (Smith, C. A., and Rayment, I. (1996) Biochemistry 35, 5404-5417). We concluded that ANN can esterify Ser-181 selectively out of over 40 serine residues in the subfragment 1 heavy chain. Thus ANN is proved to be a valuable fluorescent tool to identify peptides containing the phosphate binding loop of S1 and to detect the conformational changes around this loop.
Myosin subfragment-1 (S1) is the globular head region of the myosin molecule that contains the sites for ATPase activity and actin binding (1,2). During ATP hydrolysis in the actomyosin complex, transient conformational changes in the ATPase site take place and are thought to be transmitted to the remote actin-binding site, thereby controlling the interaction of S1 with actin (3). Conversely, the binding of actin to S1 is thought to affect the conformation of the ATPase site, resulting in the acceleration of product release. Thus, elucidation of the mechanism of the energy transduction process in muscle contraction requires detailed knowledge of the conformational changes of S1 that are involved in such communication between the ATPase- and actin-binding sites.
One approach for monitoring ligand-induced conformational changes around a specific residue in a protein is to attach a sensitive fluorescent probe covalently to that residue. Because of its inherently high sensitivity, such a method is a powerful technique for obtaining information about conformational changes in proteins. However, tyrosine, threonine, and serine residues are difficult to label with a fluorescent reagent because the chemical reactivity of their hydroxyl groups in aqueous solution is low. This is also the case for S1. Although a series of photoactive analogs of ATP can modify Ser-181, Ser-243, and Ser-324 of the S1 heavy chain (4,5) (residue numbers in chicken skeletal S1, Ref. 6), no fluorescent reagent had been available for labeling serine, threonine, or tyrosine residues in S1.
ANN appeared as a unique example of a serine-directed fluorescent reagent (7). ANN can esterify a serine residue of S1 located within the NH2-terminal 23-kDa peptide segment of the heavy chain (residues 2-204). Characteristically, the labeling is highly selective and is accelerated by nucleotides. The extrinsic fluorescence from the AN group attached to S1 is sensitive to the binding of nucleotides and to ATP hydrolysis (8,9). Thus, the AN group is useful not only as a fluorescent tag for the 23-kDa segment of the S1 heavy chain (10, 11) but also as a fluorescent conformation probe for the S1 ATPase (8,9). However, the chemical determination of the exact location of the labeled serine residue had not yet been done.
In the present study, we isolated the pentapeptide and its precursor nonapeptide containing the AN group-labeled Ser-181, which is part of the phosphate binding loop at the ATPase site of S1 (residues 179-186 of the heavy chain) (1,2,6). The results suggest that Ser-181 in S1 exhibits unusual reactivity, similar to that of the reactive serine at the active site of serine peptidases (12). Labeling with ANN promises to be a valuable method to identify peptides containing the phosphate binding loop of S1 and to detect ligand-induced conformational changes around this loop.
Protein Preparation-Rabbit skeletal myosin and chymotryptic S1 were prepared as described previously (8). AN-S1, S1 labeled with ANN in the presence of ATP, was prepared as described previously (7). The labeled S1 contained 0.9 mol of AN group/mol of S1.
The lyophilized AN-S1 (160 mg) was dissolved in 26 ml of buffer A and dialyzed against the same buffer at 4°C overnight. The AN-S1 was digested with trypsin (1.6 mg) at 25°C for 1 h. The reaction was terminated by the addition of soybean trypsin inhibitor (2.4 mg). Dithiothreitol, Gdn-HCl, and EDTA were added to 2 mM, 6 M, and 2 mM, respectively, and the reaction mixture was incubated for 30 min at 25°C. Then 3 volumes of cold ethanol were added to remove the COOH-terminal 20-kDa fragment and most of the 50-kDa fragment of S1 heavy chain and light chains. After the sample was left for 9 h at −30°C, the resulting precipitate containing the 23-kDa fragment and a small amount of the 50-kDa fragment was collected by centrifugation, dissolved in 5 ml of buffer B, and dialyzed against the same buffer overnight. The dialysate was then applied to an Ultrogel AcA44 gel filtration column (2.2 × 84 cm) pre-equilibrated with buffer B. Elution with buffer B was carried out at a flow rate of 17 ml/h, and fractions of 3.4 ml were collected. Fractions containing the fluorescent 23-kDa tryptic fragment detected by SDS-PAGE were pooled and dialyzed against buffer C. The dialysate was then passed through a Dowex 1×2 column pre-equilibrated with buffer C. Fractions showing fluorescence were collected and pooled, followed by dialysis against water for 2 days. The dialysate was lyophilized and subjected to isolation of the labeled peptide by HPLC.
The abbreviations used are: S1, myosin subfragment-1; ANN, 9-anthroylnitrile; AN, 9-anthroyl; SerNAc, N-acetyl-DL-serine; LEP, lysylendopeptidase; CT, α-chymotrypsin; V8, Staphylococcus aureus V8 protease; Vi, orthovanadate; Gdn, guanidine; HPLC, high performance liquid chromatography; PTH, phenylthiohydantoin.
Isolation of Labeled Peptides-The fluorescent 23-kDa fragment (4.5 mg/ml) was first digested by LEP (0.18 mg/ml) in 7.4 M urea, 2 mM dithiothreitol, and 20 mM imidazole, pH 7.0. Digestion was performed at 25°C for 24 h and terminated by the addition of trifluoroacetic acid to pH 2. After passage through a membrane filter (Millipore Ultrafree-MC, 0.45 µm), the digest was subjected to reverse-phase HPLC on a TSK-gel (ODS-80TS) packed column (4.6 × 250 mm) connected to a TOSOH SC-8020 HPLC system. Peptides were eluted at a flow rate of 1 ml/min with a linear gradient of 0-60% acetonitrile containing 0.1% trifluoroacetic acid at 25°C. The elution was monitored at 225 nm (for peptides) and 360 nm (for the labeled AN group). The labeled peptide was eluted from the column as a single peak. Aliquots of the peptide (48 µM with respect to the AN group) were next digested at 25°C with CT (2.3 µM) in 20 mM Tris-HCl, pH 8.1, for 23 h. After another addition of 2.3 µM CT, digestion was further continued for 25 h, followed by the addition of 0.5 M ammonium acetate (pH 4.0) and V8. Final concentrations of the peptides, the ammonium acetate buffer, and V8 were 36 µM, 0.1 M, and 1.5 µM, respectively. Digestion with V8 was performed at 25°C for 24 h and terminated by the addition of trifluoroacetic acid. After passage through the membrane filter, the digest was applied to HPLC as described above. The labeled peptides were eluted from the column as two peaks. Each peptide fraction was subjected to spectral measurements and amino acid sequence and composition analyses.
Amino Acid Sequence and Composition Analyses-The amino acid sequence of a peptide was determined from the amino terminus using a pulse-liquid protein sequencer, Procise 492 (Applied Biosystems). The amino acid composition of a peptide was determined using an AccQ-Tag TM amino acid analysis system (Waters) after vapor phase acid hydrolysis with 7 M HCl, 10% trifluoroacetic acid, 0.1% phenol at 160°C for 30 min.
Spectral Measurements-Absorption spectra were measured at room temperature with a Hitachi U-3210 spectrophotometer. Amounts of the AN group labeled to S1 and peptides were estimated using an absorption coefficient of 8.4 × 10³ M⁻¹·cm⁻¹ at 361 nm (7). Fluorescence emission spectra (uncorrected) were recorded at 25°C in a JASCO FP-770 spectrofluorometer as described previously (8).
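The labeling stoichiometries quoted in this paper (for example, 0.9 mol of AN group/mol of S1) follow from a standard Beer-Lambert calculation with this coefficient. A minimal Python sketch; the absorbance and protein concentration used below are hypothetical illustrative values, not measurements from the study:

```python
# Beer-Lambert estimate of AN-group labeling stoichiometry.
# epsilon is the coefficient given in the text; the absorbance and protein
# concentration below are hypothetical example values, not data from the study.
epsilon_361 = 8.4e3        # M^-1 cm^-1, AN-group absorption coefficient at 361 nm
path_length_cm = 1.0       # standard 1-cm cuvette (assumed)

a_361 = 0.151              # example absorbance at 361 nm (hypothetical)
protein_conc_m = 2.0e-5    # example S1/peptide concentration in mol/l (hypothetical)

an_conc_m = a_361 / (epsilon_361 * path_length_cm)   # c = A / (epsilon * l)
ratio = an_conc_m / protein_conc_m                    # mol AN per mol protein
print(f"[AN] = {an_conc_m:.2e} M, labeling ratio = {ratio:.2f} mol AN / mol protein")
```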
RESULTS
It has been well established that ANN reacts with S1, selectively labeling a serine residue in the NH 2 -terminal 23-kDa peptide segment of the heavy chain, to yield fluorescent AN-S1 (7). In the present work, to identify an exact labeled residue in the 23-kDa segment, the labeled 23-kDa peptide fragment and its proteolytic products, labeled oligopeptides, were isolated. AN-S1 was digested with trypsin, fractionated with ethanol in the presence of 6 M Gdn-HCl, and applied to Ultrogel AcA44 gel filtration in the presence of SDS (Fig. 1). The fluorescent 23-kDa fragment detected by SDS-PAGE was obtained in the second peak (indicated by a solid bar). After removal of SDS, the labeled 23-kDa fragment was subjected to further digestion as described below.
The 23-kDa tryptic fragment of rabbit skeletal S1 contains 12 serine residues (6). Thus the fluorescent 23-kDa tryptic fragment was first digested with LEP, and the digest was subjected to reverse-phase HPLC. When the AN group attached to the peptides was monitored by the absorption at 360 nm, the AN group-labeled peptide denoted by L eluted as a single peak at 49 min ( Fig. 2A). This labeled peptide was isolated and further digested with CT and subsequently with V8. The digest was subjected to HPLC to yield two labeled peptides denoted by the letters V and C (Fig. 2B). HPLC inspections of aliquots of the reaction mixture revealed that peptide C, which was produced by CT digestion of peptide L, was converted to peptide V upon digestion by V8 (not shown).
Peptides V and C were characterized by sequencing (Fig. 3). For both peptides, lysine was identified at the last cycle and no further PTH-amino acids were found. Peptide V was a pentapeptide with the sequence SGAGK, identical to residues 181-185 in the chicken skeletal S1 heavy chain (6) (Fig. 3A). Peptide C was a nonapeptide with the sequence ITGESGAGK, identical to residues 177-185 and a precursor of peptide V (Fig. 3B). These results were consistent with the observation that peptide V was produced from peptide C by V8 digestion (Fig. 2B). However, it should be noted that the yields (6-10%) for PTH-Ser-181 of both peptides were significantly lower than that (31%) for PTH-Glu-180 (peptide C) and those (26-31%) for PTH-Gly-182 (both peptides). These results are reminiscent of previous reports that an abnormally low yield of the PTH derivative for a serine residue can be considered an indication of a chemically modified serine residue in a peptide (13-15). Using procedures and instruments similar to those in the present analysis, 5-10-fold lower yields of PTH derivatives have been reported for esterified serine residues but not for nonesterified ones (13-15). Thus, the sequence data for peptides V and C suggest that Ser-181 has been esterified by ANN.
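The reasoning above is simple arithmetic: the recovery of PTH-Ser-181 is several-fold lower than that of the neighboring residues, which is the accepted signature of a chemically modified serine. A short sketch of that comparison in Python, taking the midpoints of the yield ranges quoted above for simplicity:

```python
# Fold-reduction of the PTH-Ser-181 yield relative to neighboring residues,
# using the midpoints of the yield ranges reported in the text.
yield_ser181 = (6 + 10) / 2        # %, PTH-Ser-181 (both peptides)
neighbor_yields = {
    "Glu-180": 31.0,               # %, PTH-Glu-180 (peptide C)
    "Gly-182": (26 + 31) / 2,      # %, PTH-Gly-182 (both peptides)
}

for name, y in neighbor_yields.items():
    print(f"Ser-181 recovery is {y / yield_ser181:.1f}-fold lower than {name}")
# A roughly 3-4-fold lower recovery, in line with a chemically modified (esterified) serine.
```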
To further confirm this point, we next determined the amino acid compositions of both these peptides (Table I). The analysis revealed the existence of 0.8-0.9 mol of serine residue/mol of each peptide, which was not fully detected by the sequencing (Fig. 3). The results suggest that serine residues are regenerated from the labeled residues by the acid hydrolysis treatment used for the amino acid analysis. This was confirmed by a control experiment in which the attached AN group of AN-SerNAc was found to be released by such a treatment. Thus, we concluded from the sequence data (Fig. 3) and the amino acid compositions (Table I) that a single serine residue in peptides V and C, Ser-181, was labeled with ANN.
Although data are not shown, the fluorescence and absorption spectra of peptide V were measured and compared with those of a model compound, AN-SerNAc. There was no significant difference in the fluorescence spectrum in 80% acetonitrile between peptide V and AN-SerNAc, with fluorescence emission peaks at 474 and 471 nm, respectively. This was also the case for the absorption spectra. The spectrum of AN-SerNAc exhibited four maxima at the longer wavelengths (332, 346, 363, and 382 nm) that had been established to be associated entirely with the AN group (7). The spectrum of peptide V also exhibited four similar maxima at 333, 349, 364, and 384 nm. This spectrum was identical to that of peptide C. Because both peptides V and C contain no chromophoric amino acid residues (Table I), the amounts of the AN group labeled to the peptides can be estimated from the absorption spectrum of the AN group. The two obtained values, 0.73 and 0.72 mol of AN group/mol of peptide for peptides V and C, respectively, were essentially identical (Table I). The results indicate again that peptide C is a precursor of peptide V, supporting our assignment of the peptide sequences (Fig. 3). These spectral data for the labeled peptides strengthen the conclusion that only Ser-181 is selectively labeled with ANN.
Table I footnotes: (a) The labeled peptide, obtained by HPLC as specified in Fig. 2B, was subjected to amino acid analysis as described under "Experimental Procedures." (b) The positions of the amino acid sequences are expressed as the corresponding residue numbers of chicken skeletal S1 heavy chain (6). (c) The concentrations of the AN group were determined spectrophotometrically; peptide concentrations were determined by amino acid analysis.
DISCUSSION
In the present study, we have shown that Ser-181 in the S1 heavy chain is specifically esterified with ANN. A specific reaction of Ser-181 was already reported by Cremo et al. (4). UV irradiation of the S1 complex with Mg·ADP·Vi, which is believed to mimic either the metastable ADP·Pi state or the transition state for ATP hydrolysis (16), resulted in a specific Vi-induced photooxidation by which Ser-181 is converted to a "serine aldehyde" (17). Because Vi in this complex mimics the γ-phosphate in the S1 complex with ATP, such a Vi-induced reaction served to identify a critically important serine residue in the coordination of the γ-phosphate of ATP. This result was verified by the x-ray structure of the S1 complex with Mg·ADP·Vi (18). The x-ray structure revealed that Ser-181 is included in the phosphate binding loop for nucleotides and that its side chain is involved in the coordination of Vi, which mimics the γ-phosphate of ATP.
It is surprising that the labeling reaction of S1 with ANN is highly selective. Even when a 50-fold molar excess of ANN over S1 was used, only one serine residue, Ser-181, out of the 42 serine residues in the S1 heavy chain was labeled (Ref. 7 and present results). In the case of the photo-modification reaction (4), it is reasonable that Ser-181 is specifically oxidized by the Vi-induced reaction, because Ser-181 is adjacent to Vi, coordinated with its oxygen atom, in the S1 complex with Mg·ADP·Vi (18). However, it is unusual that ANN, whose structure is dissimilar to a nucleotide, can approach Ser-181 and selectively esterify the residue. One possible explanation for such unusual specificity of the ANN labeling is that ANN forms a non-covalent complex with S1 prior to the reaction, in which the functional group of ANN is in a favorable position for subsequent esterification of Ser-181. In fact, selective labeling of Ser-181 was not observed upon labeling of S1 with a structural isomer of ANN, 1-anthroylnitrile, in which the acylnitrile functional group is at C1 of the anthracene ring instead of C9 as in ANN, suggesting that its functional group was not situated near Ser-181. It is also likely that a hydrophobic pocket near Ser-181, composed of Ser-243, Ser-244, Phe-246, Gly-247, Ala-465, Gly-466, and Phe-467 (1, 2), interacts with the anthracene moiety of ANN and helps the acylnitrile moiety to approach Ser-181, resulting in the specific labeling of this residue.
The first pentapeptide sequence in the phosphate binding loop of the S1 ATPase site (GESGA, residues 179-183 of the S1 heavy chain) is equivalent to the highly conserved pentapeptide sequence GXSXG/A found around active serine residues in serine peptidases (12). Serine peptidases occur in three distinct structural families, represented by CT, subtilisin, and wheat serine carboxypeptidase II. The CT-like enzymes have glycine in the fifth position of the pentapeptide sequence, whereas the latter two families have another small amino acid, alanine, in this position. This is also the case for the peptide sequence of S1. It should be emphasized that among all 42 serine residues in the S1 heavy chain, only Ser-181 lies in the sequence GXSXG/A (6). Such pentapeptide segments form a tight turn in strand-turn-helix motifs, which confers a reactive nature on the serine residue in the third position (12). This may be why Ser-181 of S1 has unusual reactivity toward ANN.
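The motif comparison can be expressed as a simple pattern: glycine, any residue, serine, any residue, then glycine or alanine. A minimal Python sketch that checks the nonapeptide identified above (ITGESGAGK) against this GXSXG/A pattern; the regular expression is only an illustrative encoding of the motif, not a tool used in the study:

```python
import re

# GXSXG/A motif found around reactive serines: Gly, any residue, Ser, any residue,
# then Gly or Ala.
MOTIF = re.compile(r"G.S.[GA]")

# Residues 177-185 of the S1 heavy chain as given in the text (peptide C, ITGESGAGK);
# the window around Ser-181 contains the pentapeptide GESGA (residues 179-183).
segment = "ITGESGAGK"

match = MOTIF.search(segment)
if match:
    print(f"Motif {match.group()} found at offset {match.start()} of the segment")
else:
    print("No GXSXG/A motif in this segment")
```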
It is interesting to compare the enzymatic properties of AN-S1 (7) with those of the photo-modified S1 (19). In both S1 derivatives, Ser-181 is chemically modified. AN-S1 had Mg²⁺-ATPase activity 2.2-fold higher than that of the control S1, whereas the K⁺- and Ca²⁺-ATPase activities were below 30% of the control. The photo-modified S1 had Ca²⁺-ATPase activity 4-5-fold higher than that of the control S1, whereas the K⁺-ATPase activity was below 10% of the control. It is clear that a chemical modification of Ser-181 itself does not result in abolishment of the S1 ATPase activity. Thus, the Mg²⁺-ATPase reaction of AN-S1 can be monitored continuously by changes in fluorescence emitted from the AN group attached to S1 (8,9).
In the course of the present study, we became aware of a study carried out by Szarka et al. (20). To obtain information about the labeling sites with ANN in S1, they measured the distances between the AN group labeled to S1 and known positions of S1 (Lys-553 and Cys-707) using the technique of fluorescence resonance energy transfer. They suggested that the most probable labeled residue is Ser-181, consistent with our present result.
The Rebuilding of “Greater Russia”: From Kievan Rus’ to the Eurasian Union (Note 1)
The purpose of the present examination is 1) to summarize briefly the evolution of historical Russia as the amalgam of multiple ethnic and cultural communities into a growing imperial domain; 2) to outline more specifically the policies pursued by the tsarist and communist regimes to integrate minority communities into the Russian majority; 3) to examine the impact on Russia of the collapse of the former USSR; and 4) to trace current efforts by the Russian government to reintegrate the disparate parts of the former USSR, including especially regions of other post-Soviet states with a significant ethnic Russian population, into a new "Greater Russia." Although it will touch on Soviet integration policies that targeted national minorities who, by 1989, represented half of the population, the focus will be on recent and current policies intended to rebuild a "Greater Russia."
Introduction
Most Westerners who learn of a revival of Russian nationalism or of President Putin's commitment to protect the interests of ethnic Russians in post-Soviet states where they represent a minority, do not think of the fact that the Russian population is quite diverse ethnically. This concerns not only the twenty percent of the population that is officially listed as non-Russian, but also the ethnic Russian population which results from the mixture and merger of various communities over the course of the last millennium. The first Russian state, Kievan Rus', combined Eastern Slavs and Norsemen, or Vikings; in the Middle Ages Finnic groups in the far north and later Turkic populations in the south and east were absorbed by the expanding Russian state. While many remained culturally distinct from the ethnic Russian community, others were absorbed into that community over the course of later centuries.
But the initial gulf between the ruling elite and the masses of the population was one that dominated Russian/East Slavic politics in later centuries.
By the middle of the twelfth century, a century before the Mongol conquest, this new state was in great disrepair, primarily because of the lack of an effective system of succession and the splitting of the political system into three major parts. The one in the northwest was eventually incorporated into the Lithuanian state, later Lithuanian-Polish, while the poorest in the northeast eventually became the center for a revitalized East Slavic state, Russia. Here the population was primarily Finnic, but was rapidly inundated and absorbed by Slavic immigrants, eventually emerging as Great Russians. Thus, even before the Mongol conquest of Kiev in 1240, Russians consisted of a mixture of peoples, with the non-Slavs dominated and culturally overwhelmed; this diversity would increase significantly in the following centuries.
With the emergence of Muscovy as a political force in the fourteenth and fifteenth centuries, large numbers of non-Russian and non-Slavic peoples came under the control of the Russian state. In the north and northwest the majority of these groups were ethnically Finnish. Although the Mongol invasion in 1240 and indirect occupation for the next century and a half brought new Turkic (Tatar) populations into what would later emerge as Russia, it was not really until the sixteenth century and the rapid expansion under Ivan Grozny of Russian military control east and southeast to the major cities of the Golden Horde, Kazan and Astrakhan, in the 1550s that large numbers of Muslim Tatar or Turkic people came under the domination of Moscow. Russia viewed its culture and religion as superior to those of the peoples whom it conquered and generally treated these peoples as inferiors, which often resulted in serious confrontations (see Khodarkovsky, 2004, pp. 34-39).
Writing of the emergence of Muscovy and its conquest of other small East Slavic principalities in the fifteenth century-"the gathering of the Russian lands" in the euphemistic words of the Russian chronicles of the age-Marshall T. Poe points out that the rulers of Muscovy of the fifteenth and sixteenth centuries expanded the kingdom's borders east beyond the Volga River, south to the Caspian Sea, west to the Dnieper River, and north to the White Sea. In so doing they came to rule peoples who had never been part of Kievan Rus'-Mordvinians, Chuvash, Mari, Samoyeds, Bashkirs, Tatars, Balts, Finns, Germans, Lithuanians, Poles, Cossacks, and Turks, among others. The once homogeneous Muscovite state . . . became a huge multiethnic empire (Poe, 2003, p. 34). (Note 5) Over the course of the next three centuries Russia continued to expand; in the west into the Baltic area, Finland and Poland; in the south by absorbing Ukraine and then systematically incorporating portions of the Ottoman Empire. Territorially the expansion across Siberia and the Far East and, ultimately, the conquest of the Muslim polities in Central Asia brought millions of non-Russians, non-Europeans, and non-Christians into the empire by the 1870s. The colonial empire was virtually complete and at this point the government in St. Petersburg began to pursue a policy of explicit russification as the means to absorb and integrate this population into the Russian state.
Throughout its entire history-from Kievan Rus' to Muscovy and eventually the Russian Empire-the population at large, whether ethnic Russian or one of the growing number of conquered peoples, had no voice in the political system. As Richard Pipes notes: "Once an area had been annexed to Russia, whether or not it had ever formed a part of Kiev, and whatever the ethnic and religious affiliation of its indigenous population, it immediately joined the 'patrimony' of the ruling house, and all succeeding monarchs treated it as a sacred trust which was not under any circumstances to be given up" (Pipes, 1974, p. 79). Not only the territory, but also the population on that territory, was seen as part of the extended patrimony of the monarch, with no political rights. It is this view of the virtual ownership relationship between the monarch, the state, and the population that, centuries later, although modified, continues to lie at the root of the Russian political system, for ethnic Russian and non-Russian alike.
'Integration' of National Minorities in Late Imperial and Communist Russia
Russian historians of the imperial era focused on Russia's right and duty to expand the boundaries of civilization and Christianity in dealing with the Muslim and other non-Russian peoples who now comprised a substantial part of the population (Khodarkovsky, 2002, p. 3). Religion served geopolitical purposes in relations with the various non-Orthodox Christian peoples of the steppe and the Caucasus.
Ever since the fifteenth century the idea of Russia and its political and security interests were intertwined with the idea of expansion, which was justified on both ideological and theological grounds (Khodarkovsky, 2002, p. 49). This meant an ever-increasing number of non-Russians within the Russian Empire. The culmination of this process came in the years 1860 to 1880 with the final conquests of the various peoples of the Caucasus and the khanates of Central Asia. Given the repressive nature of the Russian political system, the new ethnic and religious minorities who were forcibly added to the population had no voice and were the object of repressive governmental policies. Since the middle of the sixteenth century, but especially during the eighteenth century, major efforts were made to settle and civilize the regions taken from the Kalmyks and other non-Russian peoples, thereby pushing the original population out of the region entirely. In addition, the Russians pursued a policy of forced conversion to Christianity and russification in much of the territory that they conquered (Khodarkovsky, 2002, pp. 142-161). Throughout the eighteenth century this resulted in virtually permanent conflict between the Russians and the native populations on both sides of the ever-moving frontier. As Russia imposed its control, substantial numbers of the locals fled their homelands beyond the Russian frontier to escape Russian military rule, administration, and forced conversion (Khodarkovsky, 2002, pp. 201-206; Mironov, 1988)-generally fruitlessly, since the frontier continued to follow them.
Not until the reign of Catherine the Great did a degree of religious tolerance enter Russian policy (Khodarkovsky, 2002, p. 196). By the late nineteenth century, however, official policy increasingly held that minority cultures within Russia should be eliminated and replaced with a Great Russian identity (Weeks, 2004, 2006). (Note 6) To be a loyal subject of the tsar one had to be a Russian. Russian became overwhelmingly the language of education, even in the vast areas of the country where it was not the dominant language. In portions of the empire, as in Armenia, non-Russian Orthodox religious schools were closed. Throughout the Central Asian lands of the empire major efforts were made to russify the population. However, Aneta Pavlenko concludes that russification measures were carried out only sporadically as an attempt to subjugate Polish and later Baltic German elites, to preserve the unity of the state, and to replace Polish, German, and Tatar with Russian as a high language. . . . These measures failed to turn peasants into Russians . . . . Most importantly, by imposing the russification measures late in the 19th century, the Russian empire created the pre-conditions for the consolidation of nations which would eventually turn against it (Pavlenko, 2011).
On the whole the efforts were a failure, and at the outbreak of World War I the Russian government faced widespread resistance to its policies ("How Successful", n.d.; see also Weeks, 2004). When the Bolsheviks seized power in fall 1917, they came with a clear view that past Russian policies toward ethnic minorities had been oppressive and exploitative and must be reversed, for, in Lenin's words, Russia was "the prison house of nations" (cited in Weeks, 2004; Fainsod, 1963, pp. 57-58). In fact, independent states that had emerged in Ukraine and the South Caucasus were forcibly incorporated into the emerging Soviet state.
In the early years of the Soviet state various institutional arrangements were introduced that were meant to give the national minorities a political voice and autonomy. But, Lenin and Stalin soon broke on the issue of the treatment of minorities and, since Lenin died soon thereafter, Stalin set the framework for Soviet nationalities policy-a framework that permitted little or no autonomy below the central government. Stalin eliminated communist officials in Georgia, Ukraine and Central Asia who opposed what they viewed as the assertion of central, Russian, domination over nationality affairs (see Daniels, 1960, pp. 177-187).
As the new Soviet state system emerged in the period after Lenin's death, a pseudo-federal system of government was established-pseudo in the sense that, de facto, political power and political decisions emanated from the top and were dispersed throughout the system. (Note 7) Moreover, the highly centralized communist party was the major source of power, not the formal institutions of the federal governmental system. Within this system the major units were republics named after the dominant titular population-Ukraine, Armenia, Kazakhstan, etc. Second- and third-level political units were also established for smaller nationalities which represented minorities in the larger republics. Although the communist party encouraged cultural development of backward peoples within the overall federation, that cultural development was to occur only within the context of a monolithic communist culture, which was built substantially on Russian nationalism (Fainsod, 1963, p. 363; Plokhy, 2017, pp. 245 ff.).
In the mid-1920s a major confrontation occurred between Josef Stalin, the new head of the Soviet Communist Party, and Mirsaid Sultan-Galiev, a Tatar Bolshevik who advocated a single Muslim republic across Central Asia. He was charged with nationalist deviations and arrested and eventually executed in 1940 during Stalin's Great Purge (Baker, 2011). Important for our concerns is the fact that Stalin divided the Muslim areas of Central Asia into five small republics that, presumably, would be easier to deal with from Moscow, rather than a single large and unified Muslim republic.
It was not really until the 1930s that Soviet policy concerning national minorities shifted dramatically away from the attacks on Great Russian chauvinism and support for local and regional cultures. During the massive purges of the 1930s, although no national group was primarily targeted, de facto the impact of the purge was greater among national minorities than it was among Great Russians. However, as Dmitry Gorenburg (2006) points out, throughout the entire history of the Soviet Union an internal contradiction drove Soviet nationality policy. "The establishment of ethno-federalism, indigenization, and native language education were paired with efforts to ensure the gradual drawing together of nations for the purpose of their eventual merger." (Note 8) Parallel to this is the fact that among Western students of Soviet nationality policy there are those who maintain that the Communists in effect strengthened the cultures of the minorities and those who focus on the Soviet russification and assimilation (Lapidus, 1984;Gorenburg, 2006).
In the late 1950s, during the Khrushchev era, the Soviets introduced a new education policy which expanded the teaching of Russian in non-Russian areas and, de facto, cut into the teaching of local languages (Bilinsky, 1962). Gorenburg (2006) concludes that "linguistic assimilation and reidentification in the Soviet Union were promoted by a combination of two factors, urbanization and the reduction of native language education." Similar findings are presented in the research of Brian Silver (1974) and numerous other scholars. Although titular languages were taught, they were pursuing its foreign and security policy interests. But, it also indicated that Russia saw itself as a pole in the international system separate from and in conflict with the West. It is at roughly this time that Moscow also began to assert itself rhetorically in response to Western charges that it was corrupting or abandoning democracy. (Note 22) The Russian response was the assertion that Russia was not bound by Western definitions of democracy and that, in fact, it was in the process of establishing a superior form of "sovereign democracy" that was characterized first and foremost by independence from external standards or influences. In other words, Russian democracy is sui generis and will not be bound by any external criteria or rules. (Note 23) But, more than a framework for political developments in Russia, "sovereign democracy" was presented as a model for other countries and a justification of the type of top-down management that Vladimir Putin has fashioned in Russia. For authoritarian or semi-authoritarian political leaders across Eurasia, the arguments underlying "sovereign democracy" have proven to be quite attractive. (Note 24) In the wake of Russia's invasion of Georgia and Moscow's formal recognition of the breakaway republics of South Ossetia and Abkhazia, then President Dimitri Medvedev laid out the "principles" on which Russian policy was to be carried out. These principles included "protecting the lives and dignity of our citizens, wherever they may be" and the claim that "there are regions in which Russia has privileged interests. These regions are home to countries with which we share special historical relations and are bound together as friends and good neighbours" (Medvedev, 2008). Given the continued large Russian minorities in some of the post-Soviet states and Russia's policy of granting citizenship to large numbers of those living outside the Russian Federation, the first of these two principles de facto justifies intervention throughout most of former Soviet territory. The second calls for a sphere of Russian influence across Eurasia in which Russia has the right to protect its interests, including by economic coercion or military intervention. (Note 25) By the end of 2008 all the pieces were in place for Russia's "taking back" at least some of the area that it was contesting with the West. By then Russia had rebuilt its economy. It had effectively moved to strengthen the economic dependence of most of the post-Soviet states on Russia-primarily via energy dependence, including increasing Russian ownership of the energy infrastructure of these states (Nygren, 2007). (Note 26) Presidents Putin and Medvedev had provided the rhetorical foundations on which to base the conflict by noting the threat to regional and global peace that the United States represented (Putin, 2007) and by emphasizing Russia's legitimate role in the affairs of neighboring states (Medvedev, 2008).
The Foreign Policy Concept issued in 2008 focused on external, rather than internal, challenges to Russian security-with U.S. global dominance at the very top of the list. In line with the extensive discussion of "sovereign democracy" in Russia, the Concept stipulated that global competition was acquiring a civilizational dimension, which suggested competition between different value systems and development models within the framework of universal democratic and market economy principles. The new foreign policy concept maintained that the reaction to the prospect of loss by the historic West of its monopoly over global processes now found its expression, in particular, in Moscow had already demonstrated through the use of economic pressures that the Russian leadership was quite willing to use its economic clout to achieve political goals. Finally, in Georgia it demonstrated that the use of military power was also an acceptable weapon in competing with the West for influence in the regions of "privileged" Russian interest.
It is roughly at this time that Moscow began to push a variety of potential programs aimed at integrating post-Soviet space more effectively and, thus, reducing or expelling entirely Western involvement and influence (see, for example, Russell, 2012). In addition to the call of Dimitri when the president of Armenia announced that Armenia would abandon its negotiations with the European Union, in order to pursue membership in the Eurasian Union, it was reported that Moscow had threatened to reduce its security support for Armenia in its ongoing conflict with Azerbaijan, deny work permits to the tens of thousands of Armenian citizens working in Russia, reduce the flow of subsidized energy to Armenia, and generally make economic life more difficult for the landlocked and beleaguered country (Peter, 2013). Similar pressures were reported in the discussions between Russia and Ukraine in the run-up to President Yanukovych's announcement in November 2013 that Ukraine also would opt for membership in the Eurasian Union rather than continue to pursue closer ties with the European Union .
Russians present the Eurasian Union as the means to integrate and modernize the economies of the former Soviet republics, so that they can compete more effectively in the global economy (Lomagin, 2014). However, most Western analysts see the Eurasian Union primarily as a political tool for Moscow's re-imposition of control over as broad a swath of post-Soviet territory and people as possible (Adomeit, 2014). Although Russia has prevented Ukraine from pursuing membership in the European Union and/or NATO, it has also eliminated Ukraine as a realistic candidate for Eurasian Union membership (see Fedorov, 2019).
Elsewhere in post-Soviet states-usually in areas with significant ethnic Russian or regional minority populations-Russia has also intervened, facilitated secession, and granted some form of political recognition to the new secessionist statelets. (Note 31) In many respects, once the Russians had rebuilt their domestic economy and decided to focus on reestablishing their dominant role in former Soviet space rather than integrating into Europe, they had clear advantages in competing with the West and in attracting other former Soviet republics into closer ties. Most important was the economic and, especially, energy dependence of most of the other states on Russia-and Moscow's willingness to use that dependence to its advantage. Only Azerbaijan, with its energy wealth-plus several resource-rich Central Asian states-are in a position to resist Russian "invitations" easily. For countries such as Moldova and Georgia, efforts to resist the Russian embrace and pursue stronger relations with the Europeans have continued and even expanded after the Russian military intervention in Crimea (Secrieru, 2014). As noted by Thomas Ambrosio, "Russia has sought to create near-exclusive spheres of influence within the former Soviet space, excluding the Baltics" (Ambrosio, 2019).
Concluding Comments
The Russia that emerged after the collapse of the Soviet Union was a Russia that had never existed over the course of the past millennium. Shorn of all but a small portion of the ethnic minorities that had always comprised such a large portion of the population of the country, including the fellow Slavs in Belarus and Ukraine, the Russian Federation was no longer the imposing international actor that it had been for most of the past two centuries, or more. Russia's history had been one of continual expansion since the fourteenth century and the imposition of Russian policy domination and Russian culture on peoples usually viewed as backward and less developed. That was the history of the last century of the Tsarist regimes, as well as of the Soviet regime.
Although the Russian Federation still extended across eleven time zones, had the largest population in Europe, and possessed nuclear weapons, it was no match, in terms of global clout, for the lost Soviet state. Moreover, the West took advantage of Russia's weakness by extending its involvement and influence into areas in Central and Eastern Europe that were viewed as an integral part of the Russian sphere of influence. This is precisely the set of developments that Vladimir Putin set out to correct in a policy termed revanchist by Matthew Sussex (2015) because it aims at undoing major geopolitical developments of the past quarter century. A central aspect of the policy that he has pursued over the course of the past decade has been the attempt to reestablish an integrated economic, political and security space in the area of the former USSR somewhat akin to "the gathering of the Russian lands" by Muscovy in the fifteenth century.
How the Eurasian Union will evolve, what the nature of Russia's relations with the other former Soviet states and populations will be, how the crisis in Ukraine will unfold-none of these questions can be fully answered today. Yet, it does appear clear that the intention of the current Russian leadership under Vladimir Putin is to bring together into a close economic, political and security union as much of former Soviet territory as possible in order to strengthen Russia's economic and political position as it vies for a position as one of the poles in a new multipolar world intended to replace the current international system dominated by the United States and the West. It is hard to imagine such an integrated system much different from the Greater Russia discussed by Bertil Nygren (2008) in which the smaller states and their populations are subordinated to Russia, in ways similar to the way that their ancestors once were in Tsarist and Communist Russia. | 5,199.6 | 2021-02-28T00:00:00.000 | [
"Economics"
] |
Changes of selected biochemical parameters of the honeybee under the influence of an electric field at 50 Hz and variable intensities
Two-day-old honeybee workers (± 6 h) were placed in cages and supplied with sucrose solution (1 mol/dm3) ad libitum. Subsequently, the cages with bees were placed in an electric field (E-field) exposure system with field intensities of 5.0 kV/m, 11.5 kV/m, 23 kV/m, and 34.5 kV/m. The duration of exposure was 1 h, 3 h, or 6 h. The biochemical parameters SOD (superoxide dismutase), CAT (catalase), and FRAP (ferric ion reducing antioxidant power), as well as acidic, neutral, and alkaline proteases, were analyzed in the worker bee hemolymph. The E-field increased the activities of the antioxidant system, especially SOD, and of the proteolytic system. In the groups exposed to 11.5 kV/m for 6 h, 23.0 kV/m for 1 h, and 34.5 kV/m for 1 h, FRAP levels were decreased in comparison with the control samples. These findings are discussed in the context of possible consequences for honeybee health in urban and rural environments.
INTRODUCTION
All living organisms are exposed to electromagnetic fields (EMF) emitted by electrical and electronic devices, power lines, and RF (radio frequency) radiation from wireless devices such as cordless telephones or the antennas of mobile network base stations (Hardell and Sage 2008). Life on earth is adapted to the natural electromagnetic field emitted by lightning, other planets, and animals (Coe et al. 1995). Constantly developing technologies and the growing demand for electricity contribute to the increase of artificial electromagnetic fields in the environment (reviewed in Kim et al. 2019).

To counter negative factors, bees have evolved defense mechanisms at the individual level (1 - innate immunity: humoral and cellular; 2 - acquired immunity) and at the colony level (especially behavior, with the colony acting as a "superorganism"). Both types of reactions complement each other (Simone-Finstrom 2017; reviewed in Strachecka et al. 2018). The first line of individual defense consists of anatomic-physiological barriers and the cascade of proteolytic enzymes that are activated in the cuticle and the fat body when the immune system detects a pathogen or non-self substance (Bíliková et al. 2001; Evans and Lopez 2004). When a pathogen breaks through the anatomic-physiological barriers, the humoral response, which includes the antioxidant and proteolytic systems, is activated in the insect organism. Insect enzymes block pathogen enzymes by competitive and/or non-competitive inhibition and activate processes in the fat body whose effect is the translation and transformation of immune proteins (Nazzi et al. 2014; Evans et al. 2006; Nazzi and Pennacchio 2018). During these reactions, reactive oxygen species (ROS) are formed, which are inactivated and/or detoxified by the antioxidant system, especially by superoxide dismutase (SOD). SOD catalyzes the dismutation of superoxide radicals to hydrogen peroxide (H2O2) and, with the further help of catalase (CAT), to water (H2O) and molecular oxygen (O2) (Harrison and Bonning 2010; Farjan et al. 2012; Słowińska et al. 2016).

Schuà (1954) showed that honeybees avoided feeding places exposed to a static EF (1.5 kV/m). Altmann (1959) established that honeybee oxygen consumption increased by 15% in a static EF (1.4-2.8 kV/m). Bindokas et al. (1988a, 1988b) proved that exposure to an intense EF (7 kV/m) caused a disturbance only if the bees were in contact with a conductive substrate; exposure of honeybees in conductive tunnels contributed to increased mortality and abnormal propolization. Honeybee exposure to a 60-Hz EF > 150 kV/m caused vibrations of the wings, antennae, and body hairs (Bindokas et al. 1989). Shepherd et al. (2018) showed that exposure to an extremely low-frequency EMF impaired learning ability and flight dynamics, reducing the success of foraging and feeding by the honeybee. Odemer and Odemer (2019) proved that mobile phone radiation may modify honeybee pupal development, but no further impairment is manifested in adulthood. This suggests that the increase in environmental pollution by artificial electromagnetic fields poses a further challenge that the honeybee has to face every day. It should be kept in mind that the loss of bees contributes to the impairment of fruit and vegetable productivity, resulting in economic and, above all, environmental impacts. If an EMF exerts such an effect on the behavior of bees, then we hypothesize that this factor may also contribute to lowering the sensitivity of the body's biochemical protective barriers.
Therefore, we hypothesized that an electric field (E-field) at 50 Hz has a non-negligible effect on the antioxidant and proteolytic systems of honeybee workers (hypothesis 1), and that the longer the exposure of the bees to the E-field, the deeper the changes in their organisms (hypothesis 2). The overall aim of our studies was to determine the effect of a 50-Hz E-field on honeybee workers under controlled laboratory conditions.
Test organisms
The experiment was carried out from 15 June to 15 August 2019. Before the experiment started, twenty honeybee (Apis mellifera carnica) colonies from the Wroclaw University of Environmental and Life Sciences apiary were treated against Varroa destructor using amitraz fumigation (12.5 mg/tablet; Amitraz® Biowet Pulawy) four times at 4-day intervals. To monitor the number of Nosema spp. spores, the hemocytometer method was used (30 bees per hive in three repetitions). Eight-day-old A. mellifera carnica L. queens originating from the same mother-queen were artificially inseminated with the semen of drones from the same father-queen colony to standardize the research material. Inseminated queens were individually introduced into the queen-less colonies and kept in isolators. Each colony contained one empty comb. After 12 h of egg laying, the queens were released and the isolators containing combs with eggs were left within the colonies for further worker-brood rearing. On the 19th day of apian development, the combs with the already sealed worker brood were transferred to an incubator (temperature of 34.4°C ± 0.5°C and relative humidity of 70% ± 5%) in which they were maintained within individual chambers until 1-day-old workers emerged.
Experimental setup
For adaptation, 1-day-old workers were placed within 150 wooden cages (20 × 15 × 7 cm), each containing 100 workers and two inner feeders with sucrose solution (Chempur®) at a concentration of 1 mol/dm3 ad libitum. The adaptation process lasted 24 h at a temperature of 25°C ± 0.5°C and relative humidity of 70% ± 5%. Caged bees were maintained in the incubator under the same conditions described above until being used for the experiment. Dead bees were removed. Bees were divided into 12 groups (Table I). Each group consisted of ten cages. Bees in the experimental groups were exposed to the following E-field intensities: 5.0 kV/m, 11.5 kV/m, 23.0 kV/m, and 34.5 kV/m for 1 h, 3 h, and 6 h. The control groups were not exposed to the artificial electromagnetic field; they remained under the influence of an ambient electromagnetic field of < 2.00 kV/m. Each group name is the combination of the E-field intensity and the exposure time; for example, the group of bees exposed to a 5 kV/m E-field for 1 h is named 5 kV/m1h. The control group was marked with the letter C. Bees from the control group were collected at the same time as the other groups.
E-field exposure
A homogeneous 50-Hz E-field was generated in an exposure system in the form of a plate capacitor, with a distance of 20 cm between two electrodes constructed as square wire-mesh panels (100 cm × 100 cm) with a 10 mm × 10 mm grid. The electrodes were connected to a high-voltage transformer powered by an autotransformer, which allowed the voltage to be adjusted and the E-field intensity to be changed. The field intensity was fixed at 5.0 kV/m, 11.5 kV/m, 23.0 kV/m, and 34.5 kV/m. Deviations in the homogeneity and stability of the E-field intensity within the emitter, to which the bees were exposed during the whole experiment, did not exceed ± 5%. The E-field intensity and homogeneity in the test area were verified by measurements made by the LWiMP accredited testing laboratory (certification AB-361 of the Polish Centre for Accreditation) using an ESM-100 meter (No. 972153) with calibration certificate LWiMP/W/070/2017, dated 15/02/2017, issued by the accredited calibration laboratory PCA AP-078. The measurements were made at the points of a 10 × 10 × 5 cm grid inside an empty emitter (without experimental cages). The stability of the electric field was maintained by permanently monitoring the voltage applied to the exposure system using a control circuit (Figure 7).
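Since the exposure system is essentially a parallel-plate capacitor, the nominal field between the electrodes is the applied voltage divided by the electrode separation (E = U/d). A minimal sketch, assuming an ideal homogeneous field over the 20-cm gap (variable names are illustrative and not taken from the paper):

```python
# Nominal voltages for a parallel-plate exposure system (ideal field assumed).
# E = U / d  ->  U = E * d
ELECTRODE_GAP_M = 0.20  # 20 cm between the mesh electrodes

field_intensities_kv_per_m = [5.0, 11.5, 23.0, 34.5]

for e_kv_per_m in field_intensities_kv_per_m:
    voltage_kv = e_kv_per_m * ELECTRODE_GAP_M  # kV/m * m = kV
    print(f"{e_kv_per_m:5.1f} kV/m -> ~{voltage_kv:.1f} kV applied")
```

For the 20-cm gap this gives roughly 1.0, 2.3, 4.6, and 6.9 kV; the autotransformer described above is what allows these set points to be dialed in and held stable.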
Biochemical analyses
Hemolymph was taken from 100 live bees from each group after exposure by removing the antennae with sterile tweezers. Hemolymph was collected in sterile end-to-end glass capillaries with a volume of 20 μl, without anticoagulant. The test tubes were kept on a cooling block during this procedure: the prepared capillaries were placed in 1.5-ml Eppendorf tubes filled with 150 μl of 0.6% NaCl (sodium chloride). The prepared tubes were transferred to a cryo-box and then frozen at -80°C. Determinations of the acidic, neutral, and alkaline protease activities were done according to the Anson method (1938) as modified by Strachecka and Demetraki-Paleolog (2011). Superoxide dismutase (SOD) activities were determined using a commercial Sigma-Aldrich 19160 SOD determination kit. Catalase (CAT) activities were determined using a commercial EnzyChrom™ Catalase Assay Kit (ECAT-100). The antioxidant capacity of the hemolymph was determined by FRAP (ferric ion reducing antioxidant power), carried out according to the procedure developed by Benzie and Strain (1996) and subsequently modified by Thaipong et al. (2006). Fifteen bees were taken from each group for each parameter.
Data evaluation
The normality of the data distribution was analyzed using the Shapiro-Wilk test. The statistical significance of data within groups and between groups was determined by the Kruskal-Wallis test. All tests were performed in RStudio (R Core Team, 2018) with a significance level of α = 0.05.
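The same two-step analysis (normality check, then a non-parametric group comparison) can be reproduced outside R. A minimal Python/SciPy sketch, using made-up placeholder values rather than the study's data:

```python
# Shapiro-Wilk normality check followed by a Kruskal-Wallis comparison,
# mirroring the analysis described above. Placeholder data, not study results.
from scipy import stats

control  = [1.02, 0.98, 1.10, 0.95, 1.05]   # e.g. SOD activity, arbitrary units
group_1h = [1.40, 1.55, 1.38, 1.62, 1.47]
group_6h = [1.80, 1.95, 1.70, 2.05, 1.88]

for name, values in [("C", control), ("1 h", group_1h), ("6 h", group_6h)]:
    w, p = stats.shapiro(values)
    print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p:.3f}")

h, p = stats.kruskal(control, group_1h, group_6h)
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}, significant={p < 0.05}")
```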
Superoxide dismutase activity
In all groups, SOD activities were higher in comparison with the control group. The highest SOD activities were recorded in the 34.5 kV/m1h and 34.5 kV/m6h groups and they were, respectively, 4 and 3.5 times higher than in the control group ( Figure 1). The lowest increase in activity, at 25%, was for the 11.5 kV/m3h group. A linear increase of superoxide dismutase activities as a function of time was observed in the 5 kV/m groups. In the case of the other groups, SOD activities changed periodically-high activities were observed for 1-h exposures, then they decreased at 3 h and increased at 6 h.
Catalase activity
The highest CAT activities were recorded in the 34.5 kV/m1h group and were on average 50.31% higher than in the control group (Figure 2). The 5 kV/m groups had values similar to the control group. In the other groups, these values were higher. In all experimental groups, especially in the 11.5 kV/m and 23.0 kV/m groups, CAT activities changed in waves, similar to the SOD activities.
Total antioxidant potential
FRAP levels did not show statistically significant differences between groups or within groups ( Figure 3). The biggest difference (0.306 μmol/l) was observed between the 11.5 kV/m6h group and the 23 kV/m3h group, which was a fluctuation of 0.6%.
Protease activity
In all experimental groups, the acidic and neutral protease activities were higher than in the control group (Figures 4 and 5). In the case of alkaline protease activities, this tendency was observed between the 23 kV/m and 34.5 kV/m groups and the control group. A linear increase of acidic protease activities as a function of exposure time was observed in the 5 kV/m, 11.5 kV/m, and 23 kV/m groups, and of alkaline protease activities in the 5 kV/m groups (although not all results were statistically significant) (Figure 6).
Superoxide dismutase activity
The effect of the electromagnetic field on the bees can be determined using the hemolymph antioxidants (Figure 1), which are the first defensive mechanism of bees (De la Rúa et al. 2001; Farjan et al. 2012; Słowińska et al. 2016). The results of our analysis show that exposure to an E-field impacts the honeybee's antioxidant system (Figure 7). In all groups, superoxide dismutase activities were higher than in the control group. The nearly fourfold higher SOD activities in the groups treated with the electromagnetic field indicate that this factor may deregulate energy metabolism in mitochondria through interactions with complex I, similarly to acaricides (Motoba et al. 2000). Moreover, the electromagnetic field, as a stressor, probably intensifies processes connected with intercepting reactive oxygen forms and binding ions (Augustyniak and Skrzydlewska 2004). In addition, Felton and Summers (1995), Poljšak and Fink (2014), and Schieber and Chandel (2014) established that harmful factors (e.g., pesticides) contribute to an increase in free-radical oxidation, which damages cellular membranes, lipids, proteins, carbohydrates, RNA, and DNA.
In our study, the values of SOD activities in the hemolymph of 2-day-old bees in the experimental groups (Figure 1) were similar to those reported by Strachecka et al. (2014) for workers between 18 and 30 days old. The hemolymph of our workers had higher SOD activities than that of bees of the same age in other studies (Słowińska et al. 2016; Strachecka et al. 2014, 2016). Słowińska et al. (2016) suggested that old bees are less susceptible to the harmful factor and that 1-day-old workers tried to overcome the stress by increasing their antioxidative potential. Only Weirich et al. (2002) presented higher antioxidant values (in some cases ten times higher) in different tissues (muscle, hemolymph, ventriculi, sperm, and spermatheca) of three castes (queen, worker, drone) in comparison with our research. These differences may have resulted from age and seasonal variations between the workers used in the studies. SOD activities increased in bee samples from industrial and urbanized areas; the reference samples were bees taken from rural areas (Nikolic et al. 2015). This increase may be caused not only by environmental pollution but also by greater exposure of bees to the effect of the electromagnetic field in urbanized areas than in rural areas. In turn, Ercal et al. (2001) and Farmand et al. (2005) suggested that metals accumulate in the tissues and increase the formation of free radicals, which are the cause of the increase in SOD and CAT activities. Therefore, the electromagnetic field, by impairing the workers' antioxidative defense, may consequently make them more sensitive to this factor, which is in agreement with the suggestions of Gregorc et al. (2020) on the subject of imidacloprid influence.
Catalase activity
High catalase activities in the groups exposed to E-field intensities from 11.5 to 34.5 kV/m may indicate the body's increased need for this antioxidant enzyme (Figure 2). Catalase is extremely important in animal cells because it breaks down hydrogen peroxide into water and oxygen, thus protecting the cell from damage. Under water deficiency, catalase shows peroxidase activity; hydrogen donors (ethanol, methanol, etc.) are then necessary, and catalase removes hydrogen peroxide from the cells while oxidizing these compounds. Our results for the control group are characteristic of 4-day-old bees. Akyol et al. (2015) showed that catalase activities change very slowly with the age of bees and, according to Orr and Sohal (1992), this may not affect their lifespan. The opposite situation was shown in the studies of Strachecka et al. (2014, 2016) and Farjan et al. (2012), where CAT activities increased with the age of the workers and affected their vitality and longevity. Thus, CAT participates in the aging and senescence of the organism (Münch et al. 2008). The increased catalase activities demonstrated in our studies may therefore shorten the life of honeybee workers. Corona et al. (2005) showed that old workers, as foragers with the highest levels of flight activity, also have the highest level of antioxidant gene expression and thereby shorter lives. The relationship between flight activity, ROS production, and longevity was also similar in the case of Musca domestica (Yan and Sohal 2000). An increase in catalase activities was observed during inflammatory reactions in the organism and under the action of mutagenic agents (Adachi et al. 1993). Dussaubat et al. (2012) showed similar increases in catalase and glutathione activities during Nosema spp. infections in the bee midgut. This may indicate that honeybees respond to an electromagnetic field in a similar way as they do to a pathogen.
Total antioxidant potential (FRAP)
In our studies, there were no statistically significant differences in the FRAP level between groups (Figure 3). A similar observation was made in the studies of Strachecka et al. (2016), where the FRAP levels in the hemolymph of 14- and 21-day-old workers treated with bromfenvinphos did not change. In other studies by Strachecka et al. (2017), differences in the hemolymph of Osmia rufa L. females and males were not seen during the winter months of the diapause. The lack of changes in the FRAP levels is most likely related to the capacity of a system to reduce or break down reactive oxygen species. In our research, the model was 2-day-old workers, which probably did not suffer considerable damage from oxidative stress. In addition, not only do enzymatic antioxidants have an impact on the FRAP value but so do non-enzymatic ones. The latter were not studied here. Therefore, when setting the direction of future research, we believe that the concentrations of non-enzymatic antioxidants should also be determined.
Protease activity
We suppose that electromagnetic fields disturb the functioning of the proteolytic system and the processes in which it is involved: phagocytosis, melanization, cellular adhesion molecule recognition, generation of the reactive intermediates of oxygen and nitrogen, activation of pro-apoptotic molecules, synthesis of cytokines and antimicrobial peptides, enzyme activation, and molecular and hormonal signaling, as well as pathogen protein degradation (Bode et al. 1999; Grzywnowicz et al. 2008). To confirm this assumption more rigorously, older bees should also be examined in addition to the young ones used in our study. The stronger the electromagnetic field and the longer the exposure time, the higher the alkaline protease activities were in comparison with the control group (Figure 6). This tendency is similar to that in the studies by Strachecka et al. (2016), where bees were treated with bromfenvinphos, and the reverse was true where coenzyme Q10 and curcumin were used (Strachecka et al. 2014, 2015). According to Strachecka et al. (2010), the proteolytic system on the body surface reacts to changes in the degree of environmental pollution. Moreover, Strachecka et al. (2016) showed that the activities of these proteases increased in the hemolymph of workers treated with bromfenvinphos. Łoś and Strachecka (2018) reported that during contact with unfavorable or harmful factors, protease activities increase drastically on the first of the 3 days and then decrease, and that the activities in old bees were lower in the treated group than in the control group. These tendencies are also observed in our studies in young bees (Figure 4). The reverse tendencies are observed when bees are treated with biostimulators and vitamins, in which case they have higher protease activities. Ashihara and Crozier (2001) reported that substances such as caffeine, coenzyme Q10, and others have long been considered to constitute a chemical defense against biological stressors. Kim et al. (2008) suggest that these substances stimulate the production and/or translocation of proteases, their inhibitors, and signaling molecules. The enzymes involved in the proteolytic system are essential components of insect resistance barriers and are associated with the activities of the antioxidant systems. The honeybee's proteolytic system, like that of other organisms, plays an important role in the extracellular and/or intracellular digestion of proteins and in the activation of biological processes. It has also been shown that the honeybee is characterized by a lower activity of proteases than, for example, the fruit fly (Drosophila melanogaster L.) or mosquitoes of the Anopheles gambiae complex, which is probably caused by a greater share of the bee's social immunity; this dependence also applies to other social insects. Nevertheless, the interaction of acidic, neutral, and alkaline proteases is a very important element that creates and activates the response cascade of the bee organism to environmental changes. Their presence and activity are closely related to the expression of relevant genes. Our research showed a significant increase in the activities of acidic and neutral proteases in all experimental groups in relation to the control group.
The effectiveness of the processes that allow the fight against free radicals depends on the activities of non-enzymatic antioxidants (coenzyme Q10, lycopene, vitamin E, etc.) and enzymatic antioxidants (SOD, GPx, CAT, etc.) (Seung-Kwon et al. 2013). Variable activities of the antioxidant barrier under the influence of various environmental factors were observed in homogenates of the mitochondrial sperm and muscle fractions (Weirich et al. 2002), and its presence was also confirmed in the venom glands (Peiren et al. 2008) and the hemolymph of honeybee workers (Strachecka et al. 2014, 2016).
CONCLUSION
Our research showed a negative effect of all selected intensities of an E-field at 50 Hz on the organism of honeybee workers under laboratory conditions. The length of exposure to this factor contributes to changes in the activities of antioxidants and proteases. We propose that the electromagnetic field should be included among the factors potentially threatening the honeybee. This phenomenon should be taken into account in future research.
All biochemical barriers are designed to protect the body from pathogen penetration. When a pathogen enters, the barrier should prevent its spread and destruction of the body. When the functioning of this complex defense system is disturbed, "gates" are created and various stressors can get inside the body. All biochemical mechanisms have one main goal-to maintain homeostasis of the body. Such mechanisms include, among others, antioxidant and proteolytic systems.
COMPLIANCE WITH ETHICAL STANDARDS
Conflict of interest The authors declare that they have no conflict of interest.
OPEN ACCESS
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Physics",
"Agricultural And Food Sciences"
] |
Somatodendritic Dopamine Release Requires Synaptotagmin 4 and 7 and the Participation of Voltage-gated Calcium Channels*
Somatodendritic (STD) dopamine (DA) release is a key mechanism for the autoregulatory control of DA release in the brain. However, its molecular mechanism remains undetermined. We tested the hypothesis that differential expression of synaptotagmin (Syt) isoforms explains some of the differential properties of terminal and STD DA release. Down-regulation of the dendritically expressed Syt4 and Syt7 severely reduced STD DA release, whereas terminal release required Syt1. Moreover, we found that although mobilization of intracellular Ca2+ stores is inefficient, Ca2+ influx through N- and P/Q-type voltage-gated channels is critical to trigger STD DA release. Our findings provide an explanation for the differential Ca2+ requirement of terminal and STD DA release. In addition, we propose that not all sources of intracellular Ca2+ are equally efficient to trigger this release mechanism. Our findings have implications for a better understanding of a fundamental cell biological process mediating transcellular signaling in a system critical for diseases such as Parkinson disease.
Dopamine (DA), like other monoamine neurotransmitters, is released from the cell body and dendrites in addition to axon terminals (1). This process, called somatodendritic (STD) release, is important in the ventral tegmental area (VTA) for induction of behavioral sensitization to amphetamine through activation of local D1 receptors (2,3) and in the substantia nigra (SN) for control of motor performance (4,5). In addition, STD DA release modulates DA neuron firing activity through D2 autoreceptor activation (6,7) and increases firing activity of SN pars reticulata γ-aminobutyric acid-releasing neurons, a process that might activate feedback signals regulating DA neuron activity (8), thereby influencing axonal DA release.
Two mechanisms have been proposed to mediate STD DA release: reversal of the DA transporter (9) and a vesicular exocytotic-like mechanism. In agreement with the second mechanism, STD DA release is activity-dependent (6,10), sensitive to depletion of vesicular stores with reserpine (6,11,12), and Ca2+-dependent (6,10,12,13). Moreover, disruption of SNARE proteins with botulinum toxins blocks STD DA release (10,13). Vesicular exocytosis requires the concerted action of SNARE proteins and a synaptotagmin (Syt). During release, SNAREs have a direct role in vesicle-membrane fusion, and Syt acts as a Ca2+ sensor. Of the 15 Syt isoforms identified so far, Syt1, 2, 3, 5, 6, 7, 9, and 10 have been reported to drive Ca2+-dependent vesicular fusion (14), and only Syt1, 2, and 9 are confirmed as Ca2+ sensors for synaptic neurotransmitter release from axon terminals (15).
One of the hallmarks of STD DA release is its relative persistence at reduced levels of extracellular Ca2+ concentrations: although release from axon terminals is drastically reduced at extracellular Ca2+ levels lower than 1 mM, STD DA release persists at Ca2+ levels between 0.5 and 1 mM (Refs. 10, 12, and 13; but see also Ref. 16). This differential Ca2+ sensitivity between STD and terminal DA release suggests differences in either SNARE composition or Ca2+ sensors between these compartments.
Here, we focused on the role of Syt isoforms in mediating STD DA release. We determined the repertoire of Syt isoforms expressed in DA neurons, examined their distribution, and analyzed their implication in STD DA release triggered by basal firing activity. We also investigated the participation of different Ca2+ sources in triggering basal STD DA release. Our results propose a novel function for Syt7 as a critical mediator of STD neurotransmitter release and show that Syt7, together with Syt4, is part of the exocytotic machinery that controls STD DA release.
EXPERIMENTAL PROCEDURES
Animals and Tissue Processing-All of the experiments were performed using the TH-EGFP/21-31 mouse line; it carries enhanced GFP under the control of the tyrosine hydroxylase (TH) promoter (17) in a C57BL/6 genetic background. SN- and VTA-containing slices were prepared from mice at postnatal day 0 (P0), P14, and P45. P0 pups were cryoanesthetized prior to decapitation, and brains were collected in ice-cold dissociation solution (90 mM Na2SO4, 30 mM K2SO4, 5.8 mM MgCl2, 0.25 mM CaCl2, 10 mM HEPES, 20 mM glucose, 0.001% phenol red, pH 7.4). P14 and P45 mice were anesthetized with halothane (Sigma-Aldrich) and decapitated, and brains were collected in oxygenated saline solution (130 mM NaCl, 20 mM NaHCO3, 1.25 mM KH2PO4, 1.3 mM MgSO4, 3 mM KCl, 10 mM glucose, 1.2 mM CaCl2). All of the animal handling procedures were approved by the Université de Montréal animal ethics committee.
Mesencephalic Cell Suspension, Cell Sorting, and DA Neuron Cultures-Detailed procedures were described previously (18-20). Briefly, acutely dissociated mesencephalic cell suspensions were obtained by trituration of the slices mentioned above using glass pipettes of decreasing diameter after enzymatic treatment, and dead cells and debris were eliminated by differential gradient centrifugation. When the material was intended for FACS purification (supplemental Fig. S1, A and B), cells were resuspended in PBS with 1% FBS and injected into a BD FACSAria (BD BioSciences; San Jose, CA) flow cytometer/cell sorter. Double immunocytochemistry on cells plated on polyethyleneimine-coated coverslips after FACS purification showed that 96% of purified cells (n = 5) expressed both GFP and TH. No expression of GAD-67 (γ-aminobutyric acid neuron marker) or GFAP (astrocytic marker) was detected by RT-PCR on RNA obtained from FACS-purified cells (supplemental Fig. S1C).
When cells were intended for cultures, P0 mesencephalic cell suspensions were plated onto astrocytic monolayers (standard cultures). Cells were plated at a density of 350,000 cells/ml for cultures used for evaluation of DA release and at 50,000 cells/ml (low density cultures) for cultures used to evaluate the expression and localization of Syts. In these cultures, DA neurons are identified by the expression of GFP; however, ectopic expression of GFP can be found in a small percentage of non-DA neurons, when evaluated by single-cell RT-PCR. In 17-day-old cultures, 90% (18 of 20) of GFP-expressing neurons were found to be TH-positive when evaluated by immunocytochemistry. In 21-day-old cultures, over 95% of GFP-positive cells were also TH-positive (results not shown).
Brain Slices-250-μm-thick horizontal brain slices were prepared from decapitated mice deeply anesthetized with halothane. The brain was quickly removed and placed in ice-cold artificial cerebrospinal fluid (125 mM NaCl, 25 mM NaHCO3, 2.5 mM KCl, 1.25 mM NaH2PO4, 2 mM CaCl2, 2 mM MgCl2, 23 mM glucose) saturated with 95% O2, 5% CO2. The slices were cut using a vibrating microtome (Leica Microsystems Canada Inc.) in ice-cold oxygenated artificial cerebrospinal fluid and left to recuperate in artificial cerebrospinal fluid at room temperature until use.
RNA Extraction and RT-PCR-Total RNA was extracted from FACS-purified cells, as well as from tissues used as controls and from standard primary culture coverslips, using TRIzol (Invitrogen). To purify RNA from FACS-purified cells, 5 μg of glycogen (Invitrogen) was added. Total RNA was dissolved in diethylpyrocarbonate-treated water and stored at -80°C until use. cDNA synthesis was carried out for 1 h using 0.5 mM dNTP mix (Qiagen), 2.5 μM random hexamers (Applied Biosystems, Foster City, CA), 10 mM DTT, 40 units of RNaseOUT, 200 units of Moloney murine leukemia virus RT (Invitrogen), 50 mM Tris-HCl, 75 mM KCl, and 3 mM MgCl2, pH 8.3. The RT enzyme was denatured, and the cDNAs were stored at -80°C until use. PCRs were performed using 1.5 mM MgCl2, 0.5 mM dNTP mix, 10 pmol of each primer, 2.5 units of Taq DNA polymerase (Qiagen), 20 mM Tris-HCl, 50 mM KCl, pH 8.3, and 35 cycles with an annealing temperature of 55°C. All of the PCRs were resolved in 1.5% agarose gels.
Multiplex Single-cell RT-PCR-GFP-expressing neurons were randomly selected to avoid a selection bias toward cells that express high levels of GFP. Neurons were individually collected under RNase-free conditions using autoclaved borosilicate patch pipettes; each cell was collected by applying light negative pressure to the pipette. The content of each pipette was transferred immediately into an individual prechilled 200-μl tube containing 6 μl of a freshly prepared solution of 20 units of RNaseOUT and 8.3 mM DTT; the samples were immediately frozen on dry ice until use. Single cells were processed as described previously (18). Briefly, frozen samples were thawed on ice and subjected to the RT reaction as described above but using 1.25 μM random hexamers, 100 units of Moloney murine leukemia virus RT, and 20 additional units of RNaseOUT. A first round of PCR was done as mentioned above using half of the RT reaction in 25 μl of final volume and 28 cycles. A second round of PCR was performed using 10% of the first PCR in 15 μl of final volume and 35 cycles. All of the PCR products were resolved in 1.5% agarose gels. The identity of each PCR product was confirmed by sequencing. Primers were designed based upon sequences deposited in the GenBank™ database. They do not interact with each of the other primers in the multiplex PCR. Right primers are followed by left primers: GFP, 5′-aagttcatctgcaccaccg-3′ and 5′-tgctcaggtagtggttgtcg-3′; β-actin, 5′-ctcttttccagccttccttctt-3′ and 5′-agtaatctccttctgcatcctgtc-3′; TH, 5′-gttctcaacctgctcttctcctt-3′ and 5′-ggtagcaatttcctcctttgtgt-3′; TH-nested, 5′-gtacaaaaccctcctcactgtctc-3′ and 5′-cttgtattggaaggcaatctctg-3′; GAD-67, 5′-atatcattggtttagctggtgaatg-3′ and 5′-gtgactgtgttctgaggtgaagag-3′; GFAP, 5′-agaagctccaagatgaaaccaa-3′ and 5′-ctttaccacgatgttcctcttga-3′; SNAP, 5′-gtaatgaactggaggagatgcaga-3′ and 5′-atttaagcttgttacagggacacaca-3′; Syt1, 5′-gaaagacttagggaagaccatgaa-3′ and 5′-tggacttttgtctcaaacttcttct-3′; Syt2, 5′-agaaacatcttcaagaggaaccag-3′ and 5′-aggttctctggctctttctcct-3′; Syt4, 5′-atatctacccagaaaacctaagtagcc-3′ and 5′-aaaacctgtcaaaactcagaactgtaa-3′; Syt7, 5′-cttagcgtcactatcgtcctctg-3′ and 5′-gtagccaacactgaactggattc-3′; Syt9, 5′-aaccagagttatacaaacagaggtca-3′ and 5′-tcaaagtcatacacagagaagtgaag-3′; and Syt11, 5′-tcgatgagaccttcaccttctac-3′ and 5′-tgacataaggattacctgagagacc-3′. For the single-cell RT-PCR experiments, only TH detection required the use of a nested reaction in the second round of PCR. Syt7 primers were designed to detect its three reported variants in mouse.
Immunocytochemistry-Cultures of cells plated on polyethyleneimine-coated coverslips and brain slices were fixed with 4% paraformaldehyde in PBS and processed using protocols published elsewhere (10,18). A mouse monoclonal anti-TH antibody (Sigma-Aldrich) was used in combination with the following rabbit polyclonal antibodies: anti-GFP (Abcam), anti-Syt1 (Synaptic Systems, Goettingen, Germany), anti-Syt4 (developed by Mitsunori Fukuda in Japan), anti-Syt7 (Synaptic Systems), or anti-SNAP25 (Synaptic Systems). Because GFP fluorescence is negligible after paraformaldehyde fixation, TH was detected using a secondary antibody coupled to Alexa-488, whereas synaptotagmins and SNAP25 were detected using a secondary antibody coupled to Alexa-647. In experiments localizing the position within the mesencephalon of collected single cells, biocytin was added to the patch pipette and was then detected using Alexa-labeled streptavidin. Cellular localization of Syt was performed using low density cultures. The images were acquired using a point-scanning confocal microscope, equipped with 488 argon and 633-nm helium-neon lasers (Prairie Technologies LLC, Middleton, WI). The images were analyzed with Metamorph 4.5 (Universal Imaging Corp, Downingtown, PA).
DA Release Detection by Radioassay-STD DA release evoked by spontaneously firing DA neurons was detected by a radioassay using standard cultures on coverslips plated at a concentration of 350,000 cells/ml. This technique was selected because other approaches such as cyclic voltammetry cannot detect DA release evoked by spontaneous firing. After two rinses with Krebs-Ringer buffer (KRB) (140 mM NaCl, 5 mM KCl, 2 mM MgCl2, 2 mM CaCl2, 10 mM HEPES, 10 mM glucose, 6 mM sucrose, pH 7.35, and 305 mOsm), coverslips were incubated at 37°C in KRB containing monoamine oxidase inhibitors (5 μM clorgyline and 100 μM pargyline; KRB-CP) and the D2 autoreceptor antagonist sulpiride (4 μM) for 5 min. Intracellular DA stores were next emptied by depolarization with 40 mM KCl. Labeling of intracellular DA was performed by incorporation of 0.2 μM (20 μCi/ml) of L-[2,3,5,6-3H]tyrosine (GE Healthcare) for 30 min in KRB-CP at 37°C. Unincorporated radioactivity was removed by three rinses with KRB-CP, and the cells were left to rest for 3 min at room temperature. The coverslips were then placed in wells containing 400 μl of KRB-CP, and every 3 min a sample of 100 μl was collected (and immediately replaced with fresh KRB-CP) and mixed with scintillation mixture to determine radioactivity. The coverslips were fixed and the DA neurons were counted after immunocytochemistry. The values reported throughout the paper were obtained as follows: background cpm values from glial monolayers were subtracted from the values of neuronal cultures and then divided by the number of DA neurons, counted blindly following TH immunocytochemistry performed after each experiment. When striatal (γ-aminobutyric acid) neurons were used, 8.6 ± 4.3 cpm/cell were detected at 2 mM Ca2+, whereas in DA neuron cultures, 148 ± 11.3 cpm/cell were detected.
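The per-neuron values reported in this study follow directly from the background subtraction and normalization described above; a minimal sketch with hypothetical numbers (not data from the study):

```python
# Background-corrected, per-neuron release values as described above.
# All numbers are illustrative placeholders.
def cpm_per_da_neuron(sample_cpm: float, glial_background_cpm: float,
                      da_neuron_count: int) -> float:
    """Subtract the glial-monolayer background and normalize to the number
    of TH-positive neurons counted blindly after the experiment."""
    corrected = sample_cpm - glial_background_cpm
    return corrected / da_neuron_count

print(cpm_per_da_neuron(sample_cpm=45_000, glial_background_cpm=3_000,
                        da_neuron_count=300))  # -> 140.0 cpm/cell
```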
The identity of the released radioactive substance as DA was confirmed by the observation that when radioactive labeling was performed without blockade of the D2 autoreceptor (but with clorgyline and pargyline to prevent transmitter catabolism), therefore allowing feedback inhibition of TH activity (21), the release of radioactively labeled DA was decreased over 80 times (supplemental Fig. S1D). This observation is incompatible with the possibility that some of the radioactivity we measured in our experiments comes from 3H2O produced as a byproduct of tyrosine metabolism, especially from the tritium atom in position C5. The vast majority of the radioactivity recovered was thus [3H]DA. Some experiments were also performed using 14C-labeled tyrosine (Moraveck Biochemicals, Brea, CA), obtaining results similar to those with L-[2,3,5,6-3H]tyrosine (data not shown). When experiments involved constant treatment, the drugs were added 20 min before collection and were present in the bath until the end of the experiment. When experiments required acute treatment to measure time course responses, the drugs were dissolved into the 100 μl of replacement volume and administered after the control period. The composition of KRB was modified according to the experiments to maintain osmolarity and the concentration of divalent cations; the NaCl concentration was lowered by 40 mM for the KRB containing 40 mM KCl, and MgCl2 was raised to 3.5 mM for experiments in 0.5 mM Ca2+. In all of the experiments, DA radioactive labeling was performed in 2 mM Ca2+, whereas the resting period and sampling were performed at 0.5 or 2 mM Ca2+, as required.
Small Interfering RNA-All of the siRNA, except GFP siRNA (Santa Cruz Biotechnology, Santa Cruz, CA) and Cy3-labeled scrambled siRNA (Ambion, Austin, TX), were synthesized in vitro using the Silencer siRNA construction kit from Ambion. Targeted sequences were chosen from published reports: Syt1 (23), Syt4 (24), and Syt7 (25). The siRNA against Syt7 acts against its three reported isoforms in mouse; it binds within exon 2, 145 bases before the alternative splicing site located after exon 4. The cells were transfected at day 17 in culture, and the effects on DA release were measured 4 -5 days after. Transfection was performed using 40 pmol of each siRNA and Lipofectamine 2000 (Invitrogen). siRNA against GFP was used as a heterologous control. Nontransfected controls followed the transfection protocol, but no siRNA was added. The transfection efficiency was evaluated 24 -36 h after transfection by detecting the Cy3-labeled scrambled siRNA; this evaluation was performed by counting the proportion of GFP expressing cells that also contained the Cy3 signal.
Calcium Imaging-The cells were incubated with 5 μM Fura-2 AM and 0.02% pluronic acid (Invitrogen) for 45 min at room temperature in KRB. The coverslips were washed and left to stabilize for 15 min to allow for de-esterification of Fura-2. Image ratio pairs (340/380 nm) were taken every 10 s under perfusion with KRB using a Hamamatsu Orca-II digital cooled CCD camera with a Lambda DG-4 excitation system (Sutter Instruments, Novato, CA) controlled with the AFA (Advanced Fluorescence Acquisition) module of the Image Pro Plus suite (Media Cybernetics) and analyzed with the ratio macro of the AFA module. At the end of each experiment, neurons that did not respond to a 40 mM KCl depolarization were considered nonviable and were discarded. The minimal Fura-2 ratio was calculated using 0 mM extracellular Ca2+ and 4 mM EGTA, and the maximal ratio was determined using 2 mM Ca2+ and 5 μM ionomycin (Sigma).
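The text describes only how the minimal and maximal ratios were obtained; converting a 340/380 ratio into an estimate of [Ca2+]i is commonly done with the Grynkiewicz equation, which is assumed in the sketch below. The dissociation constant and calibration values shown are illustrative defaults, not constants stated in the paper.

```python
# Converting a background-corrected 340/380 Fura-2 ratio to [Ca2+]i with the
# Grynkiewicz equation -- an assumption here, since the paper only describes
# how Rmin and Rmax were measured, not the conversion constants it used.
def fura2_ca_nM(ratio: float, r_min: float, r_max: float,
                beta: float, kd_nM: float = 224.0) -> float:
    """[Ca2+] = Kd * beta * (R - Rmin) / (Rmax - R), where beta is the 380-nm
    fluorescence of Ca-free dye divided by that of Ca-saturated dye."""
    return kd_nM * beta * (ratio - r_min) / (r_max - ratio)

# Illustrative calibration values (not from the study)
print(round(fura2_ca_nM(ratio=1.2, r_min=0.4, r_max=4.0, beta=5.0)))  # ~320 nM
```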
Statistics-The data shown throughout are the means ± S.D. Statistically significant differences among conditions were analyzed either by a one-way analysis of variance with Dunnett's multiple comparison test or with a t test, as appropriate.
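A hedged sketch of an equivalent analysis in Python, with placeholder values: scipy.stats provides the one-way ANOVA, and recent SciPy releases (1.11 and later) include Dunnett's test for comparing each treatment against a single control group.

```python
# One-way ANOVA followed by Dunnett's comparisons against the control group,
# analogous to the analysis described above. Placeholder data, not study data;
# scipy.stats.dunnett requires SciPy >= 1.11.
import numpy as np
from scipy import stats

control = np.array([100, 105, 98, 102, 97], dtype=float)
sirna_a = np.array([60, 55, 63, 58, 61], dtype=float)
sirna_b = np.array([40, 38, 45, 42, 39], dtype=float)

f, p = stats.f_oneway(control, sirna_a, sirna_b)
print(f"ANOVA: F={f:.1f}, p={p:.2g}")

res = stats.dunnett(sirna_a, sirna_b, control=control)
print("Dunnett p-values vs control:", res.pvalue)
```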
RESULTS
DA Neurons Express Synaptotagmin 1, 4, and 7-Because several Syt isoforms have been reported to be expressed in the SN and VTA (26 -28), we first evaluated which of them are indeed expressed by DA neurons. We performed RT-PCR on RNA from FACS-purified DA neurons obtained from TH-GFP transgenic mice (supplemental Fig. S1, A and B). We found that at P14, purified DA neurons express Syt1, 4, and 7 (Fig. 1A), whereas Syt9 and 11 were detected in RNA from mesencephalon but not in RNA obtained from FACS-purified DA neurons (Fig. 1A) and thus are likely to be expressed by non-DA mesencephalic neurons. Syt2 was undetectable in RNA from mesencephalon but was found in whole brain RNA, used as control (Fig. 1A). We next confirmed our findings by performing multiplex singlecell RT-PCR using cytoplasm of individual GFP-expressing neurons obtained in horizontal slices of P14 mice; the samples were aspirated during patch clamp recordings in whole cell configuration from both SN and VTA. Once again, Syt2, 9, and 11 could not be detected. However, Syt1, 4, and 7 were found in DA neurons (Fig. 1B). In addition, these neurons contained mRNA for SNAP25 (Fig. 1B), a SNARE protein necessary for exocytosis that plays a fundamental role in basal STD DA release (10).
Given that some of the Syt isoforms reported to be expressed in the mesencephalon could not be detected at P14, we decided to examine DA neurons at different developmental time periods. The Syt expression profile was thus assessed by single-cell RT-PCR using neurons acutely dissociated from newborns (P0) and young adults (P45). We again detected no expression of Syt2, 9, or 11 ( Fig. 1C and data not shown) at these ages. However, we found a marked age-dependent increase in the proportion of DA neurons expressing Syt7 (Fig. 1, C and D). In comparison, there was no notable change in the proportion of DA neurons expressing Syt1, Syt4, and SNAP25.
Because further experiments aiming to determine the mechanism of basal STD DA release required the use of primary DA neuron cultures, we next evaluated whether the Syt expression profile of DA neurons was the same in culture as in vivo. Using single-cell RT-PCR, we confirmed that DA neurons cultured for 10 or 17 days expressed only Syt1, 4, and 7 ( Fig. 2A). Moreover, we found that the proportion of cells expressing Syt7 increased over time as it does in vivo ( Fig. 2A). Furthermore, when evaluated by immunochemistry in low density cultures, all of the DA neurons expressed Syt1 (13 of 13 neurons) and Syt4 (18 of 18 neurons), whereas Syt7 was found in 66% of DA neurons (12 of 18 neurons).
Syt4 and 7 Are Localized in the Somatodendritic Compartment of DA Neurons-If Syt1, 4, or 7 is involved in triggering STD DA release, these proteins should accordingly be localized in the STD compartment of DA neurons. We thus determined their subcellular localization in low density cultures using immunocytochemistry (Fig. 2, B-F). Syt1 was not found within the cell body or dendrites of DA neurons (Fig. 2B). It was found to be restricted to fine axonal-like varicosities (Fig. 2C). However, Syt4 and Syt7 were always found in the soma and major dendrites of DA neurons (Fig. 2, D and E). In addition, Syt7 was found in a subset of axon terminals (supplemental Fig. S2A). Interestingly, SNAP25 was immunodetected in the STD compartment of DA neurons (Fig. 2F) in addition to its expected localization to axonal-like processes. Quantitative analysis of the colocalization of Syt1, Syt4, Syt7, or SNAP25 with TH (Fig. 2G) corroborated the presence of Syt4, Syt7, and SNAP25 in the STD compartment. In addition, as expected, no significant presence of Syt1 was found within the STD compartment, but both Syt1 and SNAP25 were colocalized with TH in axonal-like varicosities.

Syt4 and Syt7 Are Necessary for Basal Somatodendritic DA Release-To evaluate the participation of Syt isoforms in basal STD DA release, we used a recently developed mesencephalic primary DA neuron culture system (10) in which activity-dependent (10, 29, 30) STD DA release is selectively isolated from terminal DA release by reducing extracellular Ca2+ levels to 0.5 mM (10, 13) and can be measured by radioassay or HPLC, both giving similar results (10). As previously demonstrated, we found that lowering extracellular Ca2+ from 2 mM to 0.5 mM reduced but did not prevent the extracellular accumulation of DA (supplemental Fig. S2B). In addition, accumulation of extracellular DA at 0.5 mM of extracellular Ca2+ was reduced by preventing firing with TTX (supplemental Fig. S2C) and increased in response to depolarization with 40 mM KCl (supplemental Fig. S2D). Finally, treatment with 1 μM of GBR1209 to block the DA transporter resulted in increased DA levels at 0.5 mM Ca2+, showing that release did not occur by reverse transport (supplemental Fig. S2E). Also confirming complete blockade of axonal neurotransmitter release in the presence of 0.5 mM of extracellular Ca2+, glutamate-mediated synaptic currents were completely blocked under these conditions, leaving only action potential-independent miniature synaptic currents (supplemental Fig. S2F). Interestingly, when evaluating cultures of 7-28 days of maturation, we found a time-dependent increase in STD DA release (Fig. 3A), a time course reminiscent of that of Syt7 expression.
To determine the participation of Syt isoforms in basal STD DA release, we down-regulated their expression using siRNA. We first evaluated the efficiency of each siRNA (Fig. 3B). Maximal inhibition of Syt1 mRNA was reached after 3 days of transfection; for Syt4, it was reached after 4 days, and for Syt7, after 48 h. As a heterologous control, we used a siRNA against GFP, which showed maximal inhibition at 4 days post-transfection (results not shown). In all cases, mRNA levels were still down-regulated for at least 5 days post-transfection (Fig. 3B). In addition, the protein levels were evaluated by immunocytochemistry to confirm their inhibition (Fig. 3C).
By using the DA radioassay at 0.5 mM of extracellular Ca2+, we next evaluated the effect of Syt down-regulation on STD DA release (Fig. 3D). We found that Syt1 down-regulation did not affect extracellular DA levels, whereas down-regulation of Syt4 or Syt7 with siRNA had a marked effect, with Syt7 down-regulation being the most efficient. Moreover, down-regulation of Syt4 and 7 together almost completely blocked STD DA release. Transfection of siRNA against GFP or scrambled siRNA had no effect on extracellular DA levels (Fig. 3D). We also evaluated the effect of Syt4/Syt7 knockdown on STD DA release evoked by potassium depolarization, a stimulus that causes a marked increase in intracellular Ca2+, even in 0.5 mM extracellular Ca2+ (Fig. 3E). We found that STD DA release evoked in this way was also completely blocked by Syt4/7 knockdown (Fig. 3F), thus showing that these isoforms are critical for both basal and stimulated STD DA release.
The lack of effect of Syt1 down-regulation on STD DA release is compatible with its localization in axon terminals and with the fact that axonal DA release is blocked at 0.5 mM Ca2+ under our conditions. It is also compatible with the previous observation that Syt1 does not trigger neurotransmitter release at 0.5 mM of external Ca2+ (31). However, to control for the functional effectiveness of Syt1 down-regulation, we evaluated its effect on DA release in 2 mM extracellular Ca2+, which allows both axonal and STD DA release. We found that under these conditions, Syt1 siRNA reduced extracellular DA levels by approximately 60%, thus confirming the important role of Syt1 in axonal DA release (Fig. 3G). In contrast, compatible with an effect limited to STD DA release, Syt7 down-regulation caused only a 25% decrease in DA release measured in 2 mM Ca2+ (Fig. 3G). As expected, no effect on extracellular DA levels was observed when cells were transfected with siRNA against GFP or scrambled siRNA (Fig. 3G).

[Figure 3 legend: Cultured DA neurons were transfected at day 17 in culture using isoform-specific siRNA. The efficiency of transfection was 95%, and its effects on RNA (B) and protein (C) levels were assessed by semiquantitative RT-PCR and immunochemistry, respectively, at different days after transfection, as indicated. B, a representative gel of each Syt mRNA level after siRNA transfection is shown. The presence of β-actin mRNA was evaluated to confirm the presence of mRNA and as a loading control. Note that Syt7 can be expressed as three different isoforms in mouse; DA neurons preferentially express Syt7a, the smallest one. C, summary graph presenting the Syt/TH ratio measured at different days after transfection, as indicated. The cells were fixed and subjected to immunochemistry; 10 images were captured from random fields at 40× (n = 6) and quantified with Metamorph software.]
Somatodendritic DA Release Is Selectively Dependent on Calcium Influx through N- and P/Q-type Voltage-gated Calcium Channels-Because Ca2+ sensors such as synaptotagmins typically require coupling to a local source of Ca2+ influx, we next examined the role of voltage-gated Ca2+ channels in STD DA release. Although the broad spectrum voltage-gated Ca2+ channel (VGCC) blocker cadmium has been reported to be effective at reducing STD DA release (32,33), the specific Ca2+ channel subtypes driving STD DA release remain unclear. In addition, recent work has suggested a partial implication of intracellular Ca2+ ([Ca2+]i) stores in STD DA release evoked by action potential trains (33). Therefore, we tested the participation of both extracellular and intracellular Ca2+ in triggering STD DA release. Because it has been reported that N-, P/Q-, and L-type VGCC account for 85% of the peak Ca2+ current in the cell body of DA neurons (34,35), we evaluated the effect of the L-type Ca2+ channel blocker nifedipine (20 μM), the N-type blocker ω-conotoxin GVIA (100 nM), and the P/Q-type blocker ω-agatoxin IVA (100 nM). We found that blocking N- or P/Q-type channels efficiently reduced STD DA release (Fig. 4A). In contrast, the L-type blocker nifedipine failed to reduce extracellular DA levels (Fig. 4A). The lack of effect of nifedipine was not caused by inefficient block of Ca2+ channels because, compatible with previous reports (35), we found that this antagonist blocks half of the total Ca2+ current recorded in cultured DA neurons (Fig. 4, B and C).
To examine the involvement of [Ca2+]i stores, we used thapsigargin, an inhibitor of the sarcoplasmic/endoplasmic reticulum Ca2+-ATPase. We reasoned that if basal STD DA release relies on ER Ca2+ stores, their depletion with thapsigargin (1 μM) should impair STD DA release. We found that preapplication of thapsigargin failed to cause a significant change in extracellular DA levels (Fig. 5A). Furthermore, although an acute application of thapsigargin caused the expected transient increase in [Ca2+]i in cultured DA neurons (Fig. 5, B and C), this treatment did not cause a significant increase in DA levels 3 min after thapsigargin application (Fig. 5D).
The finding that Ca2+ influx through L-type channels is unable to trigger STD DA release argues that this form of DA release cannot be nonselectively triggered by any [Ca2+]i elevation. To further test this hypothesis, we used the neuropeptide neurotensin, a modulator well known to elevate [Ca2+]i in DA neurons through activation of nonselective cationic channels of the transient receptor potential family and through [Ca2+]i mobilization (22). In the presence of TTX (0.5 μM) to prevent action potential firing and secondary Ca2+ influx through VGCC, preapplication of neurotensin (10 nM) failed to induce any increase in STD DA release. To the contrary, it actually induced a significant decrease in extracellular DA levels (Fig. 6A). In addition, although acute application of neurotensin caused a robust elevation of [Ca2+]i (Fig. 6, B and C), it failed to enhance STD DA release (Fig. 6D). However, in normal extracellular Ca2+, total DA release was elevated by neurotensin, compatible with the well known ability of this peptide to enhance DA release from axon terminals. These results suggest that not just any elevation of [Ca2+]i can indiscriminately trigger STD DA release: an elevation at release sites of a sufficient amplitude and duration is likely to be required.
DISCUSSION
Modulation of DA neuron activity is central to the encoding of motivated behavior and reward prediction. DA is also critical for the response to antipsychotic drugs, to psychostimulants used in the treatment of attention deficit disorder, and to drugs of addiction. STD DA release is a key mechanism regulating DA neuron activity because it activates D2 autoreceptors, leading to slow inhibitory synaptic responses in DA neurons (6,29,36,37). However, identification of the underlying mechanism has proven to be controversial. Although reversal of the DA transporter has been suggested to be required for STD DA release evoked by glutamate (9) or veratridine (38), we and others (6,29) have found that the DA transporter is not required for STD DA release evoked by spontaneous firing or short trains. Compatible with the implication of a form of exocytosis, it was found that STD DA release requires SNAP25 and synaptobrevin (10,13). Here we further explored the mechanism of STD DA release by evaluating the implication of synaptotagmin isoforms and Ca2+ sources in an effort to explain the differential Ca2+ sensitivity of STD and terminal DA release. Our results further strengthen the hypothesis of the implication of an exocytotic mechanism by demonstrating the selective involvement of Syt4 and Syt7 in STD DA release. Furthermore, we find that although Ca2+ is important for triggering STD DA release, not all sources of Ca2+ have the ability to trigger basal STD DA release caused by spontaneous firing. N- and P/Q-type Ca2+ channels play a preferential role, perhaps because of their known coupling to SNARE proteins.
Differential Roles of Synaptotagmin 1, 4, and 7 in Terminal and STD DA Release-The Syt isoforms that we detected in DA neurons are known to regulate exocytosis. The role of Syt1 in fast Ca2+-triggered axonal neurotransmitter release is very well documented (39). Our findings argue that Syt1 is the primary Ca2+ sensor in dopaminergic axon terminals; its absence from the STD compartment makes it unlikely to participate in STD DA release. Syt7 regulates Ca2+-triggered exocytosis of lysosomes in fibroblasts (40,41), of dense core vesicles (25), and of glucagon in pancreatic cells (42). Syt4 regulates glutamate release from hippocampal astrocytes (24), as well as from dense core vesicles in LT2 cells (43) and from auditory ribbon synapses (44). It also appears to modulate vesicular transport in the trans-Golgi network (45) and to regulate the release of retrograde signals at the neuromuscular junction in Drosophila (46). Here, our findings argue that Syt7 is a critical regulator of STD DA release. Moreover, the fact that Syt7 has a 10-fold higher affinity than Syt1 for binding phospholipids and syntaxin (47) and can trigger neurotransmitter release at intracellular Ca2+ concentrations as low as 300 nM, versus the 3 μM that Syt1 requires (48,49), provides a possible explanation for the remarkable difference in Ca2+ requirement between terminal and STD DA release sites (10,13). Our finding of robust Syt4/7-dependent STD DA release evoked by K+ depolarization elevating [Ca2+]i to values between 500 and 700 nM fits with this general hypothesis. Our finding of a partial requirement for Syt4 in STD DA release is less easily reconcilable with the hypothesis of a lower Ca2+ sensitivity of STD DA release. Syt4 is thought to be a nonfunctional Ca2+-dependent Syt and has been proposed to act as a negative modulator of exocytosis by competing with Syt1 for interaction with SNARE proteins (50); knockout of Syt4 increases synaptic vesicle exocytosis, whereas Syt4 overexpression decreases it (Ref. 51; but see Ref. 52). However, because Syt1 is not present in the STD compartment of DA neurons, a Syt1-Syt4 interaction should not occur. In addition, based on a recent proposal (53), it is possible that Syt4 acts to position vesicles close to release sites in a Ca2+-insensitive manner. Considering the affinities of Syts for Ca2+, one can argue that lowering extracellular Ca2+ artificially favors the activity of Syt7 over Syt1 in STD DA release. However, the absence of Syt1 from the STD compartment makes this scenario improbable.
Although Syt1, 4, and 7 are also expressed by non-DA neurons (Fig. 1A and data not shown), it is unlikely that the results obtained in our siRNA experiments result indirectly from modified synaptic inputs to DA neurons because (i) neither Syt4 nor Syt7 participates in fast exocytosis at synapses and (ii) at 0.5 mM Ca²⁺, the presence or absence of Syt1 does not change STD DA release.
Participation of Voltage-gated Ca²⁺ Channels in STD DA Release-The participation of VGCCs in STD DA release has remained unclear, and contradictory results have been obtained using different models. STD DA release in response to stimulation trains in adult mouse mesencephalic slices has suggested the involvement of N-type Ca²⁺ channels (6). Using acutely dissociated rat DA neurons from P9–P14 animals, others have reported that KCl-evoked STD DA release is reduced by approximately 50% by the broad-spectrum VGCC blockers cobalt and cadmium (30). Using adult guinea pig mesencephalic brain slices, cadmium was found to completely block STD DA release evoked by stimulus trains (33); however, STD DA release evoked by single pulses was found to be unaffected by VGCC antagonists (54). Finally, using in vivo microdialysis in adult rats, it was found that basal nonstimulated STD DA release is reduced only by T- and R-type VGCC blockers (55,56). However, P/Q-type channels were reported to be critical for STD DA release evoked by KCl depolarization in the same preparation (38). Here we show that under conditions of nonstimulated spontaneous firing activity, N- and P/Q-type channels support a substantial component of STD DA release in cultured mouse DA neurons. Our findings are in agreement with other studies conducted in mouse tissue (6,16). Species differences could perhaps explain some of the inconsistencies observed across different models. In addition, it is possible that different stimulation protocols such as single pulses, spontaneous firing, action potential trains, or KCl depolarization differentially recruit various Ca²⁺ channel subtypes. However, the involvement of N- or P/Q-type channels observed here fits well with earlier findings that these channel subtypes selectively associate with SNARE proteins to support neurotransmitter release at synapses in various brain regions (57-59). Despite the known abundance of L-type channels in the cell body of DA neurons, their lack of association with SNARE proteins could explain their inability to support STD DA release.
Sources of Calcium for STD DA Release-Our finding that only [Ca²⁺]i elevations occurring via Ca²⁺ influx through N- and P/Q-type channels trigger STD DA release raises the hypothesis that Ca²⁺ elevations occurring in close proximity to SNARE proteins and Syt7 are required to trigger STD DA release. Although our findings suggest that global elevations in [Ca²⁺]i through the opening of transient receptor potential-like channels or through mobilization of intracellular stores are unable to trigger exocytosis in this cellular compartment, it is possible that the amplitude and duration of the Ca²⁺ elevation in the vicinity of the STD DA release sites is more important than the source of Ca²⁺ influx: perhaps large, local elevations near release sites are efficient, whereas elevations with a slower rise time are relatively inefficient. Nonetheless, we hypothesize that mobilization of [Ca²⁺]i stores might act as a positive regulator of STD DA release, as recently demonstrated (33). Such global [Ca²⁺]i elevations could, for example, lead to enhanced priming of vesicles, as previously demonstrated in hypothalamic neurosecretory cells (60).
Using mouse brain slices and electrical stimulation trains (five stimuli at 40 Hz), it has been shown recently that both axonal and STD DA release appear to display rather similar dependence on extracellular Ca²⁺ (16). Nonetheless, a statistically higher proportion of STD DA release remained in 0.5 mM Ca²⁺ in comparison with release from axon terminals. The fact that, in this study, the difference in Ca²⁺ dependence between the two compartments was not as striking as in the previous studies of Rice and co-workers (12,13) is surprising, and additional experiments will be required to examine this further and explore the contributions of Syt isoforms in this preparation. Our present findings, together with the previous demonstration of a requirement of SNAP25 and synaptobrevin for STD DA release, clearly show that STD DA release is mediated through a form of regulated exocytosis similar to that occurring in axon terminals. However, the requirement for Syt4 and 7, but not Syt1, distinguishes the release mechanism occurring in these two compartments. It also provides a possible molecular explanation for the differential Ca²⁺ sensitivity of these two forms of DA release. Because STD DA release is important for regulation of the excitability and function of SN/VTA DA neurons (6,16), as well as for the development of amphetamine (2,3) and alcohol (61) addiction, the determination of its mechanisms may lead to novel insights into the basic function and pathophysiological perturbation of this neurotransmitter system.
"Biology"
] |
Legal restrictions on foreign institutional investors in a large, emerging economy: A comprehensive dataset
Abstract
The dataset presented in this article contains information on the imposition or relaxation of legal restrictions on foreign investment by the authorities in a large, emerging economy, India. These restrictions are referred to as capital controls because they act as controls on the capital account of an economy. Legal instruments such as regulations, circulars and notifications published on the websites of the relevant regulatory authorities have been used as the source of the data. In particular, the dataset discerns information from these legal instruments to identify whether the instrument tightens or eases capital controls on investment by foreign institutions in different asset classes such as debt, equity and derivatives.
Data
The dataset quantifies the legal regulations applicable to foreign portfolio investors interested in investing in the Indian financial markets. 1 These regulations are referred to as capital controls because they act as a control on the capital account of an economy. The capital account is a summary of inflows and outflows of foreign investment to and from the host country, which in this case is India. This data records the easing and tightening of capital controls on foreign portfolio investors. Fig. 1 shows the number of capital control events by year. Fig. 2 shows the number of easing and tightening capital control events by year. Fig. 3 shows the year-wise distribution of capital control events by the type of capital control involved. Fig. 4 shows the easing and tightening of the various types of capital controls. Fig. 5 describes these capital controls by the asset classes that they affect. Fig. 6 shows the number of easing and tightening capital control events across the different asset classes. 2
The traditional approach to measuring capital controls has relied on cross-country de-jure measures such as the Chinn-Ito index and the Schindler index [2,3]. These measure the level of capital account openness using the summary classification tables published by the International Monetary Fund in the Annual Report on Exchange Arrangements and Exchange Restrictions (AREAER). While these measures are useful for cross-country comparisons, they have limited utility for understanding a country-specific legal framework dealing with capital controls. This is especially true for a country like India, which has an elaborate legal and administrative framework governing capital controls. The cross-country indices of capital controls detect a movement towards capital account openness only when a specific category of controls is dismantled. In India, while the structure of controls is intact, many restrictions have been administratively or procedurally eased, leading to greater access for foreign investors. As an outcome, these indices assign a score to India that has not changed from 1970 to 2017. To address these difficulties, the recent literature has shifted focus from the level of capital controls to the precise measurement of capital control actions (CCAs). This paper is part of that emerging strand of literature. As an example, Pandey, Pasricha, Patnaik, and Shah (2016) [1] analyse legal documents to construct a dataset on restrictions on foreign currency borrowings by Indian firms. Foreign currency borrowing by firms (referred to as External Commercial Borrowings, ECB) is subject to a complex framework of capital controls on each aspect of borrowing, such as a ceiling on the interest rate that can be paid, caps on the magnitude of borrowing, and the uses to which the borrowed amount can be put. The authors construct a fine-tuned dataset tracking the easing and tightening of controls on all aspects of borrowings. Similarly, Forbes, Fratzscher, and Straub (2015) [4] analyse the motivations for imposing capital controls by constructing a dataset that tracks increases and decreases in controls on capital inflows, controls on capital outflows, and macro-prudential measures at a weekly frequency for 60 countries from 2009 through 2011.
Specifications Table
Data format: Raw and analysed
Experimental factors: The data-set classifies capital control measures into different categories and scores these measures depending on whether they relax or restrict investment by foreign institutional investors in the Indian economy.
Experimental features: Foreign investors are critical for financing investment in emerging economies such as India where domestic saving falls short of the investment requirements.
Data source location: Websites of regulatory agencies
Data accessibility: Raw data is available here: http://ifrogs.org/releases/Pandeyetal2019_Legalrestrictions_ForeigninstitutionalInvestors.html
Value of the Data
The data described in this article allows us to build a time-series of capital control measures that have been imposed in India. This can be used to understand the extent to which India's capital account has gradually opened up since the economic liberalisation reforms of the mid-1990s, over a period of more than 20 years. The data can help construct indices for measuring the de-jure capital account openness of India with respect to foreign portfolio investment. The data can also be used to analyse the circumstances in which these instruments were introduced and to evaluate their impact on outcomes such as foreign investment inflows into India, currency volatility, inflation and the cost of capital in the economy. The capital controls dataset and related statistics presented in this article will give policy makers an overview of the evolution of legal restrictions on foreign portfolio investment in India over time, the frequency of changes that have been brought about and their impact on policy objectives. The data presented in this article will allow finance practitioners and foreign investors to understand the current state of capital account openness in India, which in turn may help them undertake investment decisions.
1 The dataset can be accessed here: http://ifrogs.org/releases/Pandeyetal2019_Legalrestrictions_ForeigninstitutionalInvestors.html.
2 In all the graphs the y-axis shows the number of capital control events.
The dataset was built by hand collecting qualitative information on the entire gamut of capital controls that were either imposed or relaxed by the concerned regulatory authorities in India with respect to foreign portfolio investment (henceforth, FPI) in India. Foreign portfolio investors are institutional investors. The concerned regulatory authorities include either the central bank of India, the Reserve Bank of India (RBI), or the securities regulator, the Securities and Exchange Board of India (SEBI). Our data spans a period of 18 years commencing on 1st January, 2000 and ending in December 2018. During this period, the total number of legal instruments issued with regard to FPI capital controls was 112. Separate instruments issued by the RBI and SEBI which have the same effect on capital controls are counted only once. Often, a single instrument makes multiple interventions in relation to capital controls. Each such intervention is referred to as a capital control event. We exclude interventions for which a legally binding source cannot be traced. Once all the changes are considered as separate events, the total number of capital control events is 151. On this basis, on average, India has faced roughly 8 or 9 capital control events annually during the period of this data. Capital control actions can be of two types: easing and tightening. In the rest of the paper, we refer to them as FPI easing events and FPI tightening events, respectively. FPI easing events denote events that have the effect of relaxing existing controls or any action that makes it easier for foreign investors to invest in the host country. Conversely, FPI tightening events denote events that have the effect of increasing the capital controls or any actions that make it harder for foreign investors to invest in the host country. We find that for the full period of the dataset, the number of easing events (99) is substantially higher than the number of tightening events (27). For all the years, except 2003 and 2006, the number of easing events is higher than the number of tightening events. Fig. 2 shows the annual distribution of easing and tightening events. The events that are (i) neither easing nor tightening or (ii) partially easing and partially tightening are classified as null events. The maximum number of FPI easing events took place in 2018 (14 in number) followed by 2008 (11 in number) and 2013 (10 in number). The maximum number of FPI tightening events also took place in 2018 (9 in number) followed by 2008 (4 in number). 25 events are classified as null events.
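To make the aggregation above concrete, the following sketch (not the authors' code) shows how event-level records could be tallied by year and direction, assuming the released data have been exported to a CSV file with hypothetical columns `year` and `direction` taking the values "easing", "tightening" or "null":

```python
import pandas as pd

# Hypothetical file and column names; the released dataset may use a different layout.
events = pd.read_csv("fpi_capital_control_events.csv")  # one row per capital control event

# Total number of events and average number of events per year.
total_events = len(events)
years_covered = events["year"].nunique()
print(f"{total_events} events over {years_covered} years "
      f"(about {total_events / years_covered:.1f} per year)")

# Annual counts of easing vs. tightening vs. null events (cf. Figs. 1 and 2).
annual = (events
          .groupby(["year", "direction"])
          .size()
          .unstack(fill_value=0))
print(annual)

# Full-period totals, which should reproduce the 99 easing, 27 tightening
# and 25 null events reported in the text.
print(annual.sum())
```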
We next consider two types of classifications of every capital control action: one based on the intended end-objective of the capital control action and the other based on the kind of assets to which the capital control action would apply. For each of these classifications, we further divide the events into easing and tightening events.
In the first classification scheme, we divide the capital controls into four categories depending on their intended end objective. These categories are:
Eligibility: This category refers to capital control actions that decide the kind of foreign investors who might be eligible or ineligible to invest in the Indian financial markets.
Investment condition: This category refers to capital control actions that govern the conditions of the investments that are undertaken by foreign investors. As an example, in 2004, FPIs were allowed to issue offshore derivative instruments against underlying securities held by them in the Indian stock exchange.
Investment limit: These are capital control actions that deal with monetary limits up to which investments are permitted by FPIs.
Procedure: The law on capital controls prescribes an elaborate administrative procedure that foreign investors need to follow in order to invest in the Indian debt and equity markets. For example, a change in the procedure for registration of an FPI with regulatory authorities in India will be classified in this category.
Fig. 3 depicts the FPI capital control actions classified into the above mentioned four categories during the sample period. We find that about 60% of the capital control actions during the period of the dataset relate to investment conditions, and 20% are 'procedure' related changes. The remaining 20% of the capital control actions relate to eligibility criteria or investment limits. The year 2018 witnessed the highest number of capital control actions in relation to investment conditions (18 in number), followed by 2008, and 2012.
In Fig. 4, we depict the number of easing and tightening events across the four categories. Of the FPI easing events during this period, more than half pertained to 'investment conditions' and the next largest chunk pertained to 'investment limits'. The highest number of FPI tightening events was with respect to 'investment conditions'. Thus, the statistics show that the majority of the capital control actions during the period from 2000 to 2018 have been in the domain of 'investment conditions'.
In the second classification scheme, we split the capital control events into four main asset classes, namely Debt, Derivatives, Equity and General. This classification helps understand which kinds of assets witnessed the most capital control actions over the last two decades, as far as foreign institutional investment is concerned. "Debt" refers to investment in both corporate and government bonds. "Derivatives" include products such as equity futures and options, commodity derivatives etc. 3 "Equity" refers to investment in the stocks and shares of Indian companies. Finally, those capital control changes that do not relate to changes in the asset class of "debt", "equity" or "derivatives" but relate to, for example, easing/tightening of procedures across all asset classes or easing/tightening of eligibility of FPIs across all asset classes, are grouped under the category of "general".
We also create another category called "other" to track changes in asset classes other than "debt", "equity" and "derivatives". For example, FPI investments in Mutual Funds and Collective Investment Schemes (CIS) would be captured under the "other" category. As shown in Fig. 5, the "General" category saw the highest number of capital control actions (69) followed by "Debt" (64) and "Derivatives" (24). "Equity" (2) saw the least. The residual category of "Other" contained 4 capital control actions.
In Fig. 6, we plot the number of FPI easing events vs. FPI tightening events across the various asset classes. We find that the category "Debt" saw the highest number of capital controls easing whereas the "General" category faced the maximum tightening of controls.
Experimental design, materials, and methods
Cross-border capital flows coming into India are governed by the Foreign Exchange Management Act, 1999 (FEMA) and the rules and regulations made under it. India is currently a partially capital account convertible economy. Hence, under the current design of the legal framework, all capital account transactions in India are prohibited unless explicitly permitted. The permissions are granted through a set of legal instruments issued primarily by the central bank (RBI) and also by the securities market regulator (SEBI). Restrictions differ according to the type of foreign investor, the type of asset class, the intended recipient of foreign capital, the end use of foreign capital, etc.
In this article, we hand-construct a new dataset about one class of capital controls, those that affect the investment into India by foreign portfolio investors (FPIs). Changes to capital controls are published by the RBI and SEBI in their circulars which are publicly available. We analysed the text of these circulars to construct our dataset on capital controls governing foreign portfolio investments. The dataset classifies each capital control change as "easing" or "tightening". Easing events are marked as "+1" and tightening events are marked as "-1". The changes that are ambiguous or those that primarily relate to procedural changes are marked as "0".
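A minimal sketch of this scoring convention and of one simple summary that can be derived from it, again assuming the hypothetical `direction` column used above:

```python
import pandas as pd

SCORE = {"easing": +1, "tightening": -1, "null": 0}

events = pd.read_csv("fpi_capital_control_events.csv")  # hypothetical file/column names
events["score"] = events["direction"].map(SCORE)

# A simple de-jure openness index: cumulative sum of event scores by year,
# so that easing events push the index up and tightening events pull it down.
openness_index = events.groupby("year")["score"].sum().cumsum()
print(openness_index)
```

This cumulative score treats all events equally; the classification by end objective and asset class described above allows finer-grained variants to be constructed.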
Since the liberalisation of India's economy in the 1990s, parts of its capital account have been liberalised, and FPIs have been allowed to invest in Indian markets since 1992. In the 1990s, FPI investments were governed by Government of India guidelines and permissions under the Foreign Exchange Regulation Act, 1973 (FERA), which was the legal framework preceding FEMA. With the enactment of FEMA in 1999, the capital controls governing FPIs came under the regulatory purview of the RBI. Hence, we track the changes in FPI capital controls following the enactment of FEMA, capturing all capital control actions with respect to FPIs from 2000 to 2018.
"Economics"
] |
Sensing force and charge at the nanoscale with a single-molecule tether
Measuring the electrophoretic mobility of molecules is a powerful experimental approach for investigating biomolecular processes. A frequent challenge in the context of single-particle measurements is throughput, limiting the obtainable statistics. Here, we present a molecular force sensor and charge detector based on parallelised imaging and tracking of tethered double-stranded DNA functionalised with charged nanoparticles interacting with an externally applied electric field. Tracking the position of the tethered particle with simultaneous nanometre precision and microsecond temporal resolution allows us to detect and quantify the electrophoretic force down to the sub-piconewton scale. Furthermore, we demonstrate that this approach is suitable for detecting changes to the particle charge state, as induced by the addition of charged biomolecules or changes to pH. Our approach provides an alternative route to studying structural and charge dynamics at the single molecule level.
Introduction
The quantification of forces at the molecular level has become a widely used approach to understand biomolecular dynamics and function [1-11]. The most established approaches, optical and magnetic trapping and scanning probe techniques, nevertheless have their intrinsic limitations. For optical and magnetic trapping, the physical connection of a micrometre-sized bead to the molecule of interest introduces perturbations from the aqueous environment that prevent the investigation of small conformational changes. 9 For scanning probe approaches, the external mechanical stimulus imposed on molecules is the largest concern, as it can alter the flexibility and elasticity of molecules. 12 Tethered particle motion (TPM), in which the Brownian motion of a reporter particle attached to a surface-bound biopolymer is tracked, offers a comparatively gentle alternative [17-20]. Nevertheless, for all the aforementioned techniques, with few exceptions [21-23], a frequently faced challenge in the context of force measurements is throughput, limiting the obtainable statistics.
The majority of TPM studies have relied on the use of reporter beads with diameters >100 nm to maximise the simultaneous localisation precision and temporal resolution required to monitor the bead motion. 15,24 The convenience of a large optical signal comes at the expense of the inability to study the dynamics of short tethers due to volume exclusion effects near interfaces. 25 Recently, Lindner et al. have combined TPM with total internal reflection (TIR) illumination and dark-field microscopy to extract the spring constant of DNA with a contour length of L = 925 nm tethered to an 80 nm diameter gold nanoparticle (AuNP), achieving 10 nm localisation precision and 1 ms temporal resolution. 26 Using smaller particles is required to minimise the influence of particle motion on the bio-polymer dynamics, 25 and to measure conformational changes and transitions from small molecules. 27 At the same time, the scattering cross section decreases with the sixth power of the particle diameter, therefore making the imaging and tracking of smaller particles a significant experimental challenge. This is a particular problem in the context of rapid diffusion, which results in positional blurring. Furthermore, the larger the particle, the more difficult the detection and quantification of any changes affecting the particle motion, such as charge or viscosity, becomes.
Here, we use an optimised total internal reflection-based dark field microscope 28 (Fig. 1a) to achieve exceptionally high signal-to-background ratio images of 20 nm diameter gold nanoparticles on microscope cover glass, allowing for few nanometre localisation precision even with <10 μs exposure times, significantly reducing the effects of motion-blurring.
TPM experiments with such small scattering labels enable studies of much shorter DNA strands than was possible to date, without significant surface interference. We take advantage of these imaging capabilities not only to characterise the mechanical properties of short DNA tethers, but also to use the reporter bead as a nanoscale force and charge sensor with sub-piconewton sensitivity.
Results and discussion
Our approach is based on tethering short (≤160 bp) double-stranded DNA (dsDNA) to a microscope cover glass surface, with both surface and bead attachment achieved by biotin-streptavidin linkages. The glass surface is passivated with neutrally charged polyethylene glycol (PEG) molecules to prevent non-specific binding. This arrangement, combined with optimised micro-mirror TIR illumination, produces exceptionally high contrast images of individual 20 nm AuNPs (Fig. 1b). Even at an exposure time of 6 μs we could achieve 3 nm localisation precision (Fig. 1c), almost entirely removing motion blurring. This high acquisition speed and in principle infinite observation time, coupled with a large field of view (25 × 25 μm²), allows for efficient and high-throughput characterisation of nanoscale DNA tethers. Here, we accurately characterise the tether properties, such as distinguishing between single-tethered, partially immobilised, and fully immobilised beads (Fig. 1d-f). The scatter plot of a fully mobile, single-tethered 20 nm diameter AuNP tethered to 160 bp dsDNA with a contour length of 52 nm exhibits a symmetric distribution in the x-y plane. Its maximum radial extension of ∼60 nm agrees well with the length of the 160 bp DNA molecule (52 nm), the radius of the attached bead (10 nm) and the overall length of the biotin-streptavidin linkages between the glass surface and the DNA molecule, and between the DNA molecule and the gold nanoparticle (about 3 nm in total).
The restricted motion of a multi-tethered particle, by contrast, results in an elongated distribution (Fig. 1e), to a degree that depends on the distance between the anchor points of the DNA tethers on the surface. Inherent to the self-assembly process of these molecular force sensors, stuck beads and multi-tethered beads are present, sometimes caused by incomplete surface passivation. Even though the use of gold nanoparticles in an excess amount relative to DNA molecules helps to minimise multiple attachment of DNA tethers to a single gold nanoparticle, it cannot be eliminated completely. These immobile particles were found to have much more tightly distributed x-y scatter plots (Fig. 1f). Scatter plots of a streptavidin-conjugated 20 nm AuNP immobilised directly on the mPEG/biotin-PEG layer in the absence of dsDNA suggest negligible flexibility contributed by the biotin-streptavidin linkers and the passivation layer (Fig. 1g).
The diffusion of single-tethered particles matches a normal distribution, which enables us to treat the DNA tether as a classical harmonic oscillator (Fig. 2a). As there is variability among the elastic properties of dsDNA molecules and their binding configuration on the nanoscale, we characterise each tether individually based on the experimentally measured thermal distribution of the particle motion. This approach also enables us to detect small conformational changes and binding/unbinding events, where such stochastic behaviour is typically averaged out in ensemble measurements.
At thermal equilibrium, the probability of finding the particle in a state with energy E is P(E) = e^(−E/κ_B T)/Z, with Z the partition function, κ_B the Boltzmann constant and T the temperature. In the over-damped regime, the particle mass becomes irrelevant because the inertial motion is damped by friction and the kinetic energy is solely determined by thermal motion. We thus obtain the potential of mean force V(r) = −κ_B T ln P(r). We can conclude that the (local) minimum of the potential coincides with the position where the particle is found most often. At this point dV/dr = 0. Without losing generality, the origin is normalised to the minimum potential of the particle distribution. If we (Taylor) expand the potential around its minimum, we find V(x, y, z) ≈ (1/2) κ_H (x² + y²) + (1/2) κ_z z², where κ_H denotes the effective in-plane spring constant and κ_z the spring constant in the z-direction. Displacement in the z-direction involves stretching the DNA and a different spring constant κ_z ≠ κ_H. In general, κ_H could be obtained by taking the second derivative of the experimentally measured log P(x) around its maximum. We observe in our measurements that the in-plane particle position follows a Gaussian distribution P(x) ∝ e^(−x²/2σ²), thus, to an acceptable approximation, the effective spring constant is given by κ_H = κ_B T/σ², where σ is the standard deviation of the particle distribution extracted from a Gaussian fit. Based on the acquired experimental data (Fig. 2b), we obtain average values of the effective spring constant for dsDNA tethers consisting of 60, 90, 120 and 160 base pairs in 150 mM NaCl and 10 mM HEPES buffer at pH 7.6, respectively. In the regime of low stretching force, an elastic polymer follows Hooke's law. 29 Thus, the Young's modulus can be expressed as E = κ_H L/A, where A = πR² denotes the cross-sectional area of the material and L the length of the spring. For simplicity, dsDNA molecules having the same GC content might be thought of as a homogeneous material, 1 of which the Young's modulus is E = 114.9 ± 18.5 pN nm⁻² for R of 10 Å according to our measurements (Fig. 2c). 1,30 This approach thus enables us to quantify κ_H for each tether individually. Here we only focus on observing the effective spring (Hooke's) constant of DNA tethers; determining the persistence length of DNA molecules is more complicated and would require an analysis of the 3D nature of tethered particle motion. 15 Based on this single-particle approach we can not only characterise and resolve differences in the mechanical properties of different dsDNA tethers, but also achieve quantitative force measurements on the sub-piconewton (0.1 pN) scale.
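As a minimal numerical sketch of this analysis (not the authors' code; the trajectory below is synthetic and the parameter values are placeholders), the effective spring constant follows directly from the spread of the in-plane localisations:

```python
import numpy as np

k_B_T = 4.11e-21  # thermal energy at ~298 K, in joules

def effective_spring_constant(x_positions):
    """kappa_H = k_B*T / sigma^2, with sigma the standard deviation of the
    in-plane particle positions (positions given in metres)."""
    sigma = np.std(x_positions)
    return k_B_T / sigma**2  # N/m

# Synthetic trajectory with an ~11 nm spread, 4000 localisations.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 11e-9, size=4000)

kappa_H = effective_spring_constant(x)
print(f"kappa_H = {kappa_H * 1e6:.1f} pN/um")
# Following the Hooke's-law treatment in the text, the Young's modulus of the
# tether would then be E = kappa_H * L / (pi * R**2).
```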
To test our ability to measure forces, we modified our sample chambers to enable the application of an electric field. A function generator provides a modulated potential via a power amplifier connected to two platinum electrodes glued on each side of the flow cell. Assuming that the DNA-bead construct is charged, we would expect the centre of mass of the tether to change depending on the polarity of the applied field (Fig. 3a). In contrast to the previous short-pulse measurements (exposure time of 6 microseconds), where we needed to essentially freeze the position of the reporter bead to accurately sample its motion, here we are only interested in the centre of mass position. We therefore increase the exposure time to 2 ms (see Fig. S1b †). Application of an 80 V peak-to-peak (Vpp) square waveform potential at 0.5 Hz results in a displacement of the tether. The corresponding bead position along the axis of the applied potential reveals oscillatory behaviour after applying a low-pass filter (Fig. 3b). A fast Fourier transform of the raw data exhibits a dominant feature at 0.47 Hz, the fundamental harmonic of the damped harmonic response (Fig. 3d), with the difference from the 0.5 Hz drive attributable to the delay between the camera and the interfacing hardware.
We grouped the localised centre of mass (COM) positions of a tethered particle into POS cycles and NEG cycles according to the polarity of the applied electric field (EF) at the time of acquisition. The displacement was then defined as the distance between the mean COM positions obtained for opposite polarities of the applied potential. A statistical t-test enabled us to exclude tethered molecules with no statistically significant difference between the distributions in the presence of the EF. We could further reduce the imaging noise by averaging to obtain a mean position for each direction of the applied potential, demonstrating clear separation, in this case 7.35 nm at equilibrium, driven by the electrophoretic force (Fig. 3c). Following the harmonic expansion of the potential energy introduced above, we calculate the force exerted on this DNA tether to be 235 ± 14 fN, given a measured effective spring constant of κ_H = 32 ± 2 pN μm⁻¹. We then repeated this analysis on all free tethers (N = 42) visible in a field of view and found a wide variety of displacements ranging from <1 to 8 nm, likely due to differences in the tether charge and details of surface attachment (Fig. 4a). Given the large error bars for the displacement compared to its magnitude, we performed a statistical analysis to test whether the differences in average positions for opposite potentials are indeed significant within the error of our measurement. We found that 20 out of 42 tethers exhibited p < 0.05 (Δ in Fig. 4b). In addition, we could compare particle positions in consecutive POS (Δ′) and NEG (Δ″) potentials, which exhibited dramatically different behaviour. These results suggest that we are able to resolve even few-nm displacements in the centre of mass caused by the electrophoretic force.
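The per-tether analysis described here can be sketched as follows (a simplified illustration, not the authors' pipeline; a Welch t-test is used as one reasonable choice of significance test, and the input numbers are placeholders chosen to mimic the values quoted in the text):

```python
import numpy as np
from scipy import stats

def displacement_and_force(x_pos, x_neg, kappa_H):
    """Field-induced displacement and electrophoretic force for one tether.

    x_pos, x_neg: centre-of-mass positions (m) collected during the positive
    and negative half-cycles of the applied potential.
    kappa_H: effective spring constant of the tether (N/m).
    Returns None if the two distributions are not significantly different.
    """
    t_stat, p_value = stats.ttest_ind(x_pos, x_neg, equal_var=False)
    if p_value >= 0.05:
        return None
    delta_x = abs(np.mean(x_pos) - np.mean(x_neg))  # displacement (m)
    force = kappa_H * delta_x                        # F_e = kappa_H * dx (N)
    return delta_x, force

# Placeholder synthetic data: ~7.4 nm separation, ~10 nm thermal spread.
rng = np.random.default_rng(1)
x_pos = rng.normal(+3.7e-9, 10e-9, 900)
x_neg = rng.normal(-3.7e-9, 10e-9, 900)

result = displacement_and_force(x_pos, x_neg, kappa_H=32e-6)  # 32 pN/um
if result is not None:
    dx, f = result
    print(f"displacement = {dx * 1e9:.2f} nm, force = {f * 1e15:.0f} fN")
```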
In order to further verify the dependence of the displacement on the applied EF potential, we changed the potential from 20 Vpp in 20 V increments up to 80 Vpp to study the force-dependent variation of the displacement. At each applied potential, we performed the same analysis as outlined above for 9 different particles (Fig. 5a). In all cases, we found an increase in the centre of mass displacement with applied voltage, exhibiting an overall roughly linear dependence, as expected for an electrostatic interaction (Fig. 5b). The standard deviation of the displacement between consecutive cycles is consistent with what we expect from the localisation error.
An alternative approach to alter the force on the tethered nanoparticle is to vary its charge for a fixed potential. We followed a similar approach to what has been developed for (micrometre-sized) particles in optical tweezers 31 to obtain the charge on the tethered particle from the measured displacement. The textbook treatment of a freely-diffusing charged particle under the influence of an electric field E in a polyelectrolyte solution shows that the zeta-potential ζ is related to the drift velocity ν_d of the particle by the Helmholtz-Smoluchowski equation under the simplified assumption of a hard sphere, 32,33 ν_d = εε₀ζE/η, where η and εε₀ are the viscosity and the absolute permittivity of the medium. This relation is valid in the absence of electro-osmosis and at low frequencies when relaxation effects of the electric double layer are negligible. Since in the steady state the electrophoretic force F_e is balanced by the viscous drag, we can use the Stokes relation to obtain F_e = −3πdην_d, where d is the particle diameter, which leads to a relation between the zeta-potential and the electrophoretic force.
For a tethered particle, the drag force is zero (on average), but F_e is instead balanced by the retention force of the tether, given by F_r = −κ_H Δx_COM. Therefore, for a charged and tethered particle under the effect of an electric field E, κ_H Δx_COM = 3πdεε₀ζE, so that ζ = κ_H Δx_COM/(3πdεε₀E). To obtain the charge from the zeta-potential, assuming all charge is distributed on the nanoparticle surface, we can use the Gouy-Chapman equation for the surface charge density σ. Considering q = πd²σ, we find q = πd² (8n₀εε₀κ_B T)^(1/2) sinh(eζ/2κ_B T), where n₀ is the number density of univalent ions present in solution and e is the elementary charge. We performed two different experiments to explore this mechanism. In the first experiment, we added excess biotinylated dsDNA to the chamber after an initial displacement measurement and found an increase in the displacement upon addition of charged molecules to the reporter bead surface (Fig. 6a). The additional displacement ranges from 1.24 nm to 3.88 nm, corresponding to an 8.1 mV and a 25.3 mV change in the zeta-potential, respectively, and to 369e and 1227e elementary charges, caused by the number of biotin-dsDNA molecules that bind to each nanoparticle.
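A short numerical sketch of this conversion chain (displacement → zeta-potential → charge); the field strength, salt concentration and buffer properties below are illustrative assumptions, since the effective field inside the flow cell is not quoted in the text:

```python
import numpy as np

# Constants and assumed parameters (illustrative values only)
k_B_T = 4.11e-21              # J, thermal energy at ~298 K
e = 1.602e-19                 # C, elementary charge
eps = 78.5 * 8.854e-12        # F/m, absolute permittivity of water
n0 = 0.150 * 1e3 * 6.022e23   # ions/m^3 for a 150 mM univalent salt

d = 20e-9                     # particle diameter (m)
kappa_H = 32e-6               # N/m (32 pN/um, as in the text)
dx = 7.35e-9                  # measured displacement (m)
E_field = 6.0e4               # V/m, assumed effective field in the channel

# Zeta-potential from the force balance kappa_H*dx = 3*pi*d*eps*zeta*E
zeta = kappa_H * dx / (3 * np.pi * d * eps * E_field)

# Gouy-Chapman charge: q = pi*d^2 * sqrt(8*n0*eps*kT) * sinh(e*zeta / (2*kT))
q = np.pi * d**2 * np.sqrt(8 * n0 * eps * k_B_T) * np.sinh(e * zeta / (2 * k_B_T))
print(f"zeta = {zeta * 1e3:.1f} mV, charge = {q / e:.0f} elementary charges")
```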
In a second demonstration, we changed the zeta-potential by varying the pH of our working buffer. Because the gold nanoparticles that we used are functionalised and covered by excess streptavidin molecules, the surface charge on the particle is dominated by the protein rather than by the gold nanoparticle itself. Streptavidin has an isoelectric point (pI) between 5 and 6. 34 We therefore measured particle displacements at pH 5.5 and pH 7.6, with the flow chamber being washed using the corresponding buffer in between. At pH 5.5, each tethered particle carries only a few charges and is nearly electrically neutral. The particle displacements are considerably larger at pH 7.6, increasing from 1.51 nm to 5.84 nm, corresponding to a 32.8 mV change of the zeta-potential (particle 2 in Fig. 6b).
The counterions that screen a charged surface cause electro-osmotic (EO) flow under an external electric field, even though the neutral PEG coating covalently bound to the silica surface should shield this effect and hence reduce the EO force. The EO contribution is difficult to model quantitatively due to possible local surface nonuniformity. To rule out this effect, we predicted the direction of EO flow from the known polarity of the applied potential and confirmed it by observing the direction of motion of freely diffusing particles or debris in the buffer solution (Fig. 7c). Since our tethered particles are negatively charged at pH 7.6, we found that, in the presence of the EF, the direction of the tethered-particle motion is in fact opposite to the direction of the flow. In addition, the relation between the external electric field and the tethered particle displacement derived above holds when the EO flow is negligible. Observing a large range of displacements between nearby tethered particles (Fig. 7b) confirms experimentally that the electrophoretic forces are much larger than those caused by the EO flow, which would otherwise cause comparable displacements of adjacent particles.
Taken together, we have demonstrated a tethered particle motion assay with 20 nm diameter reporter beads, maintaining simultaneous nanometre localisation precision and microsecond temporal resolution in a wide-field imaging configuration. These experimental capabilities enabled us to efficiently identify free, single tethers and characterise their mechanical properties on a molecule-by-molecule basis. By applying an alternating potential across our sample chamber, we could further characterise the charge properties of the tether, accessing forces on the sub-piconewton scale. In addition, by monitoring the position of reporter beads driven by the electrophoretic force, we have experimentally demonstrated the dependence of the variations in their surface potential on the surrounding pH value as well as on the binding of charged biomolecules. Our results demonstrate a highly efficient, high-throughput approach to monitor the mechanics and dynamics of biomolecular interactions based on their effective charge.
Flow-cell preparation and tether assembly
Our flow chamber is prepared from a thin glass slide and a microscope coverslip bonded by 80 μm thick double-sided Scotch tape (3M). The coverslip surface is passivated with a mixture of monomethoxy polyethylene glycol-silane (mPEG-silane, molecular weight 2000) and biotinylated PEG-silane (molecular weight 3400) to prevent non-specific binding of gold nanoparticles onto the surface and to assemble double-stranded DNA (dsDNA) tethers via avidin-biotin interactions. The surface passivation protocol is adapted from that of the Dogic lab. 35 The coverslips were sequentially bath-sonicated in 2% Hellmanex in MilliQ water, isopropanol solution and MilliQ water, each for 5 minutes, and subsequently blow-dried with nitrogen. Clean and dry coverslips were then oxygen plasma-treated for 8 minutes to create a negatively charged surface. At the same time, a final mixture of 10 mg mL⁻¹ mPEG-silane and 50 μg mL⁻¹ biotin-PEG-silane was dissolved in 1% acetic acid in ethanol. The plasma-cleaned coverslips were stacked with a spread of 80 to 100 μL of the mixture solution and stored in Petri dishes. After incubation in an oven at 70 °C for 45 minutes, the pegylated coverslips were rinsed and nitrogen blow-dried again, before being assembled into flow chambers.
The tether assembly comprised two main phases: immobilisation of dsDNA on the glass surface and attachment of gold nanoparticles to the dsDNA, both via a biotin-streptavidin linkage. For all measurements, a mixture of 150 mM potassium chloride (KCl) and 10 mM HEPES at pH 7.6 was used as the working buffer (WB), unless specified otherwise. To construct the gold nanoparticle functionalised dsDNA tethers, 10 μM streptavidin in WB was added into the flow cell and incubated for 5 minutes to ensure that all biotin-PEG binding sites on the surface were entirely occupied. Then 5′-end biotinylated dsDNA (500 nM, purchased from ADTBio) in WB was injected into the chamber. After this, streptavidin-functionalised 20 nm AuNPs (diluted to 4 pM, BBI) were added into the chamber to finalise the assembly of the tethers. Each component was typically incubated for 5 minutes, and the chamber was washed with WB before adding the next solution. Tether construction depends on the surface quality and binding site density. We therefore explored various mPEG to biotin-PEG ratios to maximise the population of qualified tethers while keeping the separation between particles on the order of 2 μm.
Dark-field microscopy
Our dark-field microscope setup 28 was built with a high numerical aperture (NA) objective (Olympus, 1.42 NA, 60×) to create total internal reflection illumination at the glass-water interface. Briefly, two micromirrors were mounted as close as possible to the objective entrance pupil to couple incident light (520 nm wavelength) into, and extract reflected light from, the detection path. 36 The scattered light from individual gold nanoparticles was collected by the same objective and focused by a tube lens onto a CMOS camera (Point Grey Grasshopper GS3-U3-32S4M-C). Using a 200 mm focal length imaging lens in combination with the camera and the objective yields an effective pixel size of 51.7 nm and a FOV of 25 × 25 μm². We stabilised the focus position using a piezo-driven feedback system reading the position of the reflected illumination beam with a second CMOS camera.
To track particles at high speed and maintain high localisation precision without causing tether detachment by heating, we used pulsed illumination synchronised with the camera exposure. The illumination laser was set to an intensity of 2.73 kW/cm² and triggered by a TTL signal with a 5% duty cycle at 1 kHz. The imaging camera was synchronously operated in triggering mode and the exposure time was set to the lowest setting of 6 μs. The high-speed data were analysed to select qualified tethers and extract effective spring constants before starting any force or charge measurements. In addition, we used a reference laser beam (445 nm) directed at an empty region of the imaging camera, providing a synchronised read-out of the applied electric field. After acquiring the high-speed data, we switched the pulsed illumination beam (520 nm) to continuous-wave mode and an exposure time of 2 ms for the force and charge measurements.
Assessment of tethered particle behaviour
In order to accurately extract effective spring constants, and to measure the force exerted on DNA tethers in the presence of an external electric field, it is crucial to ensure that a single gold nanoparticle is attached to the free end of only one DNA tether and that both the dsDNA and the tethered bead are free to undergo Brownian motion driven by thermal fluctuations in solution. Tethers typically fell into three categories: 1. Fully mobile single-tethered particles (Fig. 1d); 2. Partially mobile multi-tethered particles (Fig. 1e); and 3. Immobile particles (Fig. 1f). We applied an intensity threshold to distinguish particle candidates from the background noise. To locate each single particle in the FOV, we used the coordinates of the pixel having the highest intensity within the point spread function (PSF). Projections of the particle in the x-y plane were plotted by fitting the PSF to a 2D Gaussian function to classify particles. We assessed the symmetry of the scatter plots by dividing them into eight equal sectors and probing the variance of the projection density and the radial distribution function (RDF) of all sectors to exclude multiply tethered particles with asymmetric motion. Similar to multi-tethered particles, we also discard stationary particles. Overall, the selection criteria for identifying a qualified tether were chosen as: 1. The two-dimensional particle distribution has a radially symmetric shape whose ellipticity is <1.1; 2. The two-dimensional particle distribution has a maximum amplitude close to the contour length of the DNA molecule (0.8L < r < 1.2L); 3. The standard deviation of the number of points over all sectors STD-npts < 100 and the standard deviation of the RDFs over all sectors STD-rdf < 0.002.
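A schematic version of these selection criteria (helper name and data layout are hypothetical; the radial-distribution-function check is omitted for brevity, and the maximum radial extent is used as a proxy for the "maximum amplitude" criterion):

```python
import numpy as np

def is_qualified_tether(x, y, contour_length, n_sectors=8):
    """Apply the single-tether selection criteria described in the text."""
    x = x - np.mean(x)
    y = y - np.mean(y)

    # 1. Radial symmetry: ellipticity of the scatter < 1.1
    ellipticity = max(np.std(x), np.std(y)) / min(np.std(x), np.std(y))
    if ellipticity >= 1.1:
        return False

    # 2. Maximum radial extent close to the DNA contour length: 0.8 L < r < 1.2 L
    r = np.hypot(x, y)
    if not (0.8 * contour_length < np.max(r) < 1.2 * contour_length):
        return False

    # 3. Uniform occupancy of the angular sectors (std of per-sector counts < 100)
    angles = np.arctan2(y, x)
    sector = np.floor((angles + np.pi) / (2 * np.pi / n_sectors)).astype(int)
    sector = np.clip(sector, 0, n_sectors - 1)
    counts = np.bincount(sector, minlength=n_sectors)
    if np.std(counts) >= 100:
        return False

    return True
```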
After excluding faulty tethers, the qualified molecular force sensors were used for further measurements in the presence of an external EF. About 70% of tethers were fully mobile single-tethered particles. Finally, we took one movie with pulsed laser excitation at 6 μs exposure time in order to inspect the viability of every single tether in a final screening test after switching off the EF. Only data from those tethers which recovered from the oscillation driven by the electrophoretic force were analysed. In this work, of all the measurements carried out, about one in ten of the tethers analysed satisfied all eligibility criteria for the final force measurements. Overall, the filtering criteria for a qualified tether were: (a) it should be a fully mobile single-tethered particle; (b) the p-value for evaluating the difference of its distributions at opposite polarities of the applied EF should be less than 0.05; (c) the p-value for evaluating the difference of its distributions within the same EF direction should be greater than 0.9; and (d) after switching off the external EF, both the reporter bead and the molecule remain active and revert to the normal radial distribution driven by thermal fluctuations.
Implementation of the oscillating electric field
After measuring the effective spring constant of individual tethers in the FOV at high speed, we applied an external electric field across the flow chamber causing the particle equilibrium position to shift due to the electrophoretic force.To apply an electric field parallel to the coverslip surface, two platinum electrodes were placed on both sides of the flow chamber.Both electrodes were wired to a power amplifier with amplitude and frequency controlled by an external function generator.In order to ensure reliable electrical connections between two electrodes and the electrolyte, epoxy was used to firmly attach the electrodes onto the surface and create barriers outside both the inlet and outlet of the flow chamber.Enough buffer solution was added inside the barriers to ensure that electrodes were completely immersed, and thereby in good contact with electrolyte inside the flow chamber.The resistance between the two platinum electrodes (2 < R < 5 MΩ) was measured and monitored to confirm good electrical contact.
Fig. 1
Fig. 1 Experimental approach and representative tethered particle behaviour. (a) Schematic of the microscope setup and diagram of the DNA-tethered particle construct in the presence of an external electric field applied via two platinum electrodes. (b) Dark-field image of single DNA tethers with 20 nm gold nanoparticles. Scale bar: 2 μm. Exposure time 6 μs. (c) Localisation precision as a function of the number of collected photons and camera exposure time. (d) Radially symmetric scatter plot of a freely mobile single-tethered particle. Each data set consists of 3800 points acquired with 6 μs exposure time over 19 s for a selected particle. (e) Scatter plot of a partially mobile, multi-tethered particle. (f) Scatter plot of an immobile particle from a DNA-tether assay. (g) Scatter plot for a 20 nm gold particle immobilised directly on microscope cover glass. Scale bars: 10 nm.
Fig. 2
Fig. 2 Particle diffusive motion and measurement of the effective spring constant by thermal fluctuations. (a) Left: Thermal fluctuation-induced diffusive motion of a single-tethered particle. Right: One-dimensional probability density function P(x) and fit to a normal distribution. (b) Distributions of effective spring constants extracted from P(x) of 60, 90, 120 and 160 bp dsDNA (N = 88, 73, 197, 210). (c) Effective spring constants versus the length of the DNA tethers; one base pair equals 0.34 nm. We obtained the product of the Young's modulus E and the cross-sectional area A from the fit (red line), E = 114.9 ± 18.5 pN nm⁻² for R = 10 Å. 1,30
Fig. 3
Fig. 3 Response of a molecular sensor to an applied alternating electric field.(a) Schematic of a single tethered particle in the presence of an electric field.Centre of mass positions of the particle were grouped into POS cycle and NEG cycle based on the direction of the applied electric field.(b) Particle oscillating along the x-axis in the presence of an applied 80 V peak-to-peak potential alternating at 0.5 Hz (camera frame rate is 150 FPS).(c) Particle displacement calculated from the distance between the COM obtained from the average positions of all POS cycles and all NEG cycles, respectively, in one measurement.For the particle shown here, the particle displacement is 7.35 nm with 18 cycles in total.(d) Fourier transform of the particle trace shown in (b).
Fig. 4
Fig. 4 Particle displacements for different tethers.(a) Particle displacements from a single field of view ranging from 0.15 nm to 8.23 nm (N = 42).(b)A t-test was carried out to exclude tethered molecules with no statistically significant difference between distributions where the p-value is greater than 0.05 in the presence of potential (Δ).Corresponding t-test results (Δ' and Δ'') reveal that particle displacements are similar within the same potential polarity.
Fig. 5
Fig. 5 Displacement dependence on applied electric field.(a) Particle displacements at different applied E-field potentials.(b) Measured displacement increment of particle 7 in (a) under various applied electric field potentials.The distribution represents the variations between cycles of alternating the potential.
Fig. 6
Fig. 6 Dependence of particle displacements on nanoparticle charge for a fixed potential.(a) Binding of additional biotinylated dsDNA molecules to a streptavidin functionalised particle.(b) Increase in particle displacements upon binding additional biotinylated dsDNA to the reporter bead surface.(c) Schematic indicating a change of tethered particle surface charge caused by buffer pH value alteration.(d) Corresponding change of particle displacements.
Fig. 7
Fig. 7 A negatively charged particle moves in the direction opposite to the direction of electro-osmotic flow in the presence of EF. (a) An illustration of the wiring of the electrodes and our definition of the direction of motion. (b) Directions of motion of the tethered particle (the same one as in Fig. 3) in the first 4 cycles. The dotted line indicates the change of polarity of the applied potential. (c) Directions of flow within the corresponding cycles in (b). We predicted the direction from the known polarity of the applied EF, confirmed by the motion of unbound objects (arrows). Scale bar: 2 μm. (d) Particle displacements due to nanoparticle surface charge variations (slope).
"Physics",
"Engineering"
] |
SPIN-MOMENTUM LOCKED MODES ON ANTI-PHASE BOUNDARIES IN PHOTONIC CRYSTALS
An anti-phase boundary is formed by shifting a portion of the photonic crystal lattice along the direction of periodicity. A spinning magnetic dipole is applied to excite edge modes on the anti-phase boundary. We show the unidirectional propagation of the edge modes, a behaviour also known as spin-momentum locking. Band inversion of the edge modes is discovered when we sweep the geometrical parameters, which leads to a change in the propagation direction. Also, an optimized source is applied to excite the unidirectional edge mode with high directivity.
Introduction
The quantum spin-Hall effect indicates that the spin of the electron is locked to the direction of propagation [1]. The Z₂ index, or spin Chern number, which is a topological invariant of the given quantum system, is defined to verify whether a spin Hall conductance exists on the edge of the bulk material [2,3]. After introducing the Z₂ topological index to analyze the system, a variety of unidirectional edge modes in quantum systems were discovered [4,5,6]. By analogy with the quantum spin-Hall effect of electrons, spin-momentum locking phenomena can also be found in photonic topological insulators [7,8,9,10,11]. The direction of propagation is still used to define the 'momentum' of the light, while the concept of 'spin' is not as clear as the spin of the electron. It may refer to the bonding (antibonding) states of electric and magnetic fields [7], left-hand (right-hand) circular polarizations of electric fields [8], or clockwise (anticlockwise) circulations of coupled resonator optical waveguides [11].
Spin-momentum locked edge modes can also be discovered in trivial optical systems without topological properties, such as photonic crystal waveguides [12,13,14], surface plasmon polaritons [15,16], and even dielectric waveguides [16]. A pair of orthogonal dipoles with a ±π/2 phase difference, which represents opposite spin directions, is applied to excite the unidirectional edge modes in these systems. The spin of the dipole sources couples to the spin of the evanescent waves near the edges, giving rise to the spin-momentum locked edge modes. An anti-phase boundary is created by shifting the crystal by one-half period along the propagation direction. It can be observed in electronic systems and can be treated as a defect in the crystal that breaks the translation symmetry [17,18]. Accurate atomic manipulation is required in order to design anti-phase boundaries in electronic systems [19,20]. It is easier to design an anti-phase boundary in a photonic system, which may help us gain a deeper understanding of how the energy is distributed near the anti-phase boundary.
In this paper, we create an anti-phase boundary in a photonic crystal structure by shifting the structure along the direction of periodicity. Unidirectional propagation of the edge modes is discovered. To the authors' best knowledge, spin-momentum locked edge modes have not previously been found on anti-phase boundaries in quantum or optical systems. This finding not only suggests that propagating edge modes along anti-phase boundaries may exist in quantum systems, but also provides a new way to design chiral waveguides in photonic crystal structures.
Spin-momentum locked modes
As shown in Fig. 1, an anti-phase boundary is created by shifting the photonic crystal along the lattice direction a₂ by −a₀/2, which is one-half period. The geometry and material parameters are given in Fig. 1. Here we only investigate the transverse magnetic (TM) modes of the electromagnetic waves, for which only E_z, H_x, and H_y are nonzero. According to Ref. [8], tuning the distance R between the center of the diamond-shaped unit cell and the centers of the cylinders changes the topological properties of the crystal. When R < a₀/3, the structure behaves as a topologically trivial material with a Z₂ index equal to zero. Band folding occurs when R = a₀/3, since the lattice vectors of the unit cell change, as illustrated in Fig. 2. The size of the unit cell shrinks while the Brillouin zone expands. If the original Brillouin zone (R ≠ a₀/3) is chosen, the bands of the expanded Brillouin zone (R = a₀/3) must be folded to fit into the original one, which leads to the creation of a Dirac cone at the Γ point. Further increasing R opens a band gap at the Γ point and turns the trivial crystal into a topological insulator with a nonzero Z₂ index. Band inversion happens at the Γ point when R > a₀/3, with dipole modes in the higher band and quadrupole modes in the lower band. Unidirectional edge modes can be found at the boundary between the topological insulator (R > a₀/3) and the trivial crystal (R < a₀/3).
Here we place the topological insulator with R > a0/3 on both sides of the boundary, as shown in Fig. 1b. However, the topological properties of the photonic crystal cannot explain the edge modes discovered on the anti-phase boundary, since the shift changes neither the band diagram nor the Z2 index of the crystal. As shown in Fig. 3a, the odd edge modes (anti-symmetric distributions) and the even edge modes (symmetric distributions) arise from the mirror symmetry of the super-cell. The field distributions of the edge modes calculated with COMSOL are given in Fig. 3b. The Ez distributions at the points P1 and P2 defined in Fig. 3a are the same, while the Poynting vectors point in opposite directions. Here we define the counterclockwise rotation of the Poynting vectors on the left side of the anti-phase boundary as spin-up and the clockwise rotation as spin-down. Comparing P1 and P2 shows that edge modes with the same frequency but opposite k vectors have different spin directions. Comparing the fields at P2 and P3 also shows that edge modes with the same k vector can have opposite spin directions.
To excite the edge modes, a circularly polarized magnetic dipole is chosen as the source in our driven-mode simulation. By observing the Poynting vectors in Fig. 3c, we conclude that the magnetic dipole (x̂ − iŷ)/√2 behaves as the spin-up source and (x̂ + iŷ)/√2 as the spin-down source. The excitation frequency is chosen to lie inside the band gap of the bulk modes, so that only the odd edge modes are excited, as can be concluded from Fig. 3a. We apply the spin-up source to the shifted structure to excite the spin-up edge mode at P1. Since the group velocity at P1 is positive, the wave propagates along the a2 direction. The simulation result shown on the left side of Fig. 3d matches this prediction. Similarly, a spin-down source excites the edge mode propagating along −a2, as shown on the right side of Fig. 3d.
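As a quick check on what distinguishes the two sources, note that the complex amplitudes (x̂ − iŷ)/√2 and (x̂ + iŷ)/√2 describe counter-rotating dipoles once a harmonic time factor is restored. The following minimal NumPy sketch (not from the paper; it assumes an e^{+iωt} time convention, and the opposite convention simply swaps the two labels) verifies that the two sources rotate with opposite handedness:

```python
import numpy as np

m_a = np.array([1.0, -1.0j]) / np.sqrt(2)   # (x - i y)/sqrt(2)
m_b = np.array([1.0, +1.0j]) / np.sqrt(2)   # (x + i y)/sqrt(2)

def rotation_sense(m, n=400):
    """Sign of the net angle swept by Re[m * exp(+i w t)] over one period:
    +1 counterclockwise, -1 clockwise (assumed e^{+i w t} convention)."""
    wt = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    traj = np.real(np.outer(np.exp(1j*wt), m))        # (n, 2) instantaneous dipole vector
    ang = np.unwrap(np.arctan2(traj[:, 1], traj[:, 0]))
    return int(np.sign(ang[-1] - ang[0]))

print(rotation_sense(m_a), rotation_sense(m_b))       # opposite signs: opposite handedness
```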
Tuning the parameter R to R < a0/3 dramatically changes the properties of the edge modes. According to Ref. [8], the band gap closes and reopens at the Γ point when R is tuned from R > a0/3 to R < a0/3. The even edge mode rises in frequency while the odd mode falls. As shown in Fig. 4a, the even mode lies above the odd mode inside the band gap when R = 0.3a0, which is opposite to the result shown in Fig. 3a. If we apply the spin-up source (x̂ − iŷ)/√2 with a normalized frequency inside the band gap, it excites the spin-up edge mode at P2, as shown in Fig. 4b. Since the group velocity at P2 is negative, the wave propagates along the −a2 direction, which is verified by the left part of Fig. 4c. This indicates that both topological and trivial photonic crystals can form an anti-phase boundary that supports spin-momentum locked edge modes, and that a source with the same spin excites waves with opposite propagation directions in the two systems.
Band inversion of edge modes when tuning the offset
By tuning the offset t defined in Fig. 1b, we obtain a series of dispersion relations, as shown in Fig. 5a and Fig. 5b. Since the mirror symmetry is broken for t ≠ −0.5a0, we cannot define odd and even modes with respect to the mirror plane in that case. For the trivial unit cell, varying the structure from the anti-phase boundary with t = −0.5a0 to the unshifted two-dimensional photonic crystal with t = 0 brings the dispersion curve closer to the projected bulk band diagram. The variation of the dispersion curves for the structure consisting of topological unit cells is more complicated. As shown in Fig. 5a, the two dispersion curves converge at the Γ point and form a degenerate point there when the offset is t = −0.2085a0. If we continue changing t from −0.2085a0 to 0, the gap between the two edge modes reopens and grows until the two curves merge into the bulk bands.
Band inversion of the edge modes occurs at the Γ point when the offset crosses the degenerate case t = −0.2085a0. As shown in Fig. 5c, the Ez distributions in the higher band for t = −0.2a0 are the same as those in the lower band for t = −0.22a0. When k2 is sufficiently far from the Γ point, the field distributions look similar in the higher or lower band for different offsets. We can therefore conclude that only the edge modes close to the Γ point are inverted as t crosses −0.2085a0 within the range −0.5a0 < t < 0, which is similar to the band inversion of the bulk modes in Ref. [8].
Edge modes in gradual shift structure
We can also create an anti-phase boundary by gradually shifting the unit cells on the two sides of the boundary, as shown in Fig. 6a. Here the unit cell with R = 0.345a0 is studied. From the dispersion relations in Fig. 5a we conclude that edge modes that decay rapidly into the bulk are found only when the offset between adjacent unit cells is large enough: for offsets with |t| < 0.1a0, the dispersion curves lie so close to the bulk band diagram that their energy is not well confined to the boundary. Hence an offset of t = 0.05a0 between adjacent unit cells on the two sides of the anti-phase boundary is chosen to prevent the appearance of redundant edge modes. Far from the boundary the unit cells look identical, which differs from the abruptly shifted structure, where the offset between the two sides persists everywhere. In this structure there is no long-range offset between the two sides, only a local shift of the unit cells near the boundary. The dispersion relation and field distribution are shown in Fig. 6b and Fig. 6c, respectively, and are similar to those of the abruptly shifted case in Fig. 3a and Fig. 3b. The unidirectional propagation of the edge modes is again observed when we excite the structure with sources of different spin directions, as shown in Fig. 6d.
Optimization of the source
By optimizing the combination of two orthogonal magnetic dipoles, we can achieve edge modes with better directionality. The magnetic dipole is parameterized by two angles θ and φ as in Eq. 1, with 0 < θ < π/2 and −π < φ < π. The spin-up ((x̂ − iŷ)/√2) and spin-down ((x̂ + iŷ)/√2) sources mentioned above are the particular cases in which (θ, φ) in Eq. 1 are set to (π/4, π/2) and (π/4, −π/2), respectively. Following Ref. [21], we also define the directionality D of the edge mode in terms of c+ and c−, where c+ (c−) is the line integral of the Poynting vector measured at the top (bottom) of the structure, as shown in Fig. 4c. If |D| is close to 1, the system has good directionality, while no directionality is observed when D = 0. As shown in Fig. 7, the signs of D for the spin-up and spin-down source locations are opposite for R > a0/3 and R < a0/3, which verifies the conclusion that the wave propagates in opposite directions for the same source when R is tuned.
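The defining expression for D is not reproduced above, so as an illustration we assume the common normalization D = (c+ − c−)/(c+ + c−), which has the stated properties (|D| near 1 for one-way propagation, D = 0 for symmetric leakage). A small sketch with hypothetical flux values:

```python
import numpy as np

def directionality(c_plus, c_minus):
    """Assumed form D = (c+ - c-)/(c+ + c-), where c+/- are line integrals of the
    Poynting vector through the top/bottom boundaries of the simulation domain."""
    return (c_plus - c_minus) / (c_plus + c_minus)

# Hypothetical fluxes (arbitrary units) for a spin-up source exciting the edge mode.
print(directionality(0.95, 0.05))   # ~0.9  -> strongly unidirectional (towards the top)
print(directionality(0.50, 0.50))   #  0.0  -> no directionality
print(directionality(0.05, 0.95))   # ~-0.9 -> propagation reversed (e.g. after tuning R)
```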
Conclusion
Spin-momentum locked edge modes are found on anti-phase boundaries formed by shifting the two halves of a photonic crystal along the direction of periodicity. By applying magnetic dipole sources with different spin directions, we can excite edge modes propagating in opposite directions. An inversion of the edge modes is revealed when the distance R between the center of the unit cell and the cylinders is adjusted, which leads to opposite propagation directions for the same source. Tuning the offset of the unit cells on the two sides can also cause band inversion of the edge modes in the topologically non-trivial photonic crystal system. Optimizing the source gives the edge modes better directionality and helps us further understand the system, making it more practical for unidirectional wave-propagation applications.
Scalable Multi-Core Dual-Polarization Coherent Receiver Using a Metasurface Optical Hybrid
The space-division multiplexed (SDM) coherent transmission using a multi-core fiber (MCF) is a promising technology for further increasing the capacity of optical communications and interconnects. For broader application of SDM systems, dual-polarization (DP) coherent receivers that can be directly coupled to MCFs without bulky fan-in/fan-out (FIFO) devices and polarization beam splitters (PBSs) are desirable. Unlike intensity-modulation direct-detection (IMDD) receivers, however, scaling a DP coherent receiver to multiple spatial channels in a compact form factor is not straightforward. Here, we propose and demonstrate a compact and scalable surface-normal multi-core DP coherent receiver using an ultrathin dielectric metasurface (MS). The MS is composed of silicon nanoposts on quartz, which are judiciously designed to function as the DP optical hybrids and focusing lenses for all spatial channels. Using the fabricated device, simultaneous homodyne detection of DP 64-ary quadrature-amplitude-modulation signals from a four-core fiber is demonstrated with an error vector magnitude of less than 3% for all spatial/polarization channels from 1530 to 1570 nm. The demonstrated receiver can be assembled into a compact module to enable low-cost SDM coherent receivers without bulky PBS and FIFO devices.
I. INTRODUCTION
Over the past few decades, numerous key technologies have been introduced in optical communications to address the continuously growing data traffic [1], [2]. In the 2010s, dual-polarization (DP) digital coherent systems were put into practical use; they have substantially expanded the transmission capacity by fully utilizing the optical phase and polarization degrees of freedom, in addition to the optical intensity employed in intensity-modulation and direct-detection (IMDD) systems. Digital coherent systems, which initially became dominant in backbone and metro networks, are now expanding to shorter-reach networks such as datacenter interconnects and optical access [3], [4], [5].
Another emerging technology to expand the transmission capacity is the space-division multiplexing (SDM) that utilizes multiple spatial modes of a multi-core fiber (MCF) or a multi-mode fiber (MMF) as an additional transmission degree of freedom [6], [7], [8].In particular, use of uncoupled MCFs is expected in the near-future practical systems since they do not require power-hungry multi-input multi-output (MIMO) processing to compensate for complex modal coupling among spatial channels [9], [10].SDM systems using dual-core uncoupled MCF have recently been deployed in submarine links, which will be in service in a few years [11].We can expect that such SDM systems would extend to shorter-reach networks, where compact and cost-effective multi-core DP coherent transceivers would be desired.
In order to handle multi-core DP coherent signals in a scalable manner, integrated transceivers that can be coupled directly to MCFs without bulky fan-in/fan-out (FIFO) devices and polarization beam splitters (PBSs) would be ideal. For IMDD systems, multi-core integrated transceivers using vertical-cavity surface-emitting lasers and surface-illuminating photodetector (PD) arrays have been demonstrated [12]. Unlike the IMDD format, however, dense 2D arrays of coherent receivers are challenging to realize due to their inherent complexity. As a result, to our knowledge, a coherent receiver that can directly receive multi-core DP coherent signals from an MCF has never been demonstrated.
A straightforward approach to scaling conventional integrated waveguide-based receivers to multiple 2D spatial channels is to use arrayed grating couplers (GCs). Indeed, single-polarization signals from an MCF can be efficiently coupled to multiple single-mode waveguides using fan-shaped GCs [13], [14], [15]. In a DP coherent receiver, however, both polarization components need to be coupled to multiple waveguides simultaneously, which makes the use of GCs challenging. A polarization-separating (PS) GC that guides the x- and y-polarization components to separate waveguides in different directions has been explored [16], [17], [18]. However, PSGCs generally require a large footprint and/or complicated routing to two independent waveguides per PSGC, which makes these devices difficult to scale to a dense 2D coherent receiver array. To this end, we have recently proposed an alternative approach to realizing a scalable multi-core DP coherent receiver using a dielectric metasurface (MS) and a high-speed PD array [19]. The MS consists of an array of silicon (Si) nanoposts, designed judiciously to focus the signal light from each core onto five PDs with precisely controlled polarization-dependent phases. By mixing with local oscillator (LO) light, the IQ signals of both polarizations were retrieved from five photocurrent signals without using a FIFO device or PBS. The designed MS was fabricated on a compact silicon-on-quartz (SOQ) chip to demonstrate simultaneous demodulation of four DP coherent signals from a four-core step-index MCF at a wavelength of 1550 nm.
In this article, we provide a rigorous analysis of the receiver sensitivity of the proposed receiver and derive an explicit condition for the MS to achieve the highest sensitivity. Furthermore, we provide a detailed design methodology for the MS, comprehensive experimental results using an SMF, and additional experimental investigations of broad-wavelength operation from 1530 to 1570 nm.
Throughout this paper, we employ the following mathematical notation: ⟨c(t)⟩ denotes the expected value of a scalar function c(t), ‖v‖ is the Euclidean norm of a vector v, and M^T, M†, ‖M‖_F, and tr(M) are the transpose, the adjoint (conjugate transpose), the Frobenius norm, and the trace of a matrix M, respectively.
II. DEVICE CONCEPT
The configuration of the proposed SDM DP coherent receiver is shown in Fig. 1. Fig. 1(a) illustrates the case where a single-channel DP coherent signal is coupled from an SMF [20]. Signal light emitted from the SMF is incident on the MS and is focused with equal intensity at five points forming a regular pentagon on the surface of a PD array. The optical phases at the respective focal points are designed to maximize the detection sensitivity, as discussed in the next subsection. In the optimal case, the phases of adjacent focal points differ by 2π/5 and 4π/5 for the x and y polarizations, respectively, as shown in Fig. 1(b). Collimated LO light with 45° linear polarization is then combined and irradiated onto the same PD array, so that signal-LO beat components are detected. The nonlinear signal-signal beat noise can be eliminated effectively by taking the difference between the photocurrents of adjacent PDs. We thus obtain four real-valued electrical signals, from which the four distinct elements of the DP coherent signal (i.e., the in-phase and quadrature signals of both polarization components) can be retrieved through a 4 × 4 linear operation in digital signal processing (DSP).
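To make this retrieval step concrete, the sketch below simulates the idealized situation described above: each polarization of the signal is focused onto five PDs with phase increments of 2π/5 (x) and 4π/5 (y), a uniform 45°-polarized LO is added, adjacent photocurrents are subtracted, and a 4 × 4 matrix (estimated here numerically) maps the four differences back to the IQ components. This is a toy model under assumptions not stated in the paper (equal splitting, uniform LO, unit responsivity, no noise), not the authors' actual DSP.

```python
import numpy as np

N = 5                                    # PDs arranged on a regular pentagon
phi_x = 2*np.pi*np.arange(N)/5           # x-pol phases at the five focal points
phi_y = 4*np.pi*np.arange(N)/5           # y-pol phases at the five focal points
lo = 10.0                                # LO amplitude, 45-deg linear (equal x and y parts)
h = 1.0/np.sqrt(N)                       # equal splitting of the signal into N spots

def photocurrents(a):
    """a = [Re a1, Im a1, Re a2, Im a2]: normalized DP IQ amplitudes of the signal."""
    a1, a2 = a[0] + 1j*a[1], a[2] + 1j*a[3]
    ex = lo/np.sqrt(2) + h*a1*np.exp(1j*phi_x)   # total x-pol field at each PD
    ey = lo/np.sqrt(2) + h*a2*np.exp(1j*phi_y)   # total y-pol field at each PD
    return np.abs(ex)**2 + np.abs(ey)**2          # square-law detection, unit responsivity

# Differences of adjacent photocurrents: S_m = I_m - I_{m+1}, m = 0..3.
C = np.zeros((4, N))
C[np.arange(4), np.arange(4)] = 1.0
C[np.arange(4), np.arange(4) + 1] = -1.0

def differences(a):
    return C @ photocurrents(a)

# LO-LO and signal-signal terms are identical on every PD in this idealized model,
# so they cancel in the differences, leaving a linear map from a to S; probe it.
M = np.column_stack([differences(e) for e in np.eye(4)])

a_true = np.array([0.3, -0.7, -0.5, 0.2])         # an arbitrary DP IQ symbol
a_hat = np.linalg.solve(M, differences(a_true))
print(np.allclose(a_hat, a_true))                 # True: both polarizations recovered
```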
We should note that this device relies on the same principle as the waveguide-based coherent receiver with five PDs demonstrated previously by the authors [21]. Compared with conventional DP coherent receivers, which require eight PDs (or four balanced PDs) [22], [23], [24], the number of PDs in our receiver is reduced to five. As a result, we can minimize the footprint of the PD array. Moreover, owing to its surface-normal configuration, the proposed scheme can naturally be scaled to receive spatially parallelized coherent signals. Since the MS acts as a focusing lens, multiple signal beams from different input positions are focused to five points at shifted positions. This feature enables simultaneous detection of the signals from all cores when an MCF is placed at the input, as shown in Fig. 1(c). Importantly, this can be accomplished without modifying the MS or other optical components. Consequently, this configuration offers inherent scalability with respect to the number of channels, enabling compact implementation of SDM digital coherent receivers.
A. Theory
Here, we present an analytical model of a generalized surface-normal DP coherent receiver comprising N PDs. The electric-field distributions of the x- and y-polarization components on the PD array surface are described by Jones vectors for the signal and LO lightwaves, whose elements are the normalized complex IQ amplitudes of the two polarizations. With the field at the PD array surface oscillating at the carrier angular frequency ω_c, the photocurrent I_n (n = 1, ..., N) of the n-th PD is described by (2), where R is the responsivity of the PD. The first term in (2), P_bn, denotes the LO-LO beat component, spatially integrated over the entire detection area of the n-th PD. Similarly, P_a1n and P_a2n in the second term of (2) represent the signal-signal beat components for the x and y polarizations, respectively. Finally, the third term in (2) represents the signal-LO beat component, expressed in terms of Q_n and the IQ amplitude vector A. To eliminate the signal-signal beat terms, which are nonlinear with respect to A, differential photodetection is generally required. This operation is written as (8), where S denotes the four real-valued signal components and C is a 4 × N matrix representing the differencing operation. To remove the signal-signal beat components, C must satisfy a set of linear conditions. When these conditions are satisfied, S is expressed by inserting (2) into (8) as (10), in which the first term inside the parentheses represents the signal-LO beat components, whereas the second term is a constant vector representing the LO-LO beat components remaining after the differencing. If the matrix CQ is regular, the desired signal component A can be obtained from (10).
B. Sensitivity Analysis
In general, the receiver sensitivity of a coherent receiver is ultimately limited by shot noise. We consider the case where the detected PD current I_meas includes noise components ΔI ≡ [ΔI_1, ..., ΔI_N]^T. Assuming that the LO light intensity is large compared with the signal, the variance of the shot-noise current can be expressed in terms of the elementary charge q and the detection bandwidth W [22]. The variance ε of the difference between the IQ signal decoded from I_meas and the actual IQ signal A can then be calculated (see Appendix A for details), where Σ_L is an N × N diagonal matrix consisting of the square roots of the LO light intensities on the individual PDs. We then define a dimensionless parameter K as in (18); since K is proportional to the normalized noise ε, it is useful for comparing the receiver sensitivity of different configurations.
In the following, we evaluate the receiver sensitivity quantitatively, assuming that the incident signal power is normalized so that ∫∫|a1|² dxdy = ∫∫|a2|² dxdy = 1, and that the LO power is sufficiently large so that ∫∫‖E_LO‖² dxdy = P_LO ≫ 1. First, we consider a conventional DP coherent receiver with a 90° hybrid (N = 8) under ideal conditions. We assume that the x- and y-polarization components of the signal and LO light are split and focused onto n = 1, ..., 4 and n = 5, ..., 8, respectively, with equal power and a precise phase relationship. The signal and LO powers detected by each PD are then given by (19); considering the phase shifts of the 90° hybrid, H_kn and Q take the corresponding forms with the normalization |H| defined accordingly, and the matrix C for the four balanced PDs is given by (24). By inserting the parameters of (19), (22), and (24) into (18), we obtain (25). Next, we investigate the sensitivity of our proposed configuration with five PDs (N = 5), shown in Fig. 1. Assuming that the power transmittance of the beam splitter (BS) is T, the signal and LO powers at each PD can be written as in (26). Since half of the LO power interferes with each polarization of the signal in this case, |H_kn| follows accordingly, and H_kn and Q are then derived; one row of H takes the form |H|(1, e^{i2π/5}, e^{i4π/5}, e^{−i4π/5}, e^{−i2π/5}), as given in (28), reflecting the 2π/5 phase increments between adjacent PDs. Since we take the difference between the photocurrents of adjacent PDs, C is written as in (31). Note that the LO-LO beat component is also eliminated by the current differencing in this case, leading to P_ΔLO = 0. Inserting (26), (30), and (31) into (18), we obtain (32). From (32), we see that K approaches 16 as T increases, provided that sufficient LO power is input to the receiver. Consequently, the theoretical sensitivity limit of the proposed configuration is half that of the conventional polarization-diversity 90°-hybrid configuration given by (25). This 3-dB penalty originates from the fact that both polarization components of the signal are mixed with the LO light and received together in our five-PD scheme. This is in contrast to the conventional polarization-diversity configuration with eight PDs, in which the orthogonal polarization components of the signal are first demultiplexed, then combined with LO light of the same polarization state, and finally detected by separate PDs for the respective polarizations. In the proposed method, therefore, only half of the detected signal on average has the same polarization state as the LO light and can interfere to generate the signal-LO beat components, resulting in the 3-dB penalty.
Although the theoretical 3-dB reduction in sensitivity is clearly a drawback of our five-PD configuration, we should note that fiber-to-chip coupling losses at GCs and insertion losses at the waveguide-based 90° optical hybrids, which are required in conventional DP coherent receivers, can easily add up to exceed 3 dB in practice [23], [24]. These losses are eliminated or relaxed in our surface-normal receiver. In addition, the elimination of the PBS and the reduction of the number of PDs from eight to five offer a substantial advantage in minimizing the device footprint and complexity. Moreover, in a system where an optical preamplifier is deployed before the receiver, the amplified-spontaneous-emission (ASE) noise becomes dominant; in such a case, the proposed scheme has no sensitivity penalty compared with conventional polarization-diversity DP coherent receivers.
Finally, we should note that the minimum number of PDs to demodulate a DP coherent signal by linear operations in general is five.Through a numerical parameter search, it is confirmed that our proposed scheme with the phase distributions shown in Fig. 1(b) exhibits the theoretical maximum sensitivity among all the possible phase configurations using five PDs.
III. DESIGN AND FABRICATION OF METASURFACE
To experimentally demonstrate the proposed device, we designed and fabricated an MS composed of Si nanoposts on SiO2. The MS was designed to focus light from each core into five spots with the precise phase relationship derived in the previous section. Fig. 2 illustrates the setup: the fiber output is placed at a distance l1 from the MS, and the light is focused at a distance l2 from the MS output. Assuming an input fiber with a mode field diameter (MFD) of d, the diameter of each focused spot is (l2/l1)d. When an MCF with a core spacing of Λ is deployed, copies of these five spots appear at the focal plane separated by (l2/l1)Λ.

In this work, a four-core step-index MCF [25] with d = 10.4 μm and Λ = 40 μm at a wavelength of 1550 nm was employed. The diameter of the focused spot was set to 31.2 μm by choosing l1 = 2 mm and l2 = 6 mm. The diameter of the entire MS region was set to 600 μm so that more than 99% of the light emitted from the fiber was captured. To simplify the design of the MS, a highly symmetric structure is preferable; the focal points were therefore arranged in a regular pentagon, as depicted in Fig. 2(b). The size of the pentagon was chosen to maximize the spacing between focal points, resulting in a spacing of p = 45.8 μm. Under a Gaussian-beam approximation, we estimated that 98.7% of the total optical power of each focused spot falls within a diameter of p, indicating sufficiently low crosstalk to adjacent PDs.
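As a quick consistency check on the quoted geometry, the spot size and the separation of the per-core pentagon patterns follow from the simple magnification l2/l1 stated above (the 120-μm core-pattern spacing is implied rather than quoted):

```python
l1, l2 = 2.0e-3, 6.0e-3        # fiber-to-MS and MS-to-focal-plane distances [m]
mfd = 10.4e-6                  # mode field diameter of each core [m]
core_pitch = 40.0e-6           # core spacing of the four-core MCF [m]

mag = l2 / l1
print(mag * mfd * 1e6)         # 31.2 um focused-spot diameter, as quoted in the text
print(mag * core_pitch * 1e6)  # 120 um spacing between the pentagon patterns of adjacent cores
```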
The MS was designed based on the methodology demonstrated previously [26], [27]. As shown in Fig. 3(a), Si elliptical nanoposts with a height of 1.05 μm on a SiO2 substrate were employed as meta-atoms and arranged in a square lattice. By adjusting the diameters (Dx, Dy) of each meta-atom, arbitrary phase shifts (φx, φy) can be applied to the x- and y-polarization components. Using rigorous coupled-wave analysis (RCWA), a look-up table (LUT) was first generated, which gives the (Dx, Dy) required to achieve a desired (φx, φy) [28]. Assuming a lattice constant L of 0.9 μm and Dx, Dy ranging from 200 nm to 840 nm, we constructed a LUT that provides arbitrary φx and φy with an average transmission as high as 94%.
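In practice, the LUT step reduces to a nearest-neighbour search: for each meta-atom, pick the (Dx, Dy) whose simulated (φx, φy) best matches the target phases, with phase differences wrapped onto the circle. The sketch below only illustrates that lookup; the table itself is filled with random placeholder values rather than RCWA results.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder LUT: in practice each row would come from an RCWA sweep of post diameters
# between 200 nm and 840 nm on a 0.9-um lattice, with the corresponding (phi_x, phi_y).
diam = rng.uniform(200e-9, 840e-9, size=(5000, 2))       # candidate (Dx, Dy) pairs [m]
phase = rng.uniform(-np.pi, np.pi, size=(5000, 2))       # stand-in simulated (phi_x, phi_y)

def wrapped_dist(target, table):
    """Sum of squared circular distances between the target phases and each table entry."""
    d = np.angle(np.exp(1j*(table - target)))             # wrap differences to (-pi, pi]
    return np.sum(d**2, axis=1)

def pick_post(phi_x, phi_y):
    """Return the (Dx, Dy) whose tabulated phases best match the target pair."""
    idx = np.argmin(wrapped_dist(np.array([phi_x, phi_y]), phase))
    return diam[idx]

print(pick_post(0.7*np.pi, -0.3*np.pi))                   # chosen nanopost diameters [m]
```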
The desired phase-shift characteristics of the MS were then calculated. The target phase profiles at the input and output of the MS were obtained through forward propagation from the input fiber mode and backward propagation from the ideal electric field at the PD array surface. Taking the difference of the two gives the required phase profiles for the x- and y-polarization components. Since a dielectric MS can control only the phase, the electric-field profiles on the PD array generally deviate from the ideal ones, causing, for example, intensity variations between focal points. Nonetheless, owing to the high rotational symmetry of the complex amplitude distribution in the designed layout, such errors were minimized. Using these phase-shift profiles and the LUT, the distribution of meta-atom dimensions was derived, as shown in Fig. 3(b).
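The forward/backward propagation step can be prototyped with a scalar angular-spectrum propagator: propagate the fiber mode forward to the MS plane, back-propagate the desired focal field to the same plane, and take the wrapped phase difference as the profile the MS must impart. The sketch below is a scalar illustration with assumed grid parameters and a single target spot, not the authors' vectorial design code.

```python
import numpy as np

lam = 1.55e-6                          # wavelength [m]
n_pix, dx = 512, 2e-6                  # simulation grid (assumed)
k = 2*np.pi/lam
x = (np.arange(n_pix) - n_pix//2) * dx
X, Y = np.meshgrid(x, x)

def propagate(field, z):
    """Scalar angular-spectrum propagation of a sampled field by distance z."""
    fx = np.fft.fftfreq(n_pix, dx)
    FX, FY = np.meshgrid(fx, fx)
    kz = np.sqrt(np.maximum(0.0, k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j*kz*z))

# Forward: Gaussian fiber mode (MFD = 10.4 um) propagated l1 = 2 mm to the MS plane.
w0 = 10.4e-6/2
fwd = propagate(np.exp(-(X**2 + Y**2)/w0**2), 2e-3)

# Backward: one target focal spot (assumed off-axis position and 31.2-um diameter)
# back-propagated from the PD plane to the MS plane, i.e. propagated by -l2 = -6 mm.
spot = np.exp(-((X - 30e-6)**2 + Y**2)/(31.2e-6/2)**2)
bwd = propagate(spot, -6e-3)

phi_ms = np.angle(bwd * np.conj(fwd))   # phase the MS must add to turn fwd into bwd
```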
To confirm the validity of the designed MS, rigorous simulation using the finite-difference time-domain (FDTD) method was performed.Fig. 3(c) shows the intensity and phase profiles of electric field (E x , E y ) at the PD plane for xand y-polarized inputs.The white line denotes the assumed photosensitive area of each PD, with a diameter of 40 μm.We can confirm that the light is focused at the five PDs with precise phase differences as designed.The total loss from incident light to PD is 3.1 dB and 3.7 dB for x and y polarizations, respectively.The intensity difference between different PDs is less than 0.2 dB, while the crosstalk from the neighboring input cores is suppressed below -16 dB for both polarization components.From (18), the receiver sensitivity penalty from the theoretical limit is derived to be 3.9 dB.Furthermore, it is numerically confirmed that the variation of sensitivity using our MS is less than 0.2 dB within the C-band, which is comparable or smaller than the loss variation of conventional waveguide-based 90 • hybrids using multimode interference couplers [29].
The primary sources of the loss compared to the ideal case are attributed to two factors.First, since a lossless dielectric MS can only manipulate the optical phase, the actual wavefront generated by MS may not match perfectly with the ideal profile, which generally has a nonuniform intensity distribution.This difference is estimated to account for ∼1 dB of loss.By employing dual MS configuration, it is possible to control both the intensity and phase distributions [30], which should reduce the penalty caused by such intensity mismatching.The second factor is the interaction between adjacent meta-atoms, which is estimated to contribute to ∼2 dB of loss.While the LUT generated through RCWA assumes an infinite array of meta-atoms with identical dimensions, the actual designed MS consists of spatially varying meta-atoms, which results in phase errors.We can, therefore, expect that these losses can be eliminated to some extent by employing more sophisticated design algorithms that take into account complex interactions between neighboring meta-atoms [31], [32], [33].
The designed MS was fabricated using a silicon-on-quartz (SOQ) substrate with 1.05-μm-thick crystalline Si layer.The meta-atom patterns were formed by electron-beam lithography with ZEP520 A resist, followed by inductively coupled plasma reactive-ion etching (ICP-RIE) using SF 6 and C 4 F 8 gases (Bosch process).Microscope and scanning-electron-microscope (SEM) images of the fabricated MS with a diameter of 600 μm are shown in Fig. 4. By optimizing fabrication conditions, the undesired scalloping effect on the sidewalls induced by the Bosch process [34] was minimized.Besides, high sidewall verticality exceeding 88 • and good in-plane uniformity were obtained.To evaluate the yield of our MS, we fabricated 20 devices on different chips and confirmed that the deviation of meta-atom dimensions was generally within ±10 nm, indicating high yield and reproducibility of our fabrication process.
IV. CHARACTERIZATION OF METASURFACE
The fabricated MS was evaluated by measuring the intensity and phase characteristics at the focal plane. Fig. 5(a) illustrates the experimental setup. Continuous-wave (CW) light at a wavelength of 1550 nm was emitted from an SMF, focused at the P1' plane, and incident on the MS. The polarization state was controlled using a quarter-wave plate (QWP), a half-wave plate (HWP), and a polarizer. The focal plane P2 at the output of the MS was magnified using a 4-f system comprising two lenses, f3 and f4, and observed with an InGaAs camera (Hamamatsu C10633-13, 320 × 256 pixels). To measure phase profiles, a reference path was created by splitting the collimated beam and recombining it with the MS output to form an interferometer. We employed the digital off-axis holography method to extract phase information from the camera image [35], [36]. To measure the focusing efficiency, each focal point was selected by an iris and the transmitted power was detected with an optical power meter.
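The off-axis holography step amounts to isolating one interference sideband in the Fourier domain and removing the carrier tilt. A generic sketch of that processing is given below; the carrier frequency and crop radius are assumptions, and the actual algorithm of [35], [36] may differ in detail.

```python
import numpy as np

def off_axis_field(hologram, carrier, crop_radius):
    """Extract the complex object field from an off-axis hologram.
    hologram: 2-D intensity image; carrier: (fx, fy) of the reference tilt in cycles/pixel;
    crop_radius: radius (cycles/pixel) of the window around the +1-order sideband."""
    ny, nx = hologram.shape
    F = np.fft.fftshift(np.fft.fft2(hologram))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    FX, FY = np.meshgrid(fx, fy)
    mask = (FX - carrier[0])**2 + (FY - carrier[1])**2 < crop_radius**2
    side = np.where(mask, F, 0.0)                        # keep only the +1 order
    # Demodulate the residual carrier so the recovered phase is that of the object field.
    shift = np.exp(-2j*np.pi*(carrier[0]*np.arange(nx)[None, :] +
                              carrier[1]*np.arange(ny)[:, None]))
    return np.fft.ifft2(np.fft.ifftshift(side)) * shift
```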
Fig. 5(b) shows the complex amplitude distributions obtained for x- and y-polarization inputs. We can confirm that the light is focused precisely onto five points at the vertices of a regular pentagon for both polarizations, as designed. The optical phase at each focal point, averaged over the photosensitive area of each PD (indicated by the white circles in Fig. 3(c)), is plotted for x- and y-polarization inputs in Fig. 5(c). The average and worst phase errors from the ideal case are 0.018π and 0.047π, respectively. Fig. 5(d) shows the measured focusing efficiencies to the five points. The average efficiency to each focal point is −11.8 dB (i.e., 4.8 dB below the intrinsic 1/5 splitting loss), so the total insertion loss of our device is 4.8 dB. The intensity variations among the five focal points are less than 1.1 dB for both polarizations. The efficiencies obtained by the FDTD simulation are also plotted as dashed lines for comparison. The ∼1.5-dB difference between the experiment and the simulations is attributed to imperfections in fabrication.
V. MULTI-CORE COHERENT DETECTION EXPERIMENT
Fig. 6 shows the experimental setup used to demonstrate simultaneous homodyne detection of four-channel DP coherent signals from an MCF. CW light from a wavelength-tunable laser (Santec TSL-510) was launched into the input facet of a four-core step-index MCF (cladding diameter = 125 μm, MFD = 10.4 μm at λ = 1550 nm, Λ = 40 μm) through a focusing lens. The output light from the MCF was converted to 45° linear polarization and split into two paths. One path (the lower path in Fig. 6) was used as the reference (LO) light: the light from a single core was selected with an iris and collimated onto the InGaAs camera. The other path (the upper path in Fig. 6) was used as the signal; the components in the broken-line box of Fig. 6, including the liquid-crystal variable retarders (VRs), emulate an IQ modulator. Note that the modulation speed was limited by the read-out time of the camera and the response time of the VRs in this experiment. A real-valued 5 × 2 MIMO adaptive equalizer based on a decision-driven least-mean-square (DD-LMS) algorithm [37], [38] was applied to obtain the IQ components and to compensate for polarization variations inside the MCF and other fluctuations in the setup. Prior to the experiment, the adaptive equalizer was trained using transmitted symbols; no knowledge of the transmitted data was then used during the measurement to retrieve the IQ signals.

First, the complex optical field distributions on the focal plane of the MS, measured at a wavelength of 1550 nm without IQ modulation, are shown in Fig. 7(a) for both polarizations. Once again, the relative optical phases extracted from these distributions at the 20 focal points are plotted in Fig. 7(b). The signals from the four cores are focused onto 20 spots with the same relative optical phases as in the single-core case shown in Fig. 5(b). The average and worst phase errors compared with the ideal case over the 20 focal points are 0.074π and 0.26π, respectively. Finally, Fig. 8 shows the retrieved constellations of the x- and y-polarization components from all four cores at wavelengths of 1530, 1550, and 1570 nm; here, averaged values over approximately 500 symbols are shown for each constellation. The error vector magnitude (EVM) derived from Fig. 8 is plotted for each wavelength in Fig. 9. The EVM is less than 3% for all wavelengths, cores, and polarization states, demonstrating that high-accuracy homodyne detection is achieved across the entire C-band.
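For reference, the decision-driven LMS update behind such an equalizer can be sketched as follows. The 5-input/2-output tap dimensions, the step size, and the 8-level slicer (64-QAM treated as two 8-PAM components) are illustrative assumptions; the equalizer of [37], [38] may use additional taps and refinements.

```python
import numpy as np

LEVELS = np.arange(-7, 8, 2, dtype=float)           # 8-PAM levels per real axis (64-QAM)

def slicer(y):
    """Nearest 64-QAM decision, applied independently to each real output component."""
    return LEVELS[np.argmin(np.abs(y[:, None] - LEVELS[None, :]), axis=1)]

def ddlms(inputs, mu=1e-4, train=None):
    """inputs: (n_symbols, 5) real samples; returns (n_symbols, 2) equalized outputs and taps.
    If 'train' (known symbols, shape (n_symbols, 2)) is given it is used as the reference;
    otherwise the equalizer runs decision-directed."""
    n, n_in = inputs.shape
    W = np.zeros((2, n_in))                           # 5-to-2 real-valued tap matrix
    out = np.empty((n, 2))
    for k in range(n):
        x = inputs[k]
        y = W @ x
        d = train[k] if train is not None else slicer(y)   # reference symbol
        W += mu * np.outer(d - y, x)                        # LMS tap update
        out[k] = y
    return out, W
```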
VI. CONCLUSION
We have proposed and experimentally demonstrated a spatially scalable multi-core DP coherent receiver based on a dielectric MS. The proposed receiver realizes all the necessary functionalities of optical hybrids and focusing lenses for multiple channels using a single dielectric MS. A rigorous analytical model was introduced to derive explicit conditions for the MS to achieve the highest receiver sensitivity. We designed and fabricated an MS composed of a Si nanopost array on SiO2 to demonstrate simultaneous detection of four DP 64QAM signals from a four-core step-index MCF. The EVM was less than 3% for all spatial/polarization channels from 1530 to 1570 nm. While the speed was limited by the InGaAs camera in this work, the use of high-speed PD arrays [39], [40], [41] can readily achieve >100 Gb/s per spatial/polarization channel, enabling >Tb/s total data rate for four-core DP coherent signals.
Our designed MS can simply replace micro-optic lenses used in conventional receivers and can be inserted between an MCF and a PD array separated by less than 1 cm.The entire receiver can therefore be assembled into a compact module equivalent to current commercial single-channel coherent receivers [42], enabling low-cost and practical SDM coherent receivers without PBS and FIFO devices.
Finally, we should note that the scalable coherent receiver demonstrated in this work would be useful not only in optical communications and interconnects, but also in various other emerging applications. For example, frequency-modulated continuous-wave (FMCW) light detection and ranging (LiDAR) systems utilize the coherent nature of lightwaves to achieve high-sensitivity three-dimensional (3D) sensing; compact and large-scale coherent receiver arrays are desired to achieve high-speed, high-resolution 3D imaging without bulky and fragile mechanical beam steerers [43]. In the field of optical neural networks, scalable coherent receiver arrays would be useful for performing large-scale multiply-accumulate operations in the optical domain at ultra-low latency and computational energy [44], [45]. We therefore believe that this work paves the way for a wide range of applications that utilize the spatial parallelism of coherent light, including optical communication, sensing, and computing.
APPENDIX A DERIVATION OF SENSITIVITY FORMULA
Based on (13) and (14), the difference between the retrieved symbol and the actual symbol A is expressed as (33). To quantitatively evaluate the sensitivity of receivers with different configurations, we derive the expression for the variance ε defined in (34). Since the shot noise expressed by (15) is superimposed on each PD current, the noise vector ΔI follows an N-dimensional Gaussian distribution. It is known that a general N-dimensional Gaussian vector X ≡ [X_1, ..., X_N]^T ∼ N_N(µ, Σ) transformed by an N × N matrix M satisfies MX ∼ N_N(Mµ, MΣM^T) [46]. Applying (35) and (36) to (33), the distribution of the decoding error can be written as (37). Next, the variance of the norm of X is evaluated elementwise: each element X_n follows a one-dimensional Gaussian distribution, and with its mean and standard deviation denoted μ_n and σ_n, the second-order moment ⟨X_n²⟩ = μ_n² + σ_n² follows from the moment-generating function M_{X_n}(t) = exp(μ_n t + σ_n² t²/2). Equations (38) and (39) then yield (40). Applying (37) and (40) to (34), we obtain the expression for ε. Here, we used the Frobenius norm, which satisfies ‖M‖_F² = tr(M†M).
Fig. 1 .
Fig. 1.Proposed spatially scalable DP coherent receiver.(a) Schematic of the receiver when a single signal is input from an SMF.(b) Spatial distributions of signal and LO lightwaves on the PD array plane (a) when a single signal is input from an SMF and (c) when multiple signals are input from a 19-core MCF.In both configurations, the same MS can be used.
Fig. 2 .
Fig. 2. Functionalities of the designed MS.(a) Light propagation from each core.MFD: mode field diameter.(b) Intensity pattern on the PD array surface when a four-core MCF (Λ = 40 µm) is placed at the input.The light from each core forms a regular pentagon.
Fig. 3 .
Fig. 3. Design and numerical analysis of MS.(a) Schematic of the MS composed of Si elliptical nanoposts on SiO 2 , having the x and y diameters of (D x , D y ) and the lattice constant of L. (b) Designed distributions of D x and D y for L = 0.9 µm.(c) Electric field distribution on the PD array obtained by FDTD simulation.
Fig. 5 .
Fig. 5. Phase measurement of fabricated MS at 1550-nm wavelength for single-channel input.(a) Experimental setup.LD: laser diode, VOA: variable optical attenuator, SMF: single-mode fiber, QWP: quarter-wave plate, HWP: half-wave plate, Pol.: polarizer, f 1 = f 2 = 50 mm, f 3 = 4 mm, f 4 = 200 mm.(b) Electric field distribution at the PD array for x and y polarization inputs.(c) Relative optical phases of x and y components at each focal point.(d) Focusing efficiency at each focal point.Solid lines: measurement, dashed lines: FDTD simulation.
Fig. 6 .
Fig. 6.Experimental setup of four-channel DP coherent detection using the fabricated MS and a step-index four-core MCF.TLD: tunable laser diode, FC: fiber collimator, VR: liquid-crystal-based variable retarder, f 1 = f 2 = 50 mm, f 3 = 10 mm, f 4 = 200 mm, f 5 = 40 mm, f 6 = 125 mm, f 7 = 200 mm.The components in the broken-line box are used to emulate an IQ modulator.Blue arrows indicate the angle of light polarization, the slow axes of the VRs, or the orientations of the polarizers.
Fig. 7 .
Fig. 7. Phase measurement of MS for four-core inputs at 1550-nm wavelength.(a) Electric field distribution at the PD array for x and y polarization inputs.(b) Relative optical phases of x and y components at each focal point for four cores.
Fig. 9 .
Fig. 9. Measured wavelength dependence of the EVM, derived from the measured data in Fig. 8.
Effects of stenting on blood flow in a coronary artery network model
The effect of stenting on blood flow is investigated using a model of the coronary artery network. The parameters in a generic non-linear pressure–radius relationship are varied in the stented region to model the increase in stiffness of the vessel due to the presence of the stent. A computationally efficient form of the Navier–Stokes equation is solved using a Lax–Wendroff finite difference method. Pressure, vessel radius and flow velocity are computed along the vessel segments. Results show negative pressure gradients at the ends of the stent and increased velocity through the middle of the stented region. Changes in local flow patterns and vessel wall stresses due to the presence of the stent have been shown to be important in restenosis of vessels. Local and global pressure gradients affect local flow patterns and vessel wall stresses, and therefore may be an important factor associated with restenosis. The model presented in this study can be easily extended to solve flows for stented vessels in a full, anatomically realistic coronary network. The framework to allow for the effects of the deformation of the myocardium on the coronary network is also in place.
INTRODUCTION
Stents are metallic mesh tubes that are inserted into arteries to keep them open.A stent is generally placed in an occluded artery (such as coronary arteries) following a balloon angioplasty (also known as percutaneous transluminal coronary angioplasty).Balloon angioplasty involves inserting a balloon catheter into the femoral artery and guiding it to the blockage site in the coronary artery.The balloon is then inflated to widen the artery and remove the blockage.The catheter is removed, and then the stent catheter is introduced into the system.The stent is placed over a deflated balloon on the catheter.Once the stent is at the blockage site, the balloon is inflated and the stent is deployed to keep the artery open.
Stenting can reduce acute complications of angioplasty as well as the restenosis rate (Jowett and Thompson 2003).With just balloon angioplasty (and no stent implantation), up to 50% of cases develop restenosis.This restenosis is due to arterial remodelling (shrinkage) and neointimal hyperplasia (Jowett and Thompson 2003).Restenosis rates are lower with stenting than without because of better initial patency of the artery.However, restenosis still occurs in about 20% to 30% of cases mainly due to neointimal proliferation.Stenting is also said to improve the safety and efficiency of balloon angioplasty procedures since the need for emergency coronary artery bypass graft surgery is reduced (Jowett and Thompson 2003;Yock et al. 2003).
Some of the stenosis locations that are targeted for stent implantation (and used in many modelling studies) are in the left anterior descending (LAD), right coronary and circumflex arteries (Wentzel et al. 2000;Capozzolo et al. 2001;Hsieh et al. 2001;LaDisa et al. 2003;Zhu et al. 2003).
Stents can cause longitudinal straightening of vessels due to their stiffness and can also enlarge the vessel lumen due to their radial force (Zhu et al. 2003; Tortoriello and Pedrizzetti 2004). The presence of a stent introduces a compliance mismatch with the surrounding portion of the vessel. Several studies have modelled the biomechanics of stents, including initial stent expansion and deformation (Barragan et al. 2000; Etave et al. 2001; Tan et al. 2001; Migliavacca et al. 2002). Other studies have presented experimental results and models of flow and flow-tissue interactions in the local stented region of arteries (Rolland et al. 1999; Berry et al. 2000; Moore and Berry 2002; Benard et al. 2003; LaDisa et al. 2003; Tortoriello and Pedrizzetti 2004). Many such studies are mainly concerned with the local changes in the flow and their implications for restenosis of stented arteries. One-dimensional blood flow models that assume a radial velocity profile are increasingly being developed in the literature (Smith et al. 2002; Formaggia et al. 2003; Sherwin et al. 2003). The simplifications in these models remove explicit calculation of local effects such as circumferential stress. However, the formulation is computationally efficient and thus provides the ability to simulate blood flow in large networks. To our knowledge, this is the first study to investigate computationally the effects of stenting on the flow in a network.

Figure 1. (a) The coronary artery network model with segments (1)-(7); segments (4) and (7) are 34 mm long, and segments (5) and (6) are 26.5 mm long. (b) Close-up of segment 2, where the stent was placed: L is the length of the stent (15 mm in this study), 2L denotes the region over which the stent affects the material properties of the vessel, and A-E are points along the vessel segment at which results are presented later in the article.
In this article, we present such a computationally efficient formulation to study the effects of stenting an artery segment on the local, upstream and downstream flow pressures, velocities and vessel radii.The principles of modelling blood flow used in this study can be extended in a straightforward manner to anatomically realistic coronary artery meshes.The framework to allow the mechanics of the heart is already in place for more detailed future models (Smith et al. 2005).A finite difference grid based on an underlying finite element coronary artery mesh is used in this study (Figure 1).
This study presents a one-dimensional model of coronary artery blood flow in the unstented and stented cases for an assumed pressure pulse.In the stented case, the compliance mismatch is modelled by changing the form of the pressure-radius relationship within the stiffer stented region compared with the surrounding vessels.The resulting transient and steady-state pressures, velocities and vessel radii are compared with each other and physiological conclusions are drawn from the results.
Single vessel
The blood flow model used in this study is the one presented by Smith et al. (2002). Blood is modelled as an incompressible, homogeneous, Newtonian fluid, governed by the Navier-Stokes equations. A cylindrical coordinate system (r, θ, x) is used, with x representing the local vessel axial direction. Assuming that the velocity in the circumferential (θ) direction is zero and following the derivation given in Smith et al. (2002), the Navier-Stokes equations reduce to (1), where V is the average velocity, p is the pressure, R is the inner vessel radius, ρ is the fluid density, ν is the kinematic viscosity and t is time. The term α is a non-dimensional parameter that defines the radial velocity profile; it varies between 1.33, corresponding to fully developed flow, and 1.0, corresponding to a flat plug-flow profile (for more details, see Smith et al. 2002).
The equation of conservation of mass is given by (2).
To solve the system, another equation describing the relationship between pressure and inner vessel radius is needed.The vessel wall is assumed to be elastic and any viscoelastic effects are ignored in this model.An empirical relationship shown in (3) is established between the pressure and radius.
where R0 is the unstressed vessel radius and G0 and β are parameters that define the wall behaviour. A two-step Lax-Wendroff finite difference technique is used to solve the above equations.
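For reference, a two-step (Richtmyer) Lax-Wendroff update for a generic one-dimensional system ∂U/∂t + ∂F(U)/∂x = S(U) has the structure sketched below. The flux and source corresponding to Eqs. (1)-(3) are not reproduced in this text, so the usage example employs a stand-in advection flux purely to show where the blood-flow terms would be plugged in.

```python
import numpy as np

def lax_wendroff_step(U, dt, dx, flux, source):
    """One Richtmyer two-step Lax-Wendroff update for dU/dt + dF(U)/dx = S(U).
    U has shape (n_vars, n_points); flux and source map U to arrays of the same shape."""
    F, S = flux(U), source(U)
    # Half step: provisional states at the interfaces i+1/2 and time n+1/2.
    U_half = (0.5*(U[:, 1:] + U[:, :-1])
              - (dt/(2*dx))*(F[:, 1:] - F[:, :-1])
              + (dt/4)*(S[:, 1:] + S[:, :-1]))
    F_half = flux(U_half)
    # Full step for interior points using the interface fluxes.
    U_new = U.copy()
    U_new[:, 1:-1] = (U[:, 1:-1]
                      - (dt/dx)*(F_half[:, 1:] - F_half[:, :-1])
                      + dt*S[:, 1:-1])
    return U_new   # boundary values (e.g. prescribed pressures) must be imposed separately

# Minimal usage check with a stand-in flux (linear advection at unit speed). In the
# blood-flow model, U would hold the vessel state (e.g. radius and velocity) and the
# flux/source would come from Eqs. (1)-(3); that substitution is not shown here.
nx = 200
dx = 1.0/(nx - 1)
x = np.linspace(0.0, 1.0, nx)
U = np.exp(-200.0*(x - 0.3)**2)[None, :]
for _ in range(100):
    U = lax_wendroff_step(U, dt=0.4*dx, dx=dx,
                          flux=lambda u: u, source=lambda u: np.zeros_like(u))
```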
Bifurcation model
A bifurcation model is necessary to model flow through the branches in the artery network.The bifurcation model used in this study was presented by Smith et al. (2002).
In the model, the junction of three tubes (a, b and c) is approximated as a set of elastic tubes short enough that the velocity along each can be assumed constant and that losses due to fluid viscosity are negligible. It is also assumed that no fluid is stored within the junction. Let a1 be the finite difference grid point at the end of vessel a (the parent vessel) entering the junction and let a2 be the point proximal to it. Let b1 and c1 be the grid points at the beginning of the daughter vessels (b and c) and let the distal points be denoted b2 and c2, respectively. The conservation of mass through the junction at any given time is then governed by (4), where Fa1, Fb1 and Fc1 are the flows through each junction segment and p0 is the pressure at the junction centre.
In a segment of tube a with length la and radius Ra, the conservation of momentum of the fluid is given by (5). Similar conservation of momentum equations can be written for tubes b and c. These are then expanded using a central difference representation about the k + 1/2 time step.
Arterial properties
For all simulations, the blood density ρ, the kinematic viscosity ν and the flow profile parameter α were set to 1.05 × 10⁻³ g mm⁻³, 3.2 mm² s⁻¹ and 1.1, respectively. The difference between unstented and stented vessels lies in the pressure-radius relationship defined for each case (Equation (3)), in particular in the descriptions of G0 and β used in each case.
Unstented vessel
For an unstented vessel, the G0 and β values are taken to be constant along the entire vessel. The values used in this model are G0 = G0C = 10.0 kPa and β = 2.0, which are typical of values fitted from the experimental data of Carmines et al. (1991).
Stented vessel
Because of the presence of the stent, the vessel becomes stiff.Therefore, the pressure-radius relationship within the stented region will be different from the one outside the stented region.As the effect of the stent in the network is modelled by altering the mechanical properties of the vessel wall, the shape of the stent is assumed to implicitly conform to the shape of the vessel wall and not substantially alter the haemodynamic characteristics, that is, cylindrical, axi-symmetric flow.Thus, specifics such as stent geometry and strut size are beyond the scope of inclusion in network models via this technique.
Since the vessel is a continuous structure, the change in the pressure-radius relationship between the unstented and stented regions is assumed to occur smoothly, without discontinuities. It is described by a gradual change in the G0 and β values along the length of vessel over which the stent is assumed to have an effect; the total length of this region is taken to be twice the length of the stent. The variation in G0 and β is given by the relationships shown in (6) and (7), which are adapted from the work of Tortoriello and Pedrizzetti (2004).

In (6) and (7), x is the axial distance from the start of a vessel segment, xs is the distance at which the effect of the stent is assumed to start, L is the length of the stent, G0C and βC are the constant parameters used in the unstented vessel, and n controls the steepness of the transition of the G0 and β values from the unstented to the stented region and back. The x-dependent functions Eh(x) and R(x) are given by (8) and (9).

For this model, the constants ΔEh and ΔR were taken to be 60 and 0, respectively, making R(x) = 1. The function R(x) can be used to model any increase in unstressed vessel radius due to the expansion of the stent; for this model, however, the unstressed vessel radius was taken to be constant in the stented and unstented regions.

According to the experimental measurements of pressure and vessel diameter conducted by Rolland et al. (1999), the effects of the stent can be felt up to 10 mm on either side of the stent. Therefore, the value of n was taken as 4, giving a less steep transition of the G0 and β values than that reported by Tortoriello and Pedrizzetti (2004), where a value of 8 was assigned to n. Figure 2 shows an example of the variation of G0 and β along a stented vessel.
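Because Eqs. (6)-(9) are not reproduced in this text, the sketch below uses a generic smooth bump raised to the power n as a stand-in for the published transition functions. It is intended only to illustrate how G0 and β can be blended from their unstented values to stiffer stented values over the region of length 2L (compare Figure 2); the functional form and the way ΔEh enters are assumptions, not the published equations.

```python
import numpy as np

G0C, betaC = 10.0, 2.0         # unstented wall parameters (kPa, dimensionless)
dEh = 60.0                     # stiffening constant quoted in the text (role assumed here)
x_s, L, n = 5.0, 15.0, 4       # start of stent influence, stent length [mm], transition exponent

def bump(x):
    """Smooth 0 -> 1 -> 0 profile over [x_s, x_s + 2L]; zero elsewhere. Illustrative only."""
    f = np.sin(np.pi*(x - x_s)/(2*L))**n
    return np.where((x >= x_s) & (x <= x_s + 2*L), f, 0.0)

def G0(x):
    return G0C*(1.0 + dEh*bump(x))      # stiffer wall inside the stented region

def beta(x):
    return betaC*(1.0 + bump(x))        # assumed: beta also increases with the bump

x = np.linspace(0.0, 40.0, 401)
print(G0(x).max(), beta(x).max())       # peak values at the centre of the stent
```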
Stented vessel
Here, we derive a steady-state solution for the stented vessel case as a means of verifying the numerical solution procedure. By setting all time-dependent terms in (1) to zero, and using the vessel area S = πR² and a constant flow rate Q = VS, we obtain (10) from the conservation of mass principle.

Figure 2. Variation of G0 and β, given by (6) and (7), along a stented vessel with xs = 5 mm and stent length L = 15 mm.
Substituting the vessel area into (3), we obtain (11), where G0(x) is given by the appropriate part of (6). Equations (11) and (12) can then be used to give an expression for dp/dx (Equation (13)) in the region xs ≤ x ≤ (xs + 2L). Note that in the stented region of the vessel both G0 and β are functions of the axial distance x.
Equations (7)-(9) can easily be differentiated to give dβ/dx, d(Eh)/dx and dR/dx. Substituting (14) and (15) into (13) gives an expression for dp/dx in terms of x and S that can be used in (10). This yields an ordinary differential equation (ODE) for dS/dx, as shown in (16). This differential equation cannot be solved analytically; it is solved numerically using the MATLAB software package with a Runge-Kutta method based on the work of Dormand and Prince (1980). The vessel area S(x) from this steady-state solution is later compared with the flow simulation results to verify the simulations described below (Figure 6).
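The same steady-state integration can be reproduced with SciPy, whose default 'RK45' solver is based on the Dormand and Prince (1980) pair. The right-hand side below is a placeholder to be replaced by the actual expression from (16); the flow rate and initial area are the values quoted later in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

Q = 2520.0          # steady-state flow rate [mm^3/s], taken from the finite difference solution
S_i = 8.88          # vessel area at x = x_s [mm^2]
x_s, L = 5.0, 15.0  # [mm]

def dSdx(x, S):
    # Placeholder: substitute the expression obtained from Eq. (16), which depends on
    # G0(x), beta(x) and their derivatives within the stented region.
    return np.zeros_like(S)

sol = solve_ivp(dSdx, (x_s, x_s + 2*L), [S_i], method="RK45", dense_output=True)
print(sol.y[0, -1])   # vessel area at the end of the stent-affected region
```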
FLOW SIMULATIONS
Pressure boundary conditions were set for the inlet at the top of vessel segment 1 and at the outlets at the bottom of segments 4, 5, 6 and 7. Exit pressures were held at 2 kPa for the duration of the flow simulations.This pressure was set to the lower end of the range reported by Defily et al. (1993) for a small epicardial vessel of the same size.Simulations were carried out for two different cases of inlet pressure under unstented and stented conditions: (I) linear increase in inlet pressure from 2 kPa to 3 kPa over 0.25 s and inlet pressure held at 3 kPa up to 1.0 s and (II) a single sinusoidal pressure pulse squared followed by a constant pressure value held at 2 kPa as given by (17).
where p i is the inlet pressure and t f is the period of the curve (0.25 s in this case).The stent was assumed to have a length L of 15 mm with an expanded radius R 0 of 1.5 mm.The unstressed vessel radii for all segments of the network were also taken to be 1.5 mm.All simulations were carried out on a High Performance Computer maintained at The University of Auckland.The machine (Silicon Graphics Origin 3400 model) contains 16 processors (MIPS designed, 500 MHz, R14000) with 16 GB physical DRAM running the Silicon Graphics IRIX 6.5.13 operating system-only one processor in the system was used for running the simulations.The software used for the simulations was CMISS (continuum mechanics, image analysis, signal processing and system identification), which was developed at The University of Auckland.The CMISS can be used to perform finite element, boundary element or finite difference analyses.It has a computational back end to perform the analyses and a graphical front end allowing the user to view the results in three dimensions.From our previous study, we found that a time step of 2e-06 s, when using double precision arithmetic, provides numerical stability, accuracy and no accumulation of numerical errors.Applying this time step, each simulation took approximately 25 minutes to run to a final simulation time of 1.0 s.
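For reference, the two inlet-pressure histories can be written compactly as follows. The ramp in case (I) follows the description above; the exact form and amplitude of the squared-sinusoid pulse in Eq. (17) are not reproduced in this text, so a 1-kPa pulse height above the 2-kPa baseline is assumed for illustration.

```python
import numpy as np

def inlet_pressure_I(t):
    """Case (I): linear ramp from 2 kPa to 3 kPa over 0.25 s, then held at 3 kPa."""
    return np.where(t < 0.25, 2.0 + 4.0*t, 3.0)            # kPa (4.0 kPa/s ramp rate)

def inlet_pressure_II(t, t_f=0.25, peak=1.0):
    """Case (II): single squared-sinusoid pulse of duration t_f above the 2-kPa baseline,
    then constant 2 kPa. The 1-kPa pulse height is an assumption, not Eq. (17) verbatim."""
    pulse = peak*np.sin(np.pi*t/t_f)**2
    return np.where(t < t_f, 2.0 + pulse, 2.0)              # kPa
```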
Simulation I
Figure 3 presents time versus varying pressure, vessel radius and flow velocity results at different points along vessel segment 2 in the unstented and stented cases.The positions of these points on the network are shown in Figure 1.These points are significant for the following reasons: point A lies before the start of the region where the effects of the stent are felt (where the material properties of the vessel start changing), B lies at the start of the region where the effects of the stent are felt, C lies exactly in the middle of the stented region where the vessel is at its stiffest, D is the point corresponding to B downstream of the stent and E is the point corresponding to A downstream of the stent (i.e. after the material properties return to their unstented values).
The pressure plots in Figure 3 are similar to but not exactly the same as each other in the unstented and stented cases.In the stented case, the pressures at the upstream points are slightly higher and pressures at the downstream points are slightly lower than the corresponding values in the unstented case.Similarly, the radii upstream and downstream of the vessel are only slightly different in the stented and unstented cases.However, as expected, the radius in the middle of the stented region (point C) hardly rises above 1.5 mm (the radius of the stent).Also, as expected, the velocity in the middle of the stent is much higher than the upstream or downstream velocities.The velocities upstream and downstream of the stent are lower in the stented case than in the unstented case.
Figure 4 shows that pressure along vessel segments 1, 2 and 4 at various times in the unstented and stented cases.No difference can be observed in the pressures for segment 1.The greatest difference between the unstented and stented cases occurs in segment 2 where the stent is placed.In the transition zone to the stented region, the pressure dips at both the proximal and distal parts of the vessel.This introduces a short region where there is a negative pressure gradient.However, as can be seen in Figure 5(e), the blood still flows downstream due to the momentum already present in the flow.In segment 4, there is a slight decrease in pressure in the stented case compared with the unstented case.
Figure 5 shows the radius, flow velocity and flow rate along vessel segment 2 at various times in the unstented and stented cases. The radius in the stented region stays close to 1.5 mm, as noted previously. The flow velocity along the unstented vessel is reasonably constant compared with the stented case; in the stented case, the velocity in the stented region of the vessel is greater than in the unstented regions, and as the pressure increases over time this difference in velocity also increases. The figure also shows that the flow rate in the unstented case is higher than in the stented case. By 1.0 s, the system has essentially reached a steady state. The radius along vessel segment 2 in the stented case at 1.0 s (from the finite difference solution) is used to compute the corresponding vessel area, which is plotted against distance along the vessel in Figure 6. Also shown in the same figure is the solution of the steady-state ODE (16). Equation (16) was solved using a constant flow rate (Q) of 2520 mm³/s and a vessel area (S_i) of 8.88 mm² at the start of the region that is modified by the stent, i.e. at x = x_s. This steady-state flow rate and the vessel cross-sectional area at x_s were obtained from the finite difference solution. Figure 6 also shows the pressure along the segment at 1.0 s from the finite difference and steady-state ODE solutions; the steady-state ODE pressure was computed from the steady-state ODE area using (11).
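The exact forms of the steady-state ODE (16) and the pressure-area relation (11) are defined earlier in the paper and are not reproduced in this excerpt. The following Python sketch therefore integrates a generic steady-state 1D area equation of the same type, assuming a square-root tube law and a Poiseuille-type friction term purely for illustration; the density, viscosity and stiffness values are assumptions, while Q = 2520 mm³/s and S_i = 8.88 mm² are the values quoted above.

import numpy as np
from scipy.integrate import solve_ivp

rho = 1.05e-6     # kg/mm^3, blood density in mm-based units (assumed)
nu = 3.3          # mm^2/s, kinematic viscosity of blood (assumed)
beta = 100.0      # assumed stiffness coefficient of the illustrative tube law
S0 = np.pi * 1.5**2   # mm^2, unstressed cross-sectional area (R0 = 1.5 mm)

Q = 2520.0        # mm^3/s, steady-state flow rate quoted in the text
S_i = 8.88        # mm^2, area at the start of the stent-modified region

def dpdS(S):
    # Derivative of the assumed tube law p(S) = beta * (sqrt(S) - sqrt(S0)).
    return 0.5 * beta / np.sqrt(S)

def dSdx(x, y):
    # Steady momentum balance with constant Q:
    #   d/dx(Q^2/S) + (S/rho) * dp/dx = -8*pi*nu*Q/S   (assumed friction term)
    S = y[0]
    friction = 8.0 * np.pi * nu * Q / S
    denom = (S / rho) * dpdS(S) - Q**2 / S**2
    return [-friction / denom]

# Integrate along the stent-modified region (x_s taken as 0 for illustration).
sol = solve_ivp(dSdx, (0.0, 30.0), [S_i], dense_output=True, max_step=0.1)
x = np.linspace(0.0, 30.0, 200)
S = sol.sol(x)[0]
p = beta * (np.sqrt(S) - np.sqrt(S0))   # pressure recovered from the assumed tube law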
Simulation II
Figure 7 presents the pressure, vessel radius and flow velocity over time at different points along vessel segment 2 in the unstented and stented cases for the sinusoidal inlet pressure variation. The positions of these points on the network are shown in Figure 1. As with the previous set of simulations (Simulation I: linear increase in inlet pressure), the pressure plots in Figure 7 are similar, but not identical, in the unstented and stented cases. Similarly, the radii upstream and downstream of the stented region differ only slightly between the stented and unstented cases. The radius in the middle of the stented region (point C) is only slightly higher than 1.5 mm (the radius of the stent). Also, as expected, the differences in velocity between the unstented and stented cases are as described for Simulation I.
Figure 8 shows the pressure, radius and flow velocity along vessel segment 2 at various times in the unstented and stented cases. The trends in pressures, radii and flow velocities along the vessel described for Simulation I apply to this set of simulations as well. The pressure in the transition zone in the stented case shows the greatest change from the unstented case.
DISCUSSION
The placement of a stent in an artery leads to a region of increased stiffness where the stent is placed. Because of this increase in stiffness, the flow is changed considerably within the stented region and at the transitions between the unstented and stented regions. This study makes two contributions: first, we extend a computationally efficient methodology to solve for flow in a stented coronary artery network and, second, using this methodology, we investigate flow in a coronary artery network with and without the stent in place.
The strength of the methodology presented in this article is that it can be implemented in a full coronary artery mesh relatively easily, following the procedure used by Smith et al. (2002). The framework for embedding this full coronary mesh in a model of the heart is also in place (Smith et al. 2000, 2005). The flow problem can be combined with the deformation of the myocardium to produce more realistic simulations of blood flow in stented and unstented networks. Another advantage of the methodology is its ability to simulate the effects of more than one stent at various places in the network.
The main limitation of this methodology is its one-dimensional nature and the consequent lack of localised flow information, which can only be obtained with higher-dimensional models. Local pressure gradients, changes in velocity and wall shear stress can be computed accurately with three-dimensional models of the stented region. However, when considering the effects of flow changes in large networks of vessels such as the coronary network, full three-dimensional models are computationally prohibitive, while one-dimensional models provide the required computational efficiency.
We have applied two relatively simple pressure boundary conditions in this study. It is important to acknowledge that the actual pressure gradients in the coronary vasculature are determined by intramyocardial and ventricular pressures and by the transmission of downstream coronary pressure. Thus, it is difficult to say whether a linearly increasing or a sinusoidal boundary condition is more appropriate. However, we propose that the two boundary conditions applied provide an initial basis from which to understand the dynamics of the system.
The results of the investigation of flow in a coronary artery network with and without a stent in place allow us to draw conclusions regarding restenosis and arterial remodelling. As mentioned previously, a significant problem arises due to restenosis of stented regions and remodelling of other arterial segments. Restenosis is thought to be the result of abnormal flow conditions affecting the endothelial cell (innermost) layer of the blood vessel. Vessel wall shear stress distribution and flow direction are known to affect endothelial cells (Davies 1995; Mates 1995; Kataoka et al. 1998; Yamamoto et al. 2003).
Wall shear stress depends on local velocity gradients, which in turn depend on the pressure gradients along the vessel. Rapidly changing pressure gradients, such as those occurring in the transition zones between the stented and unstented regions of a vessel (Figures 4 and 8), could indicate local regions of negative velocity even though the average velocity is positive (Figures 5 and 8). The rapidly changing pressure gradients indicate the possibility of a large difference in the mechanical environment of endothelial cells at the entrance and exit of stented regions in vessels. As recently reviewed by Boisseau (2005), this has significant implications for gene regulation, vulnerability to endothelial cell hypoxia, accumulation of white cells and a number of other haemorheological disorders. Furthermore, these changes in local flow velocity can adversely affect the endothelial cell layer and hence cause restenosis. Wentzel et al. (2000) found significant changes in wall shear stress at the entrance and exit areas of the stent and concluded that these changes could be related to in-stent restenosis. Moore and Berry (2002) also confirmed that the greatest effects are likely to be felt at the entrance and exit areas of stents. Rachev et al. (2000) cited experimental studies that found the arterial lumen decreasing in the region just outside the stent due to wall remodelling; their results showed these remodelling effects to be a consequence of axial and circumferential stress concentrations in the immediate vicinity of the stent.
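To make the link between flow rate, radius and wall shear stress concrete, the following sketch evaluates the fully developed Poiseuille estimate tau_w = 4*mu*Q/(pi*R^3). This is only an order-of-magnitude illustration: the 1D model itself does not resolve local velocity profiles, and the blood viscosity and the comparison radius used below are assumed values.

import numpy as np

def poiseuille_wall_shear_stress(Q, R, mu=3.5e-3):
    # Wall shear stress tau_w = 4*mu*Q/(pi*R^3) for fully developed Poiseuille
    # flow (an assumed profile; the 1D model does not resolve it directly).
    # Q in m^3/s, R in m, mu in Pa*s; returns tau_w in Pa.
    return 4.0 * mu * Q / (np.pi * R**3)

# Rough comparison at the same flow rate inside and just outside the stent.
Q = 2520e-9                                            # m^3/s (2520 mm^3/s, as above)
tau_stented = poiseuille_wall_shear_stress(Q, R=1.5e-3)        # stent radius
tau_unstented = poiseuille_wall_shear_stress(Q, R=1.8e-3)      # hypothetical dilated radius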
A very important factor in any model of the effects of a stent is the set of material properties, or the particular form of the pressure-radius relationship, assigned to the region in question. The model presented in this study can be used to assess the effects of different material property assumptions on coronary blood flow in detailed representations of the coronary vasculature.
Figure 1
Figure 1 (a) Coronary artery network model; the numbers of the network vessel segments are labelled in the figure. The segment lengths are: (1) 20 mm; (2) and (3) 40 mm; (4) and (7) 34 mm; (5) and (6) 26.5 mm. (b) Close-up of segment 2 where the stent was placed. L is the length of the stent in place (15 mm in this study), 2L denotes the region over which the stent affects the material properties of the vessel, and A-E are points along the vessel segment at which results are presented later in the article.
Figure 3
Figure 3 Pressure, radius and flow velocity over time at various points (A-E) along vessel segment 2 in the unstented ((a)-(c)) and stented ((d)-(f)) cases with a linear increase in inlet pressure (Simulation I). See Figure 1 for the positions of these points in the network. Note that the radius at point C, in the middle of the stented vessel, is just over 1.5 mm (the radius of the stent).
Figure 4
Figure 4 Pressure along different vessel segments at the times shown in the legends of the plots in the unstented ((a)-(c)) and stented ((d)-(f)) cases. The results for 0.25 s and 1.0 s are very similar to each other and appear together as the top line in all the plots. See Figure 1 for the positions of the vessel segments in the network.
Figure 5 Figure 6
Figure 5 Radius, flow velocity and flow rate along vessel segment 2 at the times shown in the legends of the plots in the unstented ((a)-(c)) and stented ((d)-(f)) cases. The radius results for 0.25 s and 1.0 s are very similar to each other and appear together as the top line in plots (a) and (d). See Figure 1 for the position of vessel segment 2 in the network.
Figure 7
Figure 7 Pressure, radius and flow velocity over time at various points (A-E) along vessel segment 2 in the unstented ((a)-(c)) and stented ((d)-(f)) cases with sinusoidal inlet pressure variation (Simulation II). See Figure 1 for the positions of these points in the network. Note that the radius at point C, in the middle of the stented vessel, is just over 1.5 mm (the radius of the stent). | 6,347.6 | 2006-04-01T00:00:00.000 | [
"Engineering"
] |
The Hsp90 Chaperone Machinery*
Hsp90 was originally identified as one of several conserved heat shock proteins. Like the other major classes of heat shock proteins, Hsp90 exhibits general protective chaperone properties, such as preventing the unspecific aggregation of non-native proteins (1). However, Hsp90 seems to be more selective than the other promiscuous general chaperones, as it preferentially interacts with a specific subset of the proteome (2). Another specific feature of Hsp90 is its regulatory role of inducing conformational changes in folded, native-like substrate proteins that lead to their activation or stabilization (3). Recently, the three-dimensional structures of full-length Hsp90 from Escherichia coli, yeast, and the endoplasmic reticulum were solved (4-7). Together with sequence data, these showed that, although Hsp90 maintained its general domain structure from bacteria to man, distinct changes seem to have adapted Hsp90 to the more complex protein environment of the eukaryotic cell. Concomitant with the occurrence of a long charged linker connecting the N- and M-domains, the eukaryotic protein exhibits an extension of the C-terminal domain, which includes the conserved amino acid motif MEEVD at the C terminus (8). This region serves as the major interaction site for a cohort of co-chaperones (Table 1) (9), which apparently support Hsp90 in the folding and activation of its substrate proteins in eukaryotes. In this review, we summarize the current knowledge on the functional principles of this molecular machine, including the ATP-driven chaperone cycle of Hsp90 and its regulation by co-chaperones and post-translational modifications.
Structure and ATPase Activity
Hsp90 is a flexible dimer. Each monomer consists of three domains: the N-domain, connected by a long linker sequence (in eukaryotes) to an M-domain, which is followed by a C-terminal dimerization domain (Fig. 1). The N-domain possesses a deep ATP-binding pocket (10), where ATP is bound in an unusual kinked manner. ATP hydrolysis by Hsp90 is rather slow: Hsp90 from yeast hydrolyzes one molecule of ATP every 1 or 2 min (11,12), and human Hsp90 hydrolyzes one molecule of ATP every 20 min (0.04 min⁻¹) (13). The ATPase activity is essential for the function of Hsp90 in yeast (11,14). The slow hydrolysis suggests that complex conformational rearrangements of Hsp90 are coupled to the ATPase reaction and that these represent the rate-limiting step of the enzyme. The first steps of these conformational changes were elucidated recently in detail (15): upon ATP binding, a short segment of the N-domain called the "ATP lid" changes its position and flaps over the binding pocket (Fig. 1, steps 2 and 3). This releases a short N-terminal segment from its original position (16). In a subsequent reaction, this segment binds to the respective N-domain of the other subunit in the dimer, producing a strand-swapped, transiently dimerized N-terminal conformation (step 3) (5,15). These N-terminal rearrangements result in further conformational changes throughout the entire Hsp90 dimer, leading to a twisted and compacted dimer in which the N- and M-domains associate and the distance between M-domains is shortened by 40 Å (5). The association of the N- and M-domains completes the active site of this "split ATPase" (step 4). Recently, a similar progression of steps was shown to occur also for the endoplasmic homolog Grp94 (17), mitochondrial TRAP1 (6,18), and human Hsp90 (19). Therefore, the scenario outlined above seems to be the ubiquitously conserved ATPase mechanism for Hsp90.
Interestingly, the unusual way in which ATP is bound by Hsp90 is perfectly mimicked by some natural compounds, such as geldanamycin and radicicol. These are highly specific and potent inhibitors of the Hsp90 ATPase (20), blocking the maturation of substrate proteins and eventually resulting in their degradation (21). As several Hsp90 substrate proteins are kinases, which can be deregulated in the development of cancer, derivatives of Hsp90 inhibitors are currently being investigated as anticancer therapeutics at the stage of clinical trials (22).
Hsp90 Cofactors Involved in Substrate Maturation
Current models assume that the conformational changes associated with ATP hydrolysis are required for reaching or maintaining an activated state of a substrate protein. In well-studied examples such as the SHRs, several cofactors interact with Hsp90 in a sequential manner to assemble a functional chaperone machinery (23,24). The basis for this ordered succession of different assemblies can now be rationalized, as it turned out that several Hsp90 cofactors display a strong binding preference for specific Hsp90 conformations. The loading of an SHR onto Hsp90 requires the cooperation of Hsp90 with the chaperone Hsp70 and its cofactor Hsp40 (25). Moreover, both chaperones become physically linked by an adaptor protein called Hop/Sti1 (Table 1). This co-chaperone binds via small helical TPR domains to the C-terminal ends of Hsp70 and Hsp90 (26). It seems that Hsp70 stabilizes the SHR in a conformation that can be recognized and bound by Hsp90. However, experimental evidence for this notion is largely lacking. How the substrate in this complex is transferred from Hsp70 to Hsp90 is also still unclear. It might be that the bridging by Hop/Sti1 selects for Hsp90 molecules in a conformation competent for substrate binding in addition to increasing the local concentration of Hsp70 and Hsp90. For the progression of the chaperone cycle, empty Hsp70 and Hop/Sti1 have to dissociate, and other co-chaperones such as specific PPIases and p23/Sba1 enter the complex (Table 1) (24). These PPIases also possess a TPR domain, which binds to the C-terminal end of Hsp90. The second cofactor, p23/Sba1, associates with the N-terminally dimerized conformation of Hsp90 (27,28), making it likely that the dramatic conformational rearrangement from the open to the closed state of Hsp90 occurs at this stage of the Hsp90 chaperone cycle (Fig. 1, steps 3 and 4). This closed conformation is metastable and upon ATP hydrolysis returns to the open state (Fig. 1, step 5) (28). The bound substrate protein dissociates in turn from Hsp90, permitting a new round of the cycle.
The first steps of the cycle for the maturation of signaling kinases are a variation of the scheme described above. Here, the kinase-specific Hsp90 cofactor Cdc37 seems to associate with substrate kinases in their inactive forms first. This complex may then be loaded onto Hsp90 (29). The subsequent steps are less clear. It may be that Hsp70, Hsp40, and Hop/Sti1 are additionally required for kinases as well (30). Because Cdc37 partially inhibits the ATPase of Hsp90 (31), it is reasonable to speculate that a transient stalling of the ATPase facilitates substrate transfer onto Hsp90 in general.
Up to now, more than a dozen distinct Hsp90 cofactors have been identified (9). Their large number is not paralleled by other chaperone systems. Most bind Hsp90 with submicromolar affinities (Table 1). The major class of these is the TPR domain-containing cofactors, which include the proteins Hop/Sti1, PP5/Ppt1, and the large PPIases, among others (Table 1). Some of these cofactors could specifically facilitate the activation of a certain set of substrate proteins. In this context, the cofactor Unc45 has been shown to participate in early muscle development during the assembly of myosin filaments (32), and Xap2/AIP has been found in complex with the protein HbX from the hepatitis B virus (33) or the endogenous aryl hydrocarbon receptor (34). It remains to be seen whether these cofactors are really highly specialized or whether they are just the first substrates identified. Interestingly, in yeast, only two cofactors of the Hsp90 system are essential for viability (in addition to Hsp90). These are Cdc37 and Cns1 (29,35,36). Cns1 has been shown to associate with both Hsp90 and Hsp70; however, due to the presence of a single TPR domain, no ternary complexes can be formed (37). The function of Cns1 remains elusive.
Layers of Regulation
Hsp90 is embedded into several control mechanisms that influence its activity. As mentioned above, the ATPase activity is intrinsically decelerated due to slow conformational changes of the lid segment within the N-domains. In addition to this, the ATPase activity of Hsp90 is regulated by several cofactors. Another set of cofactors modulates substrate processing without changing the ATPase activity. Finally, the activity of Hsp90 is also regulated by post-translational modifications.
ATPase-modulating Cofactors
Several of the cofactors of Hsp90 modulate its ATPase activity by interacting preferentially with a specific conformation of Hsp90. p23/Sba1 binds to the ATPase domain and stabilizes the N-terminally dimerized conformation at the late stage of the ATPase cycle (Fig. 1) (28,38,39). This positions p23/Sba1 to be part of the Hsp90 complex at the moment of hydrolysis and appears to be the reason for the decrease in the ATP turnover rate of Hsp90 in the presence of p23/Sba1.
A second site of cofactor interaction resides in the M-domain, to which Aha1, the only known activator of the Hsp90 ATPase, binds. This interaction stimulates the weak ATPase activity of Hsp90 by >10-fold (40). The stimulatory interaction of Aha1 with the M-domain of Hsp90 suggests a participation of this domain in the rate-limiting step of the ATPase reaction. Structural studies show that Aha1 binding remodels the M-domain around the catalytically active Arg 380 and shifts the domain to a conformation resembling the closed conformation, committed for ATP hydrolysis (compare Fig. 1, step 3) (7,41).
Another option for modulating the ATPase activity of Hsp90 is implemented by the co-chaperone Sti1 in yeast. Sti1 binds to the C-terminal end of Hsp90 via its TPR domain. In addition, there is a second binding site in the N- or M-domain (42). Binding to this second site allows Sti1 to inhibit the ATPase of Hsp90 completely (42,43). Interestingly, ATP binding is not affected (42). As in the case of p23/Sba1, Sti1 selects a specific conformation of Hsp90 for binding. Biochemical analysis suggests that Sti1 binding prevents the N-terminal dimerization and the association of the N- and M-domains (Fig. 1, step 3) (42). In consequence, the Km value of ATP hydrolysis is not affected, but kcat is effectively reduced; this is the classical behavior of a noncompetitive inhibitor. The open state propagated by Sti1 is the acceptor state for substrate, as outlined above. Cdc37 also inhibits the ATPase activity of Hsp90 (31,43). Details of the respective mechanisms are still unclear, but crystallographic data provide a model in which Cdc37 binds to the ATP lid in the N-terminal nucleotide-binding site of Hsp90 and prevents N-terminal dimerization by inserting as a dimer in between the two N-domains (44). Because Cdc37 is involved in the loading of Hsp90 with kinases, it is consistent with notions about the acceptor state of Hsp90 that Cdc37 also keeps Hsp90 in an open state. The deceleration of the ATPase activity probably permits this state to persist for an extended period of time. Together, these cofactors allow the basic conformational changes of Hsp90, which can be viewed as substrate processing steps, to be adjusted to the specific needs of certain clients with respect to binding to Hsp90 and also, in the case of p23/Sba1, to substrate release.
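The statement that Sti1 behaves as a classical noncompetitive inhibitor (unchanged Km, reduced kcat) can be made concrete with a simple Michaelis-Menten sketch. The rate law below is the textbook noncompetitive form; the numerical values (a kcat of roughly one turnover per minute, in the range quoted above for yeast Hsp90, plus an assumed Km for ATP and an assumed Ki for the inhibitor) are illustrative, not measured constants.

import numpy as np

def mm_rate_noncompetitive(S, E0, kcat, Km, I=0.0, Ki=1.0):
    # Michaelis-Menten rate with a classical noncompetitive inhibitor:
    # the apparent kcat is scaled by 1/(1 + I/Ki) while Km is unchanged.
    kcat_app = kcat / (1.0 + I / Ki)
    return kcat_app * E0 * S / (Km + S)

S = np.linspace(0.0, 2000.0, 200)    # ATP concentration, microM (illustrative range)
v_free = mm_rate_noncompetitive(S, E0=1.0, kcat=1.0, Km=400.0)
v_sti1 = mm_rate_noncompetitive(S, E0=1.0, kcat=1.0, Km=400.0, I=5.0, Ki=1.0)
# v_sti1 saturates at a lower plateau (reduced apparent kcat) but half-saturates
# at the same ATP concentration (unchanged Km), the noncompetitive signature.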
Post-translational Modifications
A further level of regulation that has gained increasing attention is the covalent modification of Hsp90, such as acetylation (45), S-nitrosylation (46), and phosphorylation (47-49). These modifications lead to alterations in the maturation of Hsp90 substrates (50). Although some of the modified amino acid positions have already been identified (mostly by incorporation of the corresponding radiolabeled groups and/or mass spectrometry), it is a challenging task to obtain a quantitative picture of the modifications.
For nitrosylation, an interesting feedback loop between human Hsp90 and its substrate eNOS was discovered (46,51). On the one hand, eNOS activity depends on the chaperone activity of Hsp90; on the other hand, the nitrosylating agent NO modifies human Hsp90 at Cys 597 (which is part of the C-terminal domain). As a consequence, the Hsp90 ATPase activity is inhibited (51), which in turn prevents the up-regulation of eNOS activity by Hsp90. This might facilitate a tight regulation of cellular NO production in a negative feedback loop (51).
For acetylation, a similar scenario emerges. Histone deacetylase inhibitors, which result in the hyperacetylation of Hsp90, lead to a reduced interaction with and maturation of several of its substrate proteins, such as p53, Raf1, Bcr-Abl, and the glucocorticoid receptor (52-55). As a consequence, an increase in proteasomal degradation of some Hsp90 substrates was found. HDAC6 was identified as the enzyme deacetylating Hsp90 (52). In vivo, Hsp90 is acetylated at least at two sites; one was identified as Lys-294 (in human Hsp90α) (45). Furthermore, besides the ability of Hsp90 to interact with its substrate proteins, the binding of ATP by Hsp90 was also shown to be compromised upon acetylation of Hsp90 (55,56).
(Fragment of the Figure 1 legend: the ATP lid, which in ATP-free Hsp90 contacts the very N-terminal residues, closes over the bound nucleotide. After lid closure, the first 24 amino acids of each Hsp90 monomer dimerize, and the first β-strand and α-helix swap to associate with the N-domain (ND) of the other monomer (step 3). Furthermore, in each monomer, the N-domains contact the corresponding M-domains (MD). This metastable conformation is committed for ATP hydrolysis (step 4). Altogether, this leads to compaction of the Hsp90 dimer, in which the individual monomers now twist around each other. After hydrolysis, the N-domains dissociate; both monomers separate N-terminally; the ATP lid opens; and after release of ADP and Pi, Hsp90 returns to the initial state (steps 5 and 6). CD, C-terminal domain.)
Phosphorylation of Hsp90 has been known for 30 years (47). By two-dimensional gel electrophoresis, it was found that at least four differently phosphorylated isoforms of Hsp90 exist (57,58). For mammalian Hsp90 proteins, the majority of Hsp90 molecules contain on average three phosphates per monomer (59). The Hsp90 phosphorylation level is high under physiological conditions (60). Heat shock conditions induce an increased Hsp90 phosphorylation turnover (61). All known sites reside in the N-domain of Hsp90, but so far, no specific effect can be attributed to a particular phosphorylation event.
Kinases suggested to phosphorylate Hsp90 comprise casein kinase II, Akt, DNA-dependent protein kinase, and, in yeast cells, the Akt homolog Sch9 (reviewed in Ref. 50). Interestingly, one of the cofactors associating with Hsp90 via a TPR domain is PP5/Ppt1, a bona fide phosphatase. The yeast homolog dephosphorylates Hsp90 in a specific manner only when bound to Hsp90 (62). Dephosphorylation of Hsp90 is important for its in vivo function, as in a ppt1 deletion strain, the maturation of several substrate proteins was found to be impaired (62).
It will be a highly rewarding task to understand Hsp90 regulation at the level of post-translational modifications in detail, as this allows the cell to exert fine-tuned control over the Hsp90 chaperone machinery. The situation is even more complicated than described, as numerous co-chaperones of Hsp90 are regulated by modifications such as phosphorylation as well (63,64).
Substrate Selection
In this context, a long-standing open question is the extent of substrate specificity of Hsp90. Compared with other chaperones, a large number of substrate proteins are known, owing to stable interactions that allowed isolation of the respective complexes. Many of the identified Hsp90 substrate proteins fall into two classes: transcription factors (such as SHRs and p53) and signaling kinases. The interactions with important regulatory proteins apparently allow Hsp90 to influence seemingly unrelated processes, such as evolutionary events (65,66), mitochondrial homeostasis (67), and the propagation of RNA viruses (68). Surprisingly, the known substrate proteins are different in structure, leaving several possibilities as to which determinants are important for interaction with Hsp90. A good example is Src kinase. This protein exists in two variants, a cellular form (c-Src) and a viral form (v-Src). Both forms are almost identical (95%), with just a few amino acid changes. But although v-Src is strictly Hsp90-dependent in its activity, c-Src is largely independent of Hsp90 (69). A first step toward understanding the basis for the differences in the Hsp90 dependence of v-Src and c-Src is the finding that the intrinsic stabilities of the two Src forms differ, with v-Src being a highly unstable protein (70). Systematic variations of Hsp90-interacting kinases have further added to the notion that it is not a specific binding element but rather the stability of the protein that seems to be important for binding (71). Whether this is the general scheme remains to be seen, as there is evidence that a specific loop ("activation loop") in the kinase might participate in governing the interaction with Hsp90 (72).
Regarding identification of the substrate-binding site, Hsp90 is certainly lagging behind other chaperones such as Hsp70 and GroE, for which we have a clear idea of this region. For Hsp90, experimental evidence exists for binding sites in all three domains of the protein (12,73,74). However, recent advances have begun to shed light on substrate binding. Determination of full-length Hsp90 crystal structures of open and closed conformations is a major achievement in this context (4-6). In analogy to chaperones like GroE and derived from homology of Hsp90 to DNA-binding topoisomerases (75), it was speculated that the substrate protein may be encapsulated by the Hsp90 dimer. This notion is not in agreement with the aforementioned structures, in which there is not enough space to accommodate a substrate protein between the two Hsp90 monomers. The first direct view of the structure of an Hsp90·substrate complex comes from electron microscopy and image processing of Hsp90 in complex with the kinase Cdk4 and the co-chaperone Cdc37. In this assembly, the substrate kinase is bound in a highly asymmetric fashion to the M-domain and probably the N-domain of one subunit, whereas the co-chaperone Cdc37 resides between the N-domains (74). Whether this association is the same for all kinases and whether this can be generalized for other substrate proteins remain to be determined.
Perspectives
Although the basic features of ATP turnover and associated conformational changes of Hsp90 appear to be solved now, other characteristics of the Hsp90 machinery are not understood to date. These include substrate turnover and requirements for formation of particular substrate·co-chaperone·Hsp90 complexes. A combination of in vivo and in vitro approaches will be required to resolve these questions. Once we obtain a more detailed view of the processive power of this molecular machine, we can aim to integrate variations concerning specific substrate proteins, cofactors, and particular mechanisms of regulation and post-translational modifications into the picture. | 4,536 | 2008-07-04T00:00:00.000 | [
"Biology"
] |
Human Capital and Reemployment Success: The Role of Cognitive Abilities and Personality
Involuntary periods of unemployment represent major negative experiences for many individuals. Therefore, it is important to identify the factors that determine how quickly job seekers are able to find new employment. The present study focused on the cognitive and non-cognitive abilities of job seekers that determine their reemployment success. A sample of German adults (N = 1366) reported on their employment histories over the course of six years and provided measures of their fluid and crystallized intelligence, mathematical and reading competence, and the Big Five personality traits. Proportional hazard regression analyses modeled the conditional probability of finding a new job at a given time as a function of the cognitive and personality scores. The results showed that fluid and crystallized intelligence as well as reading competence increased the probability of reemployment. Moreover, emotionally stable job seekers had higher odds of finding new employment. The other Big Five personality traits were less relevant for reemployment success. Finally, crystallized intelligence and emotional stability exhibited unique predictive power after controlling for the other traits and showed incremental effects beyond age, education, and job type. These findings highlight that stable individual differences have a systematic, albeit rather small, effect on unemployment durations.
Introduction
The loss of paid work and the ensuing periods of unemployment have severe negative repercussions not only for the person concerned but also for their families and for society as a whole. These adverse effects go well beyond a mere loss of material prosperity: enduring periods of unemployment deteriorate physiological and psychological well-being and, among other things, result in more depressive symptoms and higher suicide rates [1-3]. Therefore, understanding the process of job search and identifying the factors that explain the probability of finding a new job is a crucial concern for those affected by unemployment's consequences. The present study focused on the human capital of job seekers to identify cognitive and non-cognitive factors predicting their reemployment success. Adopting a longitudinal perspective, I examined the employment histories of unemployed respondents to identify the unique effects of domain-general intelligence, domain-specific competences, and enduring personality traits on the time until successful reemployment.
Human Capital and Reemployment Success
The speed with which job seekers manage to find an adequate job matching their interests, skills, and work expectations (e.g., with regard to wages) after periods of involuntary unemployment is determined by various factors (for a review, see [3]). Among others, reemployment success is influenced by current labor market needs (e.g., regional unemployment rates), a job seeker's economic need to work (e.g., debts or dependent children), an individual's job search intensity, and, importantly, also his or her human capital [4]. Rather enduring personal characteristics that reflect who one is and what one knows have a substantial impact on an individual's employability [5]. Human capital subsumes several work-related individual differences such as job experience, education, or age, but also various psychological traits (e.g., intelligence, personality) [3]. Economic and sociological studies have, for the most part, focused on sociodemographic differences that shape job seekers' success in finding adequate employment. For example, this research showed that older job seekers experience significantly longer unemployment periods [6], whereas highly educated individuals tend to have greater reemployment success [7]. In contrast, psychological characteristics in the form of cognitive skills (e.g., intelligence) and personality (e.g., the Big Five) that might shape job seekers' success in finding adequate employment have been scrutinized less thoroughly [3,8].
Despite a substantial body of research on job-search behavior and its antecedents [9,10], few studies have addressed the relevance of cognitive abilities for reemployment success. Several prospective studies identified intelligence measured in childhood or adolescence as an influential predictor of unemployment in later life [11-13]. For example, representative data from Sweden highlighted that cognitive abilities assessed in youth that were higher by one standard deviation corresponded to a change in the probability of receiving unemployment benefits in adulthood of about 2 percent [14]. Similar results were identified in the United States: Herrnstein and Murray [13] reported that intelligence scores obtained in adolescence were significantly associated with the probability of being out of the labor force ten years later. Together, these findings indicate that cognitive factors can explain the likelihood of unemployment spells over an individual's life course. However, these results do not show whether cognitive skills might also be relevant for the speed of finding a new job after becoming unemployed. Empirical evidence from personnel selection programs indicates that intelligence represents the most important predictor of job performance [15] and is thus an important criterion used by human resource professionals when selecting job candidates. Thus, it is conceivable that cognitive factors might also determine the reemployment success of job seekers. Indirect support for this assumption comes from studies linking reemployment success to schooling outcomes [8,16]: job seekers with a higher education were more likely to find reemployment faster than less educated individuals. Given the strong association between intelligence and academic performance [17], it can be assumed that cognitive skills might be similarly relevant for finding a new job.
A full understanding of psychological characteristics determining reemployment success needs to incorporate non-cognitive skills alongside cognitive factors [18]. Important labor market outcomes such as occupational attainment are not only affected by cognitive abilities but also by various personality traits [19-21]. Even reemployment success is partly influenced by basic traits of personality [14,22]. Meta-analytic summaries [8], albeit based on only two to four studies, reported significant negative associations between the Big Five of personality and the duration of unemployment. Higher levels of extraversion, conscientiousness, agreeableness, and openness were associated with significantly shorter unemployment periods. However, with correlations of about r = −0.10, the respective effects were rather small. So far, it is not clear whether these correlations reflect a unique effect of personality on reemployment success or, rather, represent an indirect effect of basic cognitive abilities. Many personality traits exhibit systematic associations with domain-independent intelligence and domain-specific competences [18,23,24]. For example, reasoning correlates at about r = 0.25 with openness to experiences and at r = −0.14 with conscientiousness [25]. Typically, cognitive abilities account for about 5% to 10% of the variance in personality [26]. Given the interdependence of cognitive and non-cognitive skills [18], it is unclear whether personality traits explain incremental variance in reemployment success beyond cognitive skills.
The Present Study
Human factors play an important role in finding a new job. However, few studies have examined the relevance of job seekers' psychological characteristics in the form of cognitive skills (e.g., intelligence, competence) and personality (e.g., the Big Five) for reemployment success. The present study therefore answers repeated calls [19,20] to contrast cognitive and non-cognitive factors in order to identify the unique predictive power of intelligence and personality traits for important labor market outcomes. Moreover, given the documented returns of educational investments for economic success [8,16], the study also acknowledged domain-specific competences as a specific form of cognitive skills gained during one's school and college education. In sum, the present study, based on a representative sample of German adults, aims to disentangle the unique effects of domain-independent intelligence from those of acquired competences and enduring personality traits.
Sample and Procedure
The participants were drawn from the longitudinal National Educational Panel Study (NEPS), which followed a representative sample of German adults over the course of several years [27]. The surveys and cognitive tests were administered at the respondents' private homes by trained interviewers from a professional survey institute. Further information on the sampling procedure, the data collection process, and the interviewer selection and training is summarized on the project website (http://www.neps-data.de). The present analyses focus on N = 1366 individuals (54% women) forming part of the active labor force between 18 and 60 years of age (M = 41.08, SD = 11.14) who exhibited at least one unemployment spell between 2010 and 2015. The sample was well-educated, with about 53% having university entrance qualifications corresponding to levels 3 to 4 of the International Standard Classification of Education (ISCED) version 1997 [28] and about 39% having finished a university education (equivalent to ISCED levels 5 or higher). The most recent employment of the respondents spanned various fields including blue-collar (30%) and white-collar professions (70%). The International Socio-Economic Index [29], which reflects an individual's economic and social position based on his or her most recent occupation (range: 0 to 100), was M = 43.89 (SD = 17.42).
Time to Reemployment
All respondents provided full occupational histories across their life courses, including their employment and unemployment spells. For each new unemployment spell starting between 2010 and 2015, I calculated the duration (in weeks) until reemployment, that is, the number of weeks until an unemployed respondent found a new job. I only considered involuntary unemployment spells, characterized by respondents who were out of paid work and actively seeking paid employment. Thus, non-employment spells due to, for example, maternity leave, sick leave, or an educational hiatus were not considered. Each respondent exhibited between 1 and 9 unemployment spells (Mdn = 1) within the examined time frame, totaling 1973 unemployment spells for the entire sample. About 68 percent of all included spells ended in reemployment, whereas the remaining spells were still ongoing at the end of 2015 (i.e., the final measurement occasion of this study). The mean time until reemployment was M = 25.64 weeks (SD = 27.58).
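A minimal sketch of how such spell durations and censoring indicators can be constructed is shown below; the column names and dates are hypothetical and do not correspond to the actual NEPS variables.

import pandas as pd

# Hypothetical spell-level data: one row per unemployment spell, with the date
# the spell started and (if reemployment occurred) the date a new job began.
spells = pd.DataFrame({
    "person_id":   [1, 1, 2, 3],
    "spell_start": pd.to_datetime(["2011-03-07", "2013-06-03", "2012-01-09", "2014-10-06"]),
    "job_start":   pd.to_datetime(["2011-08-01", pd.NaT, "2012-04-02", pd.NaT]),
})

study_end = pd.Timestamp("2015-12-31")   # final observation date of the panel

# Event indicator: 1 if the spell ended in reemployment, 0 if right-censored.
spells["event"] = spells["job_start"].notna().astype(int)

# Duration in weeks until reemployment, or until the study end for ongoing spells.
end_date = spells["job_start"].fillna(study_end)
spells["weeks"] = (end_date - spells["spell_start"]).dt.days / 7.0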
Intelligence
Fluid and crystallized intelligence were operationalized with two tests measuring reasoning and receptive vocabulary that were administered in 2014. Fluid intelligence was measured with a matrices test including 12 items [30,31]. For each item, respondents had to identify a basic logical rule and select a geometrical element that followed this rule. The sum score of correctly solved items represented the respondents' reasoning abilities as an indicator of their fluid intelligence. On average, the participants correctly solved M = 8.33 (SD = 2.80) items. The omega hierarchical reliability was ω h = 0.88 [32]. Crystallized intelligence was measured with the Peabody Picture Vocabulary Test [33,34] that included 89 items. For each item, the respondents had to select one out of four pictures that corresponded to a spoken word. The number of correctly answered items represented the measure of receptive vocabulary as an indicator of crystallized intelligence. The average score of the respondents was M = 71.84 (SD = 10.97) and the reliability was ω h = 0.97.
Competences
Domain-specific competences were measured in 2010 with achievement tests focusing on mathematics and reading. The adopted theoretical frameworks for these tests are described in [35,36]. Both tests were scaled using models of item response theory. Competence scores were estimated as weighted maximum likelihood estimates (WLE; [37]). The WLE reliabilities were good with 0.78 for the mathematical test and 0.72 for the reading test, respectively. Detailed psychometric properties of the administered tests are reported in [38,39].
Personality
The five basic personality traits (openness, conscientiousness, extraversion, agreeableness, and emotional stability) were assessed with a short version of the Big Five Inventory [40] that measured each trait with two or three items. Each item was accompanied by a five-point response scale from fully disagree (1) to fully agree (5). Four scales exhibited acceptable reliabilities with ωh around 0.70 (see Table 1). However, agreeableness was inadequately measured, with ωh = 0.45.
Control Variables
Several control variables were acknowledged in the analyses, including the respondent's sex (coded 0 for women and 1 for men), age (in years), and the years of education [41]. Moreover, respondents' most recent occupations were coded according to the International Standard Classification of Occupations (ISCO). Following the methodology adopted in the Programme for International Student Assessment [42], I grouped these occupations into blue-collar jobs (ISCO 6 and higher; coded 0) and white-collar jobs (ISCO 1 to 5; coded 1).
Statistical Analyses
The effects of intelligence, competence, and personality on the time until reemployment were examined with proportional hazard regression analyses [43] that modeled the conditional probability of reemployment (i.e., finding a new job) at a given time interval as the dependent variable. These analyses used the Kaplan-Meier estimator [44], which also accommodates right-censored data (i.e., ongoing unemployment spells). Because some respondents experienced multiple unemployment spells during the examined period, these dependencies were acknowledged by computing robust variances using the Huber-White sandwich estimator [45]. Moreover, given that some respondents exhibited missing values on one or more variables, the analyses were based on multiple imputations, where missing values were imputed 50 times using predictive mean matching [46]. Effect sizes were evaluated in line with conventional standards [47], with odds ratios (OR) of 1.22, 1.86, and 3.00 indicating small, medium, and large effects, respectively. To examine the unique and incremental effects of intelligence, competence, and personality with regard to the control variables, a two-step strategy was adopted [48]. First, the bivariate effects for each domain were studied without acknowledging any covariates. Then, the control variables were included to derive the incremental effects of each domain beyond age, educational level, and job type. Age was modeled as a time-varying variable, whereas educational level and job type were included as time-invariant predictors. Finally, an omnibus model was estimated that included all variables in a single model to derive the partial effect of each predictor controlling for the others. Different models were compared using the Bayesian Information Criterion (BIC; [49]), which indicates a better fit at smaller values.
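The original analyses were run in R (see the software section below). As a language-agnostic illustration of the survival-analysis building block mentioned here, the following Python sketch implements the Kaplan-Meier product-limit estimator for right-censored spells; it does not reproduce the Cox regression, the cluster-robust variances, or the multiple imputation used in the study.

import numpy as np

def kaplan_meier(durations, events):
    # Kaplan-Meier estimate of the survival function S(t) (here: the probability
    # of still being unemployed after t weeks), handling right-censored spells
    # (event == 0). Returns the ordered event times and S(t) at those times.
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(durations)
    durations, events = durations[order], events[order]

    times, survival = [], []
    s = 1.0
    n_at_risk = len(durations)
    for t in np.unique(durations):
        at_t = durations == t
        d = events[at_t].sum()          # reemployment events at time t
        if d > 0:
            s *= 1.0 - d / n_at_risk    # product-limit update
        n_at_risk -= at_t.sum()         # events and censored spells leave the risk set
        times.append(t)
        survival.append(s)
    return np.array(times), np.array(survival)

# Toy usage with hypothetical spell durations (weeks) and event indicators.
t, S = kaplan_meier([4, 8, 8, 12, 20, 30], [1, 1, 0, 1, 0, 1])
reemployment_prob = 1.0 - S             # cumulative probability of reemployment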
Statistical Software and Open Data
All analyses were conducted in R version 3.3.2 [50] with the survival package version 2.40 [51] and mice version 2.30 [52]. The raw data is available at http://www.neps-data.de.
Results
The time to reemployment exhibited a rather skewed distribution (see Figure 1). About 25 percent of the respondents found a new job within little more than two months and 50 percent within four months. After six months, nearly 65 percent of the sample was reemployed, whereas the rest of the sample took a median of about eleven months until returning to paid employment.
Influential respondent characteristics determining the time to find a new job were examined by regressing the time to reemployment on the respondents' fluid and crystallized intelligence, mathematical and reading competence, the Big Five of personality, and several control variables. Preliminary analyses indicated that the respondents' sex violated the proportional hazards assumption of the Cox [43] regression model. Therefore, the analyses were stratified by sex, resulting in different baseline hazard functions for men and women but identical regression coefficients of the included predictors across sexes.
In the first step, the unique effects of intelligence, competence, and personality were studied without considering age, education, and job type as control variables. The results of these analyses are summarized in Table 2 (Model 1). Regarding individual differences in basic cognitive abilities, the odds of finding a new job at a given time interval significantly (p < 0.001) increased by 15% (OR = 1.15) for each standard deviation increase in fluid intelligence and by 6% for a standard deviation increase in crystallized intelligence (OR = 1.06, p = 0.03). Similarly, reading competences increased the probability of reemployment by about 14% (OR = 1.14, p = 0.004). In contrast, mathematical competences (OR = 1.06, p = 0.14) were not significantly associated with the odds of finding a new job. Finally, regarding the five personality traits, only emotional stability was significantly (p = 0.02) associated with the odds of finding a new job; one standard deviation increase in emotional stability corresponded to an increase in the odds of reemployment of about 7% (OR = 1.07). Thus, fluid and crystallized intelligence, reading competence, and emotional stability independently predicted the examined hazard rates of reemployment.
Note. Intelligence, competence, and personality scores were z-standardized. Stratified by sex, resulting in different baseline hazard functions but constant coefficients across strata. B = unstandardized regression weight, SE = standard error of B, OR = odds ratio, FMI = fraction of missing information (i.e., the proportion of the total sampling variance due to missing data), BIC = Bayesian information criterion [49]. Based upon 50 imputation samples. * p < 0.05; + p < 0.10.
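As a small arithmetic aside, the reported odds ratios are exponentiated regression weights, so they can be converted back and forth and combined across standard deviations as illustrated below; the values are taken from the estimates quoted above, and the two-SD figure is derived here for illustration rather than reported in the paper.

import numpy as np

B = np.log(1.15)                  # coefficient implied by OR = 1.15 per SD of fluid intelligence
OR_per_sd = np.exp(B)             # = 1.15, i.e. +15% odds of reemployment per SD
OR_two_sd = np.exp(2 * B)         # effect of a 2-SD difference (about 1.32)
percent_change = 100 * (OR_per_sd - 1.0)   # percent change in the odds per SD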
In the second step, age, education, and job type were added to these regression models to study the incremental contributions of intelligence, competence, and personality with regard to these control variables (Model 2 in Table 2). These analyses failed to identify incremental effects of fluid intelligence (OR = 1.03, p = 0.20) or reading competence (OR = 1.07, p = 0.09) on the odds of reemployment. In contrast, crystallized intelligence (OR = 1.09, p = 0.01) and emotional stability (OR = 1.09, p = 0.004) remained significant predictors of the probability of finding a new job. Thus, education and job type accounted for the effects of fluid intelligence and reading competence, whereas crystallized intelligence and emotional stability exhibited unique associations with reemployment probability beyond these sociodemographic characteristics.
Finally, these effects were replicated in an omnibus analysis that combined intelligence, competence, and personality into a single model (see Table 3). Again, crystallized intelligence (OR = 1.08, p = 0.03) and emotional stability (OR = 1.09, p = 0.01) showed unique effects on the odds of finding a new job, even after controlling for individual differences in fluid intelligence, competences, personality, and the three control variables. The respective effects are summarized in Figure 2. For respondents either high in crystallized intelligence or high in emotional stability, the odds of reemployment gradually increased with the unemployment duration. On average, the difference in the odds of reemployment between high (M + 1 SD) and low (M − 1 SD) levels on the respective traits was about 10 percentage points.
Note. Intelligence, competence, and personality scores were z-standardized. Stratified by sex, resulting in different baseline hazard functions but constant coefficients across strata. B = unstandardized regression weight, SE = standard error of B, OR = odds ratio, FMI = fraction of missing information (i.e., the proportion of the total sampling variance due to missing data), BIC = Bayesian information criterion [49]. Based upon 50 imputation samples. * p < 0.05; + p < 0.10.
Discussion
In the wake of the recent global economic crisis, many countries recorded a sharp rise in unemployment rates, giving renewed relevance to the investigation of predictors of reemployment. The central objective of the present study was to scrutinize the employment histories of a sample of unemployed adults to identify psychological characteristics that might explain the time until they found a new job. The presented results allow for three central conclusions. First, fluid and crystallized intelligence predicted the time to reemployment: higher levels of intelligence resulted in greater reemployment success. However, the effect of reasoning was fully mediated by the job seekers' education and job type; fluid intelligence did not exhibit unique predictive power beyond these sociodemographic characteristics. In contrast, general verbal competencies increased the probability of finding a new job even after controlling for these sociodemographic differences. Verbal skills may thus help job seekers present themselves favorably during job interviews and thereby contribute to their reemployment success [53].

Second, higher levels of reading competence predicted shorter unemployment periods. This might reflect the demands of the modern information age, which, for many jobs, places increasing importance on written texts and, thus, on text comprehension. Many work tasks (particularly in white-collar professions) require reading skills to quickly identify relevant information in, for example, work reports and to derive conclusions from these texts. If reading skills represent valued competences for employees (see [54] for monetary returns of competences), this might increase job seekers' chances of finding reemployment. However, competences are primarily acquired in school and college; therefore, they did not exhibit incremental effects beyond the number of years in education. Educational accomplishments in the form of higher academic degrees have been shown to predict the duration of unemployment periods [8]; this effect was also replicated in the present study. Thus, educational level seemed to fully mediate potential effects of competences on reemployment success. Indeed, a path model estimating the implied mediation effect in Mplus 7 [55] revealed that the number of years in education exhibited a significant indirect effect on unemployment durations via reading competence, B = 0.03, SE = 0.01, OR = 1.03, p = 0.05, but not via mathematical competence, B = 0.01, SE = 0.01, OR = 1.01, p = 0.40.

Third, personality explained unemployment durations beyond cognitive skills. These results contribute to the current discussion on the incremental effects of personality traits on economic outcomes [25]. As has been shown elsewhere [18], the Big Five of personality exhibit systematic associations with cognitive abilities. The most pronounced effects in this study were observed for conscientiousness, which correlated negatively with domain-independent intelligence as well as with domain-specific competencies (see [21,23] for similar results). However, despite these associations, emotional stability showed unique predictive power for the time to reemployment. Thus, self-assured individuals who are not easily frustrated by challenging situations were more likely to find a new job faster. This effect is in line with previous findings that showed a higher probability of unemployment for individuals with mental disorders or maladaptive behaviors in youth [14,56].
Taken together, the study identified systematic effects of cognitive and non-cognitive factors on the time to reemployment. According to prevalent standards [48], the respective effects of crystallized intelligence and emotional stability, albeit significant, were rather small and can be considered negligible in size. One standard deviation increase in either intelligence or personality increased the odds of finding a new job by only about eight percent. However, on the individual level, these small differences can accumulate into noticeable differences in the odds of finding a new job. As presented in Figure 2, for respondents high in crystallized intelligence and high in emotional stability, the odds of reemployment at, for example, 20 weeks were about 50%, whereas the respective odds fell to about 40% for respondents low (M − 1 SD) on these traits. Thus, certain psychological profiles can increase the chance of finding a new job within a given time interval.
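The arithmetic behind these figures can be illustrated with a short, hedged sketch: the per-standard-deviation odds ratio (~1.08), the baseline probability, and the two-SD shift below are approximated from the numbers reported above, not taken from the authors' analysis code.

    def prob_to_odds(p):
        return p / (1.0 - p)

    def odds_to_prob(odds):
        return odds / (1.0 + odds)

    baseline_prob = 0.40   # assumed reemployment probability at 20 weeks for M - 1 SD respondents
    or_per_sd = 1.08       # ~8% higher odds per SD of crystallized intelligence or emotional stability
    sd_shift = 2.0         # moving from M - 1 SD to M + 1 SD on a trait

    odds_high = prob_to_odds(baseline_prob) * (or_per_sd ** sd_shift) ** 2   # shift on both traits
    print(round(odds_to_prob(odds_high), 2))   # ~0.48, i.e. roughly the 50% reported for Figure 2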
The limitations of the present study point to intriguing opportunities for future research. For one, the study considered a rather short period of only six years. It might be informative to investigate longer periods of unemployment to examine whether the identified effects can be replicated for the long-term unemployed. Furthermore, the presented analyses were stratified by sex because of different baseline hazard rates for men and women. This might reflect the well-known effects of household composition on female labor participation [57,58]. The decision of many women to assume paid employment strongly depends on the presence of an additional earner in the household [59]. Thus, the present research should be extended in future studies by controlling for income effects of a second earner to study sex differences in reemployment success. Future research is also encouraged to examine the effectiveness of job-search programs for different subgroups of individuals. It is conceivable that intervention programs aimed at boosting self-presentation or stress-management skills might be more effective for job seekers low in emotional stability, whereas those with high emotional stability might profit more from training in specific job-search skills [10]. Finally, the present study was limited to one specific aspect of reemployment success, namely, the time to reemployment. Future studies could extend these findings by addressing more qualitative components of reemployment success, such as the match between a job seeker's job expectations and her or his realized job conditions. Individual differences, particularly in personality traits, might explain why some individuals might be content with the "first best" job, whereas others are more selective in their job choice.
Conclusions
Overall, the present study showed that the time to reemployment was systematically associated with cognitive and non-cognitive factors. Higher levels of crystallized intelligence and emotional stability resulted in slightly shorter unemployment periods and a higher probability of finding a new job. However, the respective effects were small and explained little of the individual differences in unemployment durations beyond educational levels and job type.
"Economics",
"Psychology"
] |
Predicting calvarial morphology in sagittal craniosynostosis
Early fusion of the sagittal suture is a clinical condition called sagittal craniosynostosis. Calvarial reconstruction is the most common treatment option for this condition, with a range of techniques having been developed by different groups. Computer simulations have huge potential to predict calvarial growth and optimise the management of this condition. However, these models need to be validated. The aim of this study was to develop a validated patient-specific finite element model of sagittal craniosynostosis. Here, the finite element method was used to predict the calvarial morphology of a patient based on the preoperative morphology and the planned surgical techniques. A series of sensitivity tests was carried out, and hypothetical models were developed, to understand the effect of various input parameters on the results. The sensitivity tests highlighted that the models are sensitive to the choice of input parameters. The hypothetical models highlighted the potential of the approach for testing different reconstruction techniques. The patient-specific model showed that a calvarial morphology comparable to the follow-up CT data could be obtained. This study forms the foundation for further studies to use the approach described here to optimise the management of sagittal craniosynostosis.
Sagittal craniosynostosis is caused by early fusion of the sagittal suture and is the most common form of craniosynostosis [1][2][3][4][5][6] . Its occurrence rate is about 3 in 10,000 births, with several studies reporting a significant increase (2-3 times) in its occurrence over the last 20 years [7][8][9] . A number of surgical techniques have been developed for the treatment of this condition 10,11 . Many studies have recently compared the clinical outcomes of these techniques in search of the optimum treatment method for this condition [12][13][14] .
The finite element (FE) method is a powerful numerical technique used to analyse a wide variety of engineering problems 15 . The FE method has the potential to predict the morphological changes during skull growth [16][17][18][19][20] and to compare the biomechanics of different reconstruction techniques. This can advance our understanding of the optimum management, not only of sagittal synostosis but of all forms of craniosynostosis [21][22][23] . However, FE models first need to be validated, and we need to understand the sensitivity of these models to build confidence in their outcomes.
The aim of this study was to develop a validated patient-specific finite element model of a case of sagittal craniosynostosis. Here, the finite element method was used to predict the calvarial morphology of a patient based on their preoperative morphology and the planned surgical techniques. The predicted calvarial morphology was then compared to the in vivo computed tomography data obtained two years after the operation. To the best of our knowledge, this retrospective study is the first to use the FE method to predict the outcome of calvarial reconstruction.
Materials and Methods
Patient and image processing. A series of computed tomography (CT) images of a sagittal synostosis patient of unknown sex and identity was obtained from the Seattle Children's Hospital (Washington, USA). The preoperative CT was obtained at 3 months of age (Fig. 1A); the postoperative CT was obtained at 5 months of age (Fig. 1B); and the follow-up CT was obtained at 29 months of age (24 months post-operation; Fig. 1C). Figure 1D-F compares the morphological changes of this patient's skull. Note, this study was reviewed and approved by the Institutional Review Board of Seattle Children's Hospital (approval number 12394). Written informed consent was obtained from the parents or guardians of the child.
The CT images were imported into Avizo image processing software (Thermo Fisher Scientific, MA, USA) and 3D models were developed. Bone, sutures and intracranial volume were segmented on the pre-operative model. The pre-operative model was then reconstructed virtually to model the post-operative calvarial reconstruction. This model consisted of bone, sutures, craniotomies and the intracranial volume (ICV), which broadly represents the brain.
Finite element analysis
Model development and materials. The 3D reconstructed pre-operative model was transformed into a 3D solid mesh and imported into a finite element solver (ANSYS v.18, Canonsburg, PA, USA) to predict the follow-up calvarial morphology. A quadratic tetrahedral mesh consisting of 1.6 million elements for the skull/sutures and 200,000 elements for the ICV was chosen following a mesh convergence study. Isotropic (linear elastic) material properties were assigned to all regions, with a thermal expansion coefficient defined only for the ICV. Bone and suture were assumed to have elastic moduli of 3000 MPa and 30 MPa, respectively 24,25 . The elastic modulus of the ICV was assumed to be 100 MPa 17 . The bone and suture materials were assumed to have a Poisson's ratio of 0.3; the ICV value was 0.48. The craniotomies were modelled with the same properties as the sutures.
Boundary and interface conditions. We assumed that the bone-suture and bone-craniotomy interfaces were perfectly connected (bonded), and modelled the ICV-bone/suture/craniotomy interface with contact elements using a penalty-based algorithm. A low tangential friction coefficient of 0.1 was used to represent the near-frictionless environment at the ICV-bone/suture/craniotomy interface. Following a series of sensitivity tests similar to the study of Bernakiewicz et al. 26 , the normal contact stiffness was set at 500 N/mm and the penetration tolerance at 0.5. The sensitivity test results are included in the supplement (see Supplementary Table S1). These data highlighted that changing the contact stiffness within the range of 25-3000 N/mm, and the penetration tolerance within the range of 0.1-0.5, resulted in less than 1% change in the outcome measurements (see Supplementary Table S1).
The model was constrained in all degrees of freedom around the foramen magnum, on the palate and airways. This was similar to our previous study on modelling the natural calvarial growth from 0-12 months of age 17 . The model was loaded via thermal expansion of the ICV, as previously described 17,19 . A linear isotropic expansion was applied to the ICV, where the pre-operative ICV (measured at 648 ml) was expanded to the follow up ICV (measured at 1320 ml) in seven intervals. No adaptive remeshing algorithm was used, as the geometry was updated at each interval to the new deformed shape. This approach avoided element distortions that would have otherwise occurred due to the large deformation.
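To make the loading step concrete, the per-interval expansion implied by the reported volumes can be computed as below; this is a minimal sketch using the ICV values quoted above, and the mapping of the strain onto a thermal coefficient with a unit temperature step is an assumption rather than the authors' solver input.

    # Per-interval isotropic expansion needed to grow the ICV from the pre-operative
    # volume to the follow-up volume in seven steps, with the geometry updated after each step.
    icv_pre, icv_follow_up, n_intervals = 648.0, 1320.0, 7       # ml, ml, intervals

    volume_ratio_per_step = (icv_follow_up / icv_pre) ** (1.0 / n_intervals)
    linear_strain_per_step = volume_ratio_per_step ** (1.0 / 3.0) - 1.0

    # In the FE solver this strain would be imposed as alpha * delta_T on the ICV elements;
    # with an assumed delta_T of 1, the thermal expansion coefficient equals the strain.
    print(round(linear_strain_per_step, 4))                      # ~0.034, i.e. ~3.4% linear growth per interval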
Simulations and measurements. Six simulations were carried out. The main focus throughout the study is on three key scenarios; however, a full comparison of all cases is included in the supplement (see Supplementary Fig. S1). The three key scenarios were: Case 1, in which all sutures and craniotomies were assumed to remain open (patent) up to 2 years of age; Case 2, in which all sutures and craniotomies were assumed to fuse after the operation; and Case 3, in which gradual bone formation was modelled at the patent sutures and craniotomies. In Case 3, the entire set of suture elements was selected and their elastic modulus was increased at the end of each interval. The metopic suture was fused at 8.5 months of age, as previously described 27,28 . Case 3 was intended to model the actual in vivo scenario.
Predicted calvarial morphologies from the simulations were compared against the in vivo calvarial morphology at the 29-month scan in terms of: (i) the cephalic index (CI), i.e. the maximum skull width divided by the maximum skull length, multiplied by 100; (ii) 2D cross-sections; and (iii) 3D distance colour maps. The patterns of contact pressure on the intracranial volume were also compared as an indication of how each of the considered cases affected brain growth. Note that (1) the changes in calvarial morphology at each interval are not included here, but such results are presented in our previous work on predicting calvarial morphology in mouse and normal human skull growth 17,19,20 , and (2) all methods were carried out in accordance with relevant guidelines and regulations.
Table 1. A summary of predicted calvarial measurements and cephalic indexes (CI) of Cases 1-3 and the in vivo data at 29 months of age (24 months post-operation).
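As a small illustration of the first outcome measure, the cephalic index can be computed directly from two skull measurements; the width and length values below are hypothetical, and the Results section reports CI as a ratio (e.g. 0.83) rather than a percentage.

    # Cephalic index: maximum skull width / maximum skull length (optionally x100).
    def cephalic_index(max_width_mm, max_length_mm, as_percentage=False):
        ci = max_width_mm / max_length_mm
        return ci * 100.0 if as_percentage else ci

    # Hypothetical landmark-based measurements from a follow-up scan:
    print(round(cephalic_index(124.0, 150.0), 2))   # 0.83, matching the in vivo value reported below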
Results
The hypothetical Case 1, which assumed open sutures up to 2 years of age, showed the largest difference from the actual in vivo scenario. The predicted CI for Case 1 was 0.75, while the in vivo CI was 0.83 (Table 1). Similarly, the cross-sectional comparison between this case and the in vivo case (Fig. 2) showed that the model over-estimated the posterior growth of the skull. Since brain growth (i.e., ICV expansion) was not constrained (the sutures and craniotomies were patent), the contact pressure at the ICV-bone/suture/craniotomy interfaces was almost negligible across the ICV (less than 0.1 MPa; see Fig. 3 for Case 1).
The hypothetical Case 2, which modelled fusion of all sutures after the operation, showed a close match to the CI of the in vivo case (0.82 vs. 0.83; Table 1). In the cross-sectional comparison between the predicted shape of this case and the in vivo case, the skull height was over-predicted compared to the in vivo result (Fig. 2). Since all sutures and craniotomies were fused, the contact pressure at the ICV-bone/suture/craniotomy interfaces was much higher than in Case 1. These results predicted elevated levels of pressure in the anterior part of the ICV, around the orbits.
Case 3, which modelled bone formation at the patent sutures, most closely matched the actual in vivo calvarial growth of the patient two years after surgery. The predicted CI for this case was 0.80 (vs. 0.83 based on the in vivo data). There was a close match between the predicted and in vivo skull shapes in all cross-sections (Fig. 2) and across the whole skull (Fig. 3). The contact pressure at the ICV-bone/suture/craniotomy interfaces was lower than in Case 2 and higher than in Case 1 (Fig. 4).
Discussion
There are few finite element studies on the biomechanics of craniosynostosis 23 , despite the huge potential of this method to advance the treatment of this condition. Over the past few years, our group has been using this technique to predict calvarial growth in humans 17 and in a mouse model of this condition 19,20 . To the best of our knowledge, the present study is the first attempt to use the finite element method to predict the outcome of calvarial reconstruction in a craniosynostotic patient based on the preoperative CT data.
Three cases were modelled here, two hypothetical (Cases 1 and 2) and one more realistic (Case 3) scenario (see Supplementary Fig. S1 for additional hypothetical cases). The hypothetical cases were aimed at verifying the modelling approach and ensuring that it predicts the patterns that are clinically observed or expected. In Case 1, the open suture/craniotomy model, the calvaria expanded with minimal pressure on the ICV, as expected; in Case 2, the closed suture/craniotomy model, skull height was increased and there was an elevated level of pressure on the ICV around the orbits, both of which are clinically observed in some of the syndromic forms of craniosynostosis, e.g. in Crouzon patients 5,29,30 . A close match was obtained between the predicted calvarial morphology of Case 3 (which modelled in vivo bone deposition) and the CT data obtained from the actual patient at 2 years after surgery. Together with the observations in Cases 1 and 2, it is reassuring that the modelling approach proposed here has the potential to reliably predict the outcome of calvarial reconstruction. We cannot comment on the validity of the contact pressure maps obtained in this study; however, the relative comparison between the three cases is informative.
The modelling approach presented here has large potential for predicting calvarial morphology after remodelling surgery and for understanding the biomechanical differences between surgical techniques. In the case of sagittal synostosis, this method can be used to compare the existing techniques for the management of this condition and their potential impacts on brain development. The contact pressure maps that FE models provide, together with functional brain imaging data, can advance our understanding of the interplay between calvarial reconstruction and brain development 31,32 .
It must be noted that there are other methodologies, based on e.g. theories of finite growth and constrained mixtures, that have been used to model the growth and remodelling of various living tissues 33,34 . A detailed comparison between the approach described here and other theories is beyond the scope of this study. Perhaps one of the key advantages of the approach described here is that it takes into account the interaction between the intracranial volume and the overlying bones, sutures and craniotomies using Hertz contact theory. This is important in the context of calvarial growth and its reconstruction. Nonetheless, the approach presented here has its own limitations.
Perhaps the key limitations of the FE models described here are that: (1) the modelling approach does not directly consider the effects of cerebrospinal fluid and the various soft tissues present between the brain and calvarial bones. However, the contact elements used at this interface do take into account, to some extent, the role of these tissues. Including them explicitly could alter the magnitude of the values presented in this study, but we believe that the relative comparison between the cases remains valid; (2) the calvarial reconstruction that was virtually modelled on the pre-operative CT data does not take into account the plastic deformation that may have occurred during surgery. It was evident that there was a difference in calvarial morphology between the pre- and post-operative CT data (Fig. 1). This difference was not taken into account in the models described here. Nonetheless, it is interesting that the model could still closely predict the calvarial shape at the two-year follow-up.
Conclusions
A validated patient-specific finite element model of calvarial growth was developed in this study. Despite the study limitations, the similarities between the predicted calvarial shape outcomes of the modelling approach and the in vivo data are a starting point for future studies. These studies will use the methodology described here to compare the biomechanics of different reconstruction techniques and their impact on brain development in sagittal and other forms of craniosynostosis.
"Medicine",
"Engineering"
] |
Client-driven animated GIF generation framework using an acoustic feature
This paper proposes a novel, lightweight method to generate animated graphical interchange format images (GIFs) using the computational resources of a client device. The method analyzes an acoustic feature from the climax section of an audio file to estimate the timestamp corresponding to the maximum pitch. Further, it processes a small video segment to generate the GIF instead of processing the entire video. This makes the proposed method computationally efficient, unlike baseline approaches that use entire videos to create GIFs. The proposed method retrieves and uses the audio file and video segment so that communication and storage efficiencies are improved in the GIF generation process. Experiments on a set of 16 videos show that the proposed approach is 3.76 times more computationally efficient than a baseline method on an Nvidia Jetson TX2. Additionally, in a qualitative evaluation, the GIFs generated using the proposed method received higher overall ratings compared to those generated by the baseline method. To the best of our knowledge, this is the first technique that uses an acoustic feature in the GIF generation process.
Introduction
In this technological era, accessibility and sharing of multimedia content on social media platforms have increased as the speed and reliability of internet connections have improved. Animated images, such as graphical interchange format images (GIFs), are exceedingly popular; notably, they are used to share varied kinds of stories, summarize events, express emotion, gain attention, and enhance (or even replace) text-based communication [2]. There are many versatile media formats, but GIFs have become prevalent over the last decade owing to their distinct and unique features, such as their instantaneous nature (very short duration) and visual storytelling (no audio involved). Owing to their low bandwidth requirement and lightweight nature, GIFs are also integrated into streaming platforms to highlight videos. Figure 1 shows an example of animated images (WebP) on YouTube that are employed to instantaneously highlight a recommended video. Notably, there have been only a few studies on GIF generation in multimedia research, despite the increasing popularity and unique visual characteristics of GIFs.
According to a recent study [31], more than 500 million users spend approximately 11 million hours every day on the GIPHY website watching GIFs. Nevertheless, no real-time, lightweight GIF generation framework has been established, particularly for streaming platforms, despite the ubiquitous adoption and prevalence of GIFs. Server-driven techniques can provide real-time solutions to this problem, as all the information, including user data and video content, already exists on servers. There are three main concerns regarding server-driven solutions: (i) provision of real-time responses to many concurrent users with limited computational resources; (ii) user privacy violation in a personalized approach; and (iii) the fact that current solutions process entire videos to create GIFs, which increases the overall computation time and demands substantial computational resources. These are the key factors that prompted us to research an alternative approach and to explore a lightweight client-driven technique for GIF generation.
This paper proposes a novel, lightweight, and computationally efficient client-driven framework that requires minimal computational resources to generate animated GIFs. It analyzes an acoustic feature to track and estimate the timestamp corresponding to the maximum pitch (henceforth referred to as the "maximum pitch timestamp") from the climax section of the corresponding video. Instead of processing the entire video, it processes a small segment of the video to generate a GIF. This makes the process efficient in terms of computational resources, communication, and storage. Sixteen publicly broadcast videos are analyzed to evaluate the effectiveness of the proposed approach. To the best of our knowledge, this is the first attempt to design an entirely client-driven technique to generate animated GIFs using an acoustic feature for streaming platforms. The remainder of this paper is organized as follows. Section 2 briefly describes a summary of related work. Section 3 presents the details of the proposed client-driven framework. Section 4 presents the qualitative and quantitative results, along with the discussion. Finally, the concluding remarks are provided in Section 5.
Related research
This section briefly reviews the related research on animated GIF generation methods. We also review current music genre classification (MGC) methods; notably, MGC is an important technique for classifying audio files based on the genre preference of users in the proposed method.
GIF generation methods
In recent years, there has been a growing interest in researching animated GIFs. Many qualities of animated GIFs that make them more engaging than videos and other media on social network websites have been identified [2]. Facial expressions, histograms, and aesthetic features have been predicted and compared [18] to determine the most suitable video features that express useful emotions in GIFs. Another recent study [22] used sentiment analysis to estimate the textual-visual sentiment score for annotated GIFs. Several researchers have collected and prepared datasets to annotate animated GIFs [14]. Particularly, they have collected the Video2GIF dataset for highlighting videos and extended it to include emotion recognition [13]. The GIFGIF+ dataset has been proposed for emotion recognition [4]. Another dataset, Image2GIF, has been proposed for video prediction [46], together with a method to generate cinemagraphs from a single image by predicting future frames.
MGC methods
MGC has become a prevalent topic in machine learning since a seminal report [38] was published. MGC has commercial value, but in addition, it has many practical applications such as music recommendation [39], music tagging [6], and genre classification [5]. Recent research [8,41] has shown that spectrograms, such as short-term Fourier transform spectrograms and Mel-frequency cepstral coefficient (MFCC) spectrograms, transformed from audio signals, can be successfully applied to MGC tasks. This is owing to their capability of describing temporal changes in energy distributions with respect to frequency. CNN models have been used for different MGC tasks. In early studies on MGC using neural networks [37], researchers confirmed that techniques such as dropout, use of rectified linear units, and Hessian-free optimization can enhance feature learning effects. To exploit the feature learning capability of CNNs, an initially trained CNN was used as a feature extractor, and the extracted features were then fed to a classifier [20]. The researchers achieved good results on the GTZAN [38] dataset by combining the extracted features with the majority voting method. CNN-based approaches obtain notable results in MGC tasks; however, they neglect spectrogram temporal information, which may be useful. Based on this reasoning, a long short-term memory recurrent neural network (RNN) has been used [9] to extract features from scatter spectrograms [1] of audio segments and fuse them with those obtained using CNNs. In addition, to take advantage of both CNNs and RNNs, a convolutional RNN has been designed for music tagging [7]. By adopting this, both the spatial and temporal information of the spectrograms is used.
Despite the extensive research on GIFs and MGC, a lightweight client-driven GIF generation technique specialized for streaming platforms has not been developed. Most modern end-user devices have low computational resources, and analyzing entire videos to generate animated GIFs is time consuming. This is not feasible for real-time solutions. This paper presents a novel method to generate animated GIFs on end-user devices. It uses an acoustic feature and video segments, which makes it computationally efficient and robust enough to create GIFs in real-time. The following section explains the major components of the proposed framework and GIF generation process using an acoustic feature.
Proposed framework
Streaming platforms manage video and audio separately for each video. The video is split and stored in small continuous segments. Dividing a video into segments and separating the audio allow the streaming platforms to manage them separately according to different specifications. As described in Section 2, existing techniques process the entire video to generate an animated GIF, which is not an efficient approach. In this context, a novel animated GIF generation method is proposed to reduce the consumption of computational resources and the computation time. The use of an audio file instead of an entire video enables the creation of animated GIFs within an acceptable computation time on end-user devices such as the Nvidia Jetson TX2.
The high-level system architecture of the proposed method is illustrated in Fig. 2. It comprises two main parts: the HLS Server and the HLS Client. The proposed method mainly focuses on the client-side implementation. In the following subsections, the configuration and role of each component of the proposed method are explained.
HLS server
The first component of the proposed system architecture is the HLS server. The purpose of the HLS server is to smoothly transmit audio files and segments to concurrent users on heterogeneous end-user devices. Internet Information Services (IIS) was selected for this purpose and locally configured on Microsoft Windows 10. IIS supports most network protocols [29]. To reduce potential corruption or loss of packets during transmission [17], all videos are encoded as H.264/AAC Moving Picture Experts Group 2 (MPEG-2) transport stream (.ts) segments using FFmpeg [12]. Each video segment corresponds to approximately ten seconds of playback with a continuous timestamp. Similarly, the list of segments for each video is stored in a text-based playlist file (M3U8) in the playback order of the segments. Along with the segments, the HLS server also contains the audio file (.mp3) of the video, which is separately extracted from the source video using FFmpeg [12]. The detailed hardware specifications of the HLS server are described in Section 4.1.1. The following sections describe the details and roles of each HLS client component.
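The server-side preparation can be sketched as follows; the exact FFmpeg options used by the authors are not reported, so the codec settings, segment naming, and output paths below are illustrative assumptions.

    import subprocess

    def prepare_hls_assets(source_video, out_dir):
        # Encode ~10-second H.264/AAC MPEG-2 TS segments plus an M3U8 playlist.
        subprocess.run([
            "ffmpeg", "-i", source_video,
            "-c:v", "libx264", "-c:a", "aac",
            "-f", "hls", "-hls_time", "10", "-hls_list_size", "0",
            "-hls_segment_filename", f"{out_dir}/segment_%03d.ts",
            f"{out_dir}/playlist.m3u8",
        ], check=True)
        # Extract the audio track served alongside the segments.
        subprocess.run([
            "ffmpeg", "-i", source_video, "-vn",
            "-c:a", "libmp3lame", f"{out_dir}/audio.mp3",
        ], check=True)

    prepare_hls_assets("big_buck_bunny.mp4", "hls_assets")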
HLS client
The purpose of the HLS client is to process the audio file and a segment of the corresponding video to generate a GIF on the end-user device. To this end, the Nvidia Jetson TX2 was configured as the HLS client. This device is a GPU-based board with a 256-core Nvidia Pascal architecture [11]. The JetPack 4.3 SDK was used to automate the basic installations, and the maximum energy profile was used in the proposed method. The HLS client consists of four major components: the HTTP persistent connection, the MGC module, the animated GIF generation module, and the web-based user interface. The details of each component are described in the following subsections.
HTTP persistent connection
Several requests are initiated from the end-user device to obtain music files and segments during the GIF generation process. An HTTP persistent connection is used to download all corresponding files. It is used because it can simultaneously execute multiple requests and returns of data via a single transmission control protocol connection [3]. There are many advantages of using persistent connections, such as fewer new transport layer security handshakes, less overall CPU usage, and fewer round trips [47].
MGC module
The purpose of the MGC module is to analyze the audio files according to user music genre preferences. For this purpose, the GTZAN dataset was used in the experiments, which is extensively used as a benchmark for MGC [38]. The dataset has ten genres, and each genre includes 100 soundtracks of 30 s duration with a sampling rate of 22,050 Hz and a bit depth of 16 bits. The genres are blues, classic, country, disco, hip-hop, jazz, metal, pop, reggae, and rock. The dataset is divided into two sub-datasets: 70% for training and 30% for testing. The Librosa library was used to extract MFCC spectrograms from the raw audio data [24]. The extracted features were used as input to the CNN model for training. MFCC spectrograms are a good representation of music signals [7].
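A minimal sketch of this feature extraction step is given below; the window and hop parameters are assumptions chosen to roughly match the input dimensions quoted in the next paragraph, not values reported by the authors.

    import numpy as np
    import librosa

    def extract_mfcc_windows(audio_path, sr=22050, window_sec=3, n_mfcc=128, hop_length=512):
        y, sr = librosa.load(audio_path, sr=sr)
        samples_per_window = window_sec * sr
        windows = []
        for start in range(0, len(y) - samples_per_window + 1, samples_per_window):
            clip = y[start:start + samples_per_window]
            mfcc = librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=n_mfcc, hop_length=hop_length)
            windows.append(mfcc.T[..., np.newaxis])   # (time, frequency, 1) CNN input
        return np.stack(windows)

    features = extract_mfcc_windows("blues.00000.wav")   # one GTZAN-style 30 s clip -> 10 windows
    print(features.shape)                                # e.g. (10, 130, 128, 1)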
The extracted MFCC features, a two-dimensional array in terms of time and feature value, were input to the CNN model. Each 30-second audio clip was split into 3-second windows, yielding an input of 19,000 samples × 129 (time) × 128 (frequency) × 1 (channel). The backbone of the proposed network was based on the VGG16 neural network. The network structure of the music classification model is shown in Fig. 3. The model was trained using the SGDW optimization algorithm with a learning rate of 0.01, a momentum rate of 0.9, and the default weight decay value [23]. The training data were fed into the model with a batch size of 256 and a learning rate of 0.001 for cost minimization, and 1,000 iterations were performed to learn the sequence patterns in the data. Early stopping with a patience of ten epochs was adopted. The network was trained for up to 100 epochs, and the best validation accuracy was obtained at epoch 51, within a training phase of approximately 30 minutes. The Keras toolbox was used for feature extraction and to train the CNN model.
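A hedged sketch of such a VGG16-style classifier is shown below; the exact layer widths, pooling layout, and classifier head used by the authors are not given in the text, so the architecture here is illustrative, and plain momentum SGD stands in for the SGDW optimizer.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_genre_cnn(input_shape=(129, 128, 1), n_genres=10):
        blocks = []
        for filters in (64, 128, 256, 512):      # truncated VGG-style convolutional blocks
            blocks += [layers.Conv2D(filters, 3, padding="same", activation="relu"),
                       layers.Conv2D(filters, 3, padding="same", activation="relu"),
                       layers.MaxPooling2D(2)]
        model = models.Sequential(
            [layers.Input(shape=input_shape)] + blocks + [
                layers.GlobalAveragePooling2D(),
                layers.Dense(256, activation="relu"),
                layers.Dropout(0.5),
                layers.Dense(n_genres, activation="softmax"),
            ])
        model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        return model

    model = build_genre_cnn()
    model.summary()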
Animated GIFs generation module
The objective of this module is to extract the climax section from the corresponding audio file and estimate the maximum pitch timestamp from it, so that a video segment can be requested to generate the animated GIF. The first three seconds of the segment are used to create the GIF in the proposed technique. Here, the length of each GIF is fixed, but the method can be extended to generate GIFs of a specific length. The proposed method mainly focuses on music videos that have a plot. A composed story generally consists of an exposition, rising action, climax, and resolution. The most exciting part of the plot is the climax section, where all the key events happen, and this represents the most memorable part [16]. Generally, the climax section in a classical story plot begins at 2/3 of the total running time. Figure 4 shows the classical story plot structure of the Big Buck Bunny (2008) video. The details of GIF generation from the climax part using the proposed method are explained in Section 4.2.
Web-based user interface
The user can select the video and preferred music genres and also view the generated GIFs using the web-based user interface. The open-source hls.js player is used for this purpose [10]. HTML5 video and Media Source Extensions are needed to play back the transmuxed MPEG-2 transport stream. This player supports client-driven data delivery, meaning that the player can decide when to request a segment. The details of the animated GIF generation process are explained in the following sections.
Experimental results and discussion
This section presents an extensive experimental evaluation of the proposed method. First, the experimental setup is described along with the baseline approach, which involves processing the entire video and audio. The complete flow of the proposed GIF generation process is then explained from the user perspective. The accuracy of the proposed MGC model is presented and compared with those of other well-known approaches. Finally, the performance of the proposed method is compared with that of the baseline method.
Hardware configuration
In the experimental evaluation, both the HLS clients and the HLS server were locally configured. Two different hardware configurations were used for the HLS client: a high-computational-resources (HCR) device ran on the open-source Ubuntu 18.04 LTS operating system, and a low-computational-resources (LCR) device was configured using an Nvidia Jetson TX2. The proposed and baseline methods were deployed on the HLS clients separately for the LCR and HCR devices. The HLS server was configured on Windows 10 and used in all experiments. All hardware devices were locally connected to the SKKU school network. Table 1 lists the specifications of each hardware device used in the experiments. The entire GIF generation process from the user perspective using the proposed approach is explained in the following subsection.
Proposed GIF generation process
This section explains the entire flow of the proposed GIF generation process from the user perspective. The flow is explained based on 16 popular videos that were selected from YouTube. 3 A complete description of the videos is provided in Table 2. The statistics for the number of views were collected in June 2020. Because of their popularity, some of the videos have been viewed more than a billion times on YouTube. All the videos used in the experiments had a resolution of 480 × 360 pixels. The user selects a video using a web-based interface to start the process. The user then selects the music genre preference using a web-based interface. The system requests an audio file for the corresponding video. The downloaded audio file is then analyzed by the proposed trained CNN model according to the user music genre preference. If the music genre of the audio file is consistent with the user preference, the system extracts the climax section from it using the Pydub library [32]. As described in Section 3.2.3, most of the videos follow the same plot structure, and the climax section begins after 2/3 of the running time. Thus, only the last 25% of the audio file is used as the climax section. The model estimates the maximum pitch timestamp from the climax section using Crepe [19]. The timestamp information is obtained in seconds to determine and download a segment. Equation 2 is used to estimate the segment number from the obtained pitch timestamp information.
The system requests a specific segment to be downloaded from the HLS server. Later, that segment is used to generate an animated GIF. The system uses FFmpeg [12] to create the GIF from the segment. Algorithm 1 shows all the processing steps required for using the proposed method to generate a GIF for each video. The variables are Ct (climax time), A (audio file), Sn (segment number), Asr (sample rate of the audio file), Sd (segment duration), and Pt (timestamp of maximum pitch). Sd is a constant with a value of ten seconds.
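A hedged sketch of Algorithm 1 is given below using the variables listed above. Since Equation 2 is not reproduced in the text, the segment-number formula, the assumption that Pt is measured relative to the start of the climax section, and the segment file naming are interpretations rather than the authors' implementation.

    import math
    import subprocess
    import numpy as np
    import crepe
    from pydub import AudioSegment

    SD = 10            # Sd: segment duration in seconds
    GIF_LENGTH = 3     # seconds of the segment used for the GIF

    def generate_gif(audio_path, segment_pattern, out_gif="highlight.gif"):
        audio = AudioSegment.from_file(audio_path).set_channels(1)      # A
        climax_start_ms = int(len(audio) * 0.75)                        # last 25% = climax section
        climax = audio[climax_start_ms:]
        samples = np.array(climax.get_array_of_samples()).astype(np.float32)

        # Pt: timestamp (s) of the maximum pitch inside the climax section (Crepe).
        time, frequency, confidence, _ = crepe.predict(samples, climax.frame_rate, viterbi=True)
        pt = float(time[np.argmax(frequency)])

        ct = climax_start_ms / 1000.0                                   # Ct: climax offset in the video
        sn = math.floor((ct + pt) / SD)                                 # Sn: assumed form of Equation 2

        segment_file = segment_pattern.format(sn)                       # previously downloaded .ts segment
        subprocess.run(["ffmpeg", "-i", segment_file, "-t", str(GIF_LENGTH),
                        "-vf", "fps=10,scale=320:-1", out_gif], check=True)
        return out_gif

    generate_gif("audio.mp3", "segment_{:03d}.ts")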
Baseline method
This subsection explains the baseline method used for comparison with the proposed GIF generation method. As described in Section 2, previous methods used the entire video in the GIF generation process. Accordingly, the entire video and audio are used in the baseline method. As highlighted in Section 4.1.2, the baseline method uses the same web-based interface. In the baseline method, after the video and music genre preferences are selected, the client-side device requests the corresponding video file. The audio file is extracted from the video using FFmpeg [12], and the audio file is analyzed using the proposed trained CNN model to classify the music genre. The baseline model then estimates the maximum pitch timestamp from the entire audio file using Crepe [19]. The timestamp information is obtained in seconds to determine the starting point in the video from which to generate the GIF. The timestamp information is then used to generate the GIF from the video using FFmpeg [12]. Algorithm 2 shows the processing steps employed in the baseline method. The computation times of the proposed and baseline approaches are compared in the following experiments.
Experimental evaluation of MGC
This subsection evaluates current CNN methods on the GTZAN dataset. To the best of our knowledge, Senac et al. [36] have achieved the best previously reported performance on the GTZAN dataset. The proposed method performed 4.06% better in terms of validation accuracy, with 133.83 million floating-point operations per second. The experimental results of the existing methods and the proposed method on the GTZAN dataset are shown in Table 3. The proposed CNN model was used in all the experiments to identify the music genre according to user interest.
Table 3. Validation accuracy (%) on the GTZAN dataset: Hamel, Philippe, et al. [15], 84.30; Zhang, Weibin, et al. [45], 87.40; Senac, Christine, et al. [36], 91.00; proposed method, 94.70.
Performance analysis of the proposed method
This section compares the performance of the proposed GIF generation method with that of the baseline scheme described in Section 4.1.3. The performance evaluation was conducted on 16 videos with different playtimes (see Table 2 for details). The computation time for the baseline method was determined by considering the time required to (i) download the video, (ii) extract the audio, (iii) identify the genre, (iv) estimate the pitch from the audio, and (v) generate the GIF from the video. Meanwhile, the computation time for the proposed method was determined by considering the time required to (i) download the audio corresponding to the video, (ii) identify the genre, (iii) estimate the pitch from the climax section, (iv) download the segment, and (v) generate the GIF from the segment. The model loading time is not included in the computation times reported for any of the experiments.
The computation times (in seconds) required to generate the animated GIF using the baseline and proposed methods were compared in the first experiment. Both approaches were configured on the HCR device (refer to Table 1 for the detailed specifications of the device). The sizes of the segment and the climax section used to estimate the pitch in the proposed method were significantly smaller than those of the entire video and audio used in the baseline method. The overall computation time of the proposed method was significantly lower than that of the baseline method. Tables 4 and 5 show the computation times required for the HCR device to create a GIF using the baseline method and the proposed method, respectively.
Since this study focused on creating GIFs using the computational resources of the enduser device, in the next experiment, the proposed and baseline approaches were configured on the LCR device (i.e., an Nvidia Jetson TX2). Tables 6 and 7 show the computation times required to create a GIF on the LCR device using the baseline method and the proposed method, respectively. The overall computation times obtained using the proposed method were significantly lower than those obtained using the baseline method.
The combined duration of the 16 videos was 70 min. To generate the 16 corresponding GIFs on the HCR device, the baseline method required 11.27 min, whereas the proposed method required 6.01 min. Furthermore, on the LCR device, the baseline method required 77.67 min, and the proposed method required 20.64 min. Thus, based on the analysis of these 16 videos, on average, the proposed method was 1.87 times and 3.76 times faster than the baseline method on the HCR and LCR devices, respectively. In conclusion, these results show that the proposed method is more computationally efficient than the baseline method on both HCR and LCR devices.
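These speed-up factors follow directly from the aggregate timings; a quick check using the numbers quoted above:

    # Speed-up = baseline time / proposed time for the 16-video workload.
    hcr_baseline, hcr_proposed = 11.27, 6.01     # minutes on the HCR device
    lcr_baseline, lcr_proposed = 77.67, 20.64    # minutes on the LCR device

    print(round(hcr_baseline / hcr_proposed, 2))  # ~1.88x on the HCR device
    print(round(lcr_baseline / lcr_proposed, 2))  # ~3.76x on the LCR device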
Qualitative evaluation
This section presents the evaluation of the quality of the GIFs generated using the proposed method by comparing them with those obtained from YouTube and those generated using the baseline approach. The evaluation was based on a survey conducted with the help of 16 participants. Undergraduate students were recruited from our university as participants for this task. They were divided into four groups based on their music genre of interest. Each group of participants was shown three GIFs (i.e., YouTube, baseline, and proposed). The survey was based on 16 videos (refer to Table 2). The quality of the generated GIFs was evaluated using the same rating scale for all three approaches. The participants were asked to rate the GIFs according to the arousal aspect. An anonymized questionnaire was created for the generated GIFs so that the users could not determine which method was used (YouTube, baseline, or proposed). They were asked to watch all GIFs and rank them on a scale of 1-10 (1 being the worst and 10 being the best). Table 8 shows the ratings given by the participants for all three approaches. The average ratings obtained for all 16 videos for the YouTube GIFs, the baseline method, and the proposed method were 6.6, 7, and 7.9, respectively. Figure 5 shows sample frames from the GIFs generated using each of the three approaches.
Discussion
The previous sections evaluated the overall effectiveness of the present study by comparing the proposed and baseline approaches. The proposed approach exhibited better performance and reduced computation time on the HCR and LCR devices (Nvidia Jetson TX2). Instead of processing the entire audio and video (as in the baseline), the proposed method used the climax section of the audio and a video segment to generate an animated GIF. This is reflected in the experimental comparison with the baseline, in which the proposed approach was 3.76 and 1.87 times more computationally efficient on the LCR and HCR devices, respectively. This reduces the overall demand for computational resources and the computation time required to generate GIFs on end-user devices. In the qualitative evaluation in Section 4.4, the proposed method received a higher overall average rating than the other approaches. One of the main reasons the GIFs generated using the proposed method received higher ratings is that they were generated from the most exciting parts of the videos.
This study demonstrates the use of an acoustic feature in the GIF generation process while using client-device computational resources. Instead of processing the entire audio and video, the proposed method uses a small portion of the audio (the climax section) and a video segment to generate the animated GIF. This makes it computationally efficient. One constraint of analyzing an acoustic feature in this way is that the maximum pitch timestamps obtained by the baseline and proposed methods may be the same.
The proposed framework is designed to support a wide range of end-user devices with diverse computational resource capabilities. Because of its simplicity and scalability across devices with various configurations [21], the proposed approach can be easily adapted to other animated image formats (WebP), recommendation techniques [25,43], and streaming protocols such as dynamic adaptive streaming over HTTP (DASH). In addition to reducing the required server computational resources, the proposed method can serve as a privacy-preserving solution using efficient encryption techniques [28,30] that can be integrated into other client-driven solutions [26,27]; moreover, it can be adapted to three-screen TV solutions [33,34].
Conclusion
This paper proposes a novel, lightweight method for generating animated GIFs using end-user-device computational resources for the entire process. The proposed method analyzes the climax section of the audio file, estimates the maximum pitch, and obtains the corresponding video segment to generate the animated GIF. This improves computational efficiency and decreases the demand for communication and storage resources on resource-constrained devices. The extensive experimental results obtained based on a set of 16 videos show that the proposed approach is 3.76 times more computationally efficient than the baseline on an Nvidia Jetson TX2. Moreover, it is 1.87 times more computationally efficient than the baseline on the HCR device. Qualitative results show that the proposed method outperforms the other methods and receives higher overall ratings.
3D Whole‐heart free‐breathing qBOOST‐T2 mapping
Purpose To develop an accelerated motion corrected 3D whole-heart imaging approach (qBOOST-T2) for simultaneous high-resolution bright- and black-blood cardiac MR imaging and quantitative myocardial T2 characterization. Methods Three undersampled interleaved balanced steady-state free precession cardiac MR volumes were acquired with a variable density Cartesian trajectory and different magnetization preparations: (1) T2-prepared inversion recovery (T2prep-IR), (2) T2-preparation, and (3) no preparation. Image navigators were acquired prior to the acquisition to correct for 2D translational respiratory motion. Each 3D volume was reconstructed with a low-rank patch-based reconstruction. The T2prep-IR volume provides bright-blood anatomy visualization, the black-blood volume is obtained by means of phase sensitive reconstruction between the first and third datasets, and T2 maps are generated by matching the signal evolution to a simulated dictionary. The proposed sequence has been evaluated in simulations, phantom experiments, and 11 healthy subjects, and compared with 3D bright-blood cardiac MR and standard 2D breath-hold balanced steady-state free precession T2 mapping. The feasibility of the proposed approach was tested on 4 patients with suspected cardiovascular disease. Results High linear correlation (y = 1.09x − 0.83, R2 = 0.99) was found between the proposed qBOOST-T2 and T2 spin echo measurements in the phantom experiment. Good image quality was observed in vivo with the proposed 4x undersampled qBOOST-T2. Mean T2 values of 53.1 ± 2.1 ms and 55.8 ± 2.7 ms were measured in vivo for 2D balanced steady-state free precession T2 mapping and qBOOST-T2, respectively, with a linear correlation of y = 1.02x + 1.46 (R2 = 0.61) and a T2 bias of 2.7 ms. Conclusion The proposed qBOOST-T2 sequence allows the acquisition of 3D high-resolution co-registered bright- and black-blood volumes and T2 maps in a single scan of ~11 min, showing promising results in terms of T2 quantification.
INTRODUCTION
Cardiac MR (CMR) is a powerful tool for the assessment of a wide range of pathologies such as congenital heart disease, coronary artery disease, myocardial inflammation and edema. [1][2][3] However, several CMR sequences with different acquisition planning and geometries are needed to assess these pathologies. In particular, bright-blood imaging can be used to visualize whole-heart anatomy and the great thoracic vessels. 4 Black-blood imaging provides visualization of the atrial/ventricular myocardial, aortic and pulmonary walls and enables thrombus/hemorrhage detection. 5 T2 mapping enables noncontrast quantitative tissue characterization, with increased myocardial T2 values reported to correlate with edema that can be associated with acute myocardial infarction, 6,7 cardiomyopathies 8,9 and transplant rejection. 10 Bright-blood CMR angiography (CMRA) for coronary and whole-heart anatomy visualization is conventionally performed free-breathing with 1D diaphragmatic navigator (dNAV) gating. 11 Similarly, thrombus/hemorrhage visualization is typically performed with a 3D free-breathing noncontrast enhanced black-blood T1-weighted inversion recovery (IR) technique 5 with a 1D dNAV. 1D navigator gating approaches minimize respiratory motion by acquiring data only when the navigator signal is within a small gating window (~5-6 mm), leading to long and unpredictable scan times. To enable shorter and more predictable scan times, several self-gating techniques have been proposed to directly track and correct for the respiratory motion of the heart. [12][13][14][15][16][17][18] Conventional cardiac T2 maps are acquired with T2-prepared balanced steady-state free precession (bSSFP) in 2D short-axis views, under several breath-holds, requiring patient cooperation and expert planning. T2 preparation (T2prep) pulses with increasing T2prep durations are used to acquire several T2-weighted images that follow an exponential T2 decay curve. [19][20][21] A pause time of several cardiac cycles is used to allow for T1 recovery before applying the next T2-prepared imaging series. 3 Typically, only a single 2D slice can be acquired for each breath-hold, leading to limited spatial resolution and coverage. High-resolution free-breathing 3D T2 mapping of the heart has been demonstrated using 1D dNAVs but leads to long and unpredictable scan times, 20 hindering the acquisition of high isotropic resolution images. 1D dNAVs have also been used to correct for foot-head translational respiratory motion with ~100% scan efficiency, 21 enabling shorter scan times; however, the heart is not directly tracked with this approach and a motion model relating diaphragmatic to cardiac motion is needed. 1D respiratory self-navigation has been investigated for 3D radial trajectories, enabling the acquisition of 1.7 mm isotropic T2 maps in ~18 min. 22 However, acquisition time (TA) remains a challenge with this approach because a heartbeat is necessary between acquisitions to allow magnetization recovery.
Furthermore, the sequences (bright-blood, black-blood, and T2 mapping) are usually performed sequentially, with different geometries (2D and 3D) and orientations, and under different breathing conditions (i.e., breath-hold and free-breathing), leading to prolonged TAs and potential misregistration errors between the images. To partially overcome this problem, a T2 prepared Bright-blood and black-blOOd phase SensiTive (BOOST) IR sequence 23 has been recently proposed to provide respiratory motion compensated and co-registered bright- and black-blood 3D whole-heart images. Nevertheless, this sequence is unable to provide quantitative tissue characterization and still requires long scan times (~20 min with fully sampled acquisitions).
The aim of this work was to develop a novel accelerated and respiratory motion compensated 3D whole-heart sequence (qBOOST-T2), which provides co-registered high-resolution 3D bright-blood, black-blood, and quantitative T2 map volumes from a single free-breathing scan of ~11 min. This was achieved by extending the BOOST sequence 23 to enable undersampled acquisition and to provide high-resolution 3D whole-heart T2 maps. The proposed sequence is based on the acquisition of 3 interleaved datasets with different magnetization preparation pulses. The first volume provides bright-blood anatomy visualization, the black-blood volume is obtained by means of a phase sensitive IR (PSIR)-like reconstruction 24 between the first and third datasets, and T2 maps are generated by matching the signal evolution to a simulated dictionary.
qBOOST-T2 framework
The proposed 3D whole-heart electrocardiogram-triggered qBOOST-T2 mapping sequence is shown in Figure 1. Three interleaved bright-blood bSSFP volumes were acquired with an undersampled variable density Cartesian trajectory with a spiral-like profile order. 24,25 A nonselective T2prep-IR module with T2prep length = 50 ms and TI = 110 ms was applied before the first dataset acquisition. T2 preparation (T2prep length = 30 ms) was performed before the second volume, whereas the third dataset was acquired with no preparation. Fat suppression was achieved with a short inversion time IR (STIR) approach 26 in the first dataset, whereas spectral presaturation fat suppression (SPIR, spectral presaturation IR) 27 was used in the second and third datasets.
2D low-resolution image navigators (iNAVs) were acquired before the acquisition of each volume to estimate and correct for superior-inferior (SI) and left-right (LR) translational respiratory motion, enabling 100% respiratory scan efficiency. A template-matching algorithm with a mutual information similarity measure 28 was used to estimate SI and LR beat-to-beat translational motion from the iNAVs. Outliers due to deep breaths (outside the interval calculated as mean ± 2 standard deviations) were removed, and 2D translational motion correction was performed as a linear phase shift in k-space. 29 Each undersampled translational motion corrected 3D volume was independently reconstructed with a 3D low-rank patch-based reconstruction (3D-PROST). 25 PROST undersampled reconstruction exploits local (within a patch) and nonlocal (between similar patches within a neighborhood) redundancies of the 3D volumes in an efficient low-rank formulation. The reconstruction is formulated as an iterative 2-step process: (1) an L2-norm regularized parallel image reconstruction using the denoised volume from step 2 as prior knowledge, and (2) a low-rank patch-based denoising. The first step is solved using a conjugate gradient method, whereas the second step is solved using a truncated singular value decomposition.
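The translational correction step relies on the Fourier shift theorem: a rigid in-plane shift corresponds to a linear phase ramp across k-space. The sketch below illustrates this for a single phase-encode line; the indexing and shift conventions are illustrative assumptions, not the vendor or authors' reconstruction code.

    import numpy as np

    def correct_translation(kspace_line, ky, kz, dy_mm, dz_mm, fov_y_mm, fov_z_mm):
        # kspace_line: complex readout samples for phase-encode position (ky, kz);
        # ky, kz are integer phase-encode indices (cycles across the field of view).
        phase = -2j * np.pi * (ky * dy_mm / fov_y_mm + kz * dz_mm / fov_z_mm)
        return kspace_line * np.exp(phase)

    # Example: apply a 3 mm SI / -1 mm LR shift estimated from the iNAV of one heartbeat.
    line = np.random.randn(256) + 1j * np.random.randn(256)
    corrected = correct_translation(line, ky=12, kz=-5, dy_mm=3.0, dz_mm=-1.0,
                                    fov_y_mm=320.0, fov_z_mm=320.0)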
3D affine image registration was performed between the 3 reconstructed volumes. The T2prep-IR volume provided bright-blood anatomy visualization, while a PSIR-like reconstruction 24 between the first and third acquired volumes was performed to obtain the black-blood dataset. Whole-heart T2 maps were generated by matching the measured signal evolution of each voxel through the 3 motion corrected and reconstructed volumes to the closest entry of a subject-specific dictionary obtained by means of extended phase graph (EPG) simulations. 25 EPG simulations provide the evolution of the transverse and longitudinal magnetization for the given sequence and avoid the use of recovery periods, usually needed for the complete recovery of the longitudinal magnetization. The dictionary generation and the matching step between measured and simulated signals are described in more detail hereafter.
| Dictionary generation and matching
FIGURE 1 Framework of the proposed 3D whole-heart qBOOST-T2. Acquisition (A): three undersampled interleaved bSSFP bright-blood volumes are acquired with (1) T2prep-IR, (2) T2prep, and (3) no preparation modules, respectively. 2D iNAVs are acquired in each heartbeat before image acquisition. Reconstruction (B): image navigators are used to estimate/correct SI and LR translational motion. Translational beat-to-beat motion correction is performed on the 3 datasets independently and each volume is reconstructed with 3D PROST reconstruction. PSIR reconstruction (C): black-blood images are obtained by performing a PSIR reconstruction between the dataset acquired with T2prep-IR preparation (bright-blood image) and the third volume as a phase reference. T2 map generation (D): the T2 map is generated by matching the measured signal to a previously generated EPG-simulated dictionary. The first dataset acquisition includes STIR fat suppression (TI = 110 ms), whereas the second and third datasets use a SPIR pulse for fat saturation.
EPG simulations were carried out to generate a subject-specific dictionary. Trigger delay and acquisition window parameters were specified for each simulation according to the heart rate (HR) and mid-diastolic resting period of the subject. Taking into account the centric k-space reordering of the acquisition trajectory, the simulated dictionary was generated considering the mean absolute value of the signal for the k-space central region (40% of the readouts per heartbeat), containing contrast information. Longitudinal magnetization evolution was used to determine the signal magnetization polarity. The dictionary was generated with 3 different T1 values (900, 1100, 1300 ms) and variable T2 values in the ranges (minimum:step size:maximum) (4:2:100, 105:5:200, 210:10:450) ms. 30 The healthy myocardium value at 1.5T is T1 = 1100 ms 31 ; however, additional T1s (900 ms and 1300 ms) were included in the dictionary to account for possible sources of T1 variability. The simulated T2 value range was selected to enable coverage of a wide range of T2s, including healthy myocardium (T2 ~ 50 ms), diseased myocardium (i.e., edema T2 ~ 60 ms), and blood (T2 ~ 250 ms). 3 Quantitative T2 maps were generated by matching each measured and normalized signal evolution to a specific dictionary entry, corresponding to a unique T2 value. The matching was performed by minimizing the least square error between the measured signal and the EPG-based dictionary entry.
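The dictionary grid described above can be written down directly; the snippet below is only a sketch of that bookkeeping (the EPG signal simulation itself is not reproduced here), using the T1/T2 values quoted in the text.

```python
import numpy as np

# T2 grid from the text, in ms: three (start:step:stop) blocks
t2_grid = np.concatenate([
    np.arange(4, 101, 2),       # 4:2:100
    np.arange(105, 201, 5),     # 105:5:200
    np.arange(210, 451, 10),    # 210:10:450
])
t1_grid = np.array([900, 1100, 1300])  # ms, to absorb T1 variability

# each (T1, T2) pair becomes one dictionary atom; in the actual framework the
# atom is the EPG-simulated 3-point signal (T2prep-IR, T2prep, no-prep)
# for the subject's heart rate and trigger delay
t1t2_pairs = np.array([(t1, t2) for t1 in t1_grid for t2 in t2_grid])
print(t1t2_pairs.shape)  # (282, 2) dictionary atoms
```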
Before matching, 2 PSIR reconstructions were performed: between the T2prep-IR prepared and the nonprepared datasets, and between the T2-prepared and nonprepared datasets. These PSIR reconstructions were used to systematically restore the signal polarity that would otherwise affect the matching with the simulated dictionary. The 3 translational motion corrected volumes were normalized in time by dividing each voxel in each volume by the root mean square of the corresponding voxels in the 3 volumes. The obtained datasets provided the normalized signal evolution, through the 3 acquired volumes, for each voxel.
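A minimal sketch of the per-voxel normalization and least-squares matching described above is given below. It assumes the dictionary atoms are normalized in the same way as the measured signals (a choice not stated explicitly in the text) and uses illustrative array shapes; it is not the authors' MATLAB code.

```python
import numpy as np

def match_t2(signals, dictionary, t2_of_atom):
    """signals    : (n_voxels, 3) signal across the 3 volumes (polarity
                    already restored via the PSIR reconstructions)
       dictionary : (n_atoms, 3) simulated signal evolutions (e.g., EPG)
       t2_of_atom : (n_atoms,) T2 value of each dictionary atom (ms)"""
    # divide each voxel by the RMS of its 3 values, as described in the text
    rms = np.sqrt(np.mean(signals ** 2, axis=1, keepdims=True))
    sig = signals / np.maximum(rms, 1e-12)
    dic = dictionary / np.sqrt(np.mean(dictionary ** 2, axis=1, keepdims=True))
    # least-squares matching: pick the closest atom for every voxel
    err = ((sig[:, None, :] - dic[None, :, :]) ** 2).sum(axis=-1)
    return t2_of_atom[err.argmin(axis=1)]
```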
| Experimental design
The proposed qBOOST-T2 sequence was tested in simulations, in a T2 phantom, on 11 healthy subjects (5 males; mean age, 29 years; range, 27-35 years) and on 4 patients with suspected cardiovascular disease (3 males; mean age, 51 years; range, 25-75 years). Acquisition was performed on a 1.5T MR scanner (MAGNETOM Aera, Siemens Healthcare, Erlangen, Germany) with an 18-channel chest coil and a 32-channel spine coil. Written informed consent was obtained from all participants before undergoing the MR scans and the study was approved by the Institutional Review Board.
| Patients
The feasibility of the proposed qBOOST-T2 sequence was tested on 4 patients with suspected cardiovascular disease. Imaging acquisition parameters matched the healthy subject scans. The patients were, respectively, 25, 75, 41, and 63 years old, with average HRs of 45, 72, 85, and 76 bpm. A conventional 2D bSSFP T2-prepared mapping sequence was acquired for comparison purposes with the same imaging parameters used for the healthy subject study.
| Reconstruction
2D T2 maps were reconstructed in-line using the scanner software (Syngo MR E11A, Siemens Healthcare, Erlangen, Germany). Nonrigid motion correction to compensate for in-plane motion between the 2D T2-weighted images and exponential pixel-wise fitting were performed in-line on the scanner.
qBOOST-T2 and CMRA raw data were exported from the scanner and reconstructed in MATLAB (The MathWorks, Inc., Natick, MA) on a dedicated workstation (16-core Dual Intel Xeon Processor, 2.3 GHz, 256 GB RAM). Translational motion correction to end-expiration was performed individually on each qBOOST-T2 dataset in vivo. The 3 datasets were independently reconstructed using 3D-PROST, with reconstruction parameters set as suggested in Bustin et al. 25 Total reconstruction time for each of the 3 datasets was 18 min. The T2prep-IR dataset enables bright-blood anatomical visualization, whereas the black-blood volume was obtained after PSIR reconstruction between the first and third datasets. Finally, the 3 acquired datasets were normalized, and dictionary matching was performed to obtain the T2 map, as previously described. The average time to generate the dictionary was 2 min and 28 s, whereas the average matching time for the entire 3D T2 map was 32.4 s, using classical least square error minimization.
The 2D translational motion correction to end-expiration was performed on the fully sampled CMRA dataset and a sensitivity-weighted coil combination was performed. 33
| Healthy subjects
Quantitative analysis was performed for the 3D T2 maps generated with qBOOST-T2 and the conventional 2D T2 mapping sequence. 3D T2 maps from qBOOST-T2 were reformatted to the same slice position as the corresponding 2D T2 maps. Mean T2 values were measured for both sequences by selecting a region of interest (ROI) in the myocardial septum. The standard deviation of the T2 measurements within the ROI was used to quantify the precision of the techniques. Additionally, a Bland Altman analysis was performed to evaluate the agreement between the proposed qBOOST-T2 mapping technique and the conventional 2D T2 mapping approach.
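For reference, a Bland-Altman comparison of paired septal T2 values can be summarized with a few lines of code; the helper below is illustrative and not the analysis script used in the study.

```python
import numpy as np

def bland_altman(t2_ref_2d, t2_qboost_3d):
    """Bias and 95% limits of agreement between paired septal T2 values
    (one value per subject) from 2D bSSFP and 3D qBOOST-T2."""
    diff = np.asarray(t2_qboost_3d, float) - np.asarray(t2_ref_2d, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width
```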
The American Heart Association 17-segment model 34 was used to evaluate the percentage of variation of mean T2 and T2 precision between 2D bSSFP and 3D qBOOST-T2. The myocardial T2 values of the whole ventricle were measured in 16 American Heart Association segments in 3 slice positions: basal, mid, and apical. The 17th segment was excluded from the analysis as the coverage of the reference 2D T2 map was not sufficient to visualize the apical cap. The percentage errors of variation were calculated for each segment and each subject as the difference between the 3D qBOOST-T2 and 2D bSSFP values relative to the 2D bSSFP value, then averaged across subjects and displayed as bull's eye plots and bar plots. The T2 homogeneity in the whole left ventricle was evaluated for a representative healthy subject by generating a histogram of per-pixel T2 values and quantifying the T2 distribution through different coronal slices.
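The per-segment percentage of variation can be computed as sketched below. The relative-difference formula (3D minus 2D, normalized by the 2D reference) is the conventional choice and is an assumption here, since the exact expression is not reproduced in this extract.

```python
import numpy as np

def percent_variation(seg_2d, seg_3d):
    """seg_2d, seg_3d : arrays of shape (n_subjects, 16) holding mean T2
    (or T2 SD) per AHA segment for each technique. Returns the percentage
    of variation per segment, averaged across subjects."""
    seg_2d = np.asarray(seg_2d, float)
    seg_3d = np.asarray(seg_3d, float)
    per_subject = 100.0 * (seg_3d - seg_2d) / seg_2d   # (n_subjects, 16)
    return per_subject.mean(axis=0)
```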
| Patient
Mean and standard deviation in T2 quantification were evaluated and compared with conventional 2D bSSFP T2 mapping by selecting a ROI in the septum of the myocardium in apical, mid and basal short axis slices. The American Heart Association 17-segment model was used to compare the conventional 2D T2 maps and the proposed qBOOST-T2 mapping in terms of mean T2 value and precision across the whole left ventricle for a representative patient.
| RESULTS
All data acquisitions and reconstructions were carried out successfully and results are reported hereafter.
| Simulations
EPG simulation results are shown in Figure 2. A T2 variability < 5% was observed for each simulated T2 value for T1 ranging between 800 and 1400 ms (Supporting Information Figure S1). No T2 variation was observed as a function of different HRs.
| Phantom
The quantified T2 values obtained with reference SE, 2D bSSFP T2 map, and 3D qBOOST-T2 are shown in Figure 3A. A T2 overestimation is observed with the conventional 2D T2 mapping sequence, especially for high T2 values, although high linear correlation was observed (y = 1.25x + 2.44 with R 2 = 0.99). A better agreement in T2 quantification was found between qBOOST-T2 and SE with linear correlation y = 1.09x -1.67 (R 2 = 0.99); however, overestimation of long T2 values was observed.
T2 dependency on the T1 dictionary used is shown in Figure 3B. Including additional T1 values improves the dictionary matching accuracy for longer T2 values (corresponding also to longer T1 values) and reduces the standard deviation within a phantom vial. A variation of 3.2% and 3.8% was observed, respectively, for T2 values that correspond to healthy myocardium (T2 = 52 ms) and diseased myocardium (T2 = 65 ms), whereas a variation of 8.6% was observed for a long T2 = 115 ms. However, T1s > 1400 ms are not expected in vivo; therefore, these values were not included in the dictionary used to match T2 values in healthy subject and patient acquisitions to reduce computational time.
The results of the experiments to investigate HR dependency are shown in Figure 3C. A variation in T2 quantification between 8.2% and 11.6% was observed for all the phantom vials. Additionally, T2 matched standard deviation increased at high HR (100 and 120 bpm), particularly for long T2 values.
| Healthy subjects
Coronal, transversal, short axis, and 4-chamber views of 2 representative healthy subjects acquired with the proposed qBOOST-T2 are shown in Figure 4. Bright-blood, black-blood volumes, and T2 maps are shown, respectively, in the first, second, and third columns. Atria, ventricles, aorta, and papillary muscles are visible in the anatomical bright- and black-blood images for both subjects. Good left ventricle delineation is observed in the T2 maps of both subjects. Additionally, 3 Supporting Information Videos S1, S2, and S3 show the bright-blood, black-blood 3D volumes, and the co-registered 3D T2 map for 1 representative healthy subject.
Short-axis reformatted anatomical bright- and black-blood images and the T2 map are shown for a different healthy subject in Figure 5A. The 3D nature of the acquisition allows whole coverage from the apex to the base of the myocardium. Bull's eye plots of mean myocardial T2 quantification and T2 standard deviation are shown in Figure 5B; uniform T2 values are observed across the different segments, although lower precision (corresponding to a higher standard deviation) is observed in the inferior part of the left ventricle. A histogram of the per-pixel T2 distribution is shown in Figure 5C. The mean and standard deviation of the T2 distribution were 49.1 ms and 4.8 ms, respectively, whereas the maximum and minimum matched T2 values were 71 and 22 ms. Additionally, the T2 distribution through coronal slices showed a linear correlation of y = 0.02x + 48.38 (Figure 5D).
Coronal, 4-chamber views and coronary reformatted images obtained with bright-blood qBOOST-T2 and CMRA are shown in Figure 6 for a representative healthy subject. Both approaches show clear delineation of aortic wall, papillary muscles, and coronary arteries.
T2 maps generated with the proposed approach were compared with conventional 2D bSSFP T2 mapping qualitatively and in terms of T2 quantification. The 2D short axis views and the reformatted short axis views obtained with qBOOST-T2 are shown in Figure 7 for 10 healthy subjects.
FIGURE 5 The 3D nature of the acquisition permits complete coverage of the heart. B, Bull's eye plots of average T2 quantification and T2 standard deviation show uniform T2 quantification in all the different segments. C, Histogram of per-pixel T2 distribution through the whole left ventricle. D, Averaged T2 distribution through coronal slices. Uniform T2 quantification is observed in the left ventricle.
FIGURE 6 Comparison between bright-blood anatomical images (first column), black-blood images (second column) acquired with qBOOST-T2 (A), and bright-blood CMRA (B) for 1 healthy subject. Coronal, 4-chamber views, and coronary artery reformats are shown in the first, second, and third row, respectively.
FIGURE 7 Comparison between 2D short-axis standard T2 maps and short-axis reformatted 3D qBOOST-T2 maps for 10 healthy subjects. qBOOST-T2 maps have been reformatted to the same slice position as the acquired 2D bSSFP T2 maps. Comparable visual image quality is obtained with the 2 approaches.
The proposed qBOOST-T2 showed a slightly lower precision with respect to the standard 2D T2 mapping technique (4.09 ± 1.25 ms and 5.19 ± 10.9 ms for standard T2 mapping and qBOOST-T2, respectively); however, the difference was not statistically significant. T2 quantification obtained with standard 2D bSSFP T2 mapping and qBOOST-T2 mapping were also compared in a Bland-Altman analysis (Figure 8C). A mean difference of 2.71 ms was observed between the 2 mapping techniques and the limits of 95% agreement were −0.61 ms and 6.03 ms.
Bar and bull's eye plots of the percentage of variation of mean T2 value and T2 standard deviation are shown in Figure 9. An overestimation of T2 is obtained with qBOOST-T2 approach with respect to conventional 2D bSSFP in all left ventricular segments. Additionally, a lower precision is observed, especially in the inferior part of the left ventricle. However, precision may be affected not only by the different sequences but also by different imaging parameters, such as slice thickness and resolution. The effect of averaging contiguous 3D qBOOST-T2 slices on precision has been investigated and the results are shown in Supporting Information Figure S2. Similar findings were obtained by investigating the effect of image resolution on T2 quantification (Supporting Information Figure S3).
| Patients
The average scan time for the proposed qBOOST-T2 was 10 min and 35 s. Bright- and black-blood images and T2 maps reformatted in coronal orientations are shown in Figure 10. Corresponding conventional 2D bSSFP T2 maps are also included in Figure 10 for comparison purposes. Myocardial septal T2 values were measured in apical, mid, and basal slices for each subject, and the results are reported in Supporting Information Table S3. A general overestimation (bias of 2.3 ms) and lower precision with respect to conventional 2D bSSFP T2 mapping was observed with the proposed approach. Bull's eye plots of mean myocardial T2 quantification and T2 standard deviation are shown in Supporting Information Figure S4A for a representative patient. A histogram of the per-pixel T2 distribution is shown in Supporting Information Figure S4B. The mean and standard deviation of the T2 distribution were 46.5 ms and 6.8 ms, respectively. The T2 distribution through coronal slices showed a linear correlation of y = −0.03x + 47.3 (Supporting Information Figure S4C).
FIGURE 8 Quantification of septal myocardium mean T2 and precision of the proposed qBOOST-T2 technique and comparison with conventional 2D T2 mapping. A, Comparison between myocardial mean T2 obtained with conventional 2D T2 mapping (gray) and the proposed 3D qBOOST-T2 mapping sequence (blue) for each healthy subject. Good agreement is observed in terms of mean T2 between the 2 approaches. B, Comparison between myocardial T2 precision (measured as standard deviation within a septal ROI) obtained with conventional 2D T2 mapping (gray) and the proposed 3D qBOOST-T2 mapping sequence (blue) for each healthy subject. C, Bland-Altman plot comparing the proposed qBOOST-T2 sequence with the conventional 2D bSSFP T2 mapping technique. Good agreement is observed between the 2 approaches. A slight T2 overestimation is obtained with qBOOST-T2 mapping (bias = 2.71 ms); however, T2 quantification is within the 95% interval. D, Comparison between precision obtained with standard T2 mapping and the proposed qBOOST-T2. A slightly lower (not significant) precision is observed with the proposed qBOOST-T2 sequence. Myocardial T2 accuracy and precision were measured in a ROI in the septum of the myocardium.
| DISCUSSION
In this study, a 3D free-breathing accelerated qBOOST-T2 sequence for simultaneous and co-registered acquisition of anatomical high-resolution bright-blood and black-blood volumes and a 3D T2 map has been proposed.
This approach was based on the acquisition of three 4× undersampled interleaved bright-blood whole-heart datasets acquired with different magnetization preparations: (1) T2prep-IR preparation module, (2) T2 preparation, and (3) no preparation. The T2prep-IR prepared dataset provided bright-blood anatomical visualization, the black-blood volume was obtained by performing a PSIR reconstruction between the first and third datasets, and the 3D T2 map was generated by matching the acquired signal evolution to a dictionary obtained by means of EPG simulations. The use of 2D image-based navigators allowed SI and LR translational motion correction with 100% respiratory scan efficiency and predictable scan time, whereas the use of a 3D patch-based PROST reconstruction enabled 4× undersampled acquisition while preserving good visual image quality.
FIGURE 9 Percentage of variation of mean T2 (A) and T2 precision (B) between 2D bSSFP and 3D qBOOST-T2. T2 overestimation and a lower precision are observed in each segment of the left ventricle. A, anterior; S, septal; I, inferior; L, lateral; AS, anterior-septal; IS, inferior-septal; IL, inferior-lateral; AL, anterior-lateral.
The proposed qBOOST-T2 has been designed to enable a comprehensive assessment of the heart including anatomical visualization and T2 myocardial tissue quantification in a single free-breathing scan, thus overcoming some of the limitations of current sequential acquisitions, such as misalignment and long scan times. The 3D acquisition allowed whole-heart myocardial coverage, in comparison to conventional breath-hold 2D T2 mapping, while maintaining uniform T2 quantification across the whole left ventricle and across different slices. Additionally, the nearly isotropic high-resolution nature of the acquisition made it possible to reformat the co-registered bright-blood, black-blood volumes, and T2 maps in different orientations (coronal, transversal, short-axis, and 4-chamber) while preserving good image quality and uniform T2 quantification in a clinically feasible scan time, in comparison to recently proposed 3D T2 mapping methods with lower resolution that do not allow reformatting the 3D volume in different orientations 35,36 or that require long acquisition times. 37 The proposed qBOOST-T2 approach showed good accuracy and precision with respect to spin echo reference values (high linear correlation) in the phantom experiment. T2 quantification was found to be robust to T1 variability in both simulations and phantom experiments (T2 variability <5%). Sequence simulations showed robustness to different HRs (percentage of variation <5%). A higher dependency on HR was observed in the phantom scan (variability of 10%); however, the capability to differentiate between T2 values ranging between 25 ms and 115 ms was observed.
Good delineation of anatomical structures was observed in the bright-blood volume acquired with qBOOST-T2 approach. However, lower sharpness was observed in the reformatted coronary qBOOST-T2 image ( Figure 6), which may be caused by the undersampled nature of the acquisition and by residual motion that could affect fine resolution details.
Good agreement in terms of T2 quantification was observed between T2 maps obtained with the proposed qBOOST-T2 sequence and standard bSSFP T2 mapping in healthy subjects. A slight T2 overestimation and a lower precision were observed with the proposed approach in comparison to conventional 2D bSSFP T2 mapping. However, the difference in measured precision was not statistically significant. The bias in T2 quantification between qBOOST-T2 and 2D bSSFP T2 mapping calculated with the Bland-Altman analysis was 2.7 ms (within the limits of 95%). The slight T2 overestimation of qBOOST-T2 with respect to 2D bSSFP T2 mapping was likely due to the different k-space ordering used by the two sequences (centric for qBOOST-T2 and linear for conventional 2D T2 mapping), as has been reported before. 3 Meanwhile, the high-resolution 3D nature of qBOOST-T2 (slice thickness = 2 mm) may explain the lower precision observed with the proposed approach with respect to 2D T2 mapping (slice thickness = 8 mm). A trade-off between image resolution, T2 precision, and partial volume has been observed (Supporting Information Figures S2 and S3); thus, the lower precision observed in the in vivo experiment may not only be due to the proposed technique but also to the different imaging parameters adopted in the 3D and 2D scans (i.e., resolution). Our experiments showed that decreasing the resolution leads to increased precision, associated with the increased signal-to-noise ratio in each acquired volume. However, it has been previously shown 20 that low-resolution acquisitions may introduce partial volume artefacts that could affect T2 quantification and precision.
FIGURE 10 Comparison between 2D short-axis standard T2 maps and short-axis reformatted 3D qBOOST-T2 maps for 4 patients with suspected cardiovascular disease. Apical, mid, and basal slices are shown for the acquired patients. Additionally, bright-blood and black-blood short-axis reformatted images are shown for the qBOOST-T2 acquisition. No pathologies were diagnosed for any of the acquired patients.
A general T2 overestimation across the whole myocardium with respect to conventional 2D bSSFP T2 mapping was also observed in the bull's eye plots. However, T2 quantification was uniform across the whole 3D volume. A lower precision was observed particularly in the inferior part of the heart, which may be explained by the presence of residual motion in the reconstructed images and lower signal to noise ratio due to larger distance to the radiofrequency coils. Additionally, the inferior region of the heart is located close to the edge of the FOV; thus, imperfect shimming could lead to field inhomogeneities that would affect the T2 map. Moreover, a lower signal to noise is expected in the qBOOST-T2 acquisition due to lower slice thickness.
Preliminary results in 4 patients showed a similar trend to that observed in the healthy subject study. A slight T2 overestimation was observed in each acquired short-axis slice when compared with standard 2D bSSFP T2 mapping, with slightly lower precision. However, the 3D whole-heart coverage of the proposed approach provides the flexibility to reformat the acquired volume in any orientation, which could be beneficial for the identification of localized pathologies as shown in van Heeswijk et al. 37 A potential limitation of the proposed work is fat suppression. Different fat suppression techniques are used on each acquired dataset because the fat signal evolution differs in each volume. In the first dataset, a STIR approach is used to achieve fat suppression. The inversion pulse of the T2prep-IR module was used to null the fat signal with a TI of 110 ms. In the second and third datasets, a SPIR approach was used, and spectral presaturation FAs of 110 degrees and 130 degrees were used to null the fat signal in the second and third volume, respectively. Both the TI and SPIR FAs were optimized for a HR of 60 bpm; however, the HR dependency of fat suppression techniques could lead to residual fat signal in 1 or more reconstructed volumes. If suboptimal fat suppression is achieved in 1 or more of the acquired volumes, an unpredictable signal will be matched in the T2 map: depending on the acquired signal evolution, the T2 corresponding to the closest dictionary entry to the measured signal will be matched. Moreover, residual fat signal could generate partial volume artefacts, thus affecting the T2 quantification at the myocardium-fat interface. In the presence of partial volume, the mixed signal will be matched to the closest signal evolution entry in the dictionary; however, it will not reflect the proper T2 value of the voxel. The least square error of the matching process could be used to assess the accuracy of the matching in the presence of partial volume artefacts.
The approximation of the standard deviation of the proposed technique used in this study ignores the intrinsic variability of the underlying T2 because uniform mean T2 values were expected across healthy subjects. However, this approximation is valid only when analyzing normal T2 values, and percentages of the mean should be considered in future patient studies.
An additional limitation of the proposed technique is the approximation of respiratory motion as pure translational motion in the SI and LR directions. Respiration induces additional displacements of the heart, such as translational motion in the anterior-posterior direction, as well as rotation and nonrigid deformation. [38][39][40] Future studies will focus on the implementation and optimization of nonrigid respiratory motion correction within the reconstruction. 41 A further limitation is the sensitivity to arrhythmia. In the presence of arrhythmia, the measured signal would differ from the steady state signal expected in the 3 different interleaved acquisitions, generating a T2 overestimation or underestimation in the matched T2 maps. Prospective or retrospective arrhythmia rejection could be incorporated in the future to overcome this limitation. With a prospective arrhythmia rejection approach, 3 interleaved beats will be rejected in the presence of 1 arrhythmic heartbeat and the entire acquisition will be repeated with a stabilized HR; however, this approach will lead to longer and unpredictable acquisition times. On the other hand, by exploiting retrospective arrhythmia rejection, all the datasets will be acquired and the data corrupted by arrhythmic heartbeats will be excluded from the reconstruction. However, the reconstructed dataset will be further undersampled (an undersampling factor of 4 is used to accelerate the acquisition); thus, in the presence of high undersampling, the image quality of the reconstructed datasets and, therefore, of the matched T2 maps may be compromised. Validation of the proposed approach in patients with cardiovascular disease and challenging acquisition conditions (i.e., arrhythmic heartbeats) will be investigated in future studies.
| CONCLUSIONS
The proposed accelerated qBOOST-T2 sequence allows the acquisition of 3D co-registered high-resolution bright- and black-blood volumes and a T2 map for comprehensive assessment of cardiovascular disease in a clinically feasible scan time of ~11 min. The proposed approach shows promising results in terms of accurate T2 quantification when compared with conventional 2D bSSFP T2 mapping.
SUPPORTING INFORMATION
Additional supporting information may be found online in the Supporting Information section at the end of the article.
FIGURE S1 EPG simulations performed to assess the matched T2 dependency on the T1 used to generate the simulated signal. A, Signal evolutions of T1/T2 pairs with T2 = (40:6:88) ms and T1 = (800:100:1400) ms were matched to an EPG dictionary with fixed T1 = 1100 ms. High T1 dependency was observed for long T2 values. B, Signal evolutions of T1/T2 pairs with T2 = (40:6:88) ms and T1 = (800:100:1400) ms were matched to an EPG dictionary with T1 = (900, 1100, 1300) ms. The T2 matching percentage error was decreased and a T2 variation < 5% was observed for almost all the simulated signals. C, Maximum variability errors (T1 = 800 and 1400 ms) obtained by matching the simulated signal to a dictionary with fixed T1 (top row) and a dictionary with T1 = (900, 1100, 1300) ms (bottom row)
FIGURE S2 A, Effect of averaging contiguous slices on T2 quantification and T2 precision. Averaging 6 contiguous slices leads to a reduction of the standard deviation in a septal ROI from 5.90 ms to 3.39 ms (percentage of variation of 42.5%), whereas no effect on T2 quantification was observed (T2 variability of only 1.1%). B, T2 intensity profile drawn across a septal region (indicated by the black line) for different numbers of summed slices. Decreasing resolution in the slice direction leads to an increase of partial volume effects between blood and myocardium. Indeed, a narrower myocardial delineation is observed for a high number of summed slices. Additionally, partial volume effects are visible in lower resolution images as shown by the black arrow
FIGURE S3 A, Three 3D qBOOST-T2 maps were generated for 1 representative subject with reconstructed resolutions of 1 × 1 × 2 mm 3 , 1.5 × 1.5 × 3 mm 3 and 2 × 2 × 4 mm 3 and compared with the 2D bSSFP T2 map. B, Mean T2 and T2 precision measured in the septum of the myocardium as a function of different reconstructed resolutions for 3D qBOOST-T2. A reduction in standard deviation is observed, whereas a variability of only 0.96% in myocardial T2 quantification was observed between different resolutions. Table: Mean and standard deviation of T2 measured in the septum for different reconstructed resolutions
FIGURE S4 A, Bull's eye plot of averaged myocardial T2 quantification and precision of the proposed qBOOST-T2 mapping sequence for patient 2. B, Histogram of per-pixel T2 distribution through the whole left ventricle. C, Averaged T2 distribution through coronal slices showed a linear correlation of y = −0.03x + 47.3. Uniform T2 quantification is observed in the left ventricle
TABLE S1 T1 and T2 values obtained from Inversion Recovery Spin Echo (IRSE) and Spin Echo (SE) experiments on a phantom with 6 vials with different agar concentrations (0.8, 1, 1.5, 2, 3, 5%). The measured T2 values are within a range that includes T2 of physiological and pathological myocardium (T2 myoc = 52 ms, T2 myoc-diseased = 65 ms)
TABLE S2 Acquisition parameters used in phantom and in vivo acquisitions for 2D bSSFP T2 mapping, 3D qBOOST-T2 and coronary magnetic resonance angiography (CMRA)
TABLE S3 Measured septal myocardial T2 values obtained with qBOOST-T2 and conventional 2D bSSFP for 4 patients. A general T2 overestimation and lower precision is observed with the proposed technique
VIDEO S1 Bright-blood 3D volume acquired with qBOOST-T2 for a representative healthy subject
VIDEO S2 Co-registered black-blood 3D volume acquired with qBOOST-T2 for the same healthy subject shown in Video S1
VIDEO S3 Co-registered 3D T2 map acquired with qBOOST-T2 for the same healthy subject shown in Videos S1 and S2. Uniform T2 quantification is observed across the whole myocardium
"Medicine",
"Engineering"
] |
Detection of nanoscale electron spin resonance spectra demonstrated using nitrogen-vacancy centre probes in diamond
Electron spin resonance (ESR) describes a suite of techniques for characterizing electronic systems with applications in physics, chemistry, and biology. However, the requirement for large electron spin ensembles in conventional ESR techniques limits their spatial resolution. Here we present a method for measuring ESR spectra of nanoscale electronic environments by measuring the longitudinal relaxation time of a single-spin probe as it is systematically tuned into resonance with the target electronic system. As a proof of concept, we extracted the spectral distribution for the P1 electronic spin bath in diamond by using an ensemble of nitrogen-vacancy centres, and demonstrated excellent agreement with theoretical expectations. As the response of each nitrogen-vacancy spin in this experiment is dominated by a single P1 spin at a mean distance of 2.7 nm, the application of this technique to the single nitrogen-vacancy case will enable nanoscale ESR spectroscopy of atomic and molecular spin systems.
Here we develop the theory of relaxation-based sensing as used in the main text of this work. As we are considering axial magnetic field strengths, B 0 , such that B 0 ∼ 2πD/2γ e ∼ 512 G (where D = 2.87 GHz is the zero-field splitting of the NV spin, and γ e = 17.6 × 10 6 s −1 G −1 is the electronic gyromagnetic ratio), only the |0⟩ ↔ | − 1⟩ transitions of the NV spin will be appreciably excited, meaning we can disregard any population of the | + 1⟩ state. The time evolution of the associated density matrix, ρ T , is described by the equation of motion for the combined density matrix of the entire NV spin + environment system. The full Hamiltonian is given by H T = H NV + H int + H E , where H NV and H E are the self Hamiltonians of the NV centre and environment for a general spin bath environment, where we have assumed that both the z axis and the external magnetic field are aligned with the principal axis of the NV spin, and E ij is the tensor describing the interaction between spins i and j in the environment, which in general may include both exchange and magnetic-dipole interactions depending on the environment in question. Due to the highly localised nature of the NV wavefunctions, the coupling of the environmental spins to the NV may be described by the magnetic dipolar interaction alone, where B i is the symmetric magnetic dipole tensor describing the interaction of the NV spin with the i-th environmental spin, and includes both transverse and longitudinal components, proportional to P x,y and P z of the NV spin respectively. The latter have a pure dephasing effect, resulting in an additional contribution to the intrinsic dephasing rate of the NV. As relaxation processes occur on timescales that are much longer than the typical interaction timescales of the environmental constituents, essentially placing the system in a Markovian regime, the resulting dephasing will be purely exponential. These effects may thus be modeled using a master equation approach for the reduced density matrix of the NV spin, ρ NV , where, in the present context, L is the Lindbladian operator corresponding to a pure dephasing process on the NV spin, and is given by L = √(2Γ NV) P z . The total dephasing rate, Γ NV, accounts for both the local crystal environment and the longitudinal coupling to any external environment. The timescale of the intrinsic dephasing process is described using the inhomogeneous linewidth, (T2*)^−1, since the transverse phase accumulation occurs in the absence of any pulsed microwave control. Subtle tuning effects that modify the sensitivity of this technique to various parts of the environmental spectral density may be achieved by changing the intrinsic dephasing rate via dynamic decoupling techniques.
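The display equations referred to above are not reproduced in this extract; for orientation, the master equation with a single pure-dephasing Lindblad operator has the standard form below (taking ħ = 1), with L as defined in the text.

```latex
\frac{d\rho_{\mathrm{NV}}}{dt}
  = -i\left[H,\rho_{\mathrm{NV}}\right]
  + L\,\rho_{\mathrm{NV}}\,L^{\dagger}
  - \tfrac{1}{2}\left\{L^{\dagger}L,\,\rho_{\mathrm{NV}}\right\},
\qquad
L = \sqrt{2\,\Gamma_{\mathrm{NV}}}\;P_{z}.
```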
In what follows, owing to the strong intra-environment and comparatively weak NV-environment couplings, we will treat the coupling of the environment to the transverse components of the NV spin as a semiclassical oscillatory field (these simplifications will be justified later in Supplementary Note 2), where B(ω E) = b x(ω E) P x + b y(ω E) P y, and b x and b y are the x and y components of the magnetic field. The frequency spectrum (i.e., the distribution of ω E) is determined by analysing the interaction between environmental constituents, as described by H E (again, see Supplementary Note 2). To make the solution tractable, we change to the interaction picture. The transformed equation of motion (equation 6) is expressed in terms of the interaction Hamiltonian V I = e^{iH NV t} V e^{−iH NV t}. We are then interested in determining the rate at which the NV spin relaxes to its equilibrium state under the influence of the environment. We proceed by reducing the 3 × 3 system of first-order linear differential equations described by equation 6 to a higher-order differential equation for P 0 ≡ ρ 00. We then wish to solve this equation, together with the initial conditions ρ ij = 0 unless i = j = 0, in which case ρ 00 = 1, representing the initial polarisation of the NV spin in the |0⟩ state.
To gain insight into the expected analytic solution for the spin-1 NV centre, we consider the simplified case in which only one of the transitions of the NV centre is excited by the environment, and the other is assumed to be too far detuned to have any effect on the population of the spin states. This simplifies the analysis dramatically, yet demonstrates the main properties of relaxation based detection, and is applicable to a spin-1 system for cases of significantly strong Zeeman splittings between the | ± 1⟩ states.
The equation of motion for P 0 then follows (Supplementary Equation 7), where b ≡ ⟨B²⟩^{1/2} is the second moment of the strength of the effective magnetic field operator, B, and δ = 2πD − ω NV − ω E is the detuning between the NV transition frequency (2πD − ω NV) and the environmental frequency (ω E).
A. Response to a monochromatic transverse field
Resonant case
For the case where the frequency of the environment is resonant with the transition frequency between the probe's spin states (δ = 0), the solution of Supplementary Equation 7 follows directly. Typically, the spin-based environments in which we are interested couple weakly to the NV spin as compared with its intrinsic dephasing rate, implying Γ NV ≫ b. In fact, even a strong coupling will also induce additional dephasing, so even in a worst-case scenario we are guaranteed Γ NV > b. In this limit, the solution simplifies, and hence the resonant (and therefore maximal) longitudinal relaxation rate, Γ_1^max, is obtained.
General case
When a finite detuning, δ, exists, we may examine the relative importance of the terms within Supplementary Equation 7, subject to rescaling t in terms of the decay time from the resonant solution. That is, if we consider the dimensionless variable T = Γ_1^max t and retain terms up to and including leading order, the solution for an arbitrary detuning follows. For zero detuning, we recover the previous result (Supplementary Equation 10). For finite detuning, the relaxation rate is modified by a Lorentzian factor with a FWHM of Γ NV. The complete decay profile is then obtained by integrating this expression over the spectral density of the environment, implying that the δ-dependent relaxation rate acts to filter out the environmental spectrum about δ ∼ 0.
B. Response to a transverse field with an arbitrary spectral density
Even without considering the specifics of the spectral density, the response of the NV spin to an arbitrary spin bath can vary remarkably due to the geometric proximity and arrangement of the bath relative to the NV centre. The NV spin relaxation and the corresponding filter function are defined such that the filter function, G(ω NV, ω E), acts to filter out regions of the spectral density (as dependent on the external field strength, B 0 = ω NV /γ e) and depends explicitly on the geometric arrangement of the environmental constituents. Ultimately, given some measurement record and filter function, it is expression 12 that must be deconvolved to reproduce the spectral density, S(ω E). In this section, we consider the effects of the geometric arrangement of the environment on the filter function, G, for a general spectral density; the specific case of the internal P1 nitrogen donor electron spin bath in type-1b diamond is considered below in Supplementary Note 2.
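The deconvolution target can be evaluated numerically as a filtered overlap between the probe and the environmental spectrum. The sketch below is illustrative only: the Lorentzian is a stand-in for the actual filter function, whose precise form (derived in the following subsections) depends on the bath geometry.

```python
import numpy as np

def relaxation_rate(omega_nv, omega_e, s_env, filter_fn):
    """~ integral of G(omega_nv, w) * S(w) dw over the environmental spectrum.
    omega_e : 1D grid of environmental frequencies (rad/s)
    s_env   : spectral density S evaluated on omega_e
    filter_fn : callable G(omega_nv, omega_e)"""
    return np.trapz(filter_fn(omega_nv, omega_e) * s_env, omega_e)

def lorentzian_filter(omega_nv, omega_e, gamma_nv=2 * np.pi * 1e6):
    """Placeholder filter: unit-area Lorentzian of half-width gamma_nv (rad/s)
    centred on the probe frequency."""
    detuning = omega_e - omega_nv
    return (gamma_nv / np.pi) / (detuning ** 2 + gamma_nv ** 2)
```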
Response to an internal (bulk) spin bath
We note that the coupling of the NV to a bath spin located at some distance r may be written b ≡ β(θ, ϕ)/r³, where the specific details of the angular dependence of the coupling are incorporated into the parameter β(θ, ϕ). It is necessary to omit further discussion of β until Supplementary Note 2, as different environmental processes will be more readily detectable by the NV spin at different relative angles. Unlike the transverse (spin echo) case, where the precession of the NV spin vector in the x−y plane is sensitive to all longitudinal field sources, the effect on the longitudinal projection is dominated by the coupling to the nearest P1 electron spin. As such, we may write P b(b) = P r(r) P θ,ϕ(θ, ϕ), where r, θ, ϕ are the spherical coordinates associated with the distribution of field sources.
From Ref. 1, the distribution of distances from a given NV centre to its nearest spin impurity is determined by n, the average density of impurities in the bath. Substituting this expression into Supplementary Equation 12 yields a result in terms of the Meijer-G function, G, and ⟨|β|⟩ = ∫ |β(θ, ϕ)| P θ,ϕ(θ, ϕ) dθ dϕ. Thus, we identify the filter function associated with environments inside the diamond lattice, up to a constant A in, associated with the geometry of the bath, that may be renormalised.
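As a numerical illustration of the nearest-neighbour statistics invoked here, the snippet below draws nearest-impurity distances assuming the standard Poisson nearest-neighbour distribution (the explicit expression is not reproduced in this extract) and an assumed 50 ppm nitrogen density; the resulting mean is close to the ~2.7 nm quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_density = 8.8e24      # impurities per m^3, ~50 ppm in diamond (assumed)
n_samples = 100_000

# inverse-transform sampling of F(r) = 1 - exp(-4*pi*n*r^3/3)
u = rng.random(n_samples)
r = (-3.0 * np.log(1.0 - u) / (4.0 * np.pi * n_density)) ** (1.0 / 3.0)

print(round(r.mean() * 1e9, 2), "nm mean nearest-neighbour distance")
```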
Response to an external (surface) spin bath
In contrast to the bulk spin bath case, spins on the surface are unable to exist arbitrarily close to the NV centre. Typically we consider samples in which NV centres exist at some depth h + δh below the surface, with h being the mean depth, and δh a normally distributed variable with variance ⟨ δh 2 ⟩ ≪ h 2 . In this case, an individual NV spin is exposed to many bath spins, meaning that the effective coupling distribution, P b (b), is normally distributed.
In this case, we may expand Supplementary Equation 11 for small t, which, upon substitution into Supplementary Equation 12 and averaging over P b(b), gives the corresponding decay profile. Thus, we identify the filter function associated with environments outside the diamond lattice.
Supplementary Note 2. Theoretical description of the coupled NV-P1 system
In this section, we discuss the features we expect to be evident in the P1 centre spectrum by examining the effect of a P1 centre on the magnetic-field-dependent relaxation rate of a nearby NV centre. We conclude this section by demonstrating the equivalence of the semi-classical approach used in this work and a quantum mechanical treatment of the NV-P1 interaction.
A. P1 Hamiltonian
The Hamiltonian of a P1 centre is written in terms of A P1, the hyperfine tensor describing the coupling between the P1 electron spin, S, and the 14 N nuclear spin, I, and Q, the quadrupole splitting of the nuclear spin. For all field strengths at which there is an appreciable overlap of this spectrum with the NV spin filter function, B 0 ∼ 512 G, we find that the eigenstates of the P1 centre electron spin are predominantly dictated by the external magnetic field. In this instance, the Hamiltonian of the P1 centre simplifies for cases where the P1 axis is aligned with the external magnetic field, where A z = 114 MHz and A x = 81.3 MHz. If the P1 axis is aligned along one of the three other bond directions not aligned with the field, the Hamiltonian may be transformed via the rotation operator R = exp(−iI y θ) = 1 − iI y sin(θ) − I y²(1 − cos(θ)) (the other two axes may be realised via a trivial rotation about the z axis), with a z = (8A x + A z)/9 = 85 MHz and a x = (5A x + 4A z)/9 = 99 MHz.
B. Coupling of the P1 environment to the NV centre
The interaction between the NV spin and the P1 nuclear spin is ignored on account of its comparative weakness. For the interaction between the NV and the P1 electron, the magnetic dipole interaction is expressed in terms of r̂, the unit separation vector between the NV and P1 centres, and the effective dipolar coupling strength.
The components of the NV-P1 interaction responsible for the relaxation of the NV spin are those coupling to its transverse components, namely P x and P y. Without loss of generality, we may rewrite this interaction as an effective quantum mechanical magnetic field, B, where B x,y,z couples to P x,y,z. To make use of Supplementary Equation 12, we make the semi-classical approximation and assume that the NV-P1 interaction plays very little part in determining the dynamics of the environment. The problem of determining the environmental spectrum then reduces to solving for the environmental evolution exclusively under its own influence, as follows.
In order to determine the effective strength of the semi-classical magnetic field used to model the effect of the P1 bath on the NV spin, we must determine which components of the NV-P1 interaction, H int, are relevant and which may be discarded for a given magnetic field strength. To do this, we transform H int to the interaction picture (we point out that only the Hilbert space associated with the NV and P1 electron spins need be considered, as the interaction between the NV electron spin and the P1 nuclear spin is sufficiently weak that it may be ignored). Writing H int in matrix form in this combined basis, we switch to the interaction picture to see which terms are important near 512 G.
Here k = −1, 0, +1 is the hyperfine projection of the nuclear spin. Using the rotating wave approximation, we can see that only terms of frequency 2πD − 2ω 0 − 2πkA z need be retained, and all other off-diagonal terms may be ignored. Transforming back to the Schrödinger picture, the simplified interaction Hamiltonian yields the effective transverse field strength associated with the allowed NV-P1 |0, ↓⟩ ↔ |+1, ↑⟩ transitions ((a), (b) in Supplementary Figure 1). Determination of the effective field strength associated with the disallowed transitions ((c), (d) in Supplementary Figure 1) follows the same approach as above.
C. P1 dynamics
To determine the dynamic behaviour of the P1 environment, we compute the autocorrelation functions associated with the field components above. Interactions between environmental components may be modeled by damping these autocorrelation functions with a decaying exponential, exp (−Γ P1 t) to describe their relaxation due to mutual flip-flop processes with corresponding relaxation rate Γ P1 .
The corresponding spectra may then be found by computing the Fourier transforms of the autocorrelation functions. From this, we find the spectra associated with the allowed transitions, S_all^on(ω) and S_all^off(ω) (each with prefactor 1/(6π)), for the cases of on- and off-axis P1 centres, respectively. Taking the relative proportions of on- and off-axis P1 centres to be 25% and 75%, respectively, we find the overall spectrum associated with the allowed transitions. Similarly, the spectra associated with the disallowed transitions, S_dis^on(ω) and S_dis^off(ω), are obtained, and the overall spectrum associated with the disallowed transitions is then S_dis(ω) = (1/4) S_dis^on(ω) + (3/4) S_dis^off(ω).
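The damping-then-Fourier-transform step can be checked numerically: an exponentially damped oscillatory autocorrelation transforms to a Lorentzian line centred at the oscillation frequency with half-width set by Γ_P1. The parameters below are illustrative, not the values used in the study.

```python
import numpy as np

gamma_p1 = 2 * np.pi * 0.1e6        # assumed flip-flop damping rate (rad/s)
omega0 = 2 * np.pi * 10e6           # assumed oscillation frequency (rad/s)
t = np.linspace(0.0, 200e-6, 2 ** 16)
acf = np.exp(-gamma_p1 * t) * np.cos(omega0 * t)

dt = t[1] - t[0]
spectrum = np.abs(np.fft.rfft(acf)) * dt
freq_hz = np.fft.rfftfreq(t.size, d=dt)
print(freq_hz[spectrum.argmax()] / 1e6, "MHz")   # peak near omega0 / (2*pi)
```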
By employing the full spectrum, S(ω E) = S_all(ω E) + S_dis(ω E), in equation 12, we find the resulting external field-dependent relaxation rate of the NV centre, where the effective couplings arise from integration over all possible NV-P1 separations. By taking the average P1 density to be 50 ppm and the FID rate to be Γ 2 = 5.0 MHz, we may plot the resulting field-dependent relaxation rate of the NV spin (Supplementary Figure 1).
"Materials Science",
"Physics"
] |
Operational and geological controls of coupled poroelastic stressing and pore-pressure accumulation along faults: Induced earthquakes in Pohang, South Korea
Coupled poroelastic stressing and pore-pressure accumulation along pre-existing faults in the deep basement contribute to the recent occurrence of seismic events at subsurface energy exploration sites. Our coupled fluid-flow and geomechanical model describes the physical processes inducing seismicity corresponding to the sequential stimulation operations in Pohang, South Korea. Simulation results show that prolonged accumulation of poroelastic energy and pore pressure along a fault can nucleate seismic events larger than Mw 3 even after terminating well operations. In particular, the possibility of large seismic events can be increased by multiple-well operations with alternating injection and extraction, which can enhance the degree of pore-pressure diffusion and subsequent stress transfer through a rigid, low-permeability rock to the fault. This study demonstrates that a proper mechanistic model and optimal well operations need to be accounted for to mitigate unexpected seismic hazards in the presence of site-specific uncertainties such as hidden/undetected faults and the stress regime.
1. Select model dimension (3-D)
2. Select governing physics interfaces (Darcy's Law and Solid Mechanics)
3. Add multiphysics interface Poroelasticity
4. Import well operation data (.txt format)
5. Generate geometry
6. Define hydrological and mechanical parameter values
7. Select Poroelastic Storage for Darcy's Law
8. Define initial and boundary conditions for each physics interface
9. Locate fluid and matrix properties for each physics interface
10. Generate mesh
11. Define time steps for simulation and results
12. Run simulation
13. Visualize and export results for the Coulomb stress analysis
In our three-dimensional (3-D) finite element model, two separate sections are assigned for the surrounding basement to enhance numerical efficiency: a finer tetrahedral mesh within the inner cubic region and a coarser tetrahedral mesh for the outer region (Figure S1). A fine mapped mesh is implemented in the fault region to achieve better accuracy in the mechanical solutions. The tetrahedral mesh is highly refined near the boundaries of the fault and the injection-extraction points to resolve the strong pressure gradients caused by the contrast of material properties.
Hydrological model results with variation in basement permeability
The hydrological modeling approach has focused on the direct pore-pressure impact on fault slip and suggested the enlargement of pressurized regions encountering the locked fault as a critical factor in inducing earthquakes. A recent hydrological model tried to match the stimulation history (Figure S2A) and the temporal evolution of seismic events observed at Pohang by setting the hydraulic diffusivity of the basement rock surrounding the fault plane to 1 × 10 −2 m 2 /s (converted to permeability κ b = 7.6 × 10 −17 m 2 ), which was the practical upper limit of the hydraulic diffusivity estimated from hydraulic modeling calibration and the analytical Jacob method (measured values within 1 × 10 −4 ≤ D b ≤ 1 × 10 −1 m 2 /s; refer to Fig. 6-1 and Section 6.2.1 in 1 ). Our uncoupled hydrological model results show that moderate to large earthquakes, including the M w 5.5 event, are unlikely to occur due to substantial dissipation of elevated pore pressure into the high-diffusivity basement after extraction or shut-in (solid line in Figure S2B).
This pore-pressure behavior is similar to that obtained from the numerical model used for the Korean Government Commission Report 1 , which also shows rapid pore-pressure buildup with stimulation and subsequent attenuation after terminating stimulation (Figure S3, reproduction of Figure O-15b given in the Report 1 ). Note that the hydrological model in 1 implemented two fault planes located at the hypocenters of the M w 3.2 and M w 5.5 earthquakes. The presence of a high-permeability fault plane near the M w 3.2 event causes less pore-pressure buildup after the second stimulation through PX-1 compared to the result from this study shown in Figures 3D and S2B.
Under a diffusion-dominant system, the pore-pressure buildup within the permeable fault is essential to nucleate large-magnitude earthquakes, which is controlled by the contrast in hydraulic diffusivity between a fault and a bounding basement. Less permeable basement rocks will decelerate diffusion of pore pressure into the fault, but enhance the efficacy of trapping accumulated pore pressure within the fault after shut-in. Reducing the basement permeability by an order of magnitude (κ b = 7.6 × 10 −18 m 2 ) fails to generate immediate pore-pressure accumulation within the fault, such that pore-pressure evolution at the hypocenter is not in accord with the initial seismic events (dashed line in Figure S2B). Subsequent seismic events cannot be described by this model because there is no substantial increase in pore pressure despite stimulation Phase 3 (injection through PX-2 around ∆t = 450 days) and no accumulation of pore pressure after shut-in (∆t ≥ 645 days).
FIGURE S1 Cubic meshes are implemented in the fault whereas tetrahedral meshes are used in the surrounding basement. Two regions are assigned for the surrounding domain to save numerical costs by using coarser tetrahedral meshes in the outer region.
Changing the basement permeability to κ b = 7.6 × 10 −19 m 2 generates monotonic increases of pore pressure at the Pohang earthquake hypocenter after the second injection operation (dotted line in Figure S2B). The low-diffusivity basement traps elevated pore pressure within the fault zone, which may induce the post-shut-in M w 5.5 earthquake. Nevertheless, the delayed diffusion process fails to capture the seismic events during and after the Phase 1 and 2 stimulations. The inconsistency between the temporal patterns of pore-pressure changes from conventional hydrological modeling and the consecutive occurrence of earthquakes indicates that the Pohang earthquake most likely involved additional physical mechanisms.
Enhancement of coupling effects by operational constraints
The operation of multiple wells can generate gradients in the pressure and stress fields, which determine the directional characteristics of diffusion and poroelastic stressing. Figure S4 shows how significantly well operation influences the coupled process of pore-pressure diffusion and poroelastic stressing. The single-well model shows almost no critical impact of pore-pressure diffusion on the fault (Figure S4B) because cyclic injection-extraction operations through PX-2 cause weak gradients in the pore-pressure field that inhibit pore-pressure dissipation into the fault (refer to Figures 4A and 4B, which describe the mechanism schematically). Instead, shear stressing as a poroelastic response to compression plays the main role in the positive increases of ∆τ corresponding to the sequential well operations (Figures S4D to S4F).
Poroelasticity model results with variation in basement permeability
From all simulations with the variation in geological and operational settings, we obtain the temporal changes in pore pressure, poroelastic stresses, and Coulomb stress fields along the middle of the fault plane. The upper bound of the vertical axis of each plot is the fault top (z = −3.6 km), whereas the lower bound is the fault bottom (z = −4.6 km). Figure S5 shows the spatio-temporal distribution of Coulomb stress components along the fault middle: reference model results in the left column and the high-permeability basement case in the right column.
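The Coulomb stress change plotted in Figure S5 combines the shear, normal-stress, and pore-pressure contributions named in the text. The helper below is a generic sketch of that combination; the friction coefficient and the tension-positive sign convention are assumptions, not values taken from the study.

```python
import numpy as np

def coulomb_stress_change(d_tau_s, d_sigma_n, d_p, friction=0.6):
    """d_tau_s   : change in shear stress resolved in the slip direction (Pa)
       d_sigma_n : change in fault-normal stress, tension positive (Pa)
       d_p       : change in pore pressure on the fault (Pa)
       friction  : fault friction coefficient (0.6 is a generic value)
    Returns d_tau = (d_tau_s + friction * d_sigma_n) + friction * d_p,
    i.e. the poroelastic term plus the pore-pressure term referred to above."""
    return (np.asarray(d_tau_s, float)
            + friction * (np.asarray(d_sigma_n, float) + np.asarray(d_p, float)))
```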
Low-permeability basements in the reference model limit pore-pressure propagation, such that no change in f ∆p is observed after the Phase 1 stimulation (Figure S5A). The initiation of the Phase 2 stimulation, injecting a large amount of fluid at PX-1 while simultaneously extracting at PX-2, causes substantial compression on the fault that increases f ∆p as a poroelastic response. On the other hand, high-permeability basements allow rapid diffusion of pore pressure into the fault (Figure S5B). A spatial and temporal pattern of positive ∆τ s + f ∆σ n represents compression acting on the fault, driven by the sequential stimulation activities on both sides of the fault (Figures S5C and S5D), which is consistent with the focal mechanisms and stress models showing that the fault was reactivated mainly by reverse slip 1 . The summation of pore pressure and normal and shear stresses gives ∆τ, and the spatio-temporal patterns form corresponding to the stimulation location and history (Figure S5E). However, rapid dissipation of pore pressure back into the high-permeability basement cannot accumulate enough energy to cause a large-magnitude earthquake after shut-in (Figure S5F).
FIGURE S3 Reproduction of Figure O-15b for hydrological model results from the Korean Government Commission Report 1 . Pore-pressure changes at the hypocenter of the M w 5.5 seismic event: Case A for the mainshock fault (which has similar geometry and orientation to the fault modeled in this study) with a low-permeability core, whereas Case B is for the mainshock fault not including a low-permeability core. The rapid response of pore-pressure buildup/attenuation with each stimulation activity cannot support the occurrence of the M w 5.5 earthquake after shut-in of all wells.
"Geology"
] |
Characterization of the Shape Anisotropy of Superparamagnetic Iron Oxide Nanoparticles during Thermal Decomposition
Magnetosomes are near-perfect intracellular magnetite nanocrystals found in magnetotactic bacteria. Their synthetic imitations, known as superparamagnetic iron oxide nanoparticles (SPIONs), have found applications in a variety of (nano)medicinal fields such as magnetic resonance imaging contrast agents, multimodal imaging, and drug carriers. In order to perform these functions in medicine, shape and size control of the SPIONs is vital. We sampled SPIONs at ten-minute intervals during the high-temperature thermal decomposition reaction. Their shape (sphericity and anisotropy) and geometric description (volume and surface area) were retrieved using three-dimensional imaging techniques, which allowed each particle to be reconstructed in three dimensions, followed by stereological quantification methods. The results, supported by small-angle X-ray scattering characterization, reveal that SPIONs initially have a spherical shape, then grow increasingly asymmetric and irregular. A high heterogeneity in volume at the initial stages gives way to lower particle volume dispersity at later stages. The SPIONs settled into a preferred orientation on the support used for transmission electron microscopy imaging, which hides the extent of their anisotropic nature in the axial dimension, thereby biasing the interpretation of standard 2D micrographs. This information could be fed back into the design of the chemical processes and the characterization strategies to improve the current applications of SPIONs in nanomedicine.
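The per-particle shape descriptors mentioned above can be summarized with a simple metric; the helper below uses Wadell's sphericity computed from volume and surface area, which is a common stereological choice but not necessarily the exact metric used in this study.

```python
import numpy as np

def wadell_sphericity(volume, surface_area):
    """Surface area of the equal-volume sphere divided by the measured
    surface area; equals 1 for a perfect sphere and decreases with
    increasing shape anisotropy."""
    v = np.asarray(volume, float)
    a = np.asarray(surface_area, float)
    return np.pi ** (1.0 / 3.0) * (6.0 * v) ** (2.0 / 3.0) / a
```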
Introduction
Magnetosomes [1,2] are exceptional intracellular structures found in magnetotactic bacteria, offering cellular functionalities such as compasses for motility and orientation [3], oxygen chelation [4], or support in colonial self-assembly [5]. The core of magnetosomes consists of single-domain magnetite crystals [6], each possessing a magnetic moment that is thermally stable at physiological temperatures [7]. These magnetotactic crystals range from 30 nm to 120 nm in size [8,9]. The size and shape of the crystals can greatly affect their ability to perform their tasks, and therefore the biomineralization is governed under the precise control of a number of genetic factors [10][11][12]. Chemically produced superparamagnetic iron oxide nanoparticles (SPIONs) try to imitate the cores of these magnetosomes using magnetite (Fe3O4) [13] and maghemite (γ-Fe2O3) nanocrystals [14]. SPIONs can be produced at sizes ranging from roughly 10 nm to about 100 nm [15]. Their surface functionalization can be adjusted to generate biocompatibility [16] by means of well-described surface functionalization schemes [17,18].
SPIONs have been used in a broad portfolio of applications: in the oil industry [19], as chemical catalysts [20], or in effective separation technologies [21], but SPIONs are applied especially in biomedical technologies, owing to their biocompatibility and responsiveness to static and alternating magnetic fields. The global market for nanoparticles in biotechnology, drug development, and drug delivery has been estimated to have reached $79.8 billion in 2019, with a compound annual growth rate of 22% [22], and SPIONs play a significant role in this economic interest, for instance as contrast enhancement agents for magnetic resonance imaging [23,24] and subsequent cell tracking [25], or as cell markers, for example in stem cell applications [26]. Beyond the imaging applications, SPIONs have been suggested as drug delivery agents [27]. Reaching the target site is an issue many promising drug candidates face, and advanced drug delivery using SPIONs may drastically influence modern medicine advancements [28]. Besides these complementary functions, magnetic hyperthermia is an emerging technology using SPIONs as a potential cancer treatment: after reaching the cancer cell, the SPIONs are exposed to an alternating magnetic field (e.g., an MRI) and their response produces sufficient heat to destroy nearby cells [29][30][31].
As with the magnetosomes found in nature, these applications rely on the accurate magnetization of the SPIONs, which depends significantly on their size and shape [32]. Shape control is one of the most exciting challenges in chemical nanotechnology [33,34], mainly because the interesting properties of nanoparticles can be tuned through variations in their morphology [35,36]. Additionally, shape can also affect their self-assembly behavior [37]. Although significant progress has been made, shape control of superparamagnetic oxides is still very challenging, in particular for small SPIONs, and syntheses that aim for shape control, except for a few cases such as cubes [38] and, to a lesser extent, nanorods [39], often result in rather broad particle size and shape distributions.
The introduction of shape anisotropy is particularly interesting in the context of magnetic fluid hyperthermia. It is one of the proposed strategies to increase the extent of heat generated by the particles without altering the chemical composition of the material used to fabricate the nanoparticles [40]. It is well established that shape anisotropy of nanoparticles leads to an enhancement of the dissipated heat, since it adds to the magnetic anisotropy and has a non-negligible effect on the time-dependent hysteresis of the particles. It has been shown, for example, that magnetite nanocubes [41] and nanorods [42,43] have a higher specific absorption rate than spherical nanoparticles of similar size made of the same material. While some investigations hint at the effect of sodium and potassium cations in driving the shape transition from spherical to cubic [44], the factors affecting the change in shape from spherical to ellipsoidal remain elusive and might be related to the temperature profile imposed during the synthesis. For example, even though usually not explicitly mentioned in the literature, some of the high-temperature organic syntheses of SPIONs lead to non-spherical nanoparticles, especially when larger sizes (about 20 nm diameter) are targeted.
Thus, a precise characterization of size and shape is extremely relevant for the biomedical functionality of SPIONs [45]. Yet typical size characterization techniques, such as transmission electron microscopy [38], yield only two-dimensional (2D) projections, and assumptions about the particle form are needed to extract size. Indirect techniques (such as X-ray [46] and neutron [37] scattering) rely on the ensemble average of the particles and thus describe only an average anisotropy. Additionally, small-angle scattering techniques require mathematical models to describe particle shape [47]. Here, we synthesized SPIONs using the standard high-temperature decomposition method [48] and imaged the SPIONs using direct three-dimensional (3D) imaging methods at consecutive time points during the thermal decomposition. A crucial aspect of this approach was the avoidance of shape models in the estimation of size (volume and surface area) and shape (sphericity and anisotropy) of each particle. To achieve this, we used geometrical techniques known as stereology [49]. Our results could be used to recognize bias in standard SPION size characterization methods and thereby help to improve SPION shape control, yielding better, more efficient materials for biomedicine.
SPIONs Synthesis
SPIONs were synthesized by thermally decomposing a previously synthesized iron oleate complex using a modified literature procedure [48]. The iron oleate complex was prepared by reacting iron chloride (FeCl3·6H2O) with sodium oleate (equivalents 1:3). It was then heated to 320 °C with a defined temperature ramp in the presence of oleic acid in trioctylamine (98%) and kept at the final temperature for 35 min. During the synthesis, 1 mL aliquots were extracted at different time points and quickly quenched. The resulting oleic acid-coated SPIONs were then separated by sequential centrifugations and re-dispersed in toluene.
Image Processing
The particles were automatically selected from the tomograms using the "Analyze particles" algorithm in FIJI [54] and each particle was cropped into a subtomogram of dimensions 40 nm × 40 nm × original depth. In order to avoid bias, particles that were part of an aggregate-identified by testing if the particles touched the edges of the box-were excluded. After exclusion of the aggregates, an average of 102 subtomograms (+/−30) could be established per time point (a total of 512 particles), each containing the 3D data of a single particle. Renderings of the data were performed by the Volume Viewer plugin.
For the automated analysis in silico, binarization of the data was required. Therefore, the threshold for each subtomogram was set using the standard IsoData algorithm, followed by a watershed step. Finally, values outside the bounding box of the central particle were removed, which reduced noise. This yielded an average of 85 particles per time point (±35). From a 2D profile, the equivalent spherical radius [55] was retrieved and then used to calculate the model-based volume. Inversely, model-free data could also yield the equivalent spherical radius: the radius of a particle with the same volume but shaped as a sphere.
For automated particle size measurements in 2D, the 'measure particles' tool of ImageJ (version 1.52u) was used. For the measurement in 3D, we used the 3D object counter plugin [56].
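To make the two size conventions concrete, the short sketch below contrasts the model-based route (treating a 2D profile as the projection of a sphere) with the model-free route (converting a measured 3D volume into an equivalent spherical radius). The function names and the example value are illustrative and not taken from the analysis scripts.

```python
import numpy as np

def model_based_radius(projected_area_nm2):
    """Model-based: assume the 2D profile is the projection of a sphere."""
    r = np.sqrt(projected_area_nm2 / np.pi)   # radius of the circular profile
    volume = 4.0 / 3.0 * np.pi * r**3         # volume inferred from that radius
    return r, volume

def equivalent_spherical_radius(volume_nm3):
    """Model-free: radius of a sphere with the same (measured) volume."""
    return (3.0 * volume_nm3 / (4.0 * np.pi)) ** (1.0 / 3.0)

# e.g., a particle with a measured volume of 1570 nm^3 has an equivalent
# spherical radius of about 7.2 nm
print(equivalent_spherical_radius(1570.0))
```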
Stereology, Sphericity and Anisotropy
The 'Grid' function [57] of FIJI was used to place geometric probes digitally and randomly on the tomographic slices [58]. The crossings of the test lines, known as the Cavalieri estimator [59], were used for volume estimates (area per point = 0.91 nm²), and the vertical lines were used as the Fakir probe [60]. The ImageJ Reslice routine was used to obtain slices in the XZ and YZ orthogonal dimensions. Counting was done manually in ImageJ. For all calculations, the length density (spacing between the probes) was 0.95 nm in all dimensions. The estimated surface and volume were used to calculate the sphericity as Ψ = π^(1/3)(6V)^(2/3)/A [61], with V the stereologically estimated volume and A the stereologically estimated surface area of the particle. The anisotropy was calculated from λx, λy, λz, the longest calipers along the X, Y and Z directions, using the expression of reference [62]. These quantities were retrieved by brute-force computing using the transform plugin [63] of ImageJ. λx (the maximum caliper length) was found by rotating the particle along the X axis (0-180°) and measuring the projection at each angle. Once found, the particle was rotated over this angle to align λx in the XY plane. The Y angle, used to assess preferred orientations, was then measured using the 'Measure particles...' tool in ImageJ. The maximum width (λy) was found in a similar fashion by rotating 180° along the Z axis, and again the particle was aligned in the XY plane. λz was then the maximum particle width after rotating 90° along the Z axis. This was repeated for all particles.
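The following minimal sketch illustrates the stereological quantities used above: a Cavalieri-type volume estimate from point counts, the Wadell sphericity from the estimated volume and surface area, and a simple caliper-based anisotropy measure. The anisotropy definition shown here (1 minus shortest/longest caliper) is an illustrative stand-in for the expression of reference [62]; the numeric grid parameters mirror the values quoted in the text.

```python
import numpy as np

def cavalieri_volume(points_per_slice, area_per_point_nm2=0.91, slice_spacing_nm=0.95):
    """Cavalieri estimator: grid points hitting the particle, times the area
    represented by each point, summed over slices, times the slice spacing."""
    return np.sum(points_per_slice) * area_per_point_nm2 * slice_spacing_nm

def sphericity(volume, surface_area):
    """Wadell sphericity: surface area of a sphere with the particle's volume,
    divided by the measured surface area (1 for a perfect sphere)."""
    return np.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0) / surface_area

def anisotropy(calipers):
    """Illustrative anisotropy from the three orthogonal caliper lengths
    (0 for an isotropic body); a stand-in for the paper's definition [62]."""
    c = np.sort(np.asarray(calipers, dtype=float))
    return 1.0 - c[0] / c[-1]

# Sanity check with a sphere of radius 10 nm
r = 10.0
print(sphericity(4 / 3 * np.pi * r**3, 4 * np.pi * r**2))  # -> 1.0
print(anisotropy([2 * r, 2 * r, 2 * r]))                   # -> 0.0
```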
Statistics
All statistical tests and graphs were produced in R (version 3.6.1) [64]. A Shapiro-Wilk test confirmed normality (p > 0.05) for all statistical datasets except the sphericity (Ψ) data. Therefore, the sphericity data are shown as boxplots, whereas all other data are shown as arithmetic means and standard deviations. Comparisons were preceded by a homoscedasticity test to ensure the variances were homogeneous. This was done with Fisher's F-test, using 5% as the significance level. If Fisher's F-test for homogeneous variance returned p > 0.05, both variances were assumed to be homogeneous, in which case a classic Student's two-sample t-test was run to test for possible differences. If Fisher's F-test returned p < 0.05, a heteroscedastic situation was assumed (i.e., the variances of the two groups were taken to be different), in which case Welch's t-test was performed.
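The analysis above was performed in R; purely as an illustration of the decision logic (normality check, F-test for equal variances, then Student's versus Welch's t-test), a Python sketch using scipy is given below. It is not the authors' code, and scipy has no built-in F-test for variances, so that step is written out by hand.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    a, b = np.asarray(a, float), np.asarray(b, float)

    # Shapiro-Wilk normality check (the paper reports p > 0.05 for all data
    # except the sphericity values)
    _, p_norm_a = stats.shapiro(a)
    _, p_norm_b = stats.shapiro(b)

    # Fisher's F-test for homogeneity of variances (two-sided)
    f_stat = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfn, dfd = len(a) - 1, len(b) - 1
    p_var = 2.0 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))

    # Student's t-test if variances are homogeneous, otherwise Welch's t-test
    equal_var = p_var > alpha
    t_stat, p_val = stats.ttest_ind(a, b, equal_var=equal_var)
    return {"normal_p": (p_norm_a, p_norm_b), "equal_var": equal_var,
            "t": t_stat, "p": p_val}
```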
The kernel density algorithm disperses the mass of the empirical distribution function over a regular grid of at least 512 points, convolves this approximation with a discretized version of the kernel using the fast Fourier transform, and then uses linear interpolation to evaluate the density at the specified points.
The difference of the means was calculated to compare the radii of gyration estimated from SAXS (mean ± confidence interval) and from TEM (per-particle data). If the lower bound of the 95% confidence interval of the difference of the means was below zero (i.e., 0 lies within the confidence interval), no significant difference was assumed.
Small-Angle X-Ray Scattering
Small-angle X-ray scattering (SAXS) spectra were recorded using a NanoMax-IQ camera (Rigaku Innovative Technologies, Auburn Hills, MI, USA). The suspensions were kept in a 1 mm capillary at room temperature during the measurements. The raw data were processed with background and all possible artefacts taken into account [65] (a description of all data reduction steps and sequence can be found in Table 1 of this reference [66]). The scattering spectra are presented as a function of the momentum transfer: Materials 2020, 13, 2018 where θ is the scattering angle and λ = 0.1524 nm is the photon wavelength. The radius of gyration R g was estimated in the Guinier regime, where a linear function was regressed against the linearized data defined by the natural log of the scattering intensity (I) against q 2 [47]: (4) Therefore, the natural log of the scattering intensity is proportional to , which is in fact the slope defined by the low-q SAXS data and therefore the linear regression then estimates R g .
Results
We used a modified version of Hyeon's iron oleate-based recipe [48] to synthesize monodisperse SPIONs with a target size of about 20 nm, using a high-boiling-point solvent to control the particle size [67,68]. Aliquots were taken at five different time points during the thermal decomposition process (see Figure A1): the first one upon reaching the 320 °C plateau (= onset), then at three successive intervals of ten minutes starting ten minutes after onset, and finally the fifth and last time point after cooldown of the particles.
The samples were then characterized by transmission electron microscopy (TEM) and small-angle X-ray scattering (SAXS). Tomographic tilt series [53,58] were acquired on the TEM, yielding not only the standard 2D profiles but also a 3D representation in silico of each single particle [27,69]. After tomographic reconstruction [52], at least 55 particles per time point were available for quantification. The time points in all figures are denoted by red minutes on a yellow clock face, with the blue circle denoting particles measured after cooling down the solution.
The particles appear as dark objects on a bright background in Figure 1. Each panel in Figure 1 shows a typical raw dataset at the designated time point. The orthogonal XY and YZ planes are shown: the position of the YZ plane is denoted by the white arrowheads in the XY plane. At each time point, three particles were marked by yellow, violet and orange boxes in the XY plane; these particles were 3D rendered below each panel. They qualitatively show the difference in size and shape within this population of particles.
To characterize their size, two different quantification methods were used, both relying on the same dataset but with different assumptions and post-processing steps. Direct quantification of the entire particle's volume without image processing was possible by the stereological quantification method known as the Cavalieri estimator [70,71] (the readers should consult reference [58] for an extended explanation, including examples). The alternative was the standard in the literature: automated analysis of 2D profiles in silico after a binarization step. This quantification assumes that the particle is spherical to estimate both its radius and volume.
Particle Volume
A wide variety of particle sizes could be observed at the onset of the 320 °C plateau (Figure 1A, Figure 2, Table 1). The majority of these particles had a volume below 2000 nm³. The mean volume was 1570 nm³, corresponding to a mean equivalent spherical radius of 6.86 nm (model-free); the model-based values were 1921 nm³ and 7.11 nm, respectively. There were a few large outliers of more than 5000 nm³, which were the source of the very high particle volume dispersity (Ð): 65.2%. The density kernel plots in Figure 2A are highly skewed, and all outliers were larger than the mean-sized particle. There was no statistically significant difference between the two quantification methods.
Ten minutes later, the nanocrystals had increased almost eightfold in volume (Figure 1B, Table 1, Figure 2B): the mean volume of the particles was 12,085 nm³ (model-free) and 10,921 nm³ (model-based), a significant difference (p < 0.05). The model-free quantification gave an equivalent spherical radius of 14.1 nm, compared to 13.6 nm for the model-based result. The volume increase represents an average growth of 17.52 nm³ per second, and the increase is highly significant compared to the first time point (p < 0.001). The smaller, younger cores catch up while the larger ones grow more slowly due to a less favorable volume-to-surface ratio, yielding a decrease in polydispersity (though still rather high at 32.6%). Twenty minutes after reaching the 320 °C plateau, the particles had further increased in volume (Figure 1C, Table 1, Figure 2C), but at much lower growth rates than previously measured (about 3.73 nm³ per second on average): the mean volume was 14,325 nm³ (model-free) and 13,133 nm³ (model-based), corresponding to equivalent spherical radii of 15.0 nm and 14.5 nm, respectively. Ð had dropped to 20.8%, the lowest since reaching the 320 °C plateau, and these levels of variability were maintained for the rest of the reaction. The kernel density plot of the volume (Figure 2C) reveals near-symmetry: most particles had volumes within a non-significant range of the median, and outliers are found equally on both sides. Significant statistical differences were again found between the volume quantification of the model-based and model-free techniques (p < 0.05).
The system continued this close-to-linear growth of 3-4 nm³ per second on average, and the particles were always significantly (p < 0.001) larger than at the previous time point, except between 30 min and cooldown. After 30 min, mean particle volumes of 16,239 nm³ (model-free) and 13,979 nm³ (model-based) were reached, again a significant difference (p < 0.01), corresponding to equivalent spherical radii of 15.6 nm and 14.8 nm, respectively. After the reaction, the particles were allowed to cool down. A sample of the cooled-down population showed slightly lower volumes in both quantification methods: mean volumes of 15,324 nm³ (model-free) and 12,975 nm³ (model-based), or equivalent spherical radii of 15.3 nm and 14.4 nm, respectively.
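The growth rates quoted above can be reproduced directly from the mean model-free volumes and the 10-minute (600 s) sampling interval; the short check below uses the values from the text.

```python
import numpy as np

# Mean model-free volumes at onset, 10, 20 and 30 min (nm^3), from the text
mean_volumes_nm3 = [1570.0, 12085.0, 14325.0, 16239.0]

growth_rates = np.diff(mean_volumes_nm3) / 600.0   # nm^3 per second between samples
print(growth_rates)  # ~ [17.5, 3.7, 3.2], matching the ~17.52 and ~3.73 nm^3/s above
```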
The model-based quantification systematically underestimated the volume, and consequently the size, of the particles compared to the model-free quantification, and the bias became more significant at later time points, including after cooling down.
The same particle batch was analyzed by small-angle X-ray scattering (Figures 3, A2 and A3). Only the radii of gyration R_g were retrieved, to avoid biasing the X-ray scattering analysis through the use of mathematical models for the particle shape. The radius of gyration is the orientationally averaged root-mean-square distance to the center of mass of the particle and has the advantage that it is entirely model-free and can be retrieved both from the SAXS data and, through image processing, from the TEM data (Figure 3). There was no significant difference between the values of the radii of gyration estimated from SAXS and those estimated from TEM (Figure 3): the lower bound of the confidence interval of the difference of the means is always lower than 0 (the ∆ values below each time point in Figure 3).
Figure 3. Comparison of the radii of gyration between SAXS and TEM measurements (model-free data). The lower bound of the 95% confidence interval of the difference between the means (denoted as ∆) is always negative, which means there are no significant differences between the SAXS and TEM measurements. Significant differences in radius of gyration between time points were observed between the onset and the 10 min time point and between the 20 min and 30 min time points. The larger error in the TEM-based quantification originates from smaller sample sizes and a different counting mode (TEM counting versus SAXS fit). *** means p < 0.001.
Sphericity, Anisotropy and Preferred Orientation
Shape variation can only be acquired using the full description from the entire 3D tomographic datasets of the object, not by making assumptions based on 2D profiles. Here, we use sphericity Ψ and the anisotropy to assess shape variation.
Sphericity is a unit-less measure between 0 and 1 that describes how closely an object resembles a sphere (see Materials and Methods for the equation) [61]. Figure 4A shows the distribution of Ψ, with a value of 1 matching a perfect sphere, shown by the dotted line. The median sphericity Ψ was the highest measured (0.92) at the onset, and the 75th percentile was near 1 (0.99). Ten minutes later, Ψ had dropped significantly (p < 0.001) to 0.83. The particles stayed approximately at these values for the next ten minutes (see Table 1 for the mean values of Ψ). At the 30 min time point at 320 °C, Ψ had dropped further to 0.55, again significantly lower (p < 0.001) than ten minutes before. The values of Ψ did not change significantly anymore during cooldown. This means that the particles started near-spherical but departed from that shape to become ellipsoidal. There are two significant jumps in sphericity: between onset and 10 min, and between 20 min and 30 min. Figure 4. (A) There is a significant drop (p < 0.001) in sphericity of the particles between the onset of the 320 °C plateau and ten minutes later. A second drop (again p < 0.001) is observed between 20 min and 30 min after onset of the 320 °C plateau. The red circles denote outliers. (B) shows the anisotropy of the particles at different time points (0 = isotropy, denoted by the dotted gray line). Again, a significant difference (p < 0.05) is seen between the situation at the onset of the 320 °C plateau and ten minutes later. (C) shows the development of the three orthogonal caliper lengths during the thermal decomposition. Plotted is the mean caliper of the longest, median and shortest axis. Note: for readability, the error bars are slightly shifted. * means p < 0.05; *** means p < 0.001.
The anisotropy data confirmed these observations. The anisotropy [62] describes the uniformity of the object and has a magnitude of 0 for an isotropic body (a perfect sphere, shown as the dotted line in Figure 4B) but increases for anisotropic objects (see Materials and Methods for the equation). A significant (p < 0.05) difference in anisotropy magnitude was found between the first time point (onset of the 320 °C plateau) and ten minutes later, in accord with the sphericity data. Although noisier, the anisotropy follows the same trend as Ψ: the particles started practically isotropic and grew increasingly anisotropic during the course of the reaction. The anisotropy was highest for the later stages and the cooled-down particles, meaning these were the most anisotropic particles.
Two orthogonal axes were at all time points comparable in length: the difference between the largest caliper and the median caliper was not significant ( Figure 4C). However, the third orthogonal axis, the shortest of the three orthogonal dimensions was significantly smaller (p < 0.001, red line in Figure 4C) and the difference increased with increasing time.
The direction of the maximum caliper was calculated for each particle (see Figure A4 for a graphical representation of the methodology) and is represented by the two angles ϕ (rotation about the X axis) and θ (rotation about the Y axis). The angle ϕ had an optimum at all time points in the broad vicinity of 90° (Figure 5A). Most significantly, after rotating each particle by ϕ (i.e., aligning the maximum anisotropy component in the XY orthogonal plane), a preferred orientation of the particles on the TEM grid was observed: there was an aversion for θ = 90° (the rotation in the Y-plane), with preference peaks for θ at 69° and 111° (Figure 5B). The smaller particles at the onset (red in Figure 5) had the narrowest ϕ-rotation peak, which also aligned closest to 90°. The same was observed for θ: the earlier stage had narrower peaks and was found closer to the 90° minimum. The entire scatter diagram can be consulted in Figure A5.
The Thermal Decomposition Process
Unlike the controlled biomineralization of magnetotactic crystals in magnetosomes, the onset of the 320 °C plateau is marked by nucleation events, which is a stochastic process [72]. Particle nuclei spawn into existence at different time points, with growth limited only by diffusion [35]. Particles are present at different stages of their growth, as reflected by the very high volume polydispersity. A near-spheroidal shape is also characteristic of this stage, as the number of crystalline facets was still limited. This was seen by sphericity values near 1 and an anisotropy close to 0. At no other stage do the particles attain such a near-spherical shape. Indeed, the initial growth of these nuclei is well understood, and growth occurs isotropically along distinctive facets [72,73]. When the nuclei reached the growth phase, their volume increased rapidly. This is marked by strong significant differences between the onset of the 320 °C plateau and 10 min later: volume, equivalent spherical radius and radius of gyration are all significantly different between these time points. Then, volume variability drops as larger particles suffer from less favorable volume-to-surface ratios. The increased surface area allowed for the adsorption of more molecules on the surface, and this governs, together with the temperature, the free interfacial energy density that triggers dynamic changes in the nanocrystal shape [74]: the start of shape variation. Whereas the particles at the onset of the 320 °C plateau were nearly spheroidal, morphologies appeared that can be described as octahedral, coffin-like or bean-like in the 2D profiles but appear as oblate spheroids when considered in 3D. This morphology is a reflection of the solvent-dependent preferential binding of facets [35]. Dramatic changes in magnetic moments were observed 30 min after reaching the 320 °C plateau [75], which could be attributed to the increase in shape anisotropy. Indeed, our data show a considerable drop in sphericity and an increase in anisotropy at the same time point.
Measurement Disagreements
The disagreement between the two size quantification methods means either the model-free quantification overestimated the size and volume of the particles or the model-based quantification method underestimated those.
Using the radius of gyration, another model-free size quantification technique, the SAXS data agree with the model-free TEM measurements. Hence, the assumption of spherical models was flawed. The mismatch between the two methods increased with increasing anisotropy: relying only on modelling of 2D profiles will lead to ambiguous interpretations when dealing with anisotropic particles.
However, tomographic reconstructions also have associated artefacts: the data acquisition covers a limited tilt angle range from −60 • to +60 • , resulting in missing information between −60 • and −90 • as well as 60 • and 90 • . This so-called missing wedge effect emerges as poor resolution in the XZ dimension after reconstruction, visible especially in the polar regions and apparent as a slight elongation. This missing wedge effect is not relevant in 2D profiles, but may play a role in the overestimation of the particle's volume. Since the tomographic data is convoluted with the missing wedge effect the outcome of the quantification will be biased, i.e., overestimated due to the elongation along the Z axis. However, our data suggest that the missing wedge effect was not significantly distorting the interpretations in this setup because (A) all particles were recorded using the exact same tilting protocol, which produces the same missing wedge effect strength in all particles. Hence, the missing wedge alone cannot explain the observed increase in anisotropy; (B) the model-free TEM data was confirmed by SAXS data at all time points; (C) more spherical particles (at the onset of 320 • C) suffered less from the preferred orientation than more anisotropic particles observed at later stages as shown by the narrower peaks and the alignment with the 90 • angle. (D) the discrepancy between the model-based method and model-free method grows with increasing size. Therefore, we consider the impact of the missing wedge effect not to be significant on the relative trend over the measured time points, although it may shift the absolute values of the anisotropy and sphericity measures.
Model-Based Versus Model-Free Quantification
Our data demonstrate the inaccuracies of model-based quantification methods, such as conventional TEM, for characterizing anisotropic nanoparticles. The source of the problem with model-dependent interpretations is the heterogeneous shape of the particles in three dimensions. Thus, assuming a single shape model is not appropriate to properly describe these ensembles. Conventional transmission electron microscopy is single-particle based and can portray a multitude of shapes in a mixture, but it produces a 2D projection of each 3D object, thereby neglecting the entire axial dimension. This again creates the need for model-based interpretations and may lead to biased size analysis. TEM combined with tomography is a procedure that reconstructs anisotropic objects in their three dimensions in silico, with isotropic voxels at sub-nanometer resolution [27,76]. Based on these data, we found that SPIONs display a preferred orientation on the TEM support film: tilted by around 20-25° along the longest axis. It is unclear why exactly these positions are preferred, but preferred orientations are a known and delicate topic in single-particle reconstruction [77][78][79]. The volume of an oblate spheroid tilted along such angles would be partially hidden in a 2D projection, which leads to an underestimation of the volume by about 6-9% upon assuming a spherical shape. Indeed, a correction factor of 7% nullifies all significant differences between the volume quantification methods for these particular objects.
On the one hand, this means that this bias in the TEM characterization is not of concern for most biomedical SPION-related applications. A possible exception would be applications heating a ferrofluid to achieve hyperthermia [80], where it was shown that an error of 5% can halve the temperature rise rate in magnetite nanoparticles [81]. On the other hand, there are nanotechnological applications, such as plasmonic sensing, where the error in precisely estimating the size and the shape is truly unforgiving [58,82].
Conclusions
In conclusion, we assessed the accuracy of size and shape characterization of SPIONs by transmission electron microscopy. To that end, we applied 3D TEM imaging techniques to fully describe, i.e., in all three dimensions, growing SPIONs during the thermal decomposition reaction. The heterogeneity of the samples could be statistically portrayed, and from the analysis it could be observed that SPIONs are almost spherical at onset but then grow increasingly anisotropic in shape. The preferred orientation of the objects on TEM support films renders spherical shape assumptions void, but the effect is insufficient to currently perturb the structure-function relationship of SPIONs.
Figure A4. Clarification of how ϕ and θ, the angles of rotation around the X and Y axes, were retrieved. Left: a 3D object is projected in 2D (XY plane) but recorded at many different tilt angles (= a tilt series). Middle: based on a series of 2D projections in the XY plane at different angles, tomographic algorithms can reconstruct the Z dimension. Right: a 90° rotation around the X axis (ϕ = 90°) yields a side view of the particle. It is in this XZ plane that θ can be observed and measured.
Figure A5. Scatter diagram of the 3D orientation of each particle on the TEM support. The time points are color-coded as shown on top. A 90° Y angle is avoided, whereas the X angle broadly centers around 90°. | 9,312.8 | 2020-04-25T00:00:00.000 | [
"Materials Science"
] |
Defining and generating current in open quantum systems
Defining current in open quantum systems can be problematic: no general description exists for the current of operators not conserved by the system-environment interaction. We fill this gap by deriving a general formula for probability current on an arbitrary graph, universally applicable to any system-environment interaction. We furthermore provide a representation of the average current whereby the operator is first measured weakly, then strongly. When the dynamics is of Lindblad form, we derive an explicit formula for the current. We exemplify our theory by analysing a simple Smoluchowski-Feynman-type ratchet, operating deep in the quantum regime. Consisting of only two interacting particles, each moving on a three-site ring, the ratchet displays several novel quantum effects, such as tunnelling-induced current inversion, which we relate to the onset of quantum contextuality, and steady-state entanglement generation in the presence of an arbitrarily hot environment. The role of spatial symmetry in current generation is also studied.
I. INTRODUCTION
Transforming thermal, "disordered" motion into an "ordered form" is the purpose of heat engines and thermal motors [1]. In order to operate a heat engine, one only needs to create non-equilibrium conditions, e.g., in the form of two thermal baths at differing temperatures. Putting a thermal motor in motion, however, requires some kind of symmetry breaking in addition to thermal disequilibrium [2][3][4][5]. In the classical regime, thermal motors are rather well-understood [2][3][4][5][6][7][8][9][10][11][12]. For instance, autonomous motors propelled by temperature gradient and broken spatial symmetry have been investigated in the context of Brownian motors [9,10,12], with a design similar in spirit to the original Feynman ratchet [2].
On the other hand, despite their fundamental and technological importance, only now are quantum thermal motors receiving due attention in the literature [13][14][15][16][17][18]. Motivated by [12], in this work we introduce a minimalistic model of an autonomous quantum rotor, consisting of two particles with 3-dimensional Hilbert spaces. Although we will work in the approximation of weak coupling to the baths, our analysis starts from a microscopic Hamiltonian of the global (rotor plus baths) system, in order to properly account for the symmetries (and their breaking). The model under consideration is a two-particle 3-component Potts model (q = 3) [19], which is a "higher-spin" (spin-1, in our case) generalization of the Ising model, complemented with quantum tunnelling. Alternatively, one can think of two atoms on an optical lattice [20], each of which is confined to 3 lattice sites.
Here we restrict to the 3-position model as it is the minimal case exhibiting the symmetry breaking necessary for ordered motion to occur. Importantly, our motor does not require three-body interactions [13,15,17] in order to operate.
Despite the considerable attention quantum walks and quantum transport [21][22][23][24][25][26][27][28][29] in open quantum systems have received, the problem at hand, i.e., defining local particle current in a strongly interacting system undergoing a global dissipative evolution, has never been studied in the literature before. We fill this gap by deriving a surprisingly simple, yet universal, formula for particle current on a general quantum network, applicable well beyond the scope of the present paper.
Using standard techniques from the theory of open quantum systems, we fully characterize the nonequilibrium steady states of the rotor. This allows us to make a rigorous connection between the symmetries of the rotor's Hamiltonian and the main transport properties of the system: particle current and heat flux. Moreover, we find that an interesting current inversion phenomenon takes place: despite the fact that tunnelling in our model is rotationally symmetric and thus does not favour any specific direction of particle propagation (see Fig. 1 and Eq. (3)), the particle current can change its direction as the intensity of tunnelling is varied.
In addition to generating steady current, our machine can perform other useful tasks. Enabled by symmetry and fueled by local coherence, our machine is capable of converting uncorrelated states of the rotor into entangled steady states, for arbitrarily high temperatures of the baths. Moreover, in the presence of sufficiently strong tunnelling, powered by the temperature difference, the machine can "charge" the initially "empty" rotor with extractable work, where by empty we mean that no work can be extracted from the state.
FIG. 1. The rotor consists of two partitions a and b, each consisting of three locations a particle (represented by a green circle) can occupy. The particles can tunnel between sites at rate τ. The interaction between the particles is illustrated by the connecting green line. The blue and red semicircles illustrate the fact that each partition is coupled to its own thermal bath.
II. ROTOR
Our model rotor consists of two subsystems, a and b, each living in a 3-dimensional Hilbert space. These can be thought of as two distinguishable spin-1 particles. The self-Hamiltonians of these particles are taken to be zero in order to ensure translation invariance and periodicity: the state space of each particle is simply three identical levels of the same energy. We then add a classical interaction potential of Potts form [19] between the particles, which corresponds to two spins pointing to one of the 3 equispaced directions specified by the angles 2πj/3. Such a potential mimics a dipole-dipole interaction. It can be immediately seen that the potential is periodic in each coordinate in the sense that U(j_a + 3, j_b) = U(j_a, j_b + 3) = U(j_a, j_b). However, unless φ = 0, the potential breaks the exchange symmetry in that swapping the particles bears an energetic cost: U_{j_a,j_b} ≠ U_{j_b,j_a}. The breaking of this type of symmetry is responsible for the occurrence of transport, as detailed in Sec. V.
The quantum analogue of this discrete stochastic classical model is described by a Hamiltonian which, in the basis of vectors |j_a, j_b⟩, has matrix elements set by the classical potential, where |j⟩ is the state of the particle at position j. Let us now adopt the view that the quantum states are locations in space (see Fig. 1 for an illustration) to which the particle is confined. One can think of, e.g., an atom in an optical lattice [20]. Since the confining potential cannot be infinite, tunnelling between the sites will be present. We add tunnelling to the Hamiltonian in the form of hopping terms, where I is the identity operator in the corresponding Hilbert space. Here, the index is cyclic in the sense that |k + 3⟩ ≡ |k⟩.
In the first order in τ , such a Hamiltonian describes tunnelling subject to the following kinematic constraints: (i) there can be no tunnelling between a and b and (ii) the particles jump only one by one and never simultaneously. Note that simultaneous jumps are present in the higher orders in τ .
III. INTERACTION WITH THE ENVIRONMENT
Let us now take the next step and add dissipation to the picture. More specifically, we are interested in a setup where the spin a is coupled to a bath at temperature T_a and spin b is connected to a bath at temperature T_b. In the limit of weak interaction with the environment, the standard way of describing the evolution of a quantum system is by means of the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) master equation (ME). There are two standard approaches towards prescribing a GKSL ME to an open quantum system's dynamics. In one approach, one derives an equation for the state of the system from the exact dynamics of the system-plus-environment composite under the Born-Markov and secular approximations [30]. In the case of a multipartite system, the equation takes into account the "global" properties of the system's Hamiltonian even if the environment acts locally on the subsystems. The other approach, often referred to as the "local" GKSL, ascribes local dissipators to the subsystems without taking into account the global properties of the system's Hamiltonian. This is an incomplete description, and generally the thermal state is not a steady-state solution of the local ME when the temperatures of the baths are equal, which brings about thermodynamically inconsistent behaviour [31-35], such as spontaneous heat flow against the temperature gradient [31] or nonzero heat flow between two thermal baths at the same temperature [34,35]. This is contrasted with the global approach, where the dissipation operates on the global Hamiltonian and accounts for the interaction in a thermodynamically consistent manner. It is important to note, however, that in the weak coupling regime of some models [36,37], the local ME leads to a steady state closer to that obtained via an exact solution of the system as a whole, including the baths. Moreover, as opposed to the global ME, it recovers the correct classical dynamics (see below).
A. Local master equation
The classical analogue of the system [12] consists of two three-state particles interacting via the potential in Eq. (1). In order for local detailed balance to be well-defined, the particles are not allowed to jump simultaneously. Denoting the probability of particle a being in state j_a and particle b in state j_b by p_{j_a,j_b}, we can write the general ME in the form of Eq. (5), where W_{j_a,j_b|j'_a,j'_b} are the transition rates satisfying the local detailed balance conditions of Eq. (6). This picture translates directly into a local GKSL ME in the quantum analogue without tunnelling. Indeed, let us write the quantum ME of Eq. (7), where S[L][ρ] ≡ LρL† − (1/2){L†L, ρ} (with {·,·} denoting the anticommutator) and L_{j_a,j'_a,j_b} = |j_a, j_b⟩⟨j'_a, j_b|, L_{j_a,j_b,j'_b} = |j_a, j_b⟩⟨j_a, j'_b|. This is the standard quantum ME used in the theory of quantum random walks [27,28]. When the Hamiltonian is classical (i.e., H = H_cl), the steady-state solution of Eq. (7) is a diagonal density matrix whose diagonal is the steady-state solution of the classical ME (given by Eq. (5)). Moreover, if the initial density matrix is also diagonal, the probability vector composed of its diagonal elements will evolve according to Eq. (5).
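A minimal numerical sketch (not the authors' code) of the local GKSL structure described above, for two 3-site particles with a diagonal ("classical") Hamiltonian, is given below. The explicit form of the potential and the Metropolis-type rates are illustrative assumptions; they merely satisfy local detailed balance, and only the particle-a dissipator is written out.

```python
import numpy as np

d = 3
def ket(j):
    v = np.zeros(d); v[j % d] = 1.0
    return v

K, phi, beta_a, g = 1.0, 0.3, 1.0, 0.1                      # assumed parameters
U = np.array([[K * np.cos(2 * np.pi * (ja - jb) / 3 + phi)  # assumed Potts-like potential
               for jb in range(d)] for ja in range(d)])
H = np.diag(U.flatten())                                    # diagonal in the |ja, jb> basis

def rate(dE, beta):
    """Metropolis-type rates: W(i->f)/W(f->i) = exp(-beta*dE), i.e. local detailed balance."""
    return g * np.exp(-beta * max(dE, 0.0))

def local_rhs(rho):
    """-i[H, rho] plus the particle-a dissipator; the particle-b part is analogous."""
    out = -1j * (H @ rho - rho @ H)
    for jb in range(d):
        for ja_new in range(d):
            for ja_old in range(d):
                if ja_new == ja_old:
                    continue
                # jump operator |ja_new, jb><ja_old, jb| moves only particle a
                L = np.outer(np.kron(ket(ja_new), ket(jb)), np.kron(ket(ja_old), ket(jb)))
                w = rate(U[ja_new, jb] - U[ja_old, jb], beta_a)
                out += w * (L @ rho @ L.T - 0.5 * (L.T @ L @ rho + rho @ L.T @ L))
    return out
```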
As mentioned above, the local GKSL ME may not be thermodynamically consistent. In our case, it becomes problematic when τ ≠ 0. Indeed, when β_a = β_b = β, the steady-state solution of Eq. (7), ρ_st^loc, is not a thermal state at inverse temperature β. In fact, ρ_st^loc − (1/Z) e^{−βH} ∝ τ (where Z = Tr e^{−βH}), so the deviation cannot be considered small.
B. Global master equation
In order to derive a consistent global GKSL equation, we will start with a microscopic Hamiltonian and use the Born, Markov, rotating-wave, and secular approximations [30]. This is a standard procedure [30], therefore we will only briefly summarize the relevant formulas.
The total system Hamiltonian is standardly chosen to be of the form of Eq. (8), where A_α and B_α are operators living, respectively, in the Hilbert spaces of the rotor and the baths, and H_a and H_b are the Hamiltonians of the baths. As a generic example, one can think of bosonic baths composed of a large number of harmonic oscillators [30]. The resulting master equation will then be Eq. (9). Here, the "jump operators" are given by Eq. (10), where E_m is an eigenvalue of H and Π_{E_m} is the eigenprojector corresponding to it. Note that the ω's are all the gaps of the system's bare Hamiltonian H. Note also that, as a further approximation, we omit the Lamb shift correction to the Hamiltonian in Eq. (9) [30]. Being composed of the operators Λ_α(ω) and commuting with H [30], the Lamb shift term has the same symmetry properties as H, which makes this omission "safe" (especially since we are working in the weak system-bath coupling regime, and the Lamb shift is a second-order effect [30]). The bath correlation functions γ_α(ω) ("transition rates", in the language of the preceding subsection) are given by γ_α(ω) = ∫_{−∞}^{∞} ds e^{iωs} ⟨B_α†(s) B_α(0)⟩ and, for a generic bosonic bath living in a single spatial dimension, amount to [30] γ_α(ω) = g|ω| / (1 − e^{−β_α|ω|}) × { e^{β_α ω} when ω ≤ 0; 1 when ω > 0 }.
These functions satisfy the detailed balance condition γ_α(−ω)/γ_α(ω) = e^{−β_α ω}, which, combined with Eq. (10), guarantees that, when β_a = β_b, the thermal state is a steady solution of the global ME given by Eq. (9) [30]. This means that, unlike the local ME, the global ME is thermodynamically consistent. We will use γ_α(ω) also when dealing with the classical and local GKSL MEs, which means that the transition rate from, say, j_a j_b to j'_a j_b will be given by the corresponding γ_a. We emphasize that Eq. (9) is qualitatively different from the local GKSL ME (Eq. (7)) in that the jump operators now operate between the eigenspaces of H, which are not diagonal in the "location" basis. Moreover, since the spectrum of H contains degeneracies and some of the gaps may be repeated, the jump operators (Eq. (10)) will generally not be of rank 1 (as the operators L are) and will have non-zero off-diagonal elements. Moreover, the global ME does not coincide with the local ME even when the inter-particle interaction, which is controlled by K, goes to zero. Nor does it in the "classical" limit τ → 0. Indeed, the jump operators of the global ME, due to their dependence on the spectrum of H (which is degenerate for all values of the parameters), have a coherence structure that is different from that of the jump operators of the local ME. The discrepancy between the two descriptions is in fact exacerbated when K or τ go to zero, which is due to the increase in the degeneracy of H. Given that there is no interaction between the particles when K ≪ 1, it is obvious that it is the description provided by the global ME that fails. This is related to the fact that, as K decreases, so do some of the gaps in the spectrum of H, which, for small enough K, leads to the breakdown of the secular approximation vital for deriving the global ME [30,36].
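A short numeric check of the bath rates and the detailed-balance condition quoted above is given below; the coupling constant g and the ω → 0 limit g/β are assumptions of this sketch.

```python
import numpy as np

def gamma(omega, beta, g=1.0):
    """Bosonic bath rates quoted in the text: g|w| / (1 - exp(-beta|w|)),
    multiplied by exp(beta*w) for w <= 0."""
    if abs(omega) < 1e-12:
        return g / beta                      # assumed omega -> 0 limit
    base = g * abs(omega) / (1.0 - np.exp(-beta * abs(omega)))
    return base * (np.exp(beta * omega) if omega < 0 else 1.0)

# Detailed balance check: gamma(-w)/gamma(w) = exp(-beta*w)
beta, w = 0.7, 1.3
print(gamma(-w, beta) / gamma(w, beta), np.exp(-beta * w))   # the two numbers agree
```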
IV. CURRENT
Let us now turn to the problem of defining particle current on each individual side of the rotor. In the classical case (Eqs. (5)-(6)), the local particle current on side a, between neighbouring sites j_a and j'_a (note that, since there are only three states and the system is periodic, all sites are neighbours), is given by Eq. (12). In the steady state, it is straightforward to show that this current is the same between every pair of neighbouring sites. In the quantum regime, when working with the local GKSL ME (Eq. (7)), the current is given by Eq. (12) whenever the Hamiltonian is diagonal (for instance, when it is equal to H_cl). This is due to the fact that the quantum dynamics is exactly the same as the classical stochastic dynamics. However, whenever the global GKSL ME is used or the state is not diagonal (e.g., when it is the steady solution of the dynamics with a non-diagonal system Hamiltonian), defining the current is less straightforward. Indeed, although both the probabilities to find the particles at given sites and the transition rates between these sites are always well-defined [21,26,27,29], defining the current in a state with non-zero coherences via Eq. (12) would be to ignore the coherent part of that state. Instead, in such situations, standard quantum transport theory defines the current through the continuity equation. For example, if one measures transport in terms of particle velocity, then one considers the continuity equation for the position operator in the Heisenberg picture: dx/dt = −div J [23-25, 38, 39]. Following that logic, let us introduce the position operator of particle a at location j_a, x_{j_a} = |j_a⟩⟨j_a| ⊗ I_b, where I_b is the identity operator acting on the Hilbert space of b. Now, in order to maintain full generality and simplify notation, let us write the ME in the general GKSL form. Then, in the open-system Heisenberg picture, the position operator evolves according to Eq. (15), where the dot over an object denotes its time derivative.
First of all, we notice immediately that the Hamiltonian part can be represented as in Eq. (16), where (17) is the tunnelling-induced current. This object is essentially the discretized version of the standard probability current associated with the Schrödinger equation, (1/(2mi)) (Ψ†∇Ψ − (∇Ψ†)Ψ) (where Ψ is the state in the "position" representation), and the associated current operator is (1/(2m)) {p, |x⟩⟨x|} [40] (cf. Eq. (26)). A similar operator to (17) is also used to describe the tunnelling spin current in spin-1/2 lattice systems [25,38,41].
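For concreteness, the sketch below builds the tunnelling-induced current operator for a single particle hopping on a three-site ring, in the spirit of Eq. (17), and checks the continuity relation numerically. The hopping sign convention (−τ) and the overall prefactor are assumptions of this illustration; the paper's Eq. (17) may differ by such conventions.

```python
import numpy as np

d, tau = 3, 0.7
ket = np.eye(d)
x = [np.outer(ket[j], ket[j]) for j in range(d)]            # position projectors
H = -tau * sum(np.outer(ket[(j + 1) % d], ket[j]) + np.outer(ket[j], ket[(j + 1) % d])
               for j in range(d))                           # assumed hopping Hamiltonian

def J(j):
    """Current operator from site j to site j+1 for the Hamiltonian above."""
    jp = (j + 1) % d
    return 1j * tau * (np.outer(ket[jp], ket[j]) - np.outer(ket[j], ket[jp]))

# Continuity check: i[H, x_j] = J(j-1) - J(j) for every site j on the ring
for j in range(d):
    lhs = 1j * (H @ x[j] - x[j] @ H)
    rhs = J((j - 1) % d) - J(j)
    assert np.allclose(lhs, rhs)
```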
Let us turn to the dissipative part of the RHS of Eq. (15). Keeping in mind the trivial identities Λ_λ = Λ_λ I and Λ†_λ = I Λ†_λ I, we use the identity resolution (18), which takes into account the periodicity condition j + 3 ≡ j and holds for any j_a, to represent D*[x_{j_a}] as a sum of terms of the form x Λ†_λ x Λ_λ x evaluated at triples of sites. Now, as can be checked by direct inspection, all these terms can be rearranged into a compact form, where the symbol (j_a, j'_a, j''_a) is introduced for ease of notation and stands for the corresponding term x_{j_a} Λ†_λ x_{j'_a} Λ_λ x_{j''_a}. Using the identity resolution (18) again, we can further simplify the above expression. Moreover, reading from Eqs. (15) and (17), it is a simple exercise to show that the total current operator J_{j_a→j_a+1} can be written as Eq. (25). Note that this expression is in the Heisenberg picture; one can return to the Schrödinger picture by substituting the operator derivatives from Eq. (15) and considering all operators constant. Specifically, the average current at the moment of time t is given by Eq. (26), with the two current contributions defined in, respectively, Eqs. (17) and (25), and with all operators in the Schrödinger picture.
Note also that the current defined by Eq. (26) trivially satisfies the continuity equation $dx_{j_a}/dt = -\mathrm{div}\,J = J_{j_a-1\to j_a} - J_{j_a\to j_a+1}$ for any dynamics that respects $x_{j_a-1} + x_{j_a} + x_{j_a+1} = I$ for all times. This is a very general condition encompassing all conceivable physically meaningful dynamical maps. Indeed, in the Schrödinger picture, it is equivalent to the dynamical map being trace-preserving. In particular, this means that Eq. (26) describes the current also when the dynamics is non-Markovian. In such cases, however, the dependence of the state at a given time on (generally, all) the preceding states [30] will generically render the use of Eq. (26) impractical.
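A minimal numerical sketch may help make the continuity-equation definition concrete. The snippet below is an illustration only: a single particle on a 3-site ring with a nearest-neighbour hopping Hamiltonian (not the paper's two-particle rotor), with hypothetical parameter values. It builds the site projectors and the standard tunnelling-current operators and checks that the discrete continuity equation holds for unitary dynamics.

```python
import numpy as np

# Illustrative 3-site tight-binding ring for a single particle (an assumption for
# compactness): H = -tau * sum_j (|j+1><j| + |j><j+1|), with periodicity j+3 = j.
N, tau = 3, 0.6
H = np.zeros((N, N), dtype=complex)
for j in range(N):
    H[(j + 1) % N, j] -= tau
    H[j, (j + 1) % N] -= tau

def projector(j):
    """x_j = |j><j|, the projector onto site j."""
    P = np.zeros((N, N), dtype=complex)
    P[j, j] = 1.0
    return P

def bond_current(j):
    """Tunnelling-current operator between sites j and j+1 (one sign convention):
    J_{j->j+1} = i*tau*(|j+1><j| - |j><j+1|)."""
    J = np.zeros((N, N), dtype=complex)
    J[(j + 1) % N, j] = 1j * tau
    J[j, (j + 1) % N] = -1j * tau
    return J

# Check the discrete continuity equation d<x_j>/dt = <J_{j-1->j}> - <J_{j->j+1}>
# for purely unitary dynamics, where d<x_j>/dt = Tr(rho * i[H, x_j]).
rng = np.random.default_rng(0)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

for j in range(N):
    lhs = np.trace(rho @ (1j * (H @ projector(j) - projector(j) @ H))).real
    rhs = (np.trace(rho @ bond_current((j - 1) % N)) - np.trace(rho @ bond_current(j))).real
    assert abs(lhs - rhs) < 1e-12
```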
The remarkably simple Eq. (26) for the local current of a general compound system undergoing general dissipative evolution, under the influence of its own (possibly strongly interacting) Hamiltonian and the environment, is the central result of this section. To the best of our knowledge, this formula has not been reported in the literature before. Moreover, quantum weak measurements [42,43] offer a neat interpretation of Eq. (26). Indeed, let us rewrite $J_{j_a\to j_a'}$ at the moment of time t so that the average current reads as in Eq. (28), where the symbol $(j_a\leftrightarrow j_a')$ denotes the repetition of the same term as on its left, but with the $j_a$ and $j_a'$ indices interchanged. Here, $p_{j_a'}(t+\epsilon) = \mathrm{Tr}\big(\rho\,x_{j_a'}(t+\epsilon)\big)$ is the probability of detecting the particle at site $j_a'$ at the moment of time $t+\epsilon$, and the remaining factor is the real part of the weak value of $x_{j_a}(t)$ on the preselected state $\rho$, postselected on $x_{j_a'}(t+\epsilon)$ [42-44]. More specifically, it can be interpreted as the conditional average of $x_{j_a}$ at the moment of time t, conditioned on the measurement of $x_{j_a'}$ at a later moment $t+\epsilon$ [44]. Since $x_{j_a}$ is the projector on the location $j_a$, the conditional average is the same as the conditional probability.
In other words, $P_{x_{j_a}(t)|x_{j_a'}(t+\epsilon)}$ is the probability of finding the particle a at the site $j_a$ at the moment of time t, as a result of a weak (minimal-disturbance) measurement of $x_{j_a}(t)$ on the system in the state $\rho$, conditioned on a measurement of the particle at the site $j_a'$ at a later moment of time $t+\epsilon$. In turn, this prompts the interpretation of $P_{x_{j_a}(t)|x_{j_a'}(t+\epsilon)}\,p_{j_a'}(t+\epsilon)$ as the joint probability of finding the particle at location $j_a$ at the moment of time t and at location $j_a'$ at the moment of time $t+\epsilon$. Therefore, the formula for the current (28) fits the classical intuition of "flow forward minus flow backward", encapsulated in Eq. (12). Importantly, the location at t is measured weakly, so that the state is disturbed minimally before the second measurement. In this way, the system's state is not lost while we observe it jump from $j_a$ to $j_a'$. Note that the distribution $P_{x_{j_a}(t)|x_{j_a'}(t+\epsilon)}\,p_{j_a'}(t+\epsilon) = \mathrm{Re}\,\mathrm{Tr}\big(x_{j_a}(t)\,x_{j_a'}(t+\epsilon)\,\rho\big)$ is also known as the Terletsky-Margenau-Hill quasiprobability distribution [45,46] (see also [47] for a further discussion of the connection between weak values and the Terletsky-Margenau-Hill distribution). We also note that, besides providing an appropriate theoretical language for interpreting Eq. (26), weak values can also be directly accessed experimentally (see the relevant discussion in Ref. [43]). Despite the widespread applications of weak measurements in various areas of quantum physics (see, e.g., [43,48-51]), to the best of our knowledge, the above connection with quantum transport has not been previously made in the literature.
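The weak-measurement reading of the current can also be checked numerically. The sketch below again uses a single-particle 3-site ring with hypothetical parameters (an assumption made only for compactness); it compares the finite-difference combination of Terletsky-Margenau-Hill joint quasiprobabilities ("flow forward minus flow backward") with the expectation value of the tunnelling-current operator for purely coherent dynamics, and the two agree up to terms of order $\epsilon$.

```python
import numpy as np
from scipy.linalg import expm

# Single-particle 3-site ring (illustrative).  We check that
#   [ Re Tr(x_j x_j'(t+eps) rho) - Re Tr(x_j' x_j(t+eps) rho) ] / eps
# approximates the expectation of the tunnelling-current operator for small eps.
N, tau, eps = 3, 0.6, 1e-6
H = np.zeros((N, N), dtype=complex)
for j in range(N):
    H[(j + 1) % N, j] = H[j, (j + 1) % N] = -tau

x = [np.diag((np.arange(N) == j).astype(complex)) for j in range(N)]   # site projectors

rng = np.random.default_rng(1)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

U = expm(-1j * H * eps)                      # short-time propagator
heis = lambda A: U.conj().T @ A @ U          # operator at time t+eps (Heisenberg picture)

j, jp = 0, 1
forward = np.trace(x[j] @ heis(x[jp]) @ rho).real    # quasiprob. of (j at t, j' at t+eps)
backward = np.trace(x[jp] @ heis(x[j]) @ rho).real   # quasiprob. of (j' at t, j at t+eps)
current_tmh = (forward - backward) / eps

J_op = 1j * tau * (np.outer(np.eye(N)[jp], np.eye(N)[j]) - np.outer(np.eye(N)[j], np.eye(N)[jp]))
current_op = np.trace(J_op @ rho).real

print(current_tmh, current_op)   # the two values agree up to O(eps)
```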
In order to appreciate the importance of the first measurement being weak, let us see what the current looks like when both position measurements are non-weak (i.e., are standard quantum von Neumann measurements [30,52]). Let us turn to the Schrödinger picture for convenience, and measure the position of the particle a at the moment of time t. The probability of finding the particle at site $j_a$ is given by $p_{j_a}(t) = \mathrm{Tr}\big(x_{j_a}\rho(t)\big)$, and, after the measurement, the state of the system collapses to $\rho_{j_a} = x_{j_a}\rho(t)x_{j_a}/p_{j_a}(t)$. Within the period of time $\epsilon$, the state will evolve into $\rho_{j_a} + \epsilon\mathcal{L}[\rho_{j_a}] + O(\epsilon^2)$. Therefore, the probability of finding the particle at site $j_a'$, at the moment of time $t+\epsilon$, will be $\mathrm{Tr}\big(x_{j_a'}(\rho_{j_a} + \epsilon\mathcal{L}[\rho_{j_a}] + O(\epsilon^2))\big)$. Taking the reverse flow into account, we thus find the two-strong-measurement (TSM) current, $J^{(\mathrm{TSM})}_{j_a\to j_a'}$, given in Eq. (30). Taking into account that $x_{j_a}x_{j_a'} = 0$ (since $j_a \neq j_a'$), we immediately see that $J^{(\mathrm{TSM})}_{j_a\to j_a'}$ coincides with the RHS of Eq. (20). Hence, making the first measurement strong eliminates all coherent contributions to the current, given by Eqs. (17), (21)-(24). Note that the latter include also the tunnelling current.
Interestingly, when the ME is given by the local GKSL equation (see Eq. (7)), it is straightforward to show that the total current (26) is given by the sum of $J^{(\mathrm{tun})}_{j_a\to j_a'}$ (given by Eq. (17)) and $J^{(\mathrm{TSM})}_{j_a\to j_a'}$ (given by Eq. (30)), i.e., the environment-induced current can be measured by performing two strong measurements of the position. Substituting the local jump operators $L_{j_a,j_a',j_b}$ and $L_{j_a,j_b,j_b'}$ into Eq. (20), we find that the thermal current, $J^{(\mathrm{th})}_{j_a\to j_a'}$, obtained from the local GKSL ME has the current in Eq. (12) as its classical limit. Recalling that the transition to the classical regime is indeed provided by the local GKSL equation, we thus conclude that the current in Eq. (26) reverts to the classical current (given by Eq. (12)) in the appropriate classical limit.
Lastly, let us remark that Eq. (26) is applicable to the most general situation of a particle undergoing an arbitrary trace-preserving evolution on an arbitrary N-vertex graph. Indeed, in such a case, the continuity equation takes the form of Eq. (31), where $x_j$ denotes the projector onto the vertex j. (Note that the divergences in Eqs. (16) and (19) are special cases of Eq. (31).) Now, keeping in mind that $\sum_{j=1}^{N} x_j = I$, it is straightforward to show that defining the vertex-vertex currents $J_{k\to j}$ via Eq. (26) turns Eq. (31) into an identity. Furthermore, the interpretation in terms of weak measurements, which we developed above, applies in this most general situation without modifications. And, as above, when the dynamics is Markovian and can be described by a GKSL equation, the current operators $J_{k\to j}$ can be directly read off from the master equation.
V. STEADY STATES AND SYMMETRIES
Transport is controlled by symmetry [4,23,53-55]. This is universally observed in the literature, where reference is made to the symmetries of the generator of the evolution. In this section, we will describe the symmetries of the Hamiltonian and show how they relate to the transport properties of the machine.
A. Classical ME
We are interested in the steady-state rotation of our system, and we will start by analysing the classical setting. There, the evolution is given by Eq. (5), which, upon collecting the $p_{j_a,j_b}$'s and the transition rates into, respectively, the vector p and the matrix W, can be rewritten as $\frac{dp}{dt} = Wp$. The stochastic matrix W, in turn, depends on the potential U through Eq. (6) (Eq. (11) will be our specific choice).
Since the phase space is discrete, all transformations are given by permutation matrices, and a configuration function, e.g., the steady state p, is symmetric under a transformation P if Pp = p. Importantly, Eqs. (5)-(6) ensure that U and p have the same symmetries. Note, however, that, although W and U might have common symmetry transformations, due to the nonlinear relation between W and U, not all symmetries of U will in general be respected by W. The global "rigid" rotation symmetry of the potential (namely, $U_{i+k,j+k} = U_{i,j}$) provides an example of a symmetry that is respected: W is also symmetric under the global rotation, which is expressed in the fact that $W_{j_a+1\,j_b+1,\;j_a'+1\,j_b'+1} = W_{j_a j_b,\;j_a' j_b'}$. Interestingly, W maintains a single steady state despite this symmetry. As discussed in the next subsection, this is related to the fact that, although the local GKSL equation giving rise to the classical evolution (see Sec. III A) is, as a whole, symmetric under global rotations, its individual jump operators are not. Following the terminology of Ref. [53], one may say that the classical dynamics is hence only "weakly" symmetric under global rotations. Now, we observe that there is no current in the system whenever φ = kπ/3 (k is an arbitrary integer), and the current is non-zero whenever φ ≠ kπ/3 (see Fig. 2). At the same time, we notice that, whenever φ = kπ/3, $U_{j,i} = U_{i,j+k}$. In other words, when φ = kπ/3, the potential is symmetric under the combination of particle exchange and a unilateral rotation. The disappearance of the current in this case can be explained by a simple physical argument. Indeed, exchanging the particles is equivalent to swapping the temperatures of the baths. On the other hand, taking, for simplicity, φ = 0, we see that the system and its state remain intact (recall that the steady state inherits all of U's symmetries). Put otherwise, inverting the direction of the temperature gradient does not alter the currents. Now, drawing our intuition from phenomenological non-equilibrium thermodynamics [56], where, for small temperature differences, the current is a linear function of the temperature gradient, we thus conclude that the current must be zero.
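As a hedged illustration of this classical analysis, the following sketch assumes a simple single-variable 3-site ring with Arrhenius-type rates and a different temperature on each bond; these choices are assumptions made for compactness and do not reproduce Eqs. (6) and (11). It shows how the steady state is obtained as the kernel of W and how the "flow forward minus flow backward" bond current is computed; in the steady state the current is the same on every bond.

```python
import numpy as np

# Hypothetical single-particle classical ring.  Each bond (j, j+1) is given its own
# temperature so that detailed balance is broken and a non-zero current can appear.
N = 3
U = np.array([0.0, 0.4, 1.1])      # on-site potential values (hypothetical)
Tb = np.array([1.0, 0.5, 2.0])     # temperature felt on the bond (j, j+1)

W = np.zeros((N, N))
for j in range(N):
    jp = (j + 1) % N
    W[jp, j] = np.exp(-(U[jp] - U[j]) / (2 * Tb[j]))   # hop rate j -> j+1
    W[j, jp] = np.exp(-(U[j] - U[jp]) / (2 * Tb[j]))   # hop rate j+1 -> j
np.fill_diagonal(W, -W.sum(axis=0))                    # columns sum to zero (probability conservation)

# Steady state p: the normalized kernel of W (dp/dt = W p = 0).
evals, evecs = np.linalg.eig(W)
p = np.real(evecs[:, np.argmin(np.abs(evals))])
p /= p.sum()

# Classical bond current: "flow forward minus flow backward".
J = [W[(j + 1) % N, j] * p[j] - W[j, (j + 1) % N] * p[(j + 1) % N] for j in range(N)]
print(p, J)    # the three bond currents coincide in the steady state
```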
Interestingly, the generator of the evolution, W, is not symmetric under the above "generalized" exchange symmetry. Nevertheless, the increased degeneracy in U's spectrum (U has 2 distinct eigenvalues when φ = kπ/3, and 3 when φ ≠ kπ/3) brings about degeneracies in W, leaving it with only 5 distinct eigenvalues (in contrast to W having no degeneracies when φ ≠ kπ/3), which in turn means that W is more symmetric when φ = kπ/3.

B. Quantum ME
Local ME
The quantum and classical cases share many similarities when one considers the local GKSL ME. As discussed above, it is only weakly symmetric under global rotation. Indeed, the latter is given by the unitary operator $R = \sum_{j_a,j_b}|j_a+1, j_b+1\rangle\langle j_a, j_b|$ (and its integer powers), and it is easy to see that R does not commute with the jump operators $L_\mu$. On the other hand, it can be seen by direct inspection that, denoting the right-hand side of Eq. (7) by $\mathcal{L}_{\mathrm{loc}}[\rho]$, the weak-symmetry condition $R\,\mathcal{L}_{\mathrm{loc}}[\rho]\,R^\dagger = \mathcal{L}_{\mathrm{loc}}[R\rho R^\dagger]$ holds. Weak symmetry does not enforce degenerate steady states [53], and our local ME, given by Eq. (7), indeed has a unique steady state for any value of τ. This can be seen as an artifact of the global (system plus baths) Hamiltonian not being symmetric under global rotation.
As we can see in Fig. 2, where the average of $J_{j_a\to j_a+1}$ in the steady state (the same for all $j_a$'s) is plotted against φ, the current is again zero whenever φ = kπ/3. The generalized exchange symmetry of the potential, discussed in the previous section, is also a symmetry of H. Indeed, the unitary operator corresponding to the generalized exchange is given by $\Xi_k = \mathrm{SWAP}\cdot(I\otimes R_b^k)$, where SWAP is the swap operator (for all $|\psi\rangle$ and $|\xi\rangle$, $\mathrm{SWAP}|\psi\xi\rangle = |\xi\psi\rangle$) and $R_b = \sum_{k=1}^{3}|k+1\rangle\langle k|$ rotates the particle b by one step. It can be easily seen that $[H, \Xi_k] = 0$. Similarly to the classical case, although the generator of the evolution is not symmetric under $\Xi_k$ ($\Xi_k\mathcal{L}_{\mathrm{loc}}[\rho]\Xi_k^\dagger \neq \mathcal{L}_{\mathrm{loc}}[\Xi_k\rho\Xi_k^\dagger]$), the steady state is. Moreover, taking into account that, in the steady state, a unilateral rotation does not affect the average local current ($J_{j_b\to j_b'} = J_{j_b+1\to j_b'+1}$), we conclude that the transformation $\Xi_k$ is equivalent to swapping the temperatures of the baths. Hence, by the same physical argument as in the classical case, we restore the intuition as to why the current is zero for φ = kπ/3. Importantly, we note that the average steady-state current is 2π/3-periodic. Indeed, the potential satisfies $U_{j_a,j_b}(\phi + 2\pi/3) = U_{j_a,j_b+1}(\phi)$, which, along with the fact that the tunnelling parts of the Hamiltonian are local-rotation-invariant, means that a 2π/3 phase shift is equivalent to a unilateral rotation. On the other hand, the average local current in the steady state is invariant under local rotation, which proves the 2π/3-periodicity of the average current. This periodicity is evident in Fig. 2.
An interesting purely quantum effect can be read off from Fig. 2: the current changes its direction as the tunnelling rate increases. This inversion occurs only when $T_a$ is sufficiently low. Current inversion, albeit through the fundamentally different mechanism of external periodic driving, has also been reported in Brownian ratchet systems [3,57].
[Figure 2 caption. Local GKSL ME: the average current of particle a, $\langle J_a\rangle$, versus the phase φ in the steady state. The blue curve corresponds to zero tunnelling and is equivalent to the classical case. The orange and green curves correspond to, respectively, τ = 0.2 and τ = 0.6. Where the temperature is lower, the quantum effects of the tunnelling become more dominant and, with the increase of the tunnelling rate, are expressed by changes in the current's direction. No similar inversions in the direction of the current are observed for particle b, which is attached to the hotter-temperature bath. As can be clearly seen, the current is zero whenever φ = kπ/3.]

As a final remark, let us note that the thermodynamic inconsistency of the local GKSL ME manifests itself when $T_a = T_b$. In such a case, we find that, although $J_a + J_b = 0$, $J_a \neq 0$. This is, however, only a second-order effect in terms of the tunnelling rate: $J_a \propto \tau^2$.
Global ME
In order to properly account for the role of the symmetry breaking in the global GKSL ME, in this section we will study the global, microscopically derived quantum ME (see Eq. (9)) and require the total Hamiltonian in Eq. (8) to be symmetric under global rotation. Given that H already commutes with R, we thus need to choose $A_a$ and $A_b$ also commuting with R. As shown in Ref. [53], such "strong" symmetry necessarily implies that the global GKSL ME has multiple steady states. More specifically, there are at least as many linearly independent steady states of the ME as there are distinct eigenvalues of R. Since R has 3 distinct eigenvalues, the steady-state subspace of $\mathcal{L}_{\mathrm{glob}}$ will be of 3 or more dimensions. Importantly, this ambiguity means that the steady state of the evolution bears some memory of the initial state [53,54]. Let us choose coupling operators built from the operator X given by Eq. (4). Being the arithmetic mean of the Gell-Mann matrices $\lambda_1$, $\lambda_4$, and $\lambda_6$ [58], X indeed generalizes the x-component of spin (the standard 2-dimensional Pauli X operator) from SU(2) to SU(3). With such a choice, we have $[H_{\mathrm{tot}}, R] = 0$, and, for φ ≠ kπ/3, our numerical analysis reveals that the steady state depends on two free parameters. Given that the strong symmetry with respect to R guarantees 3 trace-1 solutions to $\mathcal{L}_{\mathrm{glob}}[\rho] = 0$ [53], we thus conclude that each eigensubspace of R contains a unique steady state and that there are no non-zero traceless solutions to $\mathcal{L}_{\mathrm{glob}}[\rho] = 0$. Let us now write the eigenresolution of R as $R = \sum_{k=1}^{3} e^{\frac{2\pi i}{3}k} R_k$, where the cubic roots of 1 are the eigenvalues and the $R_k$ are the eigenprojectors ($R_2 = R_1^*$). By numerically finding an arbitrary solution of the degenerate linear system $\mathcal{L}_{\mathrm{glob}}[\rho] = 0$, call it $\tilde\rho$, we determine the three mutually orthogonal steady states as $\rho^{\mathrm{st}}_k = R_k\tilde\rho R_k/\mathrm{Tr}(R_k\tilde\rho)$, which of course do not depend on $\tilde\rho$. An important property of the steady states is that, although they are not invariant under any of the three $\Xi_k$'s, their marginals coincide for all choices of the parameters. Now, if the rotor's initial state is $\rho_0$, then the steady state it eventually evolves into will be the mixture given in Eq. (34). Indeed, keeping in mind that $\sum_k R_k = I$, we can write $\rho_0 = \sum_k R_k\rho_0 R_k + \sum_{k\neq k'} R_k\rho_0 R_{k'}$. Since there are no non-zero traceless solutions to $\mathcal{L}_{\mathrm{glob}}[\rho] = 0$ and the $R_k\rho_0 R_{k'}$ ($k\neq k'$) are traceless (and remain so because the evolution is trace preserving), the second sum in the decomposition will vanish as the system converges to its steady state, whereas each of the states $R_k\rho_0 R_k/\mathrm{Tr}(R_k\rho_0)$ will evolve into its corresponding $\rho^{\mathrm{st}}_k$, leaving us with Eq. (34). We can now use Eqs. (17) and (25) to determine the currents associated with all three $\rho^{\mathrm{st}}_k$'s. First of all, for all the values of the parameters, all the average currents are independent of the site (e.g., $J_{j_a\to j_a+1}[\rho^{\mathrm{st}}]$ is the same for all $j_a$'s). Furthermore, we find (numerically) several other general properties that also hold for all values of the parameters. First, the average current in $\rho^{\mathrm{st}}_3$ is zero for both particles a and b (Eq. (35)). Next, combining Eqs. (34), (35), and (36), and using the numerical values of $R_1$ and $R_2$, for an arbitrary initial state $\rho_0$ we find the result quoted in Eq. (37), where $\theta_0 = \sum_{j_a,j_b=1}^{3}\langle j_a, j_b|\rho_0|j_a+1, j_b+1\rangle$ is the sum of some of the off-diagonal elements of the initial density matrix.

[Figure 3 caption. The total average current versus the tunnelling rate τ, with the phase φ = π/6. The total current starts negative and, depending on the initial state, can turn positive as τ increases.]
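The symmetry bookkeeping behind the basis steady states can be illustrated with a short sketch. The code below is a hedged illustration: the Liouvillian itself is not constructed, and a random density matrix stands in for the state being projected. It builds the global-rotation unitary R on the 3 × 3 space, extracts its three spectral projectors $R_k$, and verifies that the cross blocks $R_k\rho R_{k'}$ ($k\neq k'$) are traceless, so that only the diagonal blocks, with weights $\mathrm{Tr}(R_k\rho_0)$, survive in the long-time state.

```python
import numpy as np

# Global-rotation unitary on the two-qutrit space and its spectral projectors.
d = 3
shift = np.roll(np.eye(d), 1, axis=0)          # single-particle shift: sum_j |j+1><j|
R = np.kron(shift, shift)                      # global rotation: both particles advance by one site

w = np.exp(2j * np.pi / 3)
# Spectral projectors: R_k = (1/3) * sum_m conj(w^k)^m R^m projects onto eigenvalue w^k.
Rk = [sum(np.conj(w ** k) ** m * np.linalg.matrix_power(R, m) for m in range(3)) / 3 for k in range(3)]
assert np.allclose(sum(Rk), np.eye(d * d))     # the projectors resolve the identity

# A random density matrix standing in for the initial state rho_0 (illustrative only).
rng = np.random.default_rng(2)
A = rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d))
rho0 = A @ A.conj().T
rho0 /= np.trace(rho0).real

for k in range(3):
    for kp in range(3):
        if k != kp:
            assert abs(np.trace(Rk[k] @ rho0 @ Rk[kp])) < 1e-12   # cross blocks are traceless

weights = [np.trace(Rk[k] @ rho0 @ Rk[k]).real for k in range(3)]  # Tr(R_k rho_0)
print(weights)   # the weights of the three basis steady states in the long-time mixture
```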
An important feature the global GKSL ME shares with the local GKSL ME is that, whenever φ = kπ/3, $J^{(\mathrm{th})} = 0$ for any initial state. In Fig. 3a, we plot the total (thermal plus tunnelling) current of particle a against the phase φ, for different values of τ. We see that, unlike for the local ME, the tunnelling current is not zero when φ = kπ/3. Moreover, in stark contrast to the local ME, the average current for nearly zero tunnelling rate is noticeably smaller than the average currents for larger tunnelling rates.
When φ = kπ/3, the Hamiltonian of the rotor is symmetric under $\Xi_k$, whereas the coupling operators, $A_a$ and $A_b$ (and therefore also $H_{\mathrm{tot}}$), are not. Nevertheless, φ = kπ/3 is marked by an increased degeneracy in the spectrum of $H_{\mathrm{tot}}$ and, as a consequence, of $\mathcal{L}_{\mathrm{glob}}$, which is expressed in the fact that the steady state now depends on 5 free parameters, meaning that the steady-state space of $\mathcal{L}_{\mathrm{glob}}$ is 6-dimensional. The steady states are all symmetric under global rotation, but are not invariant under $\Xi_k$. Similarly to Sec. V B 1, due to the invariance of the local average current under local rotation, the $\Xi_k$ transformation is equivalent to swapping the temperatures of the baths, for any k. Hence, we can invoke the same physical argument as in the cases of the classical and local GKSL MEs in order to interpret the nullification of the currents. We emphasize that, as in the above two cases, the appearance of a non-zero current in the system is related to the breaking of (some of) the symmetries of the generator of the evolution, which takes place whenever φ ≠ kπ/3. Finally, we note that the current is a 2π/3-periodic function of φ. As in Sec. V B 1, this is due to the fact that a 2π/3 shift in the phase is equivalent to a local rotation, and the local average current in the steady state is invariant under local rotation.
Furthermore, the current inversion phenomenon, observed in Sec. V B 1, is also present here. In Fig. 3b, we plot the total average current for all three basis steady states, as a function of τ. The phase is chosen to be φ = π/6. Keeping Eqs. (34) and (37) in mind, we see that, unless the initial state is completely in the subspace of $R_3$, J < 0 for τ ≪ 1, and, if $\mathrm{Im}\,\theta_0 < 0$, J will become positive for large enough τ. The inversion of the direction of the current is counterintuitive in that the tunnelling is symmetric with respect to local rotation and therefore does not favour a particular direction for particle flow.
Lastly, we note that, whenever $T_a = T_b$, $J^{(\mathrm{th})}_\alpha = 0$ for any initial state. The tunnelling currents, on the other hand, do not necessarily turn to zero when $T_a = T_b$. This, however, does not contradict the second law of thermodynamics, and is simply due to the fact that, although the thermal state (for which the tunnelling currents are indeed zero) is a steady-state solution of the global ME, the basis steady states, $\rho^{\mathrm{st}}_k$, are not thermal.
C. Heat
In the absence of external driving, heat always flows from hot to cold. Due to the clear separation between the two baths in the total Hamiltonian (see Eq. (8)), the heat currents into the two baths, $\dot Q_a$ and $\dot Q_b$, are separately well defined. Since the system does not exchange energy with external media, in the steady state energy conservation requires $\dot Q_a + \dot Q_b = 0$. Moreover, at thermal equilibrium, i.e., when $T_a = T_b$, the heat flow must be zero. For the global ME, we have numerically established several general properties of the heat flow, which imply that, when the tunnelling is not zero, the heat flow can be controlled by changing the weights of the eigensubspaces of R in the initial state of the rotor. Such symmetry-controlled manipulation is well known in the literature (see, e.g., Ref. [55]). In Fig. 4, we plot $\dot Q[\rho^{\mathrm{st}}_1]$ ($\dot Q[\rho^{\mathrm{st}}_3]$ has the same behaviour) against φ, for various values of τ. First, we notice that the heat flow decreases with increasing tunnelling rate, which is related to the fact that the higher the τ, the higher the relative weight of the non-interacting component of H. We further notice that the zeroes of $J^{(\mathrm{th})}$, i.e., φ = kπ/3, correspond to global extrema of $\dot Q$. Lastly, analogously to the current, the heat flow is also a 2π/3-periodic function. This fact is evident from Fig. 4, and is proven as follows. As viewed from the perspective of the Hamiltonian, a 2π/3 phase shift is equivalent to a unilateral rotation. Being the eigenprojectors of H, the $\Pi_{E_m}$ undergo the same transformation, which, combined with the commutation properties of X (see Eqs. (10) and (32)), implies that the 2π/3-periodicity also holds for the heat flow. We conclude this section by emphasizing that caution must be maintained when using the global ME for computing the heat flow for small K. As discussed in Sec. III B, the secular approximation, indispensable for the applicability of the global ME, is compromised when K approaches 0. Given the results in Refs. [36,37] for the Caldeira-Leggett model, it is reasonable to expect that the local ME would provide a more reliable description in that regime. This issue can be decisively settled only by solving the global, system-plus-baths dynamics in the limit of infinitely large baths, which is, however, unfeasible (see, e.g., [30]).
VI. UTILISING THE STEADY STATE
In this section, we will study several useful properties of the steady states of the machine. Specifically, we will focus on the entanglement and the amount of unitarily extractable work (also known as ergotropy) in the rotor.
A. Entanglement
Entanglement is a key resource in basically all aspects of quantum technology [52,59]. Although thermal noise can sometimes be beneficial for creating and maintaining entanglement (see, e.g., [60-62]), at high temperatures the effect of the environment will typically be detrimental. Indeed, the maximally mixed state in any finite-dimensional Hilbert space is not entangled, and there exists a finite-volume convex set around the maximally mixed state in which all states are not entangled [59]. In particular, this means that, for any Hamiltonian, there exists a finite temperature T[H] above which all thermal states are non-entangled. On the other hand, thermal disequilibrium is known to be beneficial for entanglement generation [61-69]. Therefore, a key question in this regard is whether a thermal disequilibrium created by temperatures higher than T[H] can generate entangled non-equilibrium states. Of special interest would be the steady states, since these can be prepared and maintained reliably and robustly. In this section, we will show that a thermal initial state can be transformed into an entangled steady state only if it is already entangled. Thus, for thermal initial states, the above question has a negative answer. However, we will show that, for initially uncorrelated states, local coherence can be traded for entanglement in the steady state, for any temperatures of the baths. In other words, powered by global rotation symmetry, our machine can convert a local quantum resource (coherence) into a global quantum resource (entanglement), even when the surroundings have such high temperatures that neither resource would survive thermalization.
Our rotor is a 3 × 3-dimensional system, and in 9 (or higher) dimensions it is generally very hard to tell whether a state is entangled or not [59]. In this section, we will use a very strong necessary condition, called the positive partial transpose (PPT) criterion [59]. The partial transpose of ρ with respect to the partition b, $\rho^{\Gamma_b}$, is defined as the matrix with entries $\langle j_a, j_b|\rho^{\Gamma_b}|j_a', j_b'\rangle = \langle j_a, j_b'|\rho|j_a', j_b\rangle$. Now, if ρ is not entangled, then $\rho^{\Gamma_b}$ will be a non-negative matrix. Hence, ρ will necessarily be entangled if $\rho^{\Gamma_b}$ has a negative eigenvalue. This condition is also sufficient when the joint Hilbert space dimension is ≤ 6; however, in higher dimensions, e.g., for our rotor, this condition is not sufficient: ρ can be entangled even when $\rho^{\Gamma_b}\geq 0$ [59]. The entanglement detected by the PPT criterion can be measured by the negativity, N[ρ], defined as the sum of the absolute values of all negative eigenvalues of $\rho^{\Gamma_b}$ [70], $N[\rho] = (\lVert\rho^{\Gamma_b}\rVert_1 - 1)/2$, where $\lVert O\rVert_1 = \mathrm{Tr}\sqrt{O^\dagger O}$ is the trace norm [52]. Similarly to the heat flux (see Sec. V C), we find numerically that (i) $N(\rho^{\mathrm{st}}_1) = N(\rho^{\mathrm{st}}_2) > N(\rho^{\mathrm{st}}_3) > 0$ and (ii) the global extrema of the negativity correspond to the zeroes of the thermal current (φ = kπ/3). In Fig. 5, we plot $N(\rho^{\mathrm{st}}_1)$ against φ, for three different values of tunnelling: $\tau = 10^{-3}$, τ = 0.2, and τ = 0.4. It shows, in particular, that the negativity is a monotonically decreasing function of τ. This seemingly counterintuitive fact can be easily understood by first observing that the presence of entanglement even when τ → 0 is caused by the fact that, in order to obtain the basis steady states, we project on the highly entangled eigensubspaces of the global rotation matrix R (cf. Sec. V); the decrease of entanglement with the increase of τ is then caused by the increased weight of the non-interacting part of the Hamiltonian, $H - H_{\mathrm{cl}}$ (see Eq. (3)). The standard intuition is recovered when one considers realistic initial states. We show this in the inset of Fig. 5, where we plot the steady-state negativity versus τ. There, we fix K = 2 and φ = π/3, and, for any τ, choose the initial state to be the thermal state at the temperature of the hot bath: $\rho_0 \propto e^{-\beta_b H}$. As expected, we see that there is no entanglement for weak tunnelling. Then, having reached a maximum at an intermediate value of τ, the entanglement decays as H becomes more and more dominated by the non-interacting tunnelling term.

[Figure 5 caption. Global GKSL ME: the steady-state entanglement between particles a and b, as measured by the negativity, versus the phase φ. Only the negativity for $\rho^{\mathrm{st}}_1$ (which is equal to that of $\rho^{\mathrm{st}}_2$) is plotted; $N(\rho^{\mathrm{st}}_3) < N(\rho^{\mathrm{st}}_1)$ but has the same behaviour. The inset shows the steady-state negativity when the initial state is chosen to be the thermal state at the temperature of the hot bath, as a function of τ, with φ = π/3. The rest of the parameters are the same as those in Fig. 3.]
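For completeness, here is a small sketch of the PPT/negativity computation used in this section. The test state (a maximally entangled two-qutrit state mixed with white noise) is purely illustrative and is not a steady state of the rotor.

```python
import numpy as np

d = 3

def partial_transpose_b(rho, d=3):
    """Transpose the second (b) subsystem of a (d*d) x (d*d) density matrix."""
    r = rho.reshape(d, d, d, d)            # indices (j_a, j_b, j_a', j_b')
    return r.transpose(0, 3, 2, 1).reshape(d * d, d * d)

def negativity(rho, d=3):
    """Sum of |negative eigenvalues| of rho^Gamma_b, equivalently (||rho^Gamma_b||_1 - 1)/2."""
    eigs = np.linalg.eigvalsh(partial_transpose_b(rho, d))
    return float(np.sum(np.abs(eigs[eigs < 0])))

# Maximally entangled two-qutrit state |Phi> = (|00> + |11> + |22>)/sqrt(3).
phi = np.zeros(d * d)
for j in range(d):
    phi[j * d + j] = 1 / np.sqrt(d)
rho_ent = np.outer(phi, phi)

for lam in (0.0, 0.5, 0.9):                # mix with white noise
    rho = (1 - lam) * rho_ent + lam * np.eye(d * d) / (d * d)
    print(lam, negativity(rho))            # the negativity decreases as the noise grows
```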
The dependence of the entanglement in the steady state of a global GKSL ME on the temperature difference of the baths has been extensively studied in qubit systems [61-67]. In highly asymmetric problems, situations have been reported in which entanglement can be generated only when there is a temperature difference [62]. In our model, we find that, with all the other parameters fixed, if we keep the cold temperature $T_a$ constant, then the negativity of all three basis steady states is a monotonically decreasing function of $T_b$. Furthermore, the negativity at $T_b = T_a$ is a monotonically decreasing function of $T_a$. Interestingly, however, we find that, both when keeping $T_a$ constant and varying $T_b$ and when fixing $T_b = T_a$ and varying $T_a$, the entanglement does not experience sudden death: the negativity decays gradually, reaching zero only asymptotically. Therefore, by choosing the initial state appropriately, we can guarantee the presence of entanglement in the steady state for arbitrarily large values of $T_a$ and $T_b$. More importantly, we can achieve this with uncorrelated initial states, albeit paying for it with local coherence. Indeed, as discussed around Eq. (37), for an arbitrary initial state $\rho_0$ the steady state is the mixture of the basis steady states of Eq. (34), with weights $\lambda_k$. On the other hand, when the basis steady states are mixed, the entanglement vanishes, especially when the states are only weakly entangled. Therefore, in the limit of high $T_a$ and $T_b$, in order for $\rho^{\mathrm{st}}$ to be entangled, one of the $\lambda_k$'s has to be close to 1. Let us pick, for definiteness, $\lambda_1$ (the analysis for $\lambda_2$ and $\lambda_3$ will be identical). Then requiring $\lambda_1 \approx 1$ is equivalent to $|\theta_\sigma||\theta_\kappa|\cos(\arg\theta_\sigma + \arg\theta_\kappa + 4\pi/3) \approx 1$.
Finally, let us show that, analogously to the current and the heat flux, the negativity is also a 2π/3-periodic function of φ (see Fig. 5). Indeed, as we have established in Sec. V C, shifting the phase by 2π/3 transforms the steady state by a unilateral (local) rotation. On the other hand, the negativity is invariant under local unitary transformations of the state [59], which means that a 2π/3 phase shift does not alter the negativity.
B. Work content
The steady states of our machine can be useful not only for quantum information, but also for thermodynamics. In particular, the non-equilibrium steady states of the rotor store work that can be extracted from them in a cyclic Hamiltonian process. In other words, the average energy of the rotor, with respect to its Hamiltonian H, can be decreased by unitarily evolving its state $\rho^{\mathrm{st}}$. The unitary operation that extracts maximal work is the one that diagonalizes $\rho^{\mathrm{st}}$ in the basis of H in such a way that, if H's eigenbasis is ordered so that $E_1 \leq E_2 \leq \cdots$, then the eigenvalues of $\rho^{\mathrm{st}}$, $\{r_k\}$, have the reverse ordering: $r_1 \geq r_2 \geq \cdots$ [72]. The average work thus extracted, $\mathcal{E}$, is called ergotropy [72] and is equal to $\mathcal{E} = \mathrm{Tr}(\rho^{\mathrm{st}} H) - \sum_k r_k E_k$. The states with zero ergotropy are called passive [72,73], with the most prominent example being provided by the thermal states [73].

[Figure 6 caption. Global GKSL ME: the ergotropy of the steady state when the initial state is a thermal state at the temperature of the cold bath, versus the tunnelling coefficient τ. The plot also shows the total current, in order to show that the amount of work stored in the state is not related to the current flowing in it. As an aside, the plot for the current once again illustrates the phenomenon of current inversion. The other parameters are the same as those in Fig. 3.]
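The ergotropy defined above is straightforward to compute numerically. The sketch below is a minimal illustration with a hypothetical qutrit Hamiltonian and hypothetical states; it implements the "sort populations against energies" prescription and confirms that a thermal (passive) state yields zero ergotropy.

```python
import numpy as np

def ergotropy(rho, H):
    """Extractable work: Tr(rho H) minus the energy of the passive rearrangement of rho,
    in which the largest populations sit on the lowest energy levels."""
    E = np.linalg.eigvalsh(H)                      # energies in increasing order
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]     # populations in decreasing order
    return float(np.trace(rho @ H).real - np.dot(r, E))

# Hypothetical qutrit with non-degenerate levels and a population-inverted state.
H = np.diag([0.0, 1.0, 2.0])
rho = np.diag([0.1, 0.3, 0.6])
print(ergotropy(rho, H))          # > 0: work can be extracted unitarily

# A thermal (passive) state gives zero ergotropy.
beta = 1.0
gibbs = np.diag(np.exp(-beta * np.diag(H)))
gibbs /= np.trace(gibbs)
print(ergotropy(gibbs, H))        # ~ 0
```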
When the rotor's initial state is thermal at temperature $T_{\mathrm{in}}$, the ergotropy of the steady state is a decreasing function of $T_{\mathrm{in}}$, turning to zero somewhere in between $T_a$ and $T_b$. Now, given the infinite size of the baths, thermal states at the temperatures of the baths can be considered free: one just puts the system in contact with the bath and waits for it to thermalize. Therefore, if $T_{\mathrm{in}} = T_a$, the machine essentially takes in a free (and, from the perspective of work extraction, useless) state and evolves it into (and maintains for an arbitrarily long time) a state "charged" with work. In Fig. 6, we plot the ergotropy of the steady state reached from the thermal state at $T_a$ as a function of τ. We see that, in addition to a temperature gradient, a sufficiently high tunnelling rate is necessary for the machine to work as a charger. The plot also shows the total steady-state current, in order to indicate the surprising fact that the work content of the steady state is not related to (let alone caused by) the current it harbours.
VII. CONCLUSIONS
We have studied a model thermal rotor, consisting of two strongly interacting partitions, in the quantum regime. Our main purpose was to study the transport properties of the machine, specifically, the conditions under which there is a non-zero particle current in its steady state.
Before proceeding to the analysis of the specifics of our model, we first had to define the local particle current in the given situation, i.e., when the particle is a part of a larger quantum system immersed in a dissipative environment. We derived a novel general formula for the particle current, applicable not only to our setup but also to arbitrary open quantum networks. We furthermore provided an interpretation of our formula in terms of quantum weak measurements. Given the generality of the formula, this interpretation also applies to such widely used definitions of current as the tunnelling current in Eq. (17).
Having defined the main quantity of interest, we went on to analyze the machine when its dynamics is described by local and global quantum GKSL master equations. The former is applicable when the interaction between the particles is weak, while the latter is necessary when the particles are coupled strongly. In both cases, we studied the steady-state current from the perspective of the symmetries of the rotor Hamiltonian, and found that the particle current due to thermal hopping ($J^{(\mathrm{th})}$) can occur only when, in addition to a non-zero thermal gradient, the particle exchange symmetry of the Hamiltonian is broken. We also studied the classical limit of the machine, and found that the same symmetry-breaking mechanism applies there as well. Lastly, we showed that the steady-state current can reverse its direction as the tunnelling rate is varied. This is a purely quantum effect, hitherto thought to take place only in the presence of strong external driving [3,57]. It is a somewhat counterintuitive effect, especially in light of the fact that the tunnelling in our model is rotationally symmetric and thus, in itself, does not introduce a preferred direction for particle flow.
By studying the entanglement and ergotropy of the steady states, we furthermore prove that our machine can also be useful in such practical tasks as (i) converting uncorrelated states into entangled states in the presence of a hot dissipative environment and (ii) evolving initially thermal states, which are useless for work extraction, into non-passive steady states, from which non-zero work can be extracted. Both tasks are enabled by purely quantum resources. In the first case, the entanglement generation is made possible by feeding the machine with locally coherent states. In other words, the evolution to the steady state is a realization of a completely positive trace-preserving map converting local coherence into entanglement, such that the entanglement at the output depends on the amount of coherence at the input, akin to the analysis in Ref. [74]. In the second case, "charging" of the initially thermal rotor is possible only if the tunnelling rate is sufficiently high. | 13,095.6 | 2018-06-22T00:00:00.000 | [
"Physics"
] |
Robust Processing of Nonstationary Signals
Techniques for processing signals corrupted by non-Gaussian noise are referred to as robust techniques. They have been established and used in science over the past 40 years. The principles of robust statistics have found fruitful applications in numerous signal-processing disciplines, especially in digital image processing and signal processing for communications. Median, myriad, meridian, and L filters (with their modifications), together with signal-adaptive realizations, form a powerful toolbox for diverse applications. All of these filters have a low-pass characteristic. This characteristic limits their application in the analysis of diverse nonstationary signals in which impulsive, heavy-tailed, or other forms of non-Gaussian noise can appear: FM, radar, and speech signal processing, and so forth. Recent research activities and studies have shown that the combination of nonstationary signals and non-Gaussian noise can be observed in some novel emerging applications such as internet traffic monitoring and digital video coding.
Several techniques have recently been proposed for signal filtering, parametric/nonparametric estimation, and feature extraction of nonstationary signals and signals with high-frequency content corrupted by non-Gaussian noise. One approach is based on filtering in the time domain. Here, the standard median/myriad forms are modified in such a manner as to allow negative and complex-valued weights. This group of techniques is able to produce all filtering characteristics: high-pass, stop-band, and band-pass. As an alternative, robust filtering techniques have been proposed in the spectral (frequency-Fourier, DCT, wavelet, or time-frequency) domain. The idea is to determine robust transforms with the ability to eliminate or suppress the influence of non-Gaussian noise. Then, filtering, parameter estimation, and/or feature extraction is performed using the standard means. Other alternatives are based on standard approaches (optimization, iterative, and ML strategies) modified for nonstationary signals or signals with high-frequency content.
Since these techniques are increasingly popular, the goal of this special issue is to review and compare them, propose new techniques, study novel application fields, and consider their implementations.
In this special issue, we have been able to select 11 papers on a variety of related topics.
The first three papers are related to the processing of FM signals in the spectral and time-frequency domains. The main tool is the robust DFT, which can be used for the development of various robust tools in the spectral domain.
The paper "An overview of the adaptive robust DFT" (A.Roenko et al.) presents an overview of the basic principles and applications of the robust-DFT approach, which is used for robust processing of frequency-modulated signals embedded in non-Gaussian heavy-tailed noise.In particular, it has concentrated on the spectral analysis and filtering of signals corrupted by impulsive distortions using adaptive and nonadaptive robust estimators.Several adaptive estimators of location parameter are considered, and it is shown that their application is preferable with respect to nonadaptive counterparts.This fact is demonstrated by efficiency comparison of adaptive and nonadaptive robust DFT methods for different noise environments.
The paper entitled "Robust time-frequency distributions with complex-lag argument" (N. Žarić et al.) considers obtaining highly concentrated time-frequency representations for signals corrupted by impulsive/heavy-tailed noise. The proposed approach combines robust DFT evaluation, used to obtain a filtered signal with the influence of the impulsive noise removed and/or reduced, with time-frequency representations with a complex time argument, which produce highly concentrated representations. The proposed approach has been tested for instantaneous frequency estimation, showing high accuracy and stability. In addition, the approach is modified for the multicomponent signal case.
The third paper in this section, "Two-Dimensional harmonic retrieval in correlative noise based on genetic algorithm" (S. Wu et al.), considers two-dimensional (2-D) harmonic retrieval in the presence of correlative zero-mean multiplicative and additive noise. First, a 2-D fourth-order time-average moment spectrum, which has maximal values at the harmonic frequencies, is introduced. Then, the problem of harmonic retrieval is treated as the problem of finding these maximal values using the genetic algorithm (GA). Utilizing the global searching ability of the GA, this method can improve the frequency estimation performance. The effectiveness of the proposed algorithm is demonstrated through computer simulations.
The second section is related to the image filtering and restoration with three papers proposing novel techniques in this quite competitive field.
Filtering of impulse noise in digital images is considered in the paper "Impulse noise filtering using robust pixelwise S-estimate of variance" (V. Crnojević et al.). The S-estimate is used as an alternative technique for estimating variance to commonly accepted tools such as the MAD estimator. Namely, the S-estimate has shown excellent accuracy for nonsymmetric skewed noise distributions. It is important to note that such distributions are frequently encountered in the transition regions of images. The derived S-estimator of variance is used in an efficient iterative technique for impulse noise filtering. The stopping criteria of the algorithm are also developed using the S-estimator. The efficiency and accuracy of the proposed filter have been demonstrated on numerical examples and tested against the state of the art in the field.
A new variational image model for image restoration using a combination of the curvelet shrinkage method and the total variation (TV) functional is presented in "Image variational denoising using gradient fidelity on curvelet shrinkage" (L. Xiao et al.). The staircasing effect and curvelet-like artifacts are suppressed using multiscale curvelet shrinkage. A new gradient fidelity term is designed to force the gradients of the desired image to be close to the curvelet approximation gradients. To improve the ability to preserve the details of edges and texture, spatially varying parameters are adaptively estimated in the iterative process of the gradient descent flow algorithm. Numerical experiments demonstrate that the proposed method performs well in alleviating both the staircase effect and curvelet-like artifacts, while preserving fine details.
The generalized Cauchy distribution (GCD) is developed in "A generalized Cauchy distribution framework for problems requiring robust behavior" (R. E. Carillo et al.). Accurate pdf estimation and modeling are important for the development of sample-processing theories and methods. The GCD family has a closed-form pdf expression across the whole family, as well as algebraic tails, which makes it suitable for modeling many real-life impulsive processes. This paper develops a GCD theory-based approach that allows challenging problems to be formulated in a robust fashion. Notably, the proposed framework subsumes generalized Gaussian distribution (GGD) family-based developments, thereby guaranteeing performance improvements over traditional GGD-based problem formulation techniques. This robust framework can be adapted to a variety of applications in signal processing. As examples, four practical applications under this framework are presented: (1) filtering for power line communications, (2) estimation in sensor networks with noisy channels, (3) reconstruction methods for compressed sensing, and (4) fuzzy clustering.
A section of its own is formed by the paper "Two-Stage outlier elimination for robust curve and surface fitting" (J. Yu et al.). The authors propose an approach to outlier elimination based on a two-stage procedure, with proximity-based outlier detection followed by a model-based one. Depending on whether a hard or soft threshold is applied to the connectivity of observations, two algorithms are developed for the proximity-based outlier detection: graph-component-based and eigenspace-based. The second stage iteratively refits and retests the information about shape or contour until convergence. These two stages are convenient for removing the various types of outliers that can appear. Compared with existing approaches, the proposed technique produces significantly improved results for ellipse/ellipsoid fitting with a large portion of outliers and a high level of noise.
The section related to applications is particularly strong. The paper "Channel characterization and robust tracking for diversity reception over time-variant off-body wireless communication channels" (P. Van Torre et al.) considers the application of robust processing tools in communication systems. It seems that novel and future communication schemes will be an important application field and source of motivation for tools developed in the robust processing of nonstationary signals.
In the paper, indoor wireless off-body data communication in the 2.45 GHz band with a moving person is considered. Such communication can be problematic due to time-variant signal fading and the consequent variation in channel parameters. Off-body communication specifically suffers from the combined effects of fading, shadowing, and path loss due to time-variant multipath propagation in combination with shadowing by the human body. Measurements are performed to analyze the autocorrelation, coherence time, and power spectral density for a person equipped with a wearable receiver system moving at different speeds, for different configurations and antenna positions. Diversity reception with multiple textile antennas integrated in the clothing provides improved link reliability. For dynamic channel estimation, a scheme using hard decision feedback after maximum-ratio combining (MRC) with adaptive low-pass filtering is demonstrated to be successful in providing robust data detection for long data bursts in the presence of dramatic channel variation.
The paper "Data fusion for improved respiration rate estimation" (S.Nemati et al.) considers very difficult problems of estimation of respiratory rates from passively breathing subjects.The main novelty in the paper is the estimation using various sources.Namely, in practice, the best source is commonly selected according to the available criterion while other recordings are discarded.In the proposed approach, the various data sources are fused using an instance of the Kalman filter based on developed signal quality index.The proposed technique is not only tested on both real recordings, but also on the case of the artificially added noise.The proposed technique has shown reasonable robustness to the noise influence.The real data set used in the study is obtained from 30 subjects and contains the ECG and respiration and peripheral tonometry.
The paper "Improved noise minimum statistics estimation algorithm for using in a speech passing noise rejection headset" (S.Sayedtabaee et al.) deals with the practical industrial noise produced by rotating machinery (in this case, angle grinder).The problem is the fact that the strong angle grinder noise should be removed but oral communication should be preserved as much as possible.The headset for removing such noise is constructed with the installed microphone and speaker.The spectral substraction method is modified in order to achieve the angle grinder noise removal.Noise is estimated employing a multiband adaptive scheme.The algorithm adopts to changes of the noise characteristics in very fast manner with minimal distortion of other useful signals.The accuracy of the algorithm is tested using objective and subjective measures.
The paper "Adaptive wavelet transform method to identify cracks in gears" (A.Belsak et al.) describes de-noising method based on wavelet analysis which takes prior information about impulse probability density into consideration.This method is used to identify transient information from vibration signals of a gear unit with a fatigue crack in the tooth root.This important practical problem due to a crack in the tooth root is one of the most dangerous problems that can cause failure in gear unit operation.The proposed robust technique employs filtering since recorded signals are quite noisy, making determination of properties of individual components a very difficult task.
We would like to thank all the authors for their contributions to this issue, the reviewers for their help in selecting papers, the technical staff of the Hindawi Publishing Corporation, and, finally, the editor Phillip Regalia for his support and for making it possible to work on this special issue.
"Computer Science",
"Engineering"
] |
Survival Analysis in Modeling the Birth Interval of the First Child in Indonesia
The first birth interval is one example of survival data. One characteristic of survival data is that the observation period may not be fully observed, i.e., the data are censored. Analyzing censored data using ordinary methods leads to bias, so a particular class of methods, called survival analysis, is required to reduce such bias. Two kinds of methods are used in survival analysis: parametric and non-parametric. The objective of this paper is to determine the appropriate method for modeling the birth of the first child. The exponential model with the inclusion of covariates is used as the parametric method, considering that newly married couples tend to desire a baby as soon as possible and that this desire weakens with increasing age at marriage. The data analyzed were taken from the Indonesia Demographic and Health Survey (IDHS) 2012. The result of the data analysis shows that the first-birth data are not exponentially distributed; thus, the Cox proportional hazard method is used. Because a non-proportional covariate was suspected, a proportional hazard test was conducted, which showed that the age covariate is not proportional; therefore, a generalization of the Cox proportional hazard model, the extended Cox model, which allows the inclusion of non-proportional covariates, is used. The result of the analysis using the extended Cox model indicates that the factors affecting the birth of the first child in Indonesia are the area of residence, educational history, and the mother's age.
Introduction
In demography, there are three components that take effect, namely mortality, migration, and fertility. The birth interval of the first child can be used as one indicator of fertility. The birth interval of the first child is defined as the difference between the age at marriage and the age at the birth of the first child. In fact, the length of the birth interval of the first child is not the same for every married woman. According to existing research, the birth interval of the first child is determined by many kinds of social, cultural, and even physiological factors. According to [1], several factors affect the birth interval of the first child: the area of residence, educational level, age, knowledge of contraception, and employment status. Most married women give birth to their first child within a couple of months after marriage, so the data obtained for them are complete, while women who do not yet have children are classified as censored observations. Thus, the birth interval of the first child is an example of survival data. Survival data are data that indicate the period of time from the initial observation until an event happens. A characteristic of survival data is that the survival time usually cannot be fully observed (it is censored) [2]. If all the expected events have happened and can be fully observed, various analysis methods can be applied; unfortunately, survival data are censored [3]. Analyzing survival data using ordinary methods would be inappropriate because it causes bias [4]. A particular method was needed to reduce such bias. This method is called survival analysis.
Definitions Associated with Survival Analysis
Definition 1: Survival time measures the period of time from an initial event to the occurrence of an event of interest such as failure, death, response, or the onset of symptoms [5].
Definition 2: The survival function is a function that indicates the probability that an individual survives until or beyond time t (i.e., experiences the event after time t) [6]. Considering the random variable T, the survival function is defined as $S(t) = P(T > t)$. With f denoting the probability density function, the survival function can also be written as the complement of the cumulative distribution function F, $S(t) = 1 - F(t)$.

Definition 3: The hazard function is a function that indicates the probability that an individual is at risk of experiencing an event, such as failure or death, at time t given that this individual has survived up to time t; the function is given by $h(t) = \lim_{\Delta t\to 0} \frac{P(t \leq T < t + \Delta t \mid T \geq t)}{\Delta t}$. From the definitions above, the relation between the survival function and the hazard function is obtained.
Using the definition of conditional probability, one obtains $h(t) = \frac{f(t)}{S(t)}$.
Model
In this research, two survival analysis methods are used, namely the exponential parametric method and non-parametric methods.
Parametric Method
Survival time can be analyzed using the accelerated failure time (AFT) model. This model assumes that the logarithm of the survival time T is linearly related to the covariates and can be written as $\log T = \boldsymbol{\beta}'\mathbf{x} + \varepsilon$, i.e., the covariates rescale the survival time through the multiplicative factor $\exp(\boldsymbol{\beta}'\mathbf{x})$.
Kaplan-Meier Model
In the Kaplan-Meier method, the distribution of the data is assumed to be discrete. According to [10], Kaplan-Meier is a method used to compare the survival times of two covariate groups. The advantage of this method is that, being non-parametric, it does not require knowledge of a particular distribution [11]. This method is appropriate because the data used are individual data, and it remains suitable for small, medium, and large data sizes. Suppose the number of women giving birth to their first child is denoted by r and the number of married women by n, where r ≤ n. The probability of the birth of the first child in interval j is estimated by $d_j/n_j$, and the corresponding survival probability is estimated by $\hat p_j = \frac{n_j - d_j}{n_j}$, where $n_j$ is the number of women still at risk and $d_j$ the number of first births in interval j. The Kaplan-Meier estimator of the survival function is then given by $\hat S(t) = \prod_{j:\,t_{(j)}\leq t}\frac{n_j - d_j}{n_j}$.
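A minimal sketch of the product-limit computation described above is given below; the interval and censoring values are purely illustrative and are not IDHS data.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier product-limit estimator.
    time:  observed interval (e.g., months from marriage to first birth or censoring)
    event: 1 if the first birth was observed, 0 if the observation is right censored."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    times = np.unique(time[event == 1])           # distinct event times t_(j)
    surv, S = [], 1.0
    for t in times:
        n_j = np.sum(time >= t)                   # number still at risk just before t
        d_j = np.sum((time == t) & (event == 1))  # number of first births at t
        S *= (n_j - d_j) / n_j                    # product-limit update
        surv.append((t, S))
    return surv

time  = [9, 12, 12, 15, 20, 24, 24, 30, 36, 48]   # illustrative data
event = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]
for t, S in kaplan_meier(time, event):
    print(t, round(S, 3))
```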
Cox Proportional Hazard and Non-Proportional Hazard Model
The Cox proportional hazard model is usually used as a multivariate approach to analyze the data [12]. The characteristic of the Cox proportional hazard model is that different individuals have proportional hazard functions; that is, the ratio of the hazard functions of two individuals with covariate vectors $\mathbf{x}_1$ and $\mathbf{x}_2$, $h(t\mid\mathbf{x}_1)/h(t\mid\mathbf{x}_2)$, is constant. This means that the ratio of the failure risks of two individuals is the same and does not depend on how long they have survived. [13] explains that the general form of the Cox proportional hazard model is $h(t\mid\mathbf{x}) = h_0(t)\exp(\boldsymbol{\beta}'\mathbf{x})$, where $\mathbf{x}$ denotes the covariates; no assumptions are made about the form of $h_0(t)$ itself, which is called the baseline hazard function because it is the value of the hazard function when $\mathbf{x} = 0$. Sometimes time-dependent covariates are found, so the proportionality assumption is not met; the form above is then developed into the extended Cox model. For checking the proportionality assumption of a covariate, the extended Cox model can be written as $h(t\mid\mathbf{x}) = h_0(t)\exp\!\big(\sum_i \beta_i x_i + \sum_j \delta_j x_j g_j(t)\big)$, where $g_j(t)$ denotes a function of time, and it is important to determine the proper form of $g_j(t)$. If the test result for $\delta_j$ is significant, then the extended Cox model is better than the Cox proportional hazard model, and thus the hazard ratio is a function of time. One possible choice is to take $g_j(t)$ to be a Heaviside function; when this function is used, one obtains constant hazard ratios on different time intervals.
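As a hedged illustration of how such a model is fitted and its proportionality assumption checked in practice, the sketch below uses the `lifelines` Python package (an assumption: any survival library with a Cox model would serve); the column names and values are placeholders, not the IDHS variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Placeholder data frame: months to first birth, censoring indicator, and two covariates.
df = pd.DataFrame({
    "interval":  [9, 12, 12, 15, 20, 24, 24, 30, 36, 48],   # months to first birth
    "birth":     [1, 1, 0, 1, 1, 1, 0, 1, 0, 1],            # 1 = observed, 0 = censored
    "residence": [1, 2, 1, 2, 2, 1, 1, 2, 1, 2],            # 1 = city, 2 = village
    "age":       [19, 22, 25, 21, 24, 27, 23, 20, 29, 26],  # age at marriage (years)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="interval", event_col="birth")
cph.print_summary()

# Test the proportional-hazards assumption for each covariate; a significant result for
# "age" would motivate switching to the extended Cox model with time-dependent terms.
cph.check_assumptions(df, p_value_threshold=0.05)
```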
Parameter Estimation
To estimate the parameters β_1, β_2, ..., β_p, Cox used maximum likelihood estimation restricted to the individuals who experience the event, which is called the partial likelihood [14]. Estimating β_j by partial likelihood means maximizing the partial likelihood function, the joint probability of the uncensored observations expressed as a function of the unknown parameters. [7] states that the estimation of β can be derived by considering the individuals who experience the event, such as death. Suppose there are n individuals, of whom r die, so that (n - r) individuals are censored. Assume that only one individual dies at each death time (no ties), and let t_(1) < t_(2) < ... < t_(r) denote the ordered uncensored survival times. The probability that individual (i), with covariate vector x_(i), dies at time t_(i), given that t_(i) is the only exact death time among t_(1), ..., t_(r) and given the risk set R(t_(i)) of individuals still at risk at t_(i), is exp(β'x_(i)) / Σ_{l ∈ R(t_(i))} exp(β'x_l). The partial likelihood is the product of these conditional probabilities over the n observed survival times t_1, ..., t_n, where δ_i denotes the event indicator, equal to 0 if the i-th individual is right censored and 1 otherwise.
The partial likelihood can therefore be written as L(β) = ∏_{i=1}^{n} [ exp(β'x_i) / Σ_{l ∈ R(t_i)} exp(β'x_l) ]^{δ_i}, and taking the logarithm gives log L(β) = Σ_i δ_i [ β'x_i - log Σ_{l ∈ R(t_i)} exp(β'x_l) ]. The estimator of β is obtained by maximizing log L(β), i.e., by solving ∂ log L(β)/∂β = 0. This equation can rarely be solved analytically and is instead solved numerically.
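A minimal numerical sketch of this maximization, written directly from the partial likelihood above and optimized with a general-purpose routine; the toy data and the single covariate are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_partial_likelihood(beta, times, events, X):
    """Negative log partial likelihood of the Cox model (no tied event times assumed)."""
    beta = np.asarray(beta, dtype=float)
    eta = X @ beta                               # linear predictors beta' x_i
    ll = 0.0
    for i in range(len(times)):
        if events[i] == 1:
            at_risk = times >= times[i]          # risk set R(t_i)
            ll += eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    return -ll

# Hypothetical toy data: 6 subjects, one binary covariate
times = np.array([5., 8., 3., 9., 4., 7.])
events = np.array([1, 0, 1, 1, 1, 0])
X = np.array([[0.], [1.], [1.], [0.], [1.], [0.]])

res = minimize(neg_log_partial_likelihood, x0=np.zeros(1),
               args=(times, events, X), method="BFGS")
print("beta_hat =", res.x, "hazard ratio =", np.exp(res.x))
```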
Data
The data used in this research were taken from the Indonesia Demographic and Health Survey of 2002. The samples come from two provinces, West Papua and the Special Region of Yogyakarta, as representatives of high- and low-fertility areas, respectively. The data are limited to the birth interval of the first child of women in their first marriage.
Operational Definition
The dependent variable used in this research is the birth interval of the first child of a woman in her first marriage. The independent variables expected to affect the birth interval of the first child are: 1) Area of residence, grouped by the smallest administrative unit into urban and rural areas and coded as city = 1 and village = 2.
2) Education, referring to formal schooling from primary through junior and senior high school, including equivalent education. Women who never entered formal education, or attended primary school but never obtained a passing grade, are classified as not having finished primary school. The highest education attained is divided into four categories: did not finish primary school = 0, finished primary school = 1, finished junior high school = 2, and finished senior high school or higher = 3.
3) Employment status, where working means carrying out an activity with the purpose of obtaining income or profit for at least one uninterrupted hour per week (including unpaid family workers who help in the household business or economic activity). Employment status is categorized as unemployed = 0 and employed = 1.
4) Knowledge of contraception, divided into two categories: unknown = 0 and known = 1. 5) Age of mother, the age of the mother (first-married woman) expressed in years.
Results
The Kolmogorov-Smirnov goodness-of-fit test can be applied to test whether a sample follows a theoretical population distribution. The calculation gave D = 0.258, while the critical value D* at the 0.05 significance level is 0.04025. Since the calculated D exceeds the critical value, H_0 is rejected, meaning that the data are not exponentially distributed. Consequently, since the birth-interval data of the first child are not exponentially distributed, the exponential method is not used to analyze the factors presumed to dominantly affect the birth interval of the first child.
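A hedged sketch of such a test with SciPy; the interval values are invented stand-ins for the survey data, and the exponential scale is set from the sample mean, which makes the test approximate.

```python
import numpy as np
from scipy import stats

# Hypothetical birth-interval data in months (the real survey data are not reproduced here)
intervals = np.array([14, 22, 9, 30, 18, 45, 12, 27, 16, 35], dtype=float)

# Compare against an exponential distribution whose scale equals the sample mean.
D, p_value = stats.kstest(intervals, "expon", args=(0, intervals.mean()))
print(f"D = {D:.3f}, p = {p_value:.3f}")   # reject exponentiality when D exceeds the critical value
```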
As an illustration, the birth-interval data of the first child are first analyzed using the Kaplan-Meier method. The response variable is the time from marriage to the birth of the first child, and the area of residence is the covariate assumed to affect the survival level (0 = village, 1 = city).
The Kaplan-Meier survival curves for the area-of-residence variable are shown graphically in Figure 1.
Figure 1 shows that the survival of individuals living in villages differs from that of individuals living in cities. A hypothesis test is then conducted to see whether this difference is statistically significant or merely coincidental, with H_0: the survival functions of the two groups are equal, against H_1: they differ; H_0 is rejected when the test statistic exceeds the critical value. The log-rank test results for the area-of-residence covariate are presented in Table 1.
Using the 0.05 significance level of the chi-square test with one degree of freedom, the value of W is large enough to reject H_0. It can therefore be concluded that there is a significant difference between the survival levels for the birth of the first child of women living in villages and those living in cities.
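A small sketch of a log-rank comparison of two groups with lifelines; the durations and censoring flags below are hypothetical, not the paper's data.

```python
import numpy as np
from lifelines.statistics import logrank_test

# Hypothetical group-wise data: months to first birth and event indicators (0 = censored)
city_t    = np.array([14, 22, 30, 12, 19, 40]); city_e    = np.array([1, 1, 0, 1, 1, 0])
village_t = np.array([ 9, 16, 11, 25, 13, 18]); village_e = np.array([1, 1, 1, 1, 1, 1])

result = logrank_test(city_t, village_t,
                      event_observed_A=city_e, event_observed_B=village_e)
print(result.test_statistic, result.p_value)    # chi-square statistic with 1 df
```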
If the survival data to be compared come from more than two groups, for example when we are interested in differences across area of residence, education level, age, knowledge of contraception, and so forth, then the Kaplan-Meier method becomes impractical. This is because, in the Kaplan-Meier method, every pair of population groups must be tested separately, so several groups require repeated testing. When respondents have several characteristics, the Cox proportional hazards method can explain the influence of all of these characteristics on the response variable simultaneously.
Table 2 shows that the explanatory variables that significantly affect the birth interval of the first child are area of residence, education 1, education 2, education 3, and age. Next, the proportionality assumption was checked for every covariate. In this paper, the only method used to identify covariates that do not meet the proportionality assumption for the birth interval of the first child is the Schoenfeld residuals method. The results are presented in Table 2.
The p-value for the age covariate is less than 0.05, meaning there is a correlation between this covariate and the time until the individual has her first child, so the age covariate does not meet the proportionality assumption. The data are therefore re-modeled using the Cox extended model. Before this analysis, a test was conducted to find the most appropriate form of g_j(t). This test uses the AIC criterion. The AIC is a measure for selecting the best regression model, introduced by Hirotugu Akaike in 1973 and based on maximum likelihood estimation [15]: AIC = -2 log L + αq, where L is the likelihood function, q is the number of parameters β, and α is a specified constant, usually between 2 and 6. Under the AIC criterion, the best regression model is the one with the smallest AIC value [9]. The AIC of the chosen form of g_j(t) is the smallest among the candidate models (Table 3), so that model is the best model.
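A tiny sketch of the model selection step implied by this criterion; the log-likelihoods and parameter counts below are made-up placeholders, not values from Table 3.

```python
def aic(log_likelihood, n_params, alpha=2):
    """AIC = -2 log L + alpha * q; alpha = 2 is the classical choice."""
    return -2.0 * log_likelihood + alpha * n_params

# Hypothetical log-likelihoods of three Cox extended models with different g_j(t)
candidates = {"g(t) = t": (-512.4, 7), "g(t) = ln t": (-515.1, 7), "heaviside": (-510.9, 8)}
scores = {name: aic(ll, q) for name, (ll, q) in candidates.items()}
best = min(scores, key=scores.get)
print(scores, "-> best model:", best)
```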
Next, a test was conducted to identify the covariates that significantly affect the response variable. The results are presented in Table 4.
Table 4 shows that the explanatory variables that significantly affect the birth interval of the first child are area of residence, education 1, education 2, education 3, and age.
Interpretation
The hazard ratio allows multiple groups to be compared in survival analysis [16]. Table 5 shows that the area of residence, education 1, education 2, education 3, and age variables significantly affect an individual's risk of having a first child. The hazard ratios indicate that individuals living in a city have 0.720 times the risk of having their first child compared with those living in a village, while individuals who finished primary, junior, or senior high school (or higher) have, respectively, 1.708, 2.648, and 4.361 times the risk of having their first child relative to individuals who did not finish primary school. Each additional year of age reduces the risk of having a first child by 4.5%.
Table 4 shows that, under the Cox extended model, the area of residence, education 1, education 2, education 3, and age variables also significantly affect an individual's risk of having her first child. The hazard ratios indicate that individuals living in a city have 0.719 times the risk of having their first child compared with those living in a village, while individuals who finished primary, junior, or senior high school (or higher) have, respectively, 1.730, 2.648, and 4.235 times the risk of having their first child relative to individuals who did not finish primary school. Each additional year of age reduces the risk of having a first child by 3.0166%.
Conclusions
The distribution test using the Kolmogorov-Smirnov test indicates that the data do not follow the assumed (exponential) distribution, so the non-parametric Cox proportional hazards model is the more appropriate method for modeling the birth-interval data of the first child. However, after testing the proportional hazards assumption, one of the covariates turned out not to meet it, so the data were re-modeled using a generalization of the Cox proportional hazards model called the Cox extended model.
Although the explanatory variables found to be significant are the same under the Cox proportional hazards model and the Cox extended model, the interpretations of the two models differ, and, based on the tests conducted, the Cox extended model is the best model for the first-birth data in Indonesia.
Notes: σ denotes the scale parameter and ε the error term of the AFT model. [9] states that, to include covariates under the exponential distribution, one uses the AFT equation above and takes T to be exponentially distributed, whose hazard, density, and survival functions take their simplest form; this yields the Cox proportional hazards model as a special case. In the partial likelihood, x_(i) is the covariate vector of the individual who dies at time t_(i); the product runs only over uncensored individuals, while censored individuals are excluded from the numerator but enter the denominator, which sums over the risk set of the n observed survival times t_1, t_2, t_3, ...
Figure 1. Survival function of the Kaplan-Meier method for the area-of-residence variable.
Table 1. Log-rank test result for the difference in survival level of the birth of the first child based on the area-of-residence covariate.
Table 2. Correlation and p-values of the explanatory variables.
Table 3. AIC values of the candidate models.
Table 4. Parameter estimates, p-values, and hazard ratios from the Cox extended model.
Table 5. Parameter estimates, p-values, and hazard ratios from the Cox proportional hazards model. | 4,235 | 2014-03-28T00:00:00.000 | [ "Mathematics" ] |
Exact embeddings of JT gravity in strings and M-theory
We show that two-dimensional JT gravity, the holographic dual of the IR fixed point of the SYK model, can be obtained from the consistent Kaluza-Klein reduction of a class of EMD theories in general D dimensions. For D = 4, 5, the EMD theories can themselves be embedded in supergravities. These exact embeddings provide the holographic duals in the framework of strings and M-theory. We find that a class of JT gravity solutions can be lifted to become time-dependent charged extremal black holes. They can be further lifted, for example, to describe the D1/D5-branes where the worldsheet is the Milne universe, rather than the typical Minkowski spacetime.
Introduction
The AdS/CFT correspondence [1,2] serves as a bridge that connects some conformal field theory (CFT d ) in d dimensions and gravity in the anti-de Sitter (AdS D ) background in D = d + 1 dimensions. This holographic duality was best studied between the N = 4 D = 4 superconformal field theory and type IIB string in the AdS 5 × S 5 background. The duality is however expected to be applicable for wider classes of theories, possibly even beyond conformal field theories. The duality remains largely conjectural, and the simplest examples to prove this duality may be associated with the integrable models that can be solved completely. However, such models with conformal symmetries are hard to come by.
Recently, the Sachdev-Ye-Kitaev (SYK) model [3][4][5], which describes random all-to-all interactions between N Majorana fermions in 0 + 1 dimensions, has drawn a large amount of attention due to its integrability in the large-N limit. The SYK model exhibits approximate conformal symmetry in the infrared (IR) limit, suggesting that the SYK model may be a CFT 1 at low energy. It was shown to be maximally chaotic [4][5][6] in the sense that its out-of-time-order correlators exhibit Lyapunov exponents and butterfly effects [7], and they saturate [8] the chaos bound established by black holes [7]. Therefore, the IR limit of the SYK model may have an AdS 2 bulk gravity dual.
The NCFT 1 property is related to the fact that the bulk theory cannot be Einstein gravity in D = 2 dimensions. Specifically, the Einstein-Hilbert action is simply a topological constant and thus gives no dynamics, so nontrivial D = 2 gravity must couple non-minimally to a matter field. The simplest example is perhaps Jackiw-Teitelboim (JT) gravity [31,32]. When the cosmological constant Λ 0 is negative, the JT model admits the AdS 2 vacuum; however, the full AdS 2 symmetry is broken by the nontrivial dilaton [33]. JT gravity may thus provide a gravity dual of the IR limit of the SYK model [34] (see Appendix A for details), and the SYK/AdS 2 duality can thus be addressed in the context of JT gravity [15,[33][34][35][36][37][38][39][40].
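The JT action referred to as (1.2) did not survive extraction; for reference, the standard form quoted in the JT-gravity literature (up to an overall normalization, which may differ from the paper's conventions) is

```latex
S_{\rm JT}=\frac{1}{2}\int d^{2}x\,\sqrt{-g}\,\phi\left(R-2\Lambda_{0}\right).
```

Varying the dilaton φ enforces R = 2Λ 0 , a rigid AdS 2 metric when Λ 0 < 0, while varying the metric gives the linear dilaton equation ∇_μ∇_ν φ − g_{μν} □φ + Λ 0 g_{μν} φ = 0, which is why the dilaton breaks the full AdS 2 symmetry.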
In fact, one may consider more complicated dilaton gravities in two dimensions, which were extensively studied in the last century for addressing basic problems of quantum gravity (see, e.g., [41] for a review). Two-dimensional gravities received new attention in the light of holography. Almheiri and Polchinski (AP) recently introduced a general family of dilaton-gravity models [42]. For an appropriate potential U, the theory admits an AdS 2 vacuum with a constant dilaton. One can then perturb around this vacuum, and the effective action for the linear perturbation is precisely JT gravity [43]. The AP class of models can be obtained from higher-dimensional theories such as strings and M-theory via Kaluza-Klein reductions. This provides an understanding of SYK models from the higher-dimensional point of view. Indeed, many higher-dimensional extremal black holes have near-horizon geometries of the form AdS 2 × M [44][45][46][47][49][50][51][52], and the near-horizon region can be effectively described by D = 2 dilaton gravities (1.3) in many situations [43][44][45][46][47][48].
However, the above embeddings of JT gravity are at the linear perturbation level. From the higher-dimensional point of view, AdS 2 spacetimes typically arise from the near-horizon geometry of some extremal black holes. Thus JT gravity, as a linear perturbation of the AP class, describes the leading-order approximation away from extremality. Since JT gravity itself only captures the IR behavior of SYK models, these embeddings have to deal with the subtleties associated with these two competing approximations. In this paper we seek exact embeddings of JT gravity in higher dimensions so that the leading-order approximation away from extremality has a broader range of validity. The simplest example is perhaps Einstein gravity with a cosmological constant in three dimensions, with the reduction ansatz being ds 2 3 = ds 2 2 + φ 2 dz 2 [14,[53][54][55][56]. Since AdS 3 emerges naturally in strings and M-theory, the reduction ansatz provides a direct link between SYK and string theories. Ref. [53] also obtained JT gravity coupled to a Maxwell field from the STU supergravity model in four dimensions [57] by Kaluza-Klein reduction on S 2 .
In this paper, we present an alternative exact embedding of JT gravity in higher dimensions. We construct a class of Einstein-Maxwell-Dilaton (EMD) theories in general D dimensions with appropriate dilaton couplings and scalar potential. We demonstrate that JT gravity can be obtained from the EMD theories via consistent Kaluza-Klein reductions. It turns out that for D = 4 and D = 5, the EMD theories without the scalar potential can be embedded in supergravities, which themselves can be obtained from the Kaluza-Klein reduction of strings and M-theory.
The paper is organized as follows. In Sect. 2, we present a class of EMD theories in D dimensions and express them in the f (R)-frame, where the manifest kinetic term of the dilaton vanishes. We then perform consistent Kaluza-Klein reductions and show that JT gravity can indeed emerge. In Sect. 3, we consider solutions in JT gravity and oxidize them to become solutions of the EMD theories in higher dimensions. We find that a class of JT gravity solutions are related to previously-known time-dependent extremal black holes. In Sect. 4, we consider the EMD theories in four and five dimensions and show that they are consistent truncations of the bosonic sector of supergravities and/or gauged supergravities. This allows the solutions to be embedded in strings and M-theory. We conclude the paper in Sect. 5. In the appendix, we give a detailed review of how JT gravity can give rise to the Schwarzian action.
A class of EMD theories
We begin with a class of EMD theories considered in [58]. The theories consist of the metric, a scalar φ, and two U(1) gauge fields A and Ā. The Lagrangian in the Einstein frame is given by (2.1), where F = dA and F̄ = dĀ, and the dilatonic coupling constants satisfy a fixed constraint. The scalar potential V is inspired by those of gauged supergravities and is given in terms of a super-potential W [58]. For reasons that will become apparent, in this paper we are particularly interested in the dilaton coupling choice (2.4), for which the potential takes a correspondingly simple form. Note that if we set Ā = 0 and also turn off the scalar potential, the remainder of the theory is simply Kaluza-Klein theory with A being the Kaluza-Klein vector. The EMD theories were inspired by gauged supergravities. In D = 4 and 5, the Lagrangians are consistent truncations of the bosonic sector of the respective gauged STU models. Their embeddings in M-theory and type IIB strings via Kaluza-Klein sphere reductions were given in [59]. We now make a constant shift of φ and redefine the coupling constants. Dropping the tilde, the Lagrangian takes the same form as (2.1), but with V now given by (2.7). This allows us to set the parameters g 1 and g 2 to zero independently.
Conformal transformation
We now make a conformal transformation [60], and the Lagrangian, after dropping the tildes, becomes (2.9). The conformal transformation (2.8) is such that the dilaton's kinetic term is absent and its equation of motion becomes algebraic. This can generally be done in supergravities or gauged supergravities, and such a conformally transformed theory was referred to as the f (R)-version of supergravity in [60]. In this paper we shall also refer to (2.9) as gravity in the f (R)-frame.
In order to make contact with JT gravity through dimensional reduction, we take Ā = 0 and g 2 = 0; the resulting EMD theory in the f (R)-frame is (2.11). In four and five dimensions, these theories can be obtained by taking an appropriate limit of the bosonic sector of the STU gauged supergravity models. When g 1 = 0, they can be truncated consistently from supergravities and hence can be embedded in string and M-theory.
JT gravity from Kaluza-Klein reduction
In this subsection, we show that JT gravity can be obtained from the EMD theory (2.11) by a consistent Kaluza-Klein reduction. The internal space is taken to be an Einstein space, whose Ricci scalar is given by (2.12). In order for the reduction to be consistent, we keep only the singlets of the isometry group of the internal space in the reduction ansatz, where μ = 0, 1 and x μ are, respectively, the indices and coordinates of the metric ds 2 2 . The Kaluza-Klein reduction in the Einstein frame down to dimensions greater than or equal to three was obtained in [61]. We find the reduced two-dimensional Lagrangian (2.14) from (2.11). The equation of motion associated with A can be solved in terms of an integration constant λ, associated with the electric charge, and the volume 2-form of ds 2 2 . The variations with respect to the remaining scalars yield (2.17) and (2.18). Finally, when (2.17) and (2.18) are applied, the equation of motion obtained by varying g μν can be traced; the traced equation, together with the two scalar equations (2.17) and (2.18), implies that we can take ϕ = ϕ 0 to be constant, provided that the charge parameter λ takes a specific value. The remaining equations can be summarized as (2.22). It is now straightforward to see that Eq. (2.22) can be derived from the action of JT gravity (1.2). In Appendix A, we review how the JT action, together with boundary terms of the Gibbons-Hawking type and the holographic counterterm, gives rise to the Schwarzian action in the appropriate AdS 2 background. (A more general argument for how the Schwarzian action arises from AP models was presented in [53].) To conclude, JT gravity with the cosmological constant (2.23) can be obtained from the Kaluza-Klein reduction of the D-dimensional EMD theory (2.11) with a consistent reduction ansatz. In the case of D = 3, nontrivial results require absorbing the (D − 3) factor into g 1 2 ; it is instructive simply to introduce g̃ 1 2 = (D − 3) g 1 2 and declare that g̃ 1 2 is non-vanishing in D = 3. To be specific, the D = 3 theory can be written in the Einstein frame and in the f (R)-frame, and the reduction ansatz from D = 3 to D = 2 then yields the JT theory (1.2), with Λ 0 determined by g̃ 1 2 . Note that the equations of motion of the resulting JT gravity are independent of the constant parameter ϕ 0 .
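As a small self-contained cross-check of the two-dimensional geometry that the reduction lands on, not taken from the paper, the sketch below verifies symbolically that a metric of the form ds^2 = -f(r) dt^2 + dr^2/f(r) with f = g1^2 r^2 (an assumed normalization for illustration; the paper's precise relation between g1 and Λ 0 may differ) has constant negative Ricci scalar, i.e., is an AdS 2 vacuum.

```python
import sympy as sp

t, r, g1 = sp.symbols('t r g1', positive=True)
f = g1**2 * r**2                      # assumed blackening function of the AdS2 vacuum

# Metric ds^2 = -f(r) dt^2 + dr^2 / f(r)
g = sp.Matrix([[-f, 0], [0, 1/f]])
ginv = g.inv()
x = [t, r]

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d]))/2 for d in range(2))
           for c in range(2)] for b in range(2)] for a in range(2)]

def ricci(mu, nu):
    # R_{mu nu} = d_a Gamma^a_{mu nu} - d_nu Gamma^a_{mu a} + Gamma Gamma terms
    return sum(sp.diff(Gamma[a][mu][nu], x[a]) - sp.diff(Gamma[a][mu][a], x[nu])
               + sum(Gamma[a][a][b]*Gamma[b][mu][nu] - Gamma[a][nu][b]*Gamma[b][mu][a]
                     for b in range(2)) for a in range(2))

R = sp.simplify(sum(ginv[m, n]*ricci(m, n) for m in range(2) for n in range(2)))
print(R)                               # -> -2*g1**2, a constant negative curvature
```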
The D = 3 case perfectly illustrates the difference between our embedding of JT gravity in higher dimensions and those discussed previously in the literature. In [14] (see also [53][54][55][56]), the higher-dimensional theory is pure AdS gravity and the scalar in JT gravity arises as the radius of the compactifying circle z; the running of the JT scalar is driven by this breathing mode of the internal space. On the other hand, in our embedding, the internal radius is fixed consistently by the equations of motion to be a constant, and the JT scalar is a direct descendant of the dilaton in higher dimensions. Both embeddings are possible because JT gravity is not conformal in that it has a running dilaton, which may arise directly from the higher-dimensional theory or from the modulus of the compactifying space in a theory that has no scalar.
Oxidations and time-dependent black holes
In the previous section, we demonstrated that JT gravity in two dimensions can be obtained from the consistent Kaluza-Klein reduction of the EMD theory (2.11) on an internal Einstein space. This allows us to oxidize all the two-dimensional solutions to higher dimensions. In particular, we find that some special two-dimensional solutions become the decoupling limit of time-dependent extremal black holes.
Oxidation of the solutions
The two-dimensional metric is Einstein with a negative cosmological constant. We can thus take the metric to be of the form below, where α, β, γ are integration constants; the theory also admits the black hole solution given next. For g 1 = 0, corresponding to V = 0 in D dimensions, it follows from (2.23) that we must require k = 1, and hence the two-dimensional solutions can be lifted to become D-dimensional ones, taking the form shown in the Einstein frame. For g 1 ≠ 0, we can have all of k = 1, 0, −1, and Λ 0 is given by (2.23). The corresponding D-dimensional solutions then follow.
Time-dependent extremal black holes
The general EMD theory (2.1) admits a class of charged black hole solutions [58]. For the dilaton coupling choice (2.4) and vanishing scalar potential, the charged extremal black hole is given by To make contact with the solutions in JT gravity, we set q = 0 and furthermore we take the decoupling limit with 1 in H dropped. The solution becomes where we have redefined the time coordinate byt = 2 √ t. Comparing with the Kaluza-Klein reduction ansatz (3.6), we see that ϕ 0 = r 0 . Performing the Kaluza-Klein reduction on the S D−2 sphere, and redefining the r coordinate and parameters bỹ we arrive, after dropping all the tildes, at the two-dimensional solution (3.2) of JT gravity with α = 0.
Embeddings in strings and M-theory
In the previous sections, we show that JT gravity can be obtained from consistent Kaluza-Klein reduction of a class of EMD theory (2.11) in general D dimensions. In four and five dimensions, the EMD theories can be embedded in supergravities, allowing exact embeddings of JT gravity in strings and M-theory. This will provide a better understanding of the time-dependence of the solutions.
The D = 5 theory
We first consider the D = 5 EMD theory (2.11). It follows from the discussion in Sect. 2 that in the Einstein frame the Lagrangian is given by (4.1). For g 1 = 0, the theory can be embedded into the U(1) 3 supergravity with two field strengths set equal and the third set to zero. The theory can be obtained from N = (1, 0) supergravity in D = 6 via Kaluza-Klein reduction on S 1 [63]. The relevant six-dimensional theory is Einstein gravity coupled to a self-dual 3-form, and the (truncated) reduction ansatz is given by (4.3). The D = 5 time-dependent extremal black hole (3.7) with q = 0 then lifts to a six-dimensional solution. This is the self-dual string, with the flat worldsheet metric being the two-dimensional Milne universe. The solution can be further lifted to become the intersecting D1/D5 system, or M2/M5-branes. The near-horizon geometry has an AdS 3 factor whose boundary is the two-dimensional Milne universe rather than Minkowski spacetime.
We can also lift the full two-dimensional solution (3.2) back to D = 6 directly. In particular, when α = 0, the lifted solution is precisely the near-horizon geometry of the Milne self-dual string discussed above.
The D = 4 theory
The EMD theory (2.11) in four dimensions in the Einstein frame is given by (4.7). For g 1 = 0, the theory can be embedded into the STU supergravity model, with three of the gauge potentials set equal and the fourth set to zero. The embedding of JT gravity in this theory is part of the more general Kaluza-Klein reductions obtained in [53]. The D = 4 theory (g 1 = 0) can be obtained from the S 1 reduction of minimal supergravity in five dimensions, with the reduction ansatz (4.8). The five-dimensional Maxwell field descends directly to four dimensions. The time-dependent black hole solution becomes (4.9). Turning off the charge by setting q = 0, and hence H = 1, the metric describes a Kasner-type cosmological solution.
We can also lift the solution (3.5) to five dimensions, with the two-dimensional part given in (3.2). This solution with α = 0 is precisely the decoupling limit of (4.9) with the 1 in H dropped. The D = 4 EMD theory with g 1 = 0 can also be embedded in M-theory, with the reduction ansatz involving a Euclidean Ricci-flat Calabi-Yau space with a harmonic 2-form I (2) . The oxidized solution can be viewed as the M2/M2/M2 intersection, or an M2-brane wrapping the Calabi-Yau 2-cycles.
We have examined the EMD theories (2.11) for D = 4 and 5 with g 1 = 0. These theories can be embedded into supergravities. This provides many routes from JT gravity to strings and M-theory, since there are many different ways of embedding these EMD theories in the fundamental theories (see e.g. [64]). The higher-dimensional solutions are related to M-branes, D-branes and their intersections; see e.g. [65][66][67].
D = 4, 5 theories with g 1 ≠ 0
When g 1 ≠ 0, the theory can also be obtained by taking an appropriate limit of gauged supergravities, namely by taking g 2 = 0 in the scalar potential (2.7). However, the sphere reduction ansätze constructed for g 1 g 2 ≠ 0 [59] do not appear to allow the g 2 = 0 limit. It turns out that although the D = 4, 5 theories with g 2 = 0 and g 1 ≠ 0 cannot be obtained from M-theory or strings directly by sphere reductions, they can be obtained from the S 1 reduction of gauged supergravities that can themselves be reduced from higher dimensions on spheres.
For example, the Lagrangian for the bosonic sector of minimal gauged supergravity in five dimensions is (4.12). For the purely electric ansatz we consider in this paper, the last F∧F∧A term can be dropped. The reduction ansatz (4.8), with F descending directly, gives rise to precisely the D = 4 EMD theory (4.7), with g 1 = 2g; the lifting of (3.2) then leads to the corresponding five-dimensional solution. The five-dimensional gauged supergravity (4.12) can itself be obtained from type IIB supergravity on S 5 , and the reduction ansatz can be found in [59]; its (singular) embedding in M-theory was also obtained [68]. The effective cosmological constant Λ 0 (2.23) in JT gravity now receives contributions from D3-brane charges as well as from the gauged-supergravity R-charges associated with rotations of the D3-branes. The D = 5 theory can also be obtained from D = 6 Einstein gravity coupled to a cosmological constant and a self-dual 3-form (4.14). The reduction ansatz (4.3) gives precisely (4.1) with g 1 2 = 5g 2 /2, and the resulting solution follows. The theory (4.14) is a consistent truncation of the bosonic sector of D = 6, N = (1, 1) gauged supergravity, which can be embedded in massive type IIA theory [69]. The effective cosmological constant in JT gravity in this embedding is related to the D4/D8-brane charges. Finally, it is worth noting that for F = 0, the theory (2.11) in general D dimensions can be obtained from the circle reduction of Einstein gravity with a cosmological constant in D + 1 dimensions [60].
Conclusions
In this paper, we demonstrated that JT gravity in two dimensions could be obtained from the consistent Kaluza-Klein reduction on a class of EMD theories (2.11) in general D dimensions. For D = 4 and 5, the EMD theories are truncations of the bosonic sector of supergravities. This allows one to embed JT gravities in strings or M-theory, providing stringy interpretations of the SYK model.
The exact embeddings of this paper also allow one to understand the solutions of JT gravity in the light of higher-dimensional theories. For g 1 = 0, we find that a class of JT gravity solutions are related to the time-dependent extremal charged black holes of the EMD theories. The solutions can be further lifted to become, for example, the intersecting D1/D5-brane system, where the worldsheet is the 2d Milne universe instead of the more traditional 2d Minkowski spacetime. They can also be lifted to time-dependent M2/M2/M2 intersections. For g 1 ≠ 0, we find that the cosmological constant in JT gravity is related to D3-brane or D4/D8-brane charges, depending on the specific route of the embedding.
The exact embeddings of this paper imply that we can obtain the Schwarzian action directly in higher dimensions. The subtlety is that the worldsheet or worldvolume of the branes should be described by the Milne metric rather than the usual Minkowski metric. The fact that the Milne universe appears on the worldsheet or worldvolume is tantalizing. Non-dilatonic extremal p-branes such as the M-branes or the D3-brane have in general AdS d × S D−d as their near-horizon geometry. When the boundary (the brane worldvolume) of the AdS d is the Milne universe, the geometry takes the form given above, where d 2 d−2,−1 is taken to be some compact metric with negative cosmological constant. The Kaluza-Klein reduction on d 2 d−2,−1 naturally yields a two-dimensional gravity with a nearly-AdS 2 vacuum geometry. It would be of great interest to investigate the corresponding holographic NCFT 1 .
Appendix A (excerpt): a prime denotes a derivative with respect to u, and ε is an infinitesimal constant representing the UV cutoff. Equation (A.3) implies that t(u) and z(u) are not independent dynamical fields asymptotically; they are related as in (A.4). The rigid AdS geometry causes the bulk action to vanish identically, so the total action is given by the boundary action (A.6). To evaluate the normal vector n = n μ ∂ μ at the boundary, we note the tangent vector (A.7), where (A.4) has been used. The normal vector is defined by T μ n μ = 0 and n μ n ν g μν = 1 (A.8). Up to order ε 2 , the component n 0 acquires a correction proportional to ε 2 Sch(t(u), u), where Sch(t(u), u) is the Schwarzian derivative Sch(t(u), u) = t‴/t′ − (3/2)(t″/t′) 2 . The normal vector n μ , the tangent vector T μ and the metric g μν satisfy an identity from which the extrinsic curvature can be rewritten as (A.12). Substituting (A.7) and (A.9) into (A.12), we find K = 1 + ε 2 Sch(t(u), u) + ⋯ (A.13), where the dots represent higher powers of ε. It is then straightforward to see that (A.6) is given by the Schwarzian action built from Sch(t(u), u). It is of interest to note that had one considered inf as a dynamical field, then Eq. (A.16) would imply that the system had a ghost excitation. | 5,384.4 | 2018-09-01T00:00:00.000 | [ "Physics" ] |
Weighted Brain Network Metrics for Decoding Action Intention Understanding Based on EEG
Background: Understanding the action intentions of others is important for social and human-robot interactions. Recently, many state-of-the-art approaches have been proposed for decoding action intention understanding. Although these methods have some advantages, it is still necessary to design other tools that can more efficiently classify the action intention understanding signals. New Method: Based on EEG, we first applied phase lag index (PLI) and weighted phase lag index (WPLI) to construct functional connectivity matrices in five frequency bands and 63 micro-time windows, then calculated nine graph metrics from these matrices and subsequently used the network metrics as features to classify different brain signals related to action intention understanding. Results: Compared with the single methods (PLI or WPLI), the combination method (PLI+WPLI) demonstrates some overwhelming victories. Most of the average classification accuracies exceed 70%, and some of them approach 80%. In statistical tests of brain network, many significantly different edges appear in the frontal, occipital, parietal, and temporal regions. Conclusions: Weighted brain networks can effectively retain data information. The integrated method proposed in this study is extremely effective for investigating action intention understanding. Both the mirror neuron and mentalizing systems participate as collaborators in the process of action intention understanding.
Machine learning is an extremely important tool that is widely applied in biomedical engineering (Dindo et al., 2017; Mcfarland and Wolpaw, 2017; Ofner et al., 2017; Pereira et al., 2017; Bockbrader et al., 2018). For studying action intention understanding, good classification accuracy is one of the most critical factors (Dindo et al., 2017; Ofner et al., 2017; Bockbrader et al., 2018). In recent years, many researchers have performed numerous experiments with machine learning, but most of the classification results are unsatisfactory (Zhang et al., 2015; Liu et al., 2017; Pereira et al., 2017). After reviewing these previous studies, we identified two important reasons for these poor classification results: the extraction of uninformative features, and the use of a small number of samples for training the classifier model. For feature extraction, many methods (e.g., time-domain and frequency-domain analyses) have been introduced to neuroscience (Ortigue et al., 2010; Carter et al., 2011; Ge et al., 2017; Liu et al., 2017; Pereira et al., 2017; Pomiechowska and Csibra, 2017; Zhang et al., 2017), but these methods do not perform sufficiently well in practice. For sample collection, due to limitations in recruiting participants, it is very difficult to obtain a large number of samples (Zhang et al., 2015; Liu et al., 2017; Pereira et al., 2017).
In view of the above, in this study we implement classification tasks for EEG signals related to action intention understanding from the perspectives of both feature extraction and sample collection. To extract useful features, we first obtain reliable time series in the source space with sLORETA. We then use the phase lag index (PLI) (Stam et al., 2007) and the weighted phase lag index (WPLI) (Vinck et al., 2011) to construct dynamic brain networks in multiple micro-time windows and specific frequency bands. It is worth mentioning that many other methods (synchronization likelihood, Stam and Dijk, 2002; phase locking value, Lachaux et al., 1999; Pearson correlation, etc.) suffer from volume conduction effects when computing brain networks from EEG signals (Stam et al., 2007; Niso et al., 2013), whereas both the PLI and WPLI handle this problem well (Stam et al., 2007; Vinck et al., 2011). In recent studies of action intention understanding, Zhang et al. (2017) obtained better results with the WPLI than the results of Zhang et al. (2015), which were based on phase synchronization and Pearson correlation. Hence, we chose the PLI and WPLI to construct the brain networks. Returning to the main point, we finally calculate a number of graph complexity measures to use as classification features. Notably, many studies attach great importance to using a binary network to decode brain signals related to action intention understanding (Zhang et al., 2015). However, others argue that network thresholding easily results in the loss of useful information (Phillips et al., 2015; Ahmadlou and Adeli, 2017), mainly because the resulting binary networks are very sensitive to the chosen threshold. Recently, Ahmadlou and Adeli (2017) proposed a new approach that adopted two weighted undirected graph complexity measures to study autism and aging and achieved satisfactory statistical results. Considering these facts, it is natural to use a weighted brain network to decode action intention understanding. To collect a larger number of samples, we converted each subject in a given class of brain signals into two samples by constructing two brain networks, one from the PLI and the other from the WPLI. Because our final goal is to classify the different kinds of brain signals related to action intention understanding, this transformation is feasible.
Our method is mainly based on the state-of-the-art dynamic time-frequency brain network technique, which has numerous advantages. For instance, it can consider both time and frequency feature information, locate activated brain areas, and discover potential topological relationships among regions of interest (ROIs). These merits can help us decode action intention understanding more comprehensively than single time or frequency analyses (Rubinov and Sporns, 2010;Zhang et al., 2015Zhang et al., , 2017Vecchio et al., 2017;Cignetti et al., 2018). The scheme of sample reconstruction is very important for this study, as it improves the classification accuracy efficiently, especially for the classification of similar action intention stimuli.
Subjects
A total of 30 healthy subjects were recruited for EEG data acquisition. No participant was taking prescribed medication or had any neurological disease. Before the start of the experiments, they were asked to read and sign an informed consent form. After finishing the tasks, all participants received monetary compensation. After deleting the data of 5 subjects that had been heavily degraded by bad channels, we finally retained EEG data from 25 subjects (17 males, 8 females; age 19-25 years, mean ± SD: 22.96 ± 1.54; all right-handed). This study was supported by the Academic Committee of the School of Biological Sciences and Medical Engineering, Southeast University, China.
Experimental Paradigm
For data acquisition, all participants were told to view three hand-cup interaction pictures that were shown on a computer monitor with E-prime 2.0. The subjects were asked to only silently judge the intention of the hand-cup interaction. The three action intentions were drinking water, moving the cup and simply touching the cup. This design comes from the research of Ortigue et al. (2010). Figure 1 shows the experimental stimuli and procedure. The three kinds of hand-cup interaction stimuli are presented in Figure 1A, where Ug (use grip) denotes a hand that is grasping a cup with the intention of drinking water, Tg (transport grip) represents a hand that is grasping a cup with the intention of moving it, and Sc (simple contact) denotes a hand that is touching a cup without any clear intention. Figure 1B illustrates the experimental stimuli presented in a trial, which are shown sequentially over the indicated time course. During the experiment, a white cross first appeared in the center of the screen for 150 ms. Then, a cup was shown on the screen for 500 ms. After the cup disappeared, a hand-cup interaction stimulus was displayed on the screen for 2,000 ms. Once the hand-cup interaction appeared on the screen, the subjects were to immediately and silently judge the actor's intention. Before the next trial, the cross was shown again for a random time that varies from 1,000 to 2,000 ms.
All participants underwent a 12-trial practice session before the formal experiment. Across all participants, the session lasted an average of approximately 24 min. To alleviate the visual fatigue caused by repeated stimulation, the cup was presented in a color chosen randomly from seven different colors on each trial. Before the formal EEG acquisition, all participants were informed that they only needed to judge the intention of the actor's gesture and that the color of the cup had nothing to do with the performer's intention. The actor's gesture is therefore more salient than the cup color in the stimulus procedure, and mixing in the color variable does not affect the classification. In this research, each action intention condition was shown in 98 trials.
Data Collection
The signals were obtained by using 64 AgCl electrodes positioned with the international 10-20 system. We set the sampling rate to 500 Hz. The M1 electrode served as a reference electrode and was placed on the left mastoid, and the GND electrode served as a ground electrode and was placed at the center of the frontal scalp. Additionally, four other channels (HEOR, VEOU, HEOL, and VEOL) were placed around the eyes of the participants to record electrooculographic (EOG) signals. All the data collection tasks were carried out in Neuroscan 4.3.
Preprocessing the Raw Data
To obtain clean data, we applied two popular neuroscience computer programs, Neuroscan 4.3 and EEGLAB 14.0 (Arnaud and Scott, 2004), to implement several preprocessing steps for the raw EEG signals.
Based on previous experimental experience, it remains difficult to clean these EEG data with ICA in EEGLAB; hence, we applied ocular artifact processing in Neuroscan instead of ICA. Given that the mastoid reference is active and effective in detecting somatosensory evoked potentials, we re-referenced the data from the unilateral mastoid electrode (M1) to the average of the bilateral mastoid electrodes (M1, M2). As with the ocular processing, the re-referencing was also conducted in Neuroscan.
After finishing the ocular processing and re-referencing, we selected the required electrodes in EEGLAB (see Figure 2A; a total of 60 channels were preserved for each subject). We then applied the basic FIR filter in EEGLAB to extract the full-band (1-30 Hz) data, segmented the EEG data by event type into epochs from −0.65 s to 2.5 s, and subtracted the baseline obtained from −0.65 to 0 s. Finally, we rejected artifacts using a threshold of ±75 µV; i.e., epochs whose voltages stayed between −75 and 75 µV were retained, and the rest were removed as artifacts. A total of 679 trials were deleted, and an average of 267 trials were retained per subject.
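For readers who prefer a scripted workflow, an equivalent preprocessing pipeline can be sketched in MNE-Python; this is not the Neuroscan/EEGLAB toolchain used in the paper, and the file name and the assumption that stimulus triggers are stored as annotations are placeholders.

```python
import mne

# Equivalent preprocessing sketched with MNE-Python (the paper used Neuroscan + EEGLAB)
raw = mne.io.read_raw_cnt("subject01.cnt", preload=True)   # hypothetical Neuroscan file
raw.filter(l_freq=1.0, h_freq=30.0)                        # 1-30 Hz band-pass (FIR by default)

events, event_id = mne.events_from_annotations(raw)        # assumes triggers are annotated
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.65, tmax=2.5,
                    baseline=(-0.65, 0.0),                  # baseline correction window
                    reject=dict(eeg=75e-6),                 # drop epochs exceeding +/-75 uV
                    preload=True)
print(epochs)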
Construction of a Functional Connectivity Matrix
For constructing a functional connectivity matrix, many algorithms have been proposed in recent years (Schreiber, 2000; Baccalá and Sameshima, 2001; Nolte et al., 2004; Niso et al., 2013). However, most of these methods are unable to contend with volume conduction. To effectively address this problem, Stam et al. (2007) proposed the phase lag index (PLI), defined as PLI = | ⟨ sign(Δθ(t_n)) ⟩ | = | (1/N) Σ_{n=1}^{N} sign(Δθ(t_n)) |, where Δθ(t_n) denotes the instantaneous phase difference between time series A(t) and B(t) at the n-th sample point. We obtained the instantaneous phase by applying the Hilbert transform.
However, through many experiments it has been found that the PLI still has some shortcomings; a significant weakness is that it is easily affected by noise. Building on the PLI, researchers designed a reinforced method named the weighted phase lag index (WPLI) (Vinck et al., 2011). Let ℑ(X) be the imaginary component of the cross-spectrum between time series A(t) and B(t). Then the WPLI is defined by WPLI = | E{ℑ(X)} | / E{ |ℑ(X)| } = | E{ |ℑ(X)| sign(ℑ(X)) } | / E{ |ℑ(X)| }, where E{·} and |·| denote the mean and absolute value operations, respectively, and sign is the signum function.
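A minimal sketch of both indices following the definitions above; the Hilbert-based sample-wise cross-spectrum estimator is one simple choice among several, and no trial averaging or band filtering is done here.

```python
import numpy as np
from scipy.signal import hilbert

def pli_wpli(a, b):
    """Phase lag index and weighted phase lag index between two 1-D signals."""
    za, zb = hilbert(a), hilbert(b)                      # analytic signals
    phase_diff = np.angle(za) - np.angle(zb)
    pli = np.abs(np.mean(np.sign(np.sin(phase_diff))))   # PLI = |<sign(sin(delta phi))>|

    im = np.imag(za * np.conj(zb))                       # imaginary part of cross-spectrum
    wpli = np.abs(np.mean(im)) / np.mean(np.abs(im))     # WPLI = |E{Im}| / E{|Im|}
    return pli, wpli

# Toy example: two noisy 10 Hz sinusoids with a fixed phase lag
t = np.linspace(0, 2, 1000)
a = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
b = np.sin(2 * np.pi * 10 * t - np.pi / 4) + 0.3 * np.random.randn(t.size)
print(pli_wpli(a, b))
```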
In order to carry out whole-brain research, we first transformed the time series of the 60 scalp electrodes into 84 brain regions of interest in the source space (see Figure 2B) with sLORETA, and then used the PLI and WPLI to construct functional connectivity matrices in 5 frequency bands and 63 time windows. Notably, we applied the sLORETA filter to extract the data for the specific frequency bands (1-4, 4-8, 8-13, and 13-30 Hz; i.e., the delta, theta, alpha, and beta sub-bands, respectively). Because many experiments in the following sections involve time windows, the correspondence among sample points, time ranges, and time window numbers is illustrated in Figure 3.
As for the brain network, each brain region is defined as a network node, and the functional connection between any two regions denotes an edge of the network in this study.
Network Metrics
After obtaining the functional connectivity matrices computed from the PLI and WPLI, we directly calculated the network metrics of these matrices. In this study, we applied nine graph measures to decode action intention understanding: graph index complexity Cr (Kim and Wilhelm, 2008), graph density GD (Gomezpilar et al., 2017), Shannon graph complexity SGC (Gomezpilar et al., 2017), average neighbor degree K (Barrat et al., 2004; Rubinov and Sporns, 2010), efficiency complexity Ce (Kim and Wilhelm, 2008), global efficiency Ge (Latora and Marchiori, 2001; Rubinov and Sporns, 2010), clustering coefficient C (Onnela et al., 2005), characteristic path length L (Watts and Strogatz, 1998), and small-worldness SW (Humphries and Gurney, 2008; Rubinov and Sporns, 2010). A sketch of how a few of these weighted metrics can be computed is given below.
FIGURE 3 | The corresponding relation among sample points, time ranges, and time window numbers. The blue, green, and yellow rows denote the sample points, time ranges, and time window numbers, respectively. For example, the digit "1" in the yellow row corresponds to "−650 to −600" in the green row and "1 to 25" in the blue row; that is, the first time window spans −650 to −600 ms and contains the 1st to the 25th sample points.
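An illustrative sketch of weighted metrics on a connectivity matrix using NetworkX; the toy matrix stands in for one ROI-by-ROI PLI/WPLI matrix, and these formulas are common implementations rather than the exact definitions of the cited papers.

```python
import numpy as np
import networkx as nx

def weighted_metrics(W):
    """Graph density, weighted clustering, and characteristic path length
    for a symmetric connectivity matrix W with entries in [0, 1]."""
    np.fill_diagonal(W, 0.0)
    G = nx.from_numpy_array(W)

    density = W.sum() / (W.shape[0] * (W.shape[0] - 1))          # weighted graph density
    clustering = nx.average_clustering(G, weight="weight")        # Onnela-style weighted clustering
    # Convert connection strengths to distances (1/w) before computing path lengths.
    D = nx.Graph()
    D.add_weighted_edges_from((u, v, 1.0 / d["weight"])
                              for u, v, d in G.edges(data=True) if d["weight"] > 0)
    path_length = nx.average_shortest_path_length(D, weight="weight")
    return {"GD": density, "C": clustering, "L": path_length}

# Toy connectivity matrix (10 nodes) standing in for a PLI/WPLI matrix
rng = np.random.default_rng(0)
A = rng.uniform(0.05, 1.0, size=(10, 10))
W = (A + A.T) / 2
print(weighted_metrics(W))
```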
After obtaining the network metrics, we applied them to decode action intention understanding, including classifying different intentions and exploring how brain activity changes over time. Figure 4 shows the steps of our method. The key point of the new method is that combining the two approaches (PLI and WPLI) for constructing the brain network increases the number of training samples and features, in contrast to the single methods that use only PLI or only WPLI to extract features. Our binary classification task is performed at the group level. Each stimulus has 25 samples (the number of participants), 9 graph metrics and 63 time windows. Hence, for a single method (PLI or WPLI) using the fusion time windows, the dimensions of the dataset are 50 × 567 (50 samples, 567 features) for each frequency band and 50 × 2,835 (50 samples, 2,835 features) for the fused bands. For the new method (PLI+WPLI) using the fusion time windows, the dimensions of the dataset are 100 × 567 for each frequency band and 100 × 2,835 for the fused bands. Similarly, for the dynamic time windows we obtain the dataset dimensions 50 × 9 for PLI or WPLI in single bands, 50 × 45 for PLI or WPLI in the fused bands, 100 × 9 for the new method in single bands, and 100 × 45 for the new method in the fused bands.
RESULTS
In this section, we describe the experimental results obtained mainly by using the weighted brain network metrics. Our experimental results consist of four parts: time series analysis, feature selection, binary classification and brain network analysis. Details of the four parts are given in the following subsections.
Time Series Analysis
To determine whether differences exist under the different stimuli, we analyzed the voltage signals from 650 ms before to 2,500 ms after formal stimulation. Figure 5 shows the average ERPs of the three hand-cup interactions across all subjects and all trials. As indicated by t-tests (p < 0.05), the three ERPs are often significantly different between −650 and 2,500 ms; in particular, the amplitudes around P300 (∼0-600 ms) of the Ug, Tg, and Sc ERPs differ markedly. Additionally, each kind of ERP shows a clear and pronounced P300 component (see the magenta dotted line).
Feature Selection
In this study, we selected the weighted brain network metrics to use as classification features. As introduced previously, all the functional connectivity matrices were constructed with the PLI and WPLI, and the nine metrics-Cr, GD, SGC, K, Ce, Ge, C, L, and SW-were computed in the delta, theta, alpha, beta and full frequency bands. Figure 6 shows the dynamic changes in the nine metrics in both the alpha and beta bands. After applying t-tests (p < 0.05), we can see that each metric has a different effect, with some reflecting greater differences than others between the two specific frequency bands. For instance, GD, Ge, K, and L are better than Cr, SGC, Ce, C, and SW in the alpha band. Additionally, we can also see that most of the metrics can effectively reflect the differences of the paired intentions in many time windows, especially in the time windows around some specific components, such as C100, N170, and P300.
Classification Accuracy
In this study, we adopted weighted brain network (PLI and WPLI) metrics to carry out action intention understanding classification. Binary classification, i.e., a one-vs.-one strategy, was implemented, with three pairwise tasks: "Ug-vs-Tg", "Tg-vs-Sc", and "Ug-vs-Sc". The classical SVM classifier was chosen for data classification; we selected a polynomial kernel function with the order set to 1. For each classification task, we used 5-fold cross-validation to avoid overfitting, repeated 50 times, and the reported classification results are the means over the 50 repetitions. Figure 7 shows the classification results in the individual time windows. From subfigures 1-15, we can see peak accuracies around the specific components, e.g., C100, N170, and P300, especially in the alpha and beta frequency bands. A sketch of this classification setup follows below.
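A hedged scikit-learn sketch of the described setup; the feature matrix here is random noise standing in for the graph-metric features (e.g., 100 samples by 567 features for the fusion windows), so the printed accuracy is meaningless except as a demonstration of the procedure.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# X: samples x features matrix of graph metrics, y: labels of one pairwise task (e.g., Ug-vs-Sc)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 567))            # placeholder features
y = np.repeat([0, 1], 50)                  # 50 samples per class

clf = SVC(kernel="poly", degree=1)                          # polynomial kernel of order 1
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=50, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)                  # 50 repetitions of 5-fold CV
print(f"mean accuracy over 50 x 5-fold CV: {scores.mean():.3f}")
```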
Additionally, we can also see that the classification accuracies for both "Tg-vs-Sc" and "Ug-vs-Sc" are better than for "Ug-vs-Tg" in most cases. However, except for a few time windows, the classification results in the remaining time windows are unsatisfactory: most of the classification accuracies are below 60%. Among the three methods, PLI, WPLI, and PLI+WPLI, none shows an obvious advantage in the single time windows.
FIGURE 5 | Average ERP at the group level under different stimulus conditions. The yellow, cyan, and magenta vertical lines represent the end times of the "+" symbol, cup, and hand-cup interaction presentations, respectively. The magenta, blue, and green curves denote the average amplitudes across all subjects for the three stimulus conditions Ug, Tg, and Sc, respectively. The magenta dotted line marks the start of the P300 component. The red, yellow, and cyan "*" symbols at the bottom of the plots denote p < 0.05 according to t-tests, corresponding to Ug-vs-Tg, Ug-vs-Sc, and Tg-vs-Sc, respectively.
FIGURE 6 | T-tests of brain network metrics. The red, green, and blue curves denote the values of the metrics under the Ug, Tg, and Sc conditions, respectively. The cyan, black, and magenta "*" symbols denote significant differences determined by t-tests (p < 0.05) for the Tg-vs-Sc, Ug-vs-Sc, and Ug-vs-Tg paired intentions, respectively. The horizontal and vertical axes represent the time window number and graph metric value, respectively.
FIGURE 7 | Classification accuracies in different time windows. Subfigures 1-5 show the classification results using graph metrics obtained from PLI-based brain networks in the five bands, subfigures 6-10 show the corresponding results for WPLI, and subfigures 11-15 show the results obtained by combining PLI and WPLI.
Figure 8 shows the classification accuracies in the fusion time windows, i.e., with the brain network metric features from all 63 time windows merged into one large dataset. As shown in Figure 8A, the PLI, PLI+WPLI, and WPLI methods have different average accuracies in the different frequency bands. Performance in the low-frequency band is the worst for both WPLI and our new method. Notably, all methods perform well in the alpha band. Additionally, the Ug-vs-Tg classification reaches levels no worse than those of Ug-vs-Sc and Tg-vs-Sc. Figure 8B compares the classification accuracies of the three methods. From the six subplots in Figure 8, we can see that our new method outperforms the other two methods except in the low-frequency band, where it performs worse than PLI. In terms of concrete results, most of the average classification accuracies of the novel method exceed 60%, with some approaching 80%, while some of the maximum classification accuracies approach 90%; for example, see the results for 4-8, 8-13, and 1-30 Hz. More details on the classification accuracies are given in Table 1.
To evaluate our classification model, we also calculated sensitivity and specificity, two important evaluation measures in machine learning, for each classification computed in the fusion time windows. As shown in Table 1, these two measures are also satisfactory, consistent with the classification accuracies.
Brain Network Analysis
To more effectively decode the brain signals related to action intention understanding, we also carried out brain network analyses with our novel method, from two perspectives: analyzing the difference in the whole-brain network between two kinds of action intentions with the rank-sum test, and identifying the connectivity edges that differ markedly. Both perspectives are based on the dynamic time windows and the two specific frequency bands, alpha and beta. Figure 9 shows the results of the pairwise statistical test for the whole brain. Many time windows are significantly different in both the alpha and beta frequency bands (red domains); in general, the alpha band outperforms the beta band in this regard. The time windows around the specific ERP components also exhibit significant differences (see Figures 3, 9), e.g., the 19th time window. We first performed the rank-sum test at a significance level of 0.01 and then carried out strict FDR correction at the same significance level; a sketch of this procedure is given below. Figure 10 shows the results of the t-tests for the connectivity edges in multiple time windows. Because there are 63 time windows in total, it is difficult to display all the brain graphs. Hence, for both the alpha and beta bands, we chose 8 time windows that showed significant differences for all three pairwise intentions in the whole-brain rank-sum test (see Figure 9), for example the 49th time window in the alpha band and the 55th time window in the beta band. These 8 time windows contain signals obtained both before and after presentation of the formal stimuli (see Figures 1, 3, 5, 10), which is sufficient for our study task.
FIGURE 9 | Rank-sum test for the whole brain network. The red domains denote significant differences (p < 0.01), and the blue domains represent no significant differences. Each row contains 63 micro-time windows.
From Figures 10A,B, we can see that there are more significant connectivity edges in the alpha band, while the beta band is sparser. Additionally, for the alpha band, there are more connectivity edges in the 24th, 41st, 49th, and 63rd time windows than in the others shown. Ug-vs-Tg and Ug-vs-Sc have more connectivity edges than Tg-vs-Sc. In both the alpha and beta frequency bands, many of the larger nodes are found in the frontal, parietal, and occipital lobes; a few larger nodes are found in the limbic lobe and sub-lobar regions.
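A minimal sketch of an edge-wise significance test with FDR correction; it uses the rank-sum test for every edge (the paper also reports t-tests for edges), and the connectivity values are random placeholders rather than the study's data.

```python
import numpy as np
from scipy.stats import ranksums
from statsmodels.stats.multitest import fdrcorrection

# Hypothetical edge-wise comparison over all unique edges of an 84-node network
rng = np.random.default_rng(0)
n_subjects, n_nodes = 25, 84
iu = np.triu_indices(n_nodes, k=1)                    # unique edges (upper triangle)
conn_ug = rng.uniform(0, 1, size=(n_subjects, iu[0].size))
conn_tg = rng.uniform(0, 1, size=(n_subjects, iu[0].size))

p_vals = np.array([ranksums(conn_ug[:, e], conn_tg[:, e]).pvalue
                   for e in range(iu[0].size)])
rejected, p_adj = fdrcorrection(p_vals, alpha=0.01)   # FDR correction at the 0.01 level
print("edges surviving FDR:", rejected.sum())
```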
DISCUSSION
The main aim of this study was to evaluate the performance of the novel method, which uses weighted brain network metrics obtained from both the PLI and WPLI to classify different action intention understanding signals and to explore the underlying neuronal mechanisms. Several important findings emerge from the experimental results and are discussed in the following subsections. Comparisons within the beta frequency band. The red, yellow, green, cyan, blue, and purple-red nodes are from the temporal lobe, the limbic lobe, the frontal lobe, the occipital lobe, the sub-lobar region, and the parietal lobe, respectively. The size of the node denotes its degree; the larger the size is, the greater the degree. The digits under the blue arrow represent the time window numbers. All connectivity edges were obtained by t-test after FDR correction (p < 0.01).
Analyses of Time Series and Feature Extraction
There are several specific ERP components (e.g., C100, N170) in the EEG time series, especially P300 (see Figure 5). This suggests that cognition of action intention understanding is closely correlated with these specific components, which is consistent with other authors' studies (Dong et al., 2010; Ortigue et al., 2010; Deschrijver et al., 2017; Zhang et al., 2017). For the five frequency bands, many significant differences (p < 0.05) occur at the time points of these components, which indicates that different intention understandings cause different degrees of brain activity in the same band. Aside from these specific components, some significant differences appear at other time points, e.g., from 400 to 2,000 ms. Beudt and Jacobsen (2015) found that the mentalizing system is activated later than the mirror neuron system, which often responds to early ERP components (e.g., C100 and N170). Ge et al. (2017) and Zhang et al. (2017) also found that the mentalizing system deeply processes the stimulus following the reaction of the mirror neuron system. Thus, there are many significant differences after 400 ms. Overall, Figure 5 shows that our data are clean and reliable and can be used for the other experimental tasks.
Our study is mainly based on micro-time windows. Each window was set to a width of 50 ms, and a total of 63 time windows were constructed. In general, the greater the difference between features, the more useful they are (Rodríguez-Bermúdez et al., 2013; Zhang et al., 2015; Miao and Niu, 2016; Ahmadlou and Adeli, 2017; Urbanowicz et al., 2018). From the number of significant "*" symbols (p < 0.05) shown in Figure 6, we can see that we successfully extracted features that differ between pairs of action intention understanding brain signals. Many significant differences were found between the corresponding time windows for these pairs for the nine graph measures, and each measure has a different efficiency (Kim and Wilhelm, 2008). Both the alpha and beta bands showed satisfactory statistical results, which suggests that action intention understanding is closely correlated with these two bands (Hari, 2006; Ortigue et al., 2010; Avanzini et al., 2012).
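As an illustration of how weighted graph measures can be extracted from a connectivity matrix such as the WPLI, a small sketch is given below (Python with NetworkX; the matrix is random, the 62-channel size is assumed, and the three measures shown are examples rather than the exact nine used in this study):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
# Hypothetical 62-channel weighted connectivity matrix (e.g., WPLI values).
n_ch = 62
w = rng.uniform(0.05, 1.0, size=(n_ch, n_ch))
w = (w + w.T) / 2.0
np.fill_diagonal(w, 0.0)

g = nx.from_numpy_array(w)  # edge attribute 'weight' holds the connectivity value

# A few illustrative weighted graph measures.
strength = dict(g.degree(weight="weight"))      # weighted node degree (strength)
clustering = nx.clustering(g, weight="weight")  # weighted clustering coefficient
# For path-based measures, larger weights mean stronger coupling,
# so use the inverse weight as a distance.
for u, v, d in g.edges(data=True):
    d["dist"] = 1.0 / d["weight"]
char_path = nx.average_shortest_path_length(g, weight="dist")

print(np.mean(list(strength.values())),
      np.mean(list(clustering.values())),
      char_path)
```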
Analyses of Classification Results
Some previous studies on action intention understanding indicate that some differences in the signals occur over time (Ge et al., 2017;Zhang et al., 2017), especially for the special ERP components. Although the classification accuracies in Figure 7 are not very high, the information concerning the change in accuracy is still consistent with previous studies.
The results in Figure 8 and Table 1 suggest that combining PLI with WPLI to produce fusion time windows is a successful method for classifying brain signals. A feasible explanation is that the new approach effectively increases the number of training samples and features, which is extremely important for machine learning (Zhang et al., 2015;Kumar et al., 2016;Miao and Niu, 2016;Pippa et al., 2017;Kang et al., 2018;Urbanowicz et al., 2018). Obviously, compared with PLI or WPLI alone, the combination method has more samples; compared with single time windows, the combination method has more features. Therefore, the new method is more suitable for classification.
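A rough sketch of this sample-and-feature fusion idea is shown below (Python with scikit-learn; the array shapes, random labels, and SVM settings are illustrative assumptions rather than the study's actual configuration). Note that in a full analysis the PLI and WPLI copies of the same trial should be kept in the same cross-validation fold to avoid leakage.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_windows, n_metrics = 40, 63, 9

# Hypothetical graph-metric features computed separately from PLI and WPLI
# networks: trials x time windows x graph measures.
pli_feat = rng.normal(size=(n_trials, n_windows, n_metrics))
wpli_feat = rng.normal(size=(n_trials, n_windows, n_metrics))
labels = rng.integers(0, 2, size=n_trials)

# Feature fusion: concatenate the 63 windows into one long vector per trial.
pli_vec = pli_feat.reshape(n_trials, -1)
wpli_vec = wpli_feat.reshape(n_trials, -1)

# Sample fusion: treat the PLI and WPLI versions of each trial as two samples.
x = np.vstack([pli_vec, wpli_vec])
y = np.concatenate([labels, labels])

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(cross_val_score(clf, x, y, cv=5).mean())
```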
Recently, Ahmadlou and Adeli (2017) pointed out that a weighted brain network retains useful information better than a binary network in neuroscience research. The main reason is that the binary network is sensitive to the threshold (Phillips et al., 2015). Using similar EEG data, other authors' approaches to classifying action intention understanding signals with binary networks (Zhang et al., 2015) perform worse than our new method, especially for classifying extremely similar stimuli. Notably, these comparisons are indirect. In short, adopting weighted brain network metrics as the classification features is another good way to improve classification accuracy. Regarding the accuracies for different frequency bands, we know that the alpha band and the whole frequency band (1-30 Hz) yield the best results (see Figure 8). Some previous studies have indicated that the major reactions during action observation depend on the alpha and beta bands over the motor areas, especially the alpha band over the occipito-parietal areas (Hari, 2006; Ortigue et al., 2010; Avanzini et al., 2012). The strong reaction in the alpha band caused a significant difference between the responses to the pairwise stimuli. The t-test results in Figures 5, 6 support this point. Hence, satisfactory accuracies can be obtained, as shown in Figure 8. Why does the whole frequency band also achieve satisfactory results similar to those of the alpha band? We think that this is because the whole frequency band captures both alpha and beta band information.
Analyses of the Dynamic Brain Network
This study is mainly based on multiple micro-time windows, a total of 63 windows from 650 ms before formal stimulation to 2,500 ms after formal stimulation, which can make full use of the signal differences in every time period. Figures 5, 6 present some differences that vary with time, and the results of statistical tests illustrated in both Figures 9, 10 also tell us that action intention understanding is closely correlated with some specific time periods (Ortigue et al., 2010;Ge et al., 2017;Zhang et al., 2017). In Figures 9, 10, compared with the beta band, a greater number of statistically significant p-values are illustrated for the alpha band (p < 0.01). According to previous studies (Hari, 2006;Ortigue et al., 2010;Avanzini et al., 2012), reactions in the alpha band are more easily induced in response to observing others' actions. Hence, these actions result in the greater number of differences in this band.
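The construction of the 63 micro-time windows can be illustrated as follows (Python; the 500 Hz sampling rate and the 62-channel epoch are assumptions used only to make the arithmetic concrete):

```python
import numpy as np

# Hypothetical epoch: channels x samples, 500 Hz, from -650 ms to +2,500 ms
fs = 500
t = np.arange(-0.65, 2.5, 1.0 / fs)
epoch = np.random.default_rng(3).normal(size=(62, t.size))

win_len = int(0.05 * fs)          # 50 ms micro-time window
n_windows = t.size // win_len     # 3.15 s / 50 ms = 63 windows
windows = [epoch[:, i * win_len:(i + 1) * win_len] for i in range(n_windows)]
print(n_windows, windows[0].shape)
```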
Theories concerning the mirror neuron and mentalizing systems indicate that the purposes of others' actions are possibly discriminated by the observers' natural perceptions or indirect inferences (Gallese and Goldman, 1998; Rizzolatti et al., 2001; Rizzolatti and Craighero, 2004; Fogassi et al., 2005; Brass et al., 2007; James, 2008; Liew et al., 2011). Other neuroimaging studies note that not only mirror neuron areas but also mentalizing areas take part in action intention understanding (Blakemore and Decety, 2001; De Lang et al., 2008; Van Overwalle and Baetens, 2009; Becchio et al., 2012; Oztop et al., 2013; Catmur, 2015; Tidoni and Candidi, 2016; Ge et al., 2017; Zhang et al., 2017). In the last figure, many large nodes appear in the frontal, occipital, parietal, and temporal cortexes, which suggests that action intention understanding is correlated with these domains. Rizzolatti and Craighero (2004) found that some regions respond when humans observe each other's action behaviors. These regions mainly consist of the inferior parietal lobule, the ventral premotor cortex, the inferior frontal gyrus and the Broca area of the left frontal lobe. Fogassi et al. (2005) found that there is a significant difference in the response in the inferior parietal lobule when monkeys observe actions that look the same but actually denote different intentions. Interestingly, the results in Figure 10 are consistent with these studies (Rizzolatti et al., 2001; Rizzolatti and Craighero, 2004; Fogassi et al., 2005), which reconfirms that the mirror neuron system takes part in action intention understanding. Both Figures 10A,B show that the temporal domain has some large nodes under the Ug-vs-Sc and Tg-vs-Sc conditions. Amodio and Frith (2006) and Saxe (2006) note that the mentalizing brain networks mainly consist of the medial prefrontal cortex, the temporoparietal junction, and the superior temporal cortex. Hence, it can be inferred that the action intention understanding in our experiment also involves the mentalizing brain network. Compared with the Ug and Tg stimuli, the action behavior Sc is more abnormal. Therefore, this stimulus can easily cause the significant differences observed in the pairwise Ug-vs-Sc and Tg-vs-Sc comparisons. The essential reason for this is that the mentalizing areas play an important role in responding to abnormal action behaviors (Blakemore and Decety, 2001; Liew et al., 2011; Becchio et al., 2012; Catmur, 2015).
Overall, from the experimental results in Figures 5, 10, we can conclude that both the mirror neuron and mentalizing systems participate in the process of action intention understanding, which is consistent with the results of previous studies (De Lang et al., 2008;Becchio et al., 2012;Catmur, 2015;Ge et al., 2017;Zhang et al., 2017). Whether the relationship between the mirror neuron and mentalizing systems is independent or cooperative in the process of brain activity is debated by many researchers (Van Overwalle and Baetens, 2009;Virji-Babul et al., 2010;Libero et al., 2014;Catmur, 2015;Tidoni and Candidi, 2016;Zhang et al., 2017). Actually, from the experimental results obtained here, we can conclude that the relationship between the mirror neuron and mentalizing systems is cooperative and is capable of encoding the complex dynamical brain signals related to action intention understanding.
Limitations
Obviously, the new method has many merits for decoding action intention understanding. However, it can still be improved. First, many graph metrics have been proposed for studying network properties in recent years (Kim and Wilhelm, 2008; Rubinov and Sporns, 2010; Gomezpilar et al., 2017). We adopted nine graph measures in total, but other measures (Newman, 2004; Claussen, 2007; Rubinov and Sporns, 2010) might be more useful for exploring action intention understanding. In follow-up research, we will aim to apply new graph measures as classification features. Additionally, we adopted one of the most popular classifiers, the SVM, to carry out binary classification. Previous machine learning experience has shown that different classifiers typically obtain different results on the same dataset (Rodríguez-Bermúdez et al., 2013; Miao and Niu, 2016; Urbanowicz et al., 2018). Whether other classifiers perform better than the SVM for action intention understanding classification needs to be determined experimentally, and this is another future research goal. Finally, which system dominates when an agent observes another's actions, i.e., whether the mirror neuron system or the mentalizing system dominates action intention understanding, has not been thoroughly decoded (Becchio et al., 2012; Marsh et al., 2014; Catmur, 2015; Tidoni and Candidi, 2016; Ge et al., 2017; Pomiechowska and Csibra, 2017). Therefore, a more comprehensive study of the neuronal mechanism underlying action intention understanding needs to be carried out in the future.
CONCLUSION
In summary, this study highlights a combination method that decodes the brain signals related to action intention understanding by combining the weighted brain network metrics of both the PLI and WPLI. Sample and feature fusion efficiently improve the classification accuracy, especially for similar action intention stimuli. Compared with the low frequency and beta bands, the differences in the action intention understanding brain signals are more obvious in the alpha band. The new approach can be universally applied for many studies. Brain activity signals collected by MRI, fMRI, MEG, and NIRS can be analyzed with our novel method. Other psychological and cognitive behavior data (e.g., mathematical deduction and emotion recognition) analyses can also use the new method. Overall, it has the advantage of generality.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Academic Committee of the School of Biological Sciences and Medical Engineering, Southeast University, China. The patients/participants provided their written informed consent to participate in this study. | 8,167.6 | 2020-07-02T00:00:00.000 | [
"Computer Science"
] |
Synthesis of biotinylated probes of artemisinin for affinity labeling
In this data article, we described the synthetic routes to four biotinylated probes (2, 3, 4, and 5) of artemisinin and the associated experimental procedures. We also provided the physical data for the synthesized compounds. These synthesized biotinylated probes of artemisinin are useful molecular tools for the affinity-labeling study of target receptor proteins of artemisinin in tropical pathogens such as Trypanosoma, Leishmania, and Schistosoma. The data provided herein are related to “Biotinylated probes of artemisinin with labeling affinity toward Trypanosoma brucei brucei target proteins”, by Konziase (Anal. Biochem. (2015)).
How data was acquired: Chemical reactions; normal-phase column chromatography; NMR spectroscopy: JNM-GX-500 (JEOL), Lambda 500 (JEOL), Inova 600 (Varian); mass spectroscopy: JMS SX-102 (JEOL); IR spectroscopy: FT-IR-5300 (JASCO); polarimetry: DIP-370 (JASCO)
Data format: Analyzed, text, schemes
Experimental factors: N/A
Experimental features: Chemical reactions were performed under argon gas unless otherwise indicated; the diazirine-containing probes were synthesized in brown opaque chemical flasks or transparent chemical flasks wrapped with aluminium foil due to photosensitivity.
Data source location: Osaka, Japan
Data accessibility: Data are available with this article
Value of the data
To reproduce all the experiments described in the research article ref [1]. To detect and isolate trypanosomal candidate target proteins of artemisinin.
To study the target receptors of artemisinin in Leishmania or Schistosoma.
General
1H-NMR and 13C-NMR spectra in CDCl3 or CD3OD with TMS as the internal standard were recorded using a JNM-GX-500 or Lambda 500 (JEOL, Tokyo, Japan) NMR spectrometer operating at 500 MHz and 125 MHz, respectively. 2D NMR data in CDCl3 were recorded using a Varian Inova 600 (Varian, Tokyo, Japan) NMR spectrometer operating at 600 MHz. Chemical shifts (δ) were reported in parts per million (ppm) and the multiplicities were designated as follows: s (singlet), d (doublet), t (triplet), m (multiplet), dd (doublet of doublets), ddd (doublet of doublet of doublets), brd (broad doublet), t-like (triplet like), dt (doublet of triplets). The coupling constants (J) were reported in Hz. Fast atom bombardment (FAB) and high-resolution fast atom bombardment (HR-FAB) mass spectra were recorded with a JMS SX-102 (JEOL, Tokyo, Japan) spectrometer in positive ion mode using magic bullet (5:1 dithiothreitol/dithioerythritol; Tokyo Kasei Kogyo) or m-nitrobenzyl alcohol as the matrix. Infrared (IR) spectra were recorded by a diffusion-reflection method on KBr powder using an FT-IR-5300 (JASCO, Tokyo, Japan) spectrometer. Shoulder bands in the IR spectra were designated by sh. Optical rotations were measured in a 0.5 dm length cell with a DIP-370 (JASCO, Tokyo, Japan) digital polarimeter. For column chromatography, silica gel (Fuji Sylisia BW-200 or Merck 60-230 mesh) and octadecyl silane ODS (Cosmosil 75C18 OPN, Nacalai-Tesque) were used. Chemical reactions were performed under Ar gas unless otherwise indicated. TLC analyses were performed using normal-phase pre-coated plates (Kieselgel 60F254, Merck) and reversed-phase high-performance thin-layer chromatography (HPTLC) plates (RP-18 WF254S, Merck). The spots on the thin-layer chromatograms were detected under UV light at 254 and 366 nm and visualized with either p-anisaldehyde/H2SO4 (5 mL of AcOH, 25 mL of c-H2SO4, 425 mL of EtOH, and 25 mL of water) or phosphomolybdic acid (5 g in 100 mL of EtOH) spraying reagents and subsequent heating.
Preparation of probe 3 from 9: To a solution of 6 (5 mg, 0.007 mmol) in THF (0.2 mL) were added 2,4,6-trichlorobenzoyl chloride (1.1 μL, 1.7 mg, 0.9 mol equiv. to 6) and Et3N (2.14 μL, 1.56 mg, 2 mol equiv. to 6), and the entire mixture was stirred at room temperature overnight (ca. 15 h). Then, the previously prepared 9 (5.1 mg, 0.016 mmol) was added, and the mixture was stirred for 1 h at room temperature. Next, DMAP (0.47 mg, 0.5 mol equiv. to 6) was added, and additional stirring was performed overnight at room temperature. The reaction mixture was directly evaporated under reduced pressure, affording a crude product (18.2 mg) that was applied to SiO2 column chromatography (
Synthetic route to the biotinylated photoaffinity probe 4
We began the process with γ-butyrolactone (10), which underwent methanolysis in the first step, followed by protection of the primary alcohol with tert-butyldimethylsilyl chloride (TBSCl) in dichloromethane in the second step, affording 11, which was hydrolyzed under basic conditions to yield the tert-butyldimethylsilyl (TBS)-ether carboxylic acid 12. Condensation of 12 with 7 using EDCI·HCl and DMAP in tetrahydrofuran (THF) led to 13, which was deprotected using a Dowex cation resin (50WX8, 100-200 mesh, H+ cation exchange resin, Sigma-Aldrich) in MeOH, affording 14. Finally, condensation of 14 with 6 using EDCI·HCl and DMAP in THF afforded probe 4 in a 28% yield (Scheme 3).
Preparation of 13 from 12: To a solution of 12 (19 mg, 0.087 mmol) in THF (1.2 mL) were added 7 (6.2 mg, 0.022 mmol), EDCI·HCl (31.68 mg, 2 mol equiv. to 12), and DMAP (5.30 mg, 0.5 mol equiv. to 12). The mixture was stirred at room temperature for 2 h. Then, EDCI·HCl (31.68 mg) and DMAP (5.30 mg) were added again, and additional stirring was performed for 2 h. Following a work-up with brine, the mixture was extracted with EtOAc. The EtOAc layer was washed once with 5% HCl, once with saturated aqueous NaHCO3, once with brine, and then dried over MgSO4. Subsequent evaporation under reduced pressure produced 13 (18 mg) quantitatively (Scheme 3).
Preparation of probe 4 from 14: To a solution of 6 (0.7 mg, 0.001 mmol) in THF-CH3CN (1:1, 90 μL) were added the previously prepared 14 (0.8 mg, 0.002 mmol), EDCI·HCl (0.63 mg, 3 mol equiv. to 6), and DMAP (0.07 mg, 0.5 mol equiv. to 6), and the mixture was stirred at room temperature for 3 h. Then, EDCI·HCl (0.63 mg) and DMAP (0.07 mg) were added, and additional stirring was performed at room temperature for 30 min, followed by further stirring at 40 °C for 2 h. Subsequently, the reaction mixture was evaporated under reduced pressure, affording a crude product (4.2 mg) that was applied to SiO2 column chromatography (CHCl3
Synthetic route to the biotinylated affinity probe 5
We started with d-biotin (15), which underwent a Curtius rearrangement in the first step, followed by condensation with tetraethylene glycol (16) in the second step to afford 17, which underwent a Michael addition to 18 and was then hydrolyzed under basic conditions to afford the affinity labeling unit 19. Condensation of 19 with 7 using EDCI·HCl and DMAP in THF-CH3CN = 1:1 afforded probe 5 in a 68% yield (Scheme 4). | 1,556.8 | 2015-05-05T00:00:00.000 | [
"Biology",
"Chemistry"
] |
The collisional behavior of ESI-generated protonated molecules of some carbamate FAAH inhibitors isosteres and its relationships with biological activity
Author(s): Valitutti, Giovanni; Duranti, Andrea; Mor, Marco; Piersanti, Giovanni; Piomelli, Daniele; Rivara, Silvia; Tontini, Andrea; Tarzia, Giorgio; Traldi, Pietro
JMS Letters
Dear Sir,
The collisional behavior of ESI-generated protonated molecules of some carbamate FAAH inhibitors isosteres and its relationships with biological activity
Recently, we reported some studies meant to rationalize the mode of action of a class of compounds acting as fatty acid amide hydrolase (FAAH) inhibitors [1-3] and characterized by an N-alkylcarbamic acid O-aryl ester structure. [4,5] The enzyme inactivation is considered to take place through two distinct and consecutive processes, reported in Scheme 1, i.e. formation of a noncovalent complex (recognition step) and a nucleophilic attack by Ser241 [6,7] on the carbamate, leading to an irreversible inactivation of the enzyme by carbamoylation (inactivation step). The recognition step, related to the stereoelectronic complementarity between the inhibitors and the enzyme active site, was studied by molecular modeling, [5,8-10] which, however, could not completely account for the inactivation step. As part of a wider program of application of mass spectrometry (MS) to structure-activity relationship (SAR) studies, we hypothesized that the inactivation reaction could be related to the propensity of the C(O)-O bond to cleave. We therefore started an investigation based on collisional experiments on the ESI-generated protonated molecules. Interestingly, the energetics of this process, obtained by breakdown curves, [11] showed a linear correlation between the propensity of the C(O)-O bond to be cleaved under collisional conditions and the IC50 (half maximal inhibitory concentration of FAAH hydrolysis of [3H]AEA in rat cortical membranes) values for the examined compounds. [12] In a further study the same approach was applied to a series of biphenyl-3-ylcarbamates with electron-withdrawing or electron-donating substituents on the distal or proximal phenyl ring. [13] The results we obtained warned us that caution must be taken when trying to extend previous results to a more complex series of structures, but in general supported the usefulness of MS in SAR studies, at least when reactivity factors contribute to the biological activity.
During the exploration of the series of the N-alkylcarbamic acid O-aryl esters, a small number of putative bioisosteres (1-6) (Fig. 1) were synthesized to establish the role of the carbamate function in the observed FAAH inhibitory behavior. [5] The fact that compounds 1-6 did not inhibit FAAH [5] led us to investigate their MS behavior to verify whether substantial differences in C(O)-O bond cleavage existed in comparison with the previously studied active O-arylcarbamates. Compounds 1-6 were analyzed under ESI conditions and the related [M + H]+ ions were employed for collisional experiments.
ESI experiments were performed using an LCQ Deca instrument (Thermo, San José, CA, USA) operating in positive ion mode. Compounds 1-6 were dissolved in CH3OH and their 10−6 M solutions were directly infused into the ESI source. Spray voltage, capillary voltage, and entrance capillary temperature were 4 kV, 8 V, and 220 °C, respectively. MS/MS and MS3 experiments were obtained by resonance excitation [14] of the preselected ion. The He pressure inside the trap was kept constant (2.8 × 10−5 Torr, directly read by the ion gauge in the absence of the N2 stream). The isolation width was set at 2 mass units and the scan rate was 0.5 s−1. In the previous investigations on the collisional behavior of [M + H]+ of carbamates, [12,13] the most favored decomposition route was that related to the C(O)-O bond cleavage (as in Fig. 1, cleavage γ), whereas this was not always true for 1-6, possibly because of the different bond strengths of the groups replacing the carbamic one. Thus, the [M + H]+ species of compound 1 leads to the product ion spectra reported in Fig. 2. The primary fragmentation processes might be rationalized by admitting an initial protonation of the nitrogen atom, but it must be taken into account that the protonation can more reasonably take place on the R3 group, which, because of the electron delocalization, represents the most basic site of the amide (or thioamide) moiety. The protonation on R3 is confirmed by the high abundance of the [M + H]+ ion for 2 and 4; in these cases the negative charge on the R3 group is reinforced by both the nitrogen-donating groups. The most abundant species, at m/z 186, is due to the cleavage of the cyclohexyl-NH bond with H-rearrangement (Scheme 2, cleavage α). The ion at m/z 141 originates from the C(OH)-CH2 bond cleavage, whereas that of the NH-C(OH) bond leads to both the ions at m/z 169 and m/z 100. MS3 experiments show that the ion at m/z 186 decomposes, through NH3 and CH3NO losses, leading to ions at m/z 169 and 141, respectively. The latter species is also produced by the acylium ion at m/z 169 through a decarbonylation process.
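As a quick check of the neutral-loss bookkeeping described above, the nominal integer-mass arithmetic is sketched below (Python; this is only an illustration of the mass differences quoted in the text, not part of the original analysis):

```python
# Nominal integer masses of the neutral losses mentioned above
neutral = {"NH3": 17, "CH3NO": 45, "CO": 28}

precursor = 186  # m/z of the most abundant fragment of [M + H]+ of compound 1
print(precursor - neutral["NH3"])    # 169, loss of ammonia
print(precursor - neutral["CH3NO"])  # 141, loss of CH3NO
print(169 - neutral["CO"])           # 141, decarbonylation of the acylium ion
```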
The observed fragmentation pathways can also be explained by an isomerization of the [M + H]+ ions into a proton-bound dimer associating cyclohexylamine and naphthylketene, which accounts for the formation of protonated cyclohexylamine (m/z 100) and the acylium cation at m/z 169. Analogous isomerizations can be invoked for the protonated molecules of the other compounds under investigation.
It should be noted that all the observed fragmentation pathways lead to even-electron product ions, in agreement with the even-electron rule. [15] Thus, for compound 1 the most favored decomposition route is no longer that observed in the carbamate derivatives (as in Fig. 1, cleavage γ), but rather that due to the cleavage of the cyclohexyl-NH bond (as in Fig. 1, cleavage α). This behavior might be explained by the weaker CH-NH bond strength relative to that of the C(OH)-CH2 bond. Of course, this hypothesis would have to be confirmed by theoretical calculations, but such calculations are not available in our laboratory.
In the case of the [M + H]+ ion of compound 2, the most abundant collisionally induced fragmentation product was that due to the C(OH)-NH bond cleavage with 2H rearrangement (Fig. 1, cleavage γ; Fig. 3; Table 2). This seems to suggest that protonation took place on the sulfur atom. It should be considered, however, that different intramolecular proton-bridged forms can be present, such as those shown in Fig. 4, which could explain the observed behavior.
Compound 4 showed the formation of naphthyl-NH3+ at m/z 144 (Table 2) and a further decomposition product at m/z 188 due to the cleavage of the NH-C(SH) bond (Fig. 1, cleavage β), formally corresponding to the naphthyl-NH-CH=SH+ ion.
An interesting decomposition was observed in the case of 5, whose collisional spectra of [M + H]+ show ions at m/z 145 and m/z 161. In the case of the former ion the structure naphthyl-OH2+ could be easily assigned, while for the latter the ion naphthyl-SH2+ could be proposed; this ion would originate from a Newman-Kwart rearrangement (converting phenols to thiophenols, as shown by the example reported in Scheme 4) [16] induced by a protonation reaction leading to compound 3. In fact, the most abundant ion from 3 is the naphthyl-SH2+ one, which is also observed in the spectrum of 6, together with an ion at m/z 142 due to the cleavage of the C(S)-S bond, with charge localization on the cyclohexyl-containing species.
The above results indicate that compounds 1-6 behave quite differently from what was previously observed in the case of the N-alkylcarbamic acid O-aryl esters. The naphthyl-R2H+ ion is produced by collision of [M + H]+ in the present case as well, but many concurrent decomposition pathways are present, mainly because of protonation sites and bond strengths different from those of the carbamates. The lack of FAAH inhibitory activity of compounds 1-6 may thus be explained also on the basis of the MS data reported here, which clearly indicate a different reactivity of the putative bioisosteres in comparison with that of the parent carbamic acid ester FAAH inhibitors. | 1,874.6 | 2009-04-01T00:00:00.000 | [
"Biology"
] |
Spatio-Temporal Analysis of Land Use / Cover Change of Lokoja-A Confluence Town
Land use/land cover information is essential for a number of planning and management activities. The general patterns of land use/cover as recorded in remotely sensed data are discussed in this study. Multi-date satellite imageries (Landsat TM 1986, Landsat ETM+ 2001 and Landsat ETM+ 2006, each of 30 m spatial resolution, and SPOT 5, 2007 of 10 m spatial resolution) were obtained and used for the study. These images were enhanced, resampled, georeferenced and classified for the assessment of the spatio-temporal pattern of land use/cover change in the study area. The study also utilized a topographical map of the study area, derived from sheet number 247 of 1963, scale 1:50,000, to identify features which were used as ground control points for image geo-referencing. ILWIS 3.2 Academic software was used to process the image data. The result of ground truthing was combined with visual image interpretation as training sites for supervised classification. Six different land uses/covers were identified and used to classify the image data. The results showed that the natural environments (vegetation, wetland resources, water bodies and mountainous terrain) were being threatened, as they reduced continually in areal extent over time and space, while the social environment (built-up area) expanded tremendously. The study found that urbanization processes were majorly responsible for land use/cover change in Lokoja. In conclusion, the study advanced our frontier of knowledge on land use/cover study by providing information on the status of the natural and social environment in Lokoja, a confluence town, between 1986 and 2007, using remotely sensed images and Geographic Information Systems (GIS) technology. Keywords: land use/cover, multi-date satellite imageries, image classification, Lokoja, Nigeria
Introduction
The scientific research community called for a substantive study of land use changes during the 1972 Stockholm Conference on the Human Environment, and again 20 years later, at the 1992 United Nations Conference on Environment and Development (UNCED). At the same time, the International Geosphere-Biosphere Programme (IGBP) and the International Human Dimensions Programme (IHDP) co-organized a working group to set up a research agenda and promote research activity on land use and land cover (LULC) changes. This is because it was realized that land use and land cover (LULC) change is a major issue of global environmental change (Prakasam, 2010).
Land use and land cover change (LULCC), also known as land change, is a general term for the human modification of Earth's terrestrial surface (Ellis, 2011). Scientists and the public alike now understand that contemporary change in many realms of the biosphere is largely the product of human activities (Turner et al., 1994). Some of these activities are due to specific management practices, and the rest are due to the social, political and economic forces that control land uses (Medley et al., 1995). Although humans have been modifying land to obtain food and other essentials for thousands of years, the current rates, extents and intensities of LULCC are far greater than ever in history, driving unprecedented changes in ecosystems and environmental processes at local, regional and global scales (Turner et al., 1994; Ellis, 2011). As Meyer and Turner (1992) asserted, changes in land use and land cover affect global systems (e.g. atmosphere, climate, forest, and sea level) and have a significant effect in the localised places where they occur. Because of their great influence on global warming, loss of biodiversity, and impact on human life, the International Geosphere-Biosphere Programme (IGBP) and the International Human Dimensions Programme (IHDP) initiated a joint international program of study on Land Use/Cover Change (LUCC). They recognized the necessity to improve the understanding, modelling, and projection of land dynamics from global to regional scales, focusing particularly on the spatial explicitness of processes and outcomes (Geoghegan et al., 2001).
Land use/land cover (LULC) changes in tropical regions are of major concern due to the widespread and rapid changes in the distribution and characteristics of tropical forests (Myers 1993;Houghton, 1994).However, changes in land cover and in the way people use the land have become recognized over the last 15 years as important global environmental changes in their own right (Turner, 2002).In recent times, the knowledge about land use and land cover has become increasingly important as the nation plans to overcome the problems of haphazard, uncontrolled development, deteriorating environmental quality, loss of prime agricultural lands, destruction of important wetlands, and loss of fish and wildlife habitat.Land use data are therefore, needed in the analysis of environmental processes and problems that must be understood if living conditions and standards are to be improved or maintained at current levels (Medley et al., 1995;Anderson et al., 2001).
The study area, Lokoja, is a confluence town that has undergone rapid urbanization and tremendous economic growth during the last few years. The growing urbanization in the city has created pressure for changes in the land use pattern. These changes have rapidly transformed the city from a subsistence agrarian economy into a rapidly commercializing economy. Infrastructural development such as road networks and housing estate development, among others, has further enhanced land use change in the area. The sharp changes in the land use pattern during the past few years have become a matter of concern, and this stimulated the choice of this study area.
As opined by Lambin et al. (2003), remote sensing has an important contribution to make in documenting the actual change in land use/land cover on regional and global scales. There is a consensus of opinion among scientists that satellite imageries and Geographic Information Systems (GIS) provide a reliable means for adequate and regular monitoring of forest estate and land use change (Burrough, 1990; Goodchild et al., 1993; Ayeni, 2001; Lambin et al., 2003). This study therefore relied on satellite images and Geographic Information Systems (GIS) to analyze land use and land cover change in Lokoja, a confluence town in Nigeria, between 1986 and 2007. Specifically, the study identified and mapped the categories of land use/cover in the study area, analyzed the spatial and temporal pattern, and assessed the underlying factors and decision variables for land use change.
The Study Area
The study area is located between latitude 7°45′27.56″ N and 7°51′04.34″ N and longitude 6°41′55.64″ E and 6°45′36.58″ E, within the lower Niger-Benue trough. It has an estimated landmass of 63.82 sq. km. It shares common boundaries with Niger, Kwara and Nassarawa states respectively and the Federal Capital Territory to the north; Benue state to the east; Adavi and Okehi Local Government Areas (LGAs) to the south; and Kabba Bunu (LGA) to the west (Figures 1 & 2).
The annual rainfall in the area is between 1016 mm and 1524 mm, with a mean annual temperature not falling below 27 °C. The rainy season lasts from April to October, while the dry season lasts from November to March. The land rises from about 300 metres along the Niger-Benue confluence to heights of between 300 and 600 metres above sea level in the uplands. Lokoja is drained by the Niger and Benue rivers and their tributaries. The confluence of the Niger and Benue rivers, which can be viewed from the top of Mount Patti, is located within the study area. The River Benue is navigable as far as Garua during the rainy season floods, but only up to Makurdi in Benue State in the dry season (Ogunjumo, 2000).
The general relief is undulating and characterized by high hills.The Niger-Benue trough is a Y-shaped lowland area which divides the sub-humid zone into three parts.It has been deeply dissected by erosion into tabular hills separated by river valleys.The flood plains of the Niger and Benue river valleys in Lokoja have the hydromorphic soils which contain a mixture of coarse alluvial and colluvial deposits (Areola, 2004).The alluvial soils along the valleys of the rivers are sandy, while the adjoining laterite soils are deeply weathered and grey or reddish in colour, sticky and permeable.The soils are generally characterized by a sandy surface horizon overlying a weakly structured clay accumulation.
The main vegetation type in Lokoja is Guinea savanna or parkland savanna with tall grasses and some trees.These are green in the rainy season with fresh leaves and tall grasses, but the land is open during the dry season, showing charred trees and the remains burnt grasses.The trees which grow in clusters are up to six metres tall, interspersed with grasses which grow up to about three metres.These trees include locust bean, shea butter, oil bean and the isoberlinia trees.The different types of vegetation are not in their natural luxuriant state owing to the careless human use of the forest and the resultant derived deciduous and savanna vegetation.
The creation of Kogi state on 27th August, 1991, with Lokoja as the capital, brought an influx of population to the area due to its status as an administrative headquarters. According to the 1991 census, Lokoja had a population of about 77,516 people, which increased to 195,261 in 2006 (Nigeria Official Gazette, 2009). The increase in human population brought rapid development, which modified the land use pattern in the area. Agriculture serves as the main occupation of the people. However, the status of Lokoja as an administrative headquarters brought some institutions into the area, which placed many people in public institutions like the Kogi State Polytechnic, the Specialist Hospital and other governmental offices.
The major means of transportation is by road. However, another means of transport is by water, especially when goods are to be distributed to locations that are not motorable. The ecological problems in Lokoja include leaching, erosion and general impoverishment of the soil. These problems are compounded by the annual bush burning of the savanna, which further exposes the top soil to more erosion. Floods pose problems on the flood plains during the rainy season, while aridity is a problem in several areas at short distances from the rivers during the dry season. The multi-date satellite imageries used for the study (Figures 3, 4, 5 & 6) were obtained in the same season (Table 1). While Landsat TM 1986, Landsat ETM+ 2001 and Landsat ETM+ 2006 were obtained in December, January and December respectively, SPOT 5 was obtained in April 2007. It is instructive to note that the rainy season spans a period of seven (7) months (April till October), while the dry season lasts for five (5) months (November till March) in the study area.
All the images were enhanced, resampled, georeferenced and classified for the assessment of the spatio-temporal pattern of land use/cover change in the study area. The study also utilized a topographical map of the study area, derived from sheet number 247 of 1963, scale 1:50,000, obtained from the Federal Survey Office, Lagos. It was used as a guide to identify features, which were used as ground control points for image geo-referencing.
The process of digital image analysis begins with the extraction of the sub-scene from the original dataset for all the images used.ILWIS 3.2 Academic software was used to process all the image data.A common window covering the same geographical coordinates of the study area was extracted from the scene of the images obtained.The sub-map operation of ILWIS 3.2 Academic allows the user to specify a rectangular part of a raster map to be used.To extract the study area from the whole scene of the images obtained, the numbers of rows and columns of the area were specified.
To improve the visual interpretation of the image data, all the images were enhanced into natural colour composites. This enabled the image data to relate colours and patterns in the image to real-world features. In Landsat TM 1986 and Landsat ETM+ 2001 and 2006 respectively, channel 7 was assigned to the red plane, channel 4 to the green plane, and channel 3 to the blue plane. This gives a Red, Green, Blue band combination (RGB-743), which produced a natural colour composite. For SPOT 5, 2007, channel 1 was assigned to the red plane, channel 2 to the green plane and channel 3 to the blue plane. The band combination then consisted of a Red, Green and Blue (RGB-123) natural colour composite. In a natural colour composite, vegetation is depicted as green, water in shades of blue and bare soil in shades of brown and gray.
Re-sampling was carried out to equalize the spatial resolution of the datasets that had varying spatial resolutions. The procedure involved automatically adjusting one or more raster datasets to ensure that the spatial resolutions of all datasets correspond for accurate spatial operations. The Landsat images have a spatial resolution of 30 m while the SPOT 5 image was 10 m. Based on these differences, the SPOT image was re-sampled using the nearest neighbour method.
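A minimal illustration of nearest-neighbour resampling of a raster grid is given below (Python with NumPy; the array size and the assumption that the 10 m SPOT band is aggregated to the 30 m Landsat grid by an integer factor are illustrative, not taken from the study):

```python
import numpy as np

def resample_nearest(arr, factor):
    """Nearest-neighbour resampling of a 2-D raster by an integer factor.
    factor > 1 coarsens the grid (e.g. 10 m pixels to 30 m pixels)."""
    rows = np.arange(0, arr.shape[0], factor)
    cols = np.arange(0, arr.shape[1], factor)
    return arr[np.ix_(rows, cols)]

spot_band = np.random.default_rng(4).integers(0, 255, size=(300, 300))  # 10 m pixels
landsat_like = resample_nearest(spot_band, 3)  # -> 100 x 100 grid of 30 m pixels
print(landsat_like.shape)
```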
GIS data files must have a real-world coordinate system if they are to have valid coverage.To make the image data valid to real world, they were georeferenced to the same coordinate system using the topographical map of the area.The process of georeferencing in this study started with the identification of features on the image data, which can be clearly recognized on the topographical map and whose geographical locations were clearly defined.The intersections of streams and of the highways were used as ground control points (GCPs).The latitude and longitude of the GCPs of visible features obtained in the base map were used to register the coordinates of the image data.Using these ground-control points, the computer produced a number of equations that transformed the location of all the pixels on the distorted image to a properly orientated image.All the images were georeferenced to Universal Transverse Mercator projection of WGS84 coordinate system, zone 31N with Clarke 1880 Spheriod.
In this study, the satellite images were classified using the supervised classification method. The combined process of visual image interpretation of tones/colours, patterns, shape, size and texture of the imageries and digital image processing was used to identify homogeneous groups of pixels, which represent the various land use classes of interest. This process is commonly referred to as selecting training sites, because the spectral characteristics of those known areas were used to train the classification algorithm for the eventual land use/cover mapping of the images. To validate the tonal values recorded on the satellite images against the features obtained on the ground, and also to know what type of land use/cover was actually present, the study engaged in ground truthing. Before the ground truthing, a map of the study area was printed and used as a guide to locate and identify features both on the ground and on the image data. The geographical locations of the identified features on the ground were clearly defined. These were used as training samples for the supervised classification of the remotely sensed images. Six land uses/land covers were clearly identified during ground truthing and used to classify the image data. These are vegetation (savanna, woodland, fallow or shrub), wetlands, bare soils, water body (rivers, streams, ponds and dams), bare rocks and built-up areas.
Source: Author's data analysis
Accuracy and Reliability Assessment of the Classified Image
In order to determine the level of accuracy of the image data, a confusion matrix operation was performed and generated. The accuracy assessment of the four temporal datasets shows that most of the land use types were classified with acceptable levels of accuracy, and the overall accuracy of the land uses makes the study reliable for planning. The summary of the reliability and accuracy assessment of the classified satellite imageries is depicted in Table 2. The average accuracy of Landsat TM of 1986 was 99.96%, while the average reliability was 99.67% and the overall accuracy was 99.85%. The Landsat ETM+ 2001 average accuracy was 76.02%, average reliability 75.80% and overall accuracy 74.56%. The average accuracy of the Landsat ETM+ data of 2006 was 87.44%, while the reliability was 82.34% and the overall accuracy was 78.26%. Though the spatial resolution of SPOT 5, 2007 was better than that of the Landsat data, there was some noise in the image data, which to some extent affected the result of the classification. However, the average accuracy and reliability still showed acceptable levels. While the average accuracy was 65.09%, the reliability was 90.17% and the overall accuracy was 49.63%.
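The accuracy and reliability figures reported above are derived from a confusion matrix; a small sketch of the usual calculation is given below (Python; the matrix values are hypothetical, and mapping ILWIS's "accuracy" and "reliability" to producer's and user's accuracy is an assumption on our part):

```python
import numpy as np

# Hypothetical 6-class confusion matrix (rows = reference, columns = classified)
cm = np.array([
    [50, 2, 0, 1, 0, 0],
    [3, 45, 1, 0, 0, 1],
    [0, 1, 40, 2, 0, 0],
    [1, 0, 2, 38, 1, 0],
    [0, 0, 0, 1, 30, 0],
    [0, 2, 0, 0, 0, 28],
])

overall_accuracy = np.trace(cm) / cm.sum()
producer_accuracy = np.diag(cm) / cm.sum(axis=1)  # per-class "accuracy"
user_accuracy = np.diag(cm) / cm.sum(axis=0)      # per-class "reliability"
print(overall_accuracy, producer_accuracy.mean(), user_accuracy.mean())
```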
Temporal Pattern of Land Use/Cover between 1986 and 2007
All the identified land uses/covers experienced changes within the period of 21 years (1986-2007) (Table 3). In 1986, bare rocks occupied 6,598.48 hectares, which represented 6.7% of the entire land use/cover in the area. In 2001, the land cover showed evidence of reduction, when the land area covered was 3,808.3965 hectares, representing a reduction of 2,790.08 hectares over the period of 15 years. This may be connected to the quarry activities in some parts of the study area, which consequently reduced the area of bare rocks. In 2006, there seemed to be an increase in the area of bare rocks (5,117.82 hectares) compared with 2001. In reality, the area did not increase; rather, the period of the year at which the image was acquired was responsible. It is instructive to note that Landsat ETM+ 2006 was obtained in January, the peak of the dry season in the study area, when vegetation had dried up and every land cover, including bare rocks, is exposed. There was no significant difference in the areal extent of the land cover in 2007 in spite of the improvement in the spatial resolution of the image data. This may be due to the one-year difference, which was assumed to be too small to record significant changes. The disparity in the area of bare rocks, especially between 2001 and 2006, may be due to the variations in the accuracy levels of the classified image data as shown in Table 2. However, the overall change in the areal extent of bare rocks reveals that between 1986 and 2007, the land cover reduced from 6,598.48 to 5,117.81 hectares, representing a percentage reduction of 22.4%. It should be noted that since the study area became the state headquarters, the development of infrastructural facilities has increased, which led to the creation of more quarry sites and in effect reduced the areal extent of the bare rocks.
The area of bare soils decreased steadily with time.In 1986, bare soils covered 949.52 hectares but decreased to 937.01 hectares in 2001, giving a decrease of 12.51 hectares of the land area.In 2006, the land area of bare soils diminished to 723.21 hectares, representing a decrease of 213.8 hectares.There was no much difference in the land area in 2007 as the area decreased by 0.08 hectares, which is not significant probably because of a year difference.It was observed that since the study area had become the capital city, the demand for land for residential and commercial activities increased.Thus, vacant or undeveloped lands situated within the built up areas were acquired.This explains why the area of bare soils decreased steadily.The overall change shows that the area of bare soils decreased by 226.23 hectares, representing a decrease of 23.8% between 1986 and 2007 (Table 3).
The built-up area gained more land area with time. In 1986, this land use occupied 2,800.72 hectares, which constituted only 2.8% of the entire land use area. In 2001, the area increased to 38,393.10 hectares, occupying 38.8% of the entire area. This shows that between 1986 and 2001, the built-up area had increased by 35,592.38 hectares. In 2006, the areal extent of the built-up area had grown considerably to 47,909.04 hectares, which accounted for 48.4% of the land area. Close examination of Table 3 shows that, among the land uses, only the built-up area increased tremendously. The growth was so great that it gained 45,109.4 hectares from other land covers, amounting to a 1610.6% increase within the period of 21 years (1986 to 2007). This result reflects the reality on the ground because, since 1991 when the study area (Lokoja) became the state headquarters of Kogi state, the influx of people from various parts of the country and outside the country for social and economic activities has continued to increase. The implication is that some land uses/covers, such as forest areas, were being converted to residential and commercial land uses, which upholds the results of Jaiyeoba (2002) and Adeoye and Ayeni (2010). Savanna, woodland, fallow and shrub were classified as vegetation in this study. This area decreased with time. In 1986, the entire area classified as vegetation was 81,009.84 hectares, which amounted to 81.8% of the entire land area. In 2001, it decreased to 48,079.27 hectares, giving a loss of 32,930.6 hectares. The trend continued in 2006, when the land area diminished from 48,079.27 to 39,807.6 hectares, a loss of 8,271.67 hectares. In 2007, a similar area of vegetal cover was maintained as recorded in 2006. The overall assessment shows that the area of vegetation in the study area decreased by 50.9% over the period of 21 years. Observations made during the field survey show that vegetated areas were paving the way for residential quarters, industrial areas and the expansion of roads, which corroborates the findings of Olofin (2000); IPCC (2000); FAO (2001); Jaiyeoba (2002) and Adeoye and Ayeni (2010).
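The percentage changes quoted above follow from simple areal arithmetic, sketched below for the built-up and vegetation classes (Python; the hectare figures are those reported in the text):

```python
def percent_change(start_ha, end_ha):
    """Percentage change in areal extent between two dates."""
    return (end_ha - start_ha) / start_ha * 100.0

# Figures quoted above for 1986 -> 2007
print(round(percent_change(2800.72, 2800.72 + 45109.4), 1))  # built-up area, ~ +1610.6 %
print(round(percent_change(81009.84, 39807.6), 1))           # vegetation, ~ -50.9 %
```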
The fluctuation in the area occupied by water body as shown in Table 3 could be attributed to recent climatic variability.Even though the images were acquired in the same time period, that is, the period of dry season in the study area, the intensity of rainfall during the raining season as well as the duration of downpour is no longer stable.This in effect could account for the variation in the volume of water and the area occupied by water body.In 1986, the surface area of water body was 3,216.10hectares but increased to 3,950.95 hectares in 2001.In 2006, the area decreased to 3,566.31hectares, giving a decline of 384.65 hectares.It should be noted that some streams and rivers dried up during the dry season including part of rivers Niger and Benue.The decline thus, means that the volume of water in the river basin reduced whereas the river banks remained the same.The record of 2007 was not different from that of 2006 in spite of the low accuracy level in Table 2.
Most farming activities were done in wetland areas. The wetlands in the study area showed evidence of reduction in size and area. In 1986, they occupied 4,487.28 hectares but decreased to 1,937.49 hectares by 2007, representing a loss of 2,549.79 hectares, which is a 56.8% loss. The decrease in the areal extent of the ecosystem could be attributed to the influx of people into the area, which put more pressure on the limited resources, including the wetlands. For instance, places that were used for urban agriculture were converted to filling stations, shopping malls and residential areas. During the field survey, it was observed that wetland areas that were zones of flora and fauna ecosystems were subsequently converted to Fadama cropping, residential quarters and commercial centres. The implication is that if nothing is done to protect this ecosystem, there may be an increase in urban disasters such as flooding, which wetlands can mitigate. Exposed rocks, otherwise termed bare rocks, were mostly found in the western part of the river Niger (Figure 8). The mountainous terrain decreased tremendously as the city expanded, leaving the land cover in the north-western part of the river Niger and a few patches to the north of the river Benue (Figures 9, 10 & 11). The reason was the influx of people into the study area when it became the state headquarters in 1991. At this time, more quarry sites were opened to cope with the high demand for stones for road construction and housing projects. This consequently affected the original area of the land cover.
The undeveloped lands, uncultivated and exposed lands especially along rivers Niger and Benue were classified as the bare soils in the study.In 1986, it was not a common feature though found in few locations void of vegetation (Figure 8).Between 2001 and 2007, the land cover became one of the prominent features because of the status of the study area as the political headquarters.Most lands were exposed and left uncultivated.However, there were variations in their spatial expansion with time (Figures 9,10 & 11).
The built up areas expanded across rivers Niger and Benue with time.Many rural settlements around Lokoja were brought to the limelight because of the status of the area.Besides, many newly residential quarters sprang up and spread across the Niger (Figures 8,9,10 & 11).In 1986, the built up areas were majorly found at lower Niger but expanded to both sides of river Niger and Benue with several settlements and newly developed residential quarters, which varied in sizes (Figures 9,10 & 11).This was as a result of increase in human population, which increased man's needs for shelter and places of commercial activities.
The vegetation of the study area is evergreen especially during the wet season when the savanna vegetation and the riparian vegetation found along the river courses are found in their brighter state.Observation made during the field survey revealed that savanna were set on fire by the unidentified people, which often turned the field brown during the dry season.The vegetation was almost gone in 2006 as the expansion of the residential quarters seriously encroached into the area covered by vegetation.
The areas covered with shallow water or waterlogged, swampy/marshy and muddy ground were found all over the study area in 1986, especially around the rivers Niger and Benue (Figure 8). But the spatial coverage became thinner and thinner with time owing to the expansion of settlement and anthropogenic activities. The study area is a confluence town, that is, the town at which the river Niger and the river Benue meet; thus, water body is a prominent feature. However, the variation in the spatial pattern is dictated by seasonal variations, climatic variation and the period at which the image data were obtained (Figures 8, 9, 10 & 11). It is evident in the study that the natural environment (vegetation, rocks and wetland resources) was being threatened while the social environment (settlement or built-up area) was expanding. This is not unconnected with the recent influx of people into the study area when it became the political headquarters. The increase in human population attracted infrastructural development and the expansion of housing estates, which consequently had a negative impact on the natural environment. From the study, it is obvious that urbanization processes majorly contributed to land use/cover change in Lokoja. It is therefore necessary that the natural environments be protected by our urban planners and policy makers owing to the tremendous social and economic benefits they offer to man.
Figure 1. Lokoja and other Local Government Areas in Kogi State
Table 1. Characteristics of Datasets for the study
Table 2. Accuracy assessment
Table 3. Temporal pattern of Land use/cover between 1986 and 2007 | 6,000.4 | 2012-09-20T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Improved Wireless Sensor
— This paper systematically analyzes the improved DV-Hop algorithm and the origin of its position errors. A new method is given for correcting such position errors. Beyond that, the particle swarm-quasi-Newton algorithm is improved, and the combined DV-Hop location algorithm is applied according to the number of known nodes, which is defined by a threshold N. The coordinates of unknown nodes are calculated and analyzed by the combined particle swarm-quasi-Newton algorithm. The results are then validated by simulation. They show that the improved algorithm is superior, with higher precision and lower position error than the improved DV-Hop used for comparison.
Introduction
With continuous advances in the science and technology, the wireless sensor network, as a brand new hotspot, has won the favor of more and more scholars [1], since it has been integrated into our lives more closely, ubiquitous everywhere. It's always there, and closely related to our daily life [2]. Nowadays, there are a lot of research areas about the wireless sensor network, of which the node localization algorithm has long been a hot topic since it is most basic and essential. It is therefore required to focus on the information elements of supervised object when data monitor analysis and radio communication are made, especially on the location of the observation point [3]. It makes no sense if the node position is undeterminable, so that it is vital for us to explore the node location [4].
In a broad sense, a wireless sensor network is an information-gathering technology built from a large number of low-cost nodes. The nodes are deployed in a predefined area, where they form a self-organizing multihop network. This enables cooperative sensing: information about the perceived objects within the coverage area is acquired, processed, aggregated and sent to the observer [5]. Three elements are generally involved in this process: the sensors, the observer and the objects to be perceived [6]. Compared with traditional wireless networks, a wireless sensor network has limited energy that cannot be replenished in a timely manner and a denser distribution of nodes [7]. Its network topology is susceptible to a variety of factors and changes with them, which places higher demands on the self-organization of the wireless sensor network [8].
This paper explores and analyzes the DV-Hop algorithm, one of the localization algorithms for wireless sensor networks. A combined particle swarm-quasi-Newton algorithm is applied to improve the DV-Hop algorithm, selected according to the number of known nodes, and the improvement is further refined. Finally, the particle swarm-quasi-Newton algorithm is integrated with the improved DV-Hop algorithm in order to perform a simulation analysis and compare the results [9].
Analogue calculation based on the particle swarm and quasi-Newton algorithms
The particle swarm algorithm discussed here is an adaptive algorithm that operates automatically according to given conditions. Its design principle is simple, it is easy to understand, and it has few parameters; however, it is prone to convergence stagnation in its later stages [10]. For this reason, this paper proposes an algorithm that combines the global search strength of the particle swarm algorithm with the local fine search of the quasi-Newton algorithm to improve the DV-Hop localization algorithm, and simulations are then carried out on DV-Hop and the two improved DV-Hop variants, respectively. Finally, the results are extracted from the simulation analysis for a comprehensive and systematic investigation [11].
Analysis of position error as a function of the monitoring area
Here we simulate how the position errors of the original DV-Hop, the DV-Hop improved by the particle swarm algorithm, and the DV-Hop improved by the combined particle swarm-quasi-Newton algorithm change with the monitoring area, and then compare them [12].
1. Set the number of randomly distributed sensor nodes to 100, the number of known nodes to 10, the node communication radius to 50 m, and the side length of the monitoring area to 100 m, 150 m, 200 m, 250 m and 300 m, respectively.
2. Set the number of randomly distributed sensor nodes to 200, the number of known nodes to 30, the node communication radius to 50 m, and the side length of the monitoring area to 100 m, 150 m, 200 m, 250 m, 300 m, 350 m and 400 m, respectively.
For the two cases above, the changes in the average position errors of the three algorithms obtained from the simulation experiments are shown in Fig. 2. The overall trend of the curves shows that the position error of the DV-Hop improved by the particle swarm algorithm is lower than that of the original DV-Hop, and that the DV-Hop improved by the combined particle swarm-quasi-Newton algorithm achieves a precision higher than that of the DV-Hop improved by the particle swarm algorithm alone. The comparative simulation data show that the combined particle swarm-quasi-Newton algorithm is highly effective in improving the location precision of DV-Hop, which supports the viewpoint of this paper.
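As an illustration of the error measure behind these comparisons, the sketch below computes an average position error for a set of node estimates, normalized by the communication radius; the data, the normalization choice and the function name are assumptions for illustration rather than the exact implementation used in the simulations.

```python
import numpy as np

def average_position_error(true_pos, est_pos, comm_radius):
    """Mean Euclidean localization error, normalized by the communication radius."""
    errors = np.linalg.norm(true_pos - est_pos, axis=1)
    return errors.mean() / comm_radius

# Illustrative example: 100 nodes in a 100 m x 100 m area with noisy estimates.
rng = np.random.default_rng(0)
true_pos = rng.uniform(0.0, 100.0, size=(100, 2))
est_pos = true_pos + rng.normal(0.0, 5.0, size=(100, 2))   # simulated estimation noise
print(average_position_error(true_pos, est_pos, comm_radius=50.0))
```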
Simulation analysis of position errors as a function of the sensor node communication radius
The simulation analysis sets up two different scenarios in which the sensor node communication radius is varied, in order to compare the position errors of the three localization algorithms. The curves comparing the average position errors are available [13], and the evolution trends are shown in Fig. 4. From the analysis of these trends, we can see that, when the node communication radius changes, the DV-Hop improved by the combined particle swarm-quasi-Newton algorithm has a lower position error than the other two algorithms; among the three algorithms in the simulation analysis, its average positioning error is the smallest. However, once the node communication radius grows beyond a certain value, the position error is not greatly improved further.
Analysis of position error as a function of the node percentage
These three position errors are compared and analyzed by simulating changes in the percentage of known nodes [14], with the results shown in Figs. 6 and 7.
1. Set the monitoring area to 100 × 100 m with 100 randomly distributed sensor nodes; set the node communication radius to 50 m and the number of known nodes to 5, 10, 15, 20 and 25, respectively.
2. Set the monitoring area to 200 × 200 m with 200 randomly distributed sensor nodes; set the node communication radius to 50 m and the number of known nodes to 10, 20, 30, 40 and 50, respectively.
As seen from Figs. 6 and 7, the DV-Hop improved by the combined particle swarm-quasi-Newton algorithm has lower position errors than the other two algorithms, although the margin is small, perhaps a few percent or even a few tenths of a percent. Even though the gain in precision is small, this algorithm may still be worth considering in applications where high positioning precision is required.
Investigation on the DV-Hop_improved
The population-based search of the particle swarm algorithm is integrated with the local fine search of the quasi-Newton algorithm to implement the further improvement of DV-Hop_improved1 [15].
The unknown node coordinates are generally located by the following process: (1) generate and analyze the initial particle swarm; (2) formulate the nonlinear equations based on the distances between the unknown node and the known nodes, i.e., (x − x_i)² + (y − y_i)² = d_i², i = 1, 2, ..., n, where (x, y) is the position of the unknown node, (x_i, y_i) is the position of the i-th known node, and d_i represents the estimated distance between the unknown node and the i-th known node. As shown in Figs. 8 and 9, the trends of the curves show that, when the monitoring area changes, DV-Hop_improved2 generates the lowest position errors, with a marked improvement and better calculation results than the other two algorithms. As the monitoring area extends, however, the position error also tends to become larger. The results from the simulation are shown in Figs. 10 and 11. Analysis of the trends shown in Figs. 9 and 10 tells us that, compared with the traditional algorithm, the two improved methods show a great advantage, and DV-Hop_improved2 is particularly prominent, since it has an obvious effect until the node communication radius reaches a certain value.
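A minimal sketch of this two-stage search is given below, assuming a standard particle swarm step followed by a quasi-Newton (BFGS) refinement of the best particle; the objective function, the parameter values and the use of SciPy's BFGS routine are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def residual(p, anchors, dists):
    # Sum of squared differences between the distances to the anchors and the DV-Hop estimates d_i.
    return np.sum((np.linalg.norm(anchors - p, axis=1) - dists) ** 2)

def locate(anchors, dists, n_particles=30, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = anchors.min(axis=0) - 10.0, anchors.max(axis=0) + 10.0
    pos = rng.uniform(lo, hi, (n_particles, 2))          # particle positions
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([residual(p, anchors, dists) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):                               # global particle swarm search
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([residual(p, anchors, dists) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    # Local fine search with a quasi-Newton (BFGS) step, started from the swarm's best solution.
    return minimize(residual, gbest, args=(anchors, dists), method="BFGS").x

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true = np.array([40.0, 70.0])
dists = np.linalg.norm(anchors - true, axis=1)           # ideal distance estimates for the demo
print(locate(anchors, dists))
```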
The results show that the combined particle swarm-quasi-Newton algorithm outperforms the others in terms of location precision when applied to DV-Hop, and in conjunction with DV-Hop_improved1 it achieves a substantial improvement. This paper integrates the selected DV-Hop with the particle swarm and quasi-Newton algorithms and systematically explores the general idea and basic process of the whole improved algorithm, with favorable analysis results.
Conclusion
Drawing on the extensive literature on node localization in wireless sensor networks, this paper deeply and systematically explores the basic principle of the DV-Hop algorithm and the origin of its position errors. On this basis, an improved DV-Hop is proposed, with a new approach to handling position errors. The conclusions are as follows. For the position errors that occur in DV-Hop's calculation process, the new approach integrates the particle swarm algorithm and the quasi-Newton algorithm into the location calculation process of DV-Hop. The results of the simulation analysis show that this integrated particle swarm-quasi-Newton algorithm minimizes the position errors of the improved DV-Hop and gives a better calculation effect.
The number of known nodes is defined by a threshold N, based on which the coordinates of the unknown nodes are calculated. Building on this process, the particle swarm-quasi-Newton algorithm is used for the simulation analysis. The results show that the improved DV-Hop based on the particle swarm-quasi-Newton algorithm outperforms the original algorithm, with a smaller error and a better effect.
"Computer Science"
] |
Automatic and Manual Proliferation Rate Estimation from Digital Pathology Images
Digital pathology is a major revolution in pathology and is changing the clinical routine for pathologists. We work on providing a computer-aided diagnosis (CAD) system that automatically and robustly provides the pathologist with a second opinion for many diagnosis tasks. However, interobserver variability prevents thorough validation of any proposed technique for a specific problem. In this work, we study the variability and reliability of breast cancer proliferation rate estimation (PRE) from digital pathology images. We also study the robustness of our recently proposed CAD system for PRE. Three statistical significance tests showed that our automated CAD system was as reliable as the expert pathologists in both brown and blue nuclei estimation on a dataset of 100 images.
Introduction
The development and continued growth of cancerous cells involve various changes at both macro and micro levels of the body. Cell proliferation is usually among the major indicators of the proliferation of cancerous cells. Specifically, breast cancer proliferation rate estimation (PRE) is a crucial step for determining the cancer level and is used as a prognostic indicator [1]. In conjunction with tumor size, lymph node status and histological grade, PRE is an indicator of the aggressiveness of individual cancers and helps in setting the treatment plan [2].
Traditionally, pathologists perform proliferation rate estimation for breast cancer by examining whole slides under a microscope. Over the past two decades, digital pathology has enabled the use of high-resolution digitizers to provide high-resolution images that replace the microscope, as shown in our previous work [3].
There are many clinically approved techniques to estimate the proliferation rate, including the mitotic index, S-phase fraction, nuclear antigen immunohistochemistry (IHC) such as Ki-67 and PCNA staining, cyclins, and PET [4] [5]. Each of these methods has its advantages and disadvantages depending on the clinical setting.
In our work, we use Ki-67-stained biopsy images for PRE. In this technique, PRE is estimated by counting the number of brown nuclei and the number of blue nuclei, as shown in Figure 1. Stromal areas are clinically excluded from counting because stromal tissue does not become cancerous. In our previous work [6], we performed digital stromal area removal to eliminate this ambiguous area for both junior pathologists and automated PRE systems.
Manual PRE is time-consuming and laborious for pathologists. An average of six minutes per image is required for PRE by an expert pathologist, and our expert pathologist required over 10 hours to estimate the proliferation rate for our dataset of 100 images. Many authors have targeted automation of PRE, including our recent work [7]. However, one major concern was not investigated in any of these efforts: the inter-observer variability between expert pathologists [8].
In this paper, we study the statistical inter-pathologist variability of manual PRE across four expert pathologists. Moreover, we investigate the reliability of our proposed automated PRE compared with the four pathologists' opinions for the 100 images in our dataset.
Materials and Methods
Manual ground truth estimation is a major area of interest due to the various human factors that influence the experts. Specifically for breast cancer PRE [9] [10], we find that pathologists provide variable ground truth estimations, which makes it hard to evaluate any automated PRE technique. Many automated PRE techniques have been proposed in the literature, and we recently proposed our technique in [7]; an exhaustive review of existing techniques as well as a detailed description of our technique are presented in [7]. In this paper, we provide the necessary statistical study of inter-pathologist variability. Furthermore, we study the statistical variability between the four manual ground truths and our automated technique. In [7], we compared the automated results with one expert pathologist and a student trained by a pathologist. In this paper, we extend our statistical study to include four expert pathologists and one automated technique.
We use three statistical significance tests to assess the inter-observer variability, as well as the variability between manual and automated [7] PRE: the correlation coefficient, the T-test, and the Chi-square test. We briefly describe them below due to space limitations.
Correlation Coefficient
The value of the correlation coefficient is given by r = Σ(x_i − x̄)(y_i − ȳ) / √(Σ(x_i − x̄)² · Σ(y_i − ȳ)²), where x and y are two random variables and x̄ and ȳ are the corresponding mean values for each sample.
Student T-Test
The Student T-test (or t-test for short) is one of a number of hypothesis tests. The t-test uses the t statistic, the t distribution and the degrees of freedom to determine a probability value that can be used to decide whether the underlying distributions of the two random variables are different, as shown in Equation (2): t = (x̄_T − x̄_C) / √(σ_r²/n_T + σ_c²/n_C), where x̄_T and x̄_C are the mean values of the two data samples r and c, σ_r² and σ_c² are the corresponding variances of the two data samples r and c, and n_T and n_C are the corresponding sample sizes. Moreover, the degrees of freedom (df) for the test must be determined; for the t-test, df is the total number of subjects in both groups minus 2. Given the alpha level, the df and the t value, the t value can be looked up in a standard table of significance. Typically, when the resulting probability is greater than 0.05, the difference between the two samples is considered statistically insignificant, i.e., the two samples are highly consistent [12].
Chi-Square
Chi-square (X²) is a statistical test commonly used to compare observed data with expected data under a specific hypothesis, as in Equation (3): X² = Σ_i Σ_j (O_ij − E_ij)² / E_ij, where O_ij is the observed frequency in the i-th row and j-th column, E_ij is the expected frequency in the i-th row and j-th column, r is the number of rows and c is the number of columns. The appropriate number of degrees of freedom (df) is calculated as (the number of rows − 1) multiplied by (the number of columns − 1). If X² is greater than what is known as the critical value, then the two samples are dependent.
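As a minimal, self-contained illustration of how these three tests can be computed, the sketch below applies SciPy's implementations to two hypothetical observers' nuclei counts; the count values are invented for illustration, and arranging the two score vectors as a two-row contingency table (and hence its degrees of freedom) is an assumption, not necessarily the exact procedure used in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical brown-nuclei counts for the same 10 images from two observers.
obs1 = np.array([120, 85, 40, 200, 150, 95, 60, 175, 110, 130])
obs2 = np.array([118, 90, 38, 205, 148, 100, 58, 170, 115, 128])

r, r_p = stats.pearsonr(obs1, obs2)                 # correlation coefficient
t, t_p = stats.ttest_rel(obs1, obs2)                # two-tailed paired T-test
chi2, chi_p, df, _ = stats.chi2_contingency(np.vstack([obs1, obs2]))  # Chi-square test

print(f"Pearson r  = {r:.3f} (p = {r_p:.3g})")
print(f"Paired t   = {t:.3f} (p = {t_p:.3g})")
print(f"Chi-square = {chi2:.3f} (p = {chi_p:.3g}, df = {df})")
```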
Experimental Results and Analysis
Our data set contains 100 Ki-67-stained histopathology digital images for breast cancer. The blue nuclei are the negative cells, while the brown nuclei are the positive ones. Our collaborating pathologist provided us with ground truth from four different pathologists, including herself as the most senior pathologist. We provided each pathologist with anonymized images labeled in sequence, along with a sheet on which to score the blue and brown nuclei. None of the four pathologists knew about the others, and they scored independently. Our most senior pathologist (a coauthor) spent over 10 hours scoring the 100 cases, an average of 6 minutes per case. Moreover, we ran our proposed automated PRE system [7] over the same 100 images and recorded the automated scoring for both the blue and the brown nuclei.
Correlation Coefficient
The inter-observer reproducibility is first measured using the correlation coefficient [13] [14]. Overall, there is a higher correlation between pathologists in brown nuclei estimation than in blue nuclei estimation. Moreover, our automated CAD system also has a higher correlation coefficient for brown nuclei than for blue ones. Table 1 summarizes the inter-pathologist correlation coefficient values and the manual vs automated correlation coefficient values.
From Table 1, we note that the correlation coefficient indicates a very high correlation between the four observers on the brown nuclei counting. However, the correlation is more variable for blue nuclei counting, ranging between 0.73 and 0.768 across observer pairs. Figure 2 and Figure 3 show the relationship between the manual PRE of observer 1 vs observer 2 and of observer 3 vs observer 4, respectively. On the other hand, we study the correlation coefficients between the manual counts of each of the four experts and our proposed automated system, as shown in Table 2. Examining this table, the brown nuclei counting is highly correlated with the various observers, which indicates an almost perfect reliability of our proposed automated system for brown nuclei estimation. Furthermore, the blue nuclei counting correlations are comparable to the correlations between the manual observers. In other words, our automated blue nuclei estimation is as good as the manual estimation, which demonstrates its clinical reliability.
T-Test
We performed a two-tailed paired T-Test on all the pairs between the four observers and the automated system.
Our null hypothesis is that there is a difference between the observers on one hand and the automated system on the other hand. All of the significance probability values reported in Table 3 show statistically insignificant differences between the manual expert estimations themselves on one hand, and between both the third and fourth observers and the automated system on the other hand. Table 4 shows the interpretation of the p-values. As the p-values are less than 0.01, there is strong evidence to reject the hypothesis that there is no relationship (i.e., that there is a difference) between the observers on one hand and the automated system on the other hand, in both brown and blue nuclei count estimation.
Chi Square
We computed the Chi-square test for all pairs and compared it with the critical chi-square value at df = 1 and a 99% confidence level (probability = 1 − 0.99 = 0.01). In all pairs (including inter-observer pairs and our automated method), the calculated chi-square value is greater than the critical value, which means that each pair of samples is dependent. In other words, it is statistically reliable to consider any of the expert scoring values or the automated scoring values. Figure 4 and Figure 5 show two sample images with high agreement and low agreement between observers, respectively.
Conclusion
We proposed a detailed statistical study for breast cancer proliferation rate estimation. We studied the inter-observer variability between four expert pathologists on a set of 100 cases. We also studied the reliability of our recently proposed automated PRE system. On the 100 cases, we found that the variability of brown nuclei estimation was statistically insignificant between the various pathologists. We also found that our proposed system's brown nuclei estimation was statistically reliable. On the other hand, our three statistical significance tests showed fairly high reliability between pathologists for blue nuclei estimation. The same conclusion applies to our proposed automated blue nuclei system.
Figure 1 .
Figure 1. Sample images of Ki-67-stained pathology slides showing blue nuclei and stromal areas.
Figure 2 .
Figure 2. Relationship between first and second observers' nuclei count estimates.
Figure 3 .
Figure 3. Relationship between third and fourth observers' nuclei count estimates.
Figure 4 .
Figure 4. Example of an image that has the same value for the brown nuclei across all observers.
Figure 5 .
Figure 5. Example of an image where the observers' results are completely different.
Table 1 .
Significance values of correlation coefficients.
Table 2 .
Manual vs automated significance values of correlation coefficients.
Table 3 .
Significance values resulting from paired T-Test.
"Computer Science"
] |
Thermal, Viscoelastic and Surface Properties of Oxidized Field’s Metal for Additive Microfabrication
Field’s metal, a low-melting-point eutectic alloy composed of 51% In, 32.5 Bi% and 16.5% Sn by weight and with a melting temperature of 333 K, is widely used as liquid metal coolant in advanced nuclear reactors and in electro–magneto–hydrodynamic two-phase flow loops. However, its rheological and wetting properties in liquid state make this metal suitable for the formation of droplets and other structures for application in microfabrication. As with other low-melting-point metal alloys, in the presence of air, Field’s metal has an oxide film on its surface, which provides a degree of malleability and stability. In this paper, the viscoelastic properties of Field’s metal oxide skin were studied in a parallel-plate rheometer, while surface tension and solidification and contact angles were determined using drop shape analysis techniques.
Introduction
Low-melting-point metals, such as Hg, Ga, EGaIn and Galinstan®, have proven to be very useful for certain applications [1]; for instance, in flexible electronics [2][3][4], electronic devices [5][6][7], fluidic microchannels [8][9][10] (Cumby et al. [11], Paracha et al. [12]), antennas (Ladd et al. [13]), and free-standing micro-structures, in the form of both wires and droplets. These liquid metals, except Hg, usually develop a very thin oxide film when they are exposed to an oxygen-rich atmosphere, which induces a deviation from liquid behaviour, as reported in previous works, such as Zamora et al. [14] and Dickey et al. [15], but allows these metals to form stable shapes [16]. This oxide skin, a few nanometres in thickness, was studied for Ga alloys by Larsen et al. [17] and for indium alloys by Panek et al. [18] and Suzuki et al. [19]. It prevents the bulk material from further oxidation, but also affects the viscoelastic properties, surface tension and contact angle of this kind of metal. In the presence of oxygen and above its melting point, liquid Field's metal also has an oxide skin and, as a consequence, its behaviour is dictated by the surface properties, conditioning its capacity to form droplets. For this reason, the rheological properties and the surface tension of Field's metal were measured and analysed. In recent years, low-melting metal alloys have been used with other metals for new manufacturing applications. Allen and Swensen [20] tested a Field's metal lattice encased in silicone rubber that could enable highly maneuverable robotic structures. Bismuth and indium alloys in particular, combined with other metals and materials, have been used instead of lead for metal solders due to their non-toxic properties. Indium-based alloys can also be used in applications where the alloy needs thermal conductivity. Due to their low melting point, wettability, ductility and fatigue resistance, indium alloys can be used in bending, anchoring or jig applications, but also to wet and weld both non-metallic and metallic surfaces.
In recent years, several authors have studied and characterized the behaviour of low-melting-point metals. Using oscillatory rheology with parallel plates, Larsen et al. [17] studied the surface properties of EGaIn and concluded that the oxide skin has both elastic and viscous properties. Xu et al. [21,22] analysed the characteristics of this oxide layer for Ga and EGaIn, determining both their critical stress and surface energy in an acid bath with different concentrations of HCl, as well as in an inert gaseous atmosphere, to regulate the presence of an oxide skin and any effect on the viscoelastic behaviour. The literature contains several studies analysing properties such as the surface tension [23] and viscosity [24] of other liquid metals, such as alloys of Al, Sn, Zn, Bi and In. However, only the work of Lipchitz has dealt with Field's metal [25], with the author concluding that further in-depth studies are needed on the dynamic properties of this material. Other essential aspects in the characterization of these liquid metals are the contact and solidification angles that the molten metals form on certain substrates, which are a key factor when assessing their possible technical applications [26]. In this respect, the studies of Liu et al. [27], Khan et al. [28], Kramer et al. [29] and Boley et al. [30] examined variation in the contact angle of droplets on different substrates and under different oxidation conditions. However, the wettability properties of Bi and In alloys have not been studied as thoroughly, and only the works of Wang et al. [31] and Wu et al. [32] could be found. To date, no similar study has been carried out for an oxidized Field's metal. This work represents the first analysis of the surface properties of this low-melting-point metal alloy and its suitability for use in applications associated with droplet deposition.
Materials and Methods
A detailed experimental study was carried out on the properties of oxidized Field's metal, to evaluate the influence of the oxide skin on its overall behaviour. For this purpose, we first analysed the dynamic properties of Field's metal by means of parallel-plate rotational rheology, including both oscillatory sweep and torsional flow, to determine the viscoelastic response of this alloy in the presence of the oxide skin. Secondly, its static properties were analysed, measuring its surface tension and solidification angle, using both sessile drop and pendant drop tensiometry. Similarly, the same measurements were carried out on EGaIn (75.5% Ga and 24.5% In by weight), to compare the above-mentioned properties of both materials.
Surface Characterization and Thermal Parameters of Field's Metal
As a preliminary study of Field's metal, a chemical characterization of its oxide surface was carried out using X-ray Photoelectron Spectrometry (XPS), performed with a Thermo Scientific "K-Alpha" instrument, and the obtained data were analysed with the Avantage Data System software to determine the composition of the oxide layer. Differential Scanning Calorimetry (DSC) was used to obtain its thermal parameters [32,33], using a Mettler-Toledo DSC822e calorimeter at a heating and cooling rate of 10 K/min over a temperature range from 298 to 348 K.
All XPS spectra were collected using Al-K radiation (1486.6 eV), monochromatized by a twin crystal monochromator, yielding a focused X-ray spot (elliptical in shape, with a major axis length of 400 µm) at 3 mA × 12 kV. The alpha hemispherical analyser was operated in the constant energy mode with survey scan pass energies of 200 eV to measure the whole energy band and 50 eV in a narrow scan to selectively measure the particular elements. XPS data were analysed with Avantage software. A smart background function was used to approximate the experimental backgrounds, and surface elemental composition was calculated from background-subtracted peak areas. Charge compensation was achieved with the flood gun system, which provides low-energy electrons and low-energy argon ions from a single source.
Rheological Characterization
The literature mentions a variety of rheological methods and types of rheometer [34] used to analyse the viscoelastic properties of liquid metals [24,35]. In this work, rotational rheology was used, following the same line of study as Dickey et al. [15,17] and Xu et al. [21,22], with a TA Instruments "AR-G2" rotational rheometer and a parallel-plate geometry of 25 mm diameter (both upper and lower plates were made of anodized aluminium). The gap size was varied from 1.4 to 2.4 mm for Field's metal, and from 1.2 to 2.1 mm for EGaIn, depending on the amount of sample. All measurements of the viscoelastic properties of Field's metal were performed at 348 K, and at 303 K in the case of EGaIn, with a common equilibration time of 30 s for all the samples. To study the linear response, oscillatory amplitude sweeps were made, while three different modes of the torsional flow test were followed to study the non-linear response (peak hold, steady flow and flow ramp). No pre-shear was applied in the rheological tests, to minimize the shear history, and samples were loaded into the rheometer by syringe. The test parameters employed for each type of rotational technique are included in Table 1. In addition, a flow temperature ramp was followed for both Field's metal and EGaIn, imposing a constant shear rate of 5 × 10⁻³ s⁻¹ (to ensure the stability of the torsional flow) and a thermal ramp rate of 1 K/min, within a temperature range from 343 to 473 K for Field's metal, and from 303 to 383 K for EGaIn.
Axisymmetric Drop Shape Analysis
Drop shape analysis involves a group of techniques, based on a numerical analysis of the shape and dimensions of a drop, to determine surface and interfacial properties, such as interfacial tension, surface energy or contact angle [36]. In the present study, pendant drop and sessile drop tensiometry were used to determine surface tension [21,22], and solidification or contact angles [28][29][30], respectively, since both are considered reliable methods. However, we also took into account the condition of a low-Bond number for the physical problem associated to this type of droplet, where gravitational forces can be negligible compared with interfacial forces [37][38][39]. For this drop shape analysis, we used a Krüss "DSA100" analyser, which has been used in similar works [40], and considered that the droplets preserve vertical symmetry (Axisymmetric Drop Shape Analysis; ADSA [41][42][43]). The experiments were performed using syringes with a 25 gauge needle, i.e., inner diameter of 0.26 mm and outer diameter of 0.515 mm, to generate sessile and pendant drops.
The ADSA for sessile drops was performed by generating droplets of Field's metal (ρ = 7880 kg m −3 ) at 358 K, falling on a substrate at 358 K. After deposition of the droplet, the system was cooled from 358 to 318 K to obtain complete solidification of the sessile drop. Contact angles were measured using the ADSA software (version 1.90.0.14) of the DSA100 (Krüss) analyser [44,45] every 5 K.
The solidification and contact angles were determined by individually placing the droplets on a substrate in the presence of air (O 2 ) and inert gas (N 2 ), by means of a vertical syringe. The substrate temperature was corrected by measuring its temperature in direct contact with a thermocouple.
For Field's metal, the syringe was heated to 358 K and sessile droplets were placed on substrates, which were also heated to reach 358 K. The substrates were of differing natures to represent insulating (glass) and conductive (AISI 316L steel) materials, and polymers (PTFE and resin). Note that the resin used in these experiments was a high-temperature photopolymer resin commonly used in stereolithography, with a heat deflection temperature (HDT) of 511 K at 0.45 MPa.
After heating, the system was cooled from 358 to 318 K, and images were taken at 5 K intervals (Figure 1). The same substrates were used for the EGaIn, but the experiments were performed at a constant syringe and substrate temperature of 298 K, so only the contact angle was measured. Pendant drop tensiometry was used to obtain the surface tension value of Field's metal, with and without an oxide skin, generating the droplets at 358 K in the presence of air or nitrogen, respectively. Pendant drops of different sizes were considered and measured by the ADSA software, due to the influence of the oxide skin on the surface tension values, as reported in previous works [21,22].
XPS and DSC Measurements of Field's Metal
To study the oxidation of Field's metal, a sample was melted and later solidified in a controlled atmosphere with a controlled oxygen content (2500 ppm). The presence of an oxide skin was clearly noticeable, even at low oxygen concentrations. The results obtained from the XPS analysis are summarised in Figure 2. The energy peaks can be assigned to Bi2O3, In2O3, SnO and SnO2 according to the references shown in Table 2 for (a) bismuth, (b) indium and (c) tin. The results obtained from the thermal analysis of the alloy are shown in Table 3.
Rheological Characterization of Field's Metal
The rheology of liquid metals with an oxide skin is usually dominated by the surface stress of the film, which presents elastic properties and yield stress, with the bulk of the material usually being a Newtonian fluid of very low viscosity [35]. While the viscoelasticity of this oxide layer has been studied in detail for certain low-melting-point metals, such as Ga or EGaIn, there are no previous studies on the response of this oxide layer in Field's metal.
Starting with the linear rheological analysis, the results of the oscillatory amplitude sweeps are presented first (Figure 3). The noise appearing in the moduli in the first decade of strain amplitudes can be attributed to the accuracy of the rheometer at very low strain values and to the loading procedure of the samples. As the tested samples did not occupy the entire surface of the 25 mm rheological plates, an effective test diameter of 24 mm was taken when transforming the oscillatory moduli to surface moduli [17,21]. In this respect, the linear viscoelastic region (LVR) of Field's metal is defined in the range of 5 × 10⁻³ to 0.9% strain, with constant oscillatory modulus values of G′ = 5358 ± 147 Pa and G″ = 84 ± 6 Pa, giving a phase angle of δ = 0.9° at 1 Hz.
These results show that Field's metal has a lower phase angle value, i.e., it is less viscous than EGaIn in the linear regime, even though it has a higher G′_S and, therefore, greater surface elasticity than EGaIn. For both liquid metals G′ >> G″ (and G′_S >> G″_S) in the LVR, which shows that the elasticity of the oxide skin dominates the linear viscoelastic behaviour of this type of metal.
With regard to the non-linear rheological behaviour of these liquid metals, the results of the torsional flow with a peak hold of shear rate are presented first. Figure 4 shows how, by increasing the shear rate imposed during each test, not only are lower values of viscosity obtained, but also a decrease in the time required to reach steady state in the non-linear regime. Therefore, by representing the resulting steady times versus shear rate, a decreasing exponential trend is obtained for both metals (Figure 5), providing that, for shear rates above 0.5 s⁻¹, the steady time is lower than 2 s for Field's metal and lower than 3 s for the EGaIn. That is, in general, the oxidized Field's metal reaches the steady state slightly faster than the oxidized EGaIn under shear flow.
Regarding the steady shear stress developed during these torsional flow tests, the maximum obtained value was 60.4 Pa for Field's metal and 92.9 Pa for EGaIn. This is also indicative of the fact that Field's metal develops lower shear stress levels for a given shear rate than EGaIn, meaning that Field's metal has lower viscosity than EGaIn in liquid state in the presence of the oxide skin. This agrees with the values of the phase angle provided by the oscillatory amplitude sweeps.
During the torsional flow tests, stretching phenomena were observed at the oxide skin of the rheological samples ( Figure 6). As previously reported by Xu et al. [21] for EGaIn, buckling lines were evident in the oxidized Field's metal, which indicates partial or apparent cracking of the oxide skin when yielding. However, the presence of oxygen in the surrounding atmosphere is able to renew the oxide skin, maintaining such metals' capacity to keep flowing. In the steady torsional flow tests, the rotational rheometer applies a ramp of shear rates, until the samples reach a steady state. Considering the results obtained in torsional flow with peak hold, a characteristic sampling time of 5 s was imposed for this group of tests. These experiments (Figure 7) confirmed that the oxide skin provides Field's metal with a critical shear stress, which is virtually constant over a wide range of shear rates, as seen in similar studies [21,22]. Therefore, in the presence of air, the oxide skin of these metals generates a characteristic and constant tension during shear flow, i.e., yield stress. As a consequence, these oxidized layers show solid-like behaviour below the yield stress, but a perfectly plastic response above it, as the shear stress stops increasing with the shear rate, keeping the stress value practically constant. The resultant shear viscosity of the oxidized Field's metal shows a shear thinning behaviour and slightly lower viscosity values than the oxidized EGaIn over a wide range of shear rates (Figure 8). For the rotational tests with torsional flow ramp (Figures 9 and 10), the results were very similar to those obtained with a steady torsional flow, and the oxide skin also maintained the shear stress value after a certain critical shear rate was exceeded during the flow ramp. However, the yield stress values achieved in each flow test vary slightly (Figures 7 and 9), which can mainly be attributed to the differences between samples due to their manual loading in the rheometer. The shear viscosities of these oxidized metals follow the same behaviour as the steady torsional flow tests (Figures 8 and 10). Several useful results can be extracted from the averaged flow curves. The oxidized Field's metal is less viscous in molten state than the oxidized EGaIn, and starts to flow at lower shear rates and develop a lower value of yield stress. In contrast, Field's metal is slightly less stable during shear flow, as shear stress tends to fluctuate with increasing shear rate. On the other hand, EGaIn shows a less variable flow curve, with less pronounced fluctuations and a smooth increase in shear stress before yielding, indicating that the oxide layer of EGaIn is slightly more stable in shear flows.
The mean yield stress values of both metals were determined, taking the mean value of all the shear stress data obtained in the averaged flow curves above the critical shear rate. For Field's metal, the mean yield stress was 55 Pa in steady torsional flow, and 58.7 Pa with the torsional flow ramp, providing a nominal value of 57 ± 5 Pa. Similarly, for EGaIn, the mean yield stress in steady torsional flow was 85.5 Pa and 89.1 Pa with the torsional flow ramp, providing a nominal value of 87 ± 5 Pa. The main results of this rheological study are summarized in Table 4, where they are compared with the corresponding values taken from previous works.
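A minimal sketch of this extraction step is shown below: the nominal yield stress is taken as the mean (and spread) of the shear stress values measured above the critical shear rate. The flow-curve arrays and the chosen critical shear rate are placeholders, not the measured data.

```python
import numpy as np

def nominal_yield_stress(shear_rate, shear_stress, critical_rate):
    """Mean and standard deviation of the shear stress above the critical shear rate."""
    mask = shear_rate > critical_rate
    return shear_stress[mask].mean(), shear_stress[mask].std()

# Placeholder flow curve: stress rises and plateaus near ~57 Pa once the oxide skin yields.
shear_rate = np.logspace(-4, 1, 60)                        # s^-1
shear_stress = 57.0 * (1.0 - np.exp(-shear_rate / 3e-3))   # Pa (illustrative shape only)
mean_ty, std_ty = nominal_yield_stress(shear_rate, shear_stress, critical_rate=1e-2)
print(f"nominal yield stress = {mean_ty:.1f} +/- {std_ty:.1f} Pa")
```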
Finally, a temperature flow ramp was applied to both liquid metals with oxide skins (Figure 12) to analyse the influence of temperature on shear stress and viscosity. In the temperature sweep from 343 to 473 K, in the presence of an air atmosphere, Field's metal maintained its shear stress at a stable level of approximately 55 ± 10 Pa from its melting point to 438 K. Above 438 K, the oxide layer gradually degrades, becoming scorched and unable to sustain further renewal, while also changing its colour and appearance. In the case of EGaIn, applying a temperature sweep from 303 to 383 K, the shear stress followed a stable trend up to 341 ± 2 K, above which the oxide skin sharply degraded, leading to an abrupt increase in the shear stress and a rapid loss of its rheological surface properties. Once degraded, the appearance of the oxide skin of both metals was quite similar. For Field's metal, we found a minimum value of shear stress at 382 ± 2 K, which can be taken as the optimal processing point for this material when yielding in liquid state (minimum shear viscosity). In contrast, the shear stress and viscosity of EGaIn always showed an increasing trend as the temperature increased from its melting to 341 K, which constitutes the critical point for using this liquid metal.
Axisymmetric Drop Shape Analysis
This section describes how ADSA of sessile drops was used to determine the solidification and contact angles, and the effect of the overheating, since these parameters define the wettability of these low-melting-point metals [45,53].
In an oxygen-rich atmosphere (O2), the oxide skin meant that the droplets of Field's metal did not take on a spherical form (Figure 1a), as reported previously by Hutter et al. [8] and Khan et al. [28] for EGaIn. By contrast, in a nitrogen-rich atmosphere (N2), the non-oxidized droplets of Field's metal tended to generate spherical shapes, with higher values of solidification angles (Figure 1b). Certain contractions and shape changes were detected on the upper part of the droplets once the temperature fell below the melting point (see the 328 K photographs on the right of Figure 1a,b). No further variation in the contact angle was detected below 328 K.
To compare the liquid metals, Table 5 summarizes the experimental values of the solidification and contact angles obtained on the cited substrates. The results demonstrate that the oxide skin increases the wettability of both metals in the liquid state, leading the drops to have lower contact angles, as reported previously [27][28][29]. In the presence of N2, both metals showed similar behaviours, and higher contact angles were reached with all the substrates used. It can be observed that, when the oxide skin is present, the contact and solidification angles generally increase along with surface roughness (see Ra in Table 5). The results show that the contact angle was strongly affected by the oxide skin when glass and steel substrates were used (decreasing by around 7 ± 1°). On the other hand, when polymeric substrates were used (PTFE and resin), the contact angle only decreased by around 1.5 ± 0.5°.
The contact angles of Field's metal are represented in Figure 13 as a function of substrate temperature in order to evaluate the effects of overheating and solidification on its wettability. Similar variations were measured in both cases, reaching a maximum decrease of 4° over a range of 32 K. The solidification angles obtained for each substrate are included in Table 5.
Pendant drop tensiometry was used in order to determine the interfacial tension of Field's metal [39,40]. In the presence of N2, the oxide skin of these liquid metals does not form and the surface tension of the bulk of the material can be determined by ADSA. In these experiments (Figure 14), a mean value of 417 mN m−1 was obtained for Field's metal, and 444 mN m−1 for EGaIn, the latter in close agreement with the values mentioned in the literature [15,21]. In the presence of O2, the pendant drops of Field's metal were measured over a wide range of sizes, revealing different surface tension values, as previously reported [21,22]. This was attributed to the properties of the oxide skin, as its yield stress causes the droplets to deviate from the behaviour of purely liquid droplets. As a result, a modification of the Laplace equation of equilibrium for a pendant drop was proposed by Xu et al. [21] (Equation (1)). Based on Equation (1), an effective surface tension (σ) can be defined. This can be split into two terms: the first one constant and associated with the pure liquid surface tension (σ0), and the second a function of the yield stress of the oxide skin (τY) and the characteristic length of the drop, usually defined by its diameter (d). In our experiments, similar results were obtained for surface tension with respect to d, including the linearity between these variables, but, intriguingly, we obtained values that were both higher and lower than σ0 (417 mN m−1).
Based on rheological results and previous studies, the droplets of these liquid metals behave as a viscoelastic membrane (oxide skin) coating on a Newtonian fluid (bulk). Thus, the surface properties of a membrane depend on the interior volume, and it is possible to define a critical volume that delimits the transition between elastic and viscoplastic behaviour [54]. That is why, for these liquid metals, the effective surface tension must also be related to the drop volume (V). For this reason, the surface tension data obtained with pendant drop tensiometry were analysed as a function of the drop volume; see Figure 15. Figure 15 shows two different trends in the apparent surface tension and it can be observed how, around a certain volume, a transition occurs in the surface properties of Field's metal. This also explains the difficulty in finding experimental values close to σ 0 . Linear regressions (dashed lines) of the two trends were obtained ( Figure 15). The cut-off point of the two regressions yielded a critical volume V * = 1.7 µL with an associated interfacial value of 412 mN m −1 , which is quite similar to the σ 0 value obtained for the liquid metal without oxide skin ( Figure 14).
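The procedure described above can be sketched as follows: fit a straight line to each of the two apparent-surface-tension trends and take the intersection of the two fits as the critical volume. The data points and the split between the two trends below are invented placeholders, not the measured values.

```python
import numpy as np

# Placeholder pendant-drop data: apparent surface tension (mN/m) versus drop volume (uL).
v_small = np.array([0.8, 1.0, 1.2, 1.4, 1.6])
s_small = np.array([395.0, 399.0, 403.0, 407.0, 411.0])
v_large = np.array([1.8, 2.2, 2.6, 3.0, 3.4])
s_large = np.array([414.0, 419.0, 424.0, 429.0, 434.0])

# Linear regression of each trend: sigma = a * V + b.
a1, b1 = np.polyfit(v_small, s_small, 1)
a2, b2 = np.polyfit(v_large, s_large, 1)

# The intersection of the two fitted lines gives the critical volume and its surface tension.
v_crit = (b2 - b1) / (a1 - a2)
sigma_crit = a1 * v_crit + b1
print(f"V* = {v_crit:.2f} uL, sigma(V*) = {sigma_crit:.0f} mN/m")
```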
However, it should also be noted that the accuracy of ADSA algorithms may decrease when dealing with low Bond numbers [38,43]. In the pendant drop experiments with Field's metal, we obtained Bond numbers between 0.106 and 0.138. Surface tension data were represented according to the Bond number (Figure 16), where two opposite trends can be clearly observed. Indeed, there is a range of surface tensions where Bo takes minimum values, statistically providing an average critical surface tension value of 413 mN m−1. This value again corresponds to σ0, which can finally be established at 415 ± 3 mN m−1. As a result, the surface tension data measured for Field's metal may vary due to its low Bond number, although the viscoelastic contribution of the oxide layer seems to have a stronger influence on the results. Therefore, for drop sizes below a certain critical value, elastic effects can result in apparent surface tension values lower than σ0, while, with larger drop sizes, the oxide skin exceeds its yield stress and acquires permanent deformations, also leading to apparent values of σ that are higher than σ0.
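For reference, a short sketch of the Bond number estimate used in this kind of check is given below, with Bo = Δρ g L² / σ and the drop radius taken as the characteristic length L; the choice of characteristic length and the numerical drop radius are assumptions for illustration, since the exact definition applied by the ADSA software is not specified here.

```python
# Bond number Bo = delta_rho * g * L^2 / sigma, using the drop radius as characteristic length.
rho_metal = 7880.0    # kg/m^3, density of Field's metal (value quoted in the text)
rho_air = 1.2         # kg/m^3, density of the surrounding gas
g = 9.81              # m/s^2
sigma = 0.415         # N/m, bulk surface tension of Field's metal (~415 mN/m)
radius = 0.8e-3       # m, illustrative pendant-drop radius (assumed, not measured)

bond = (rho_metal - rho_air) * g * radius**2 / sigma
print(f"Bo = {bond:.3f}")
```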
Conclusions
This work describes an experimental study that was carried out to determine the rheological and surface properties of Field's metal, with the aim of evaluating the suitability of this liquid metal for future applications involving microdroplet deposition. In addition, these properties are compared to those of EGaIn.
• Regarding the rheological properties, the results point to the lower viscosity of Field's metal compared with EGaIn in the liquid state, and also to the elasticity of the oxide skin, which dominates the viscoelastic behaviour in both cases.
• Torsional flow tests allowed us to determine the yield stress (τY) of the oxide skin, an essential process parameter, as well as the critical shear rate needed to start the shear flow of these materials. Concerning the evolution of the yield stress with the shear rate, it is concluded that Field's metal requires a lower shear stress to flow, although its shear flow is somewhat less stable than that of EGaIn. In addition, by means of torsional flow tests with peak hold, it was possible to analyse the time needed to reach the maximum steady shear stress.
• The surface properties of Field's metal were also studied in order to evaluate its capacity to generate microdroplets, especially in "Drop on Demand" processes. The solidification angle of Field's metal on different substrates was therefore analysed at different temperatures, both with and without the oxide skin. It was concluded that the oxide layer reduces the solidification angle of Field's metal, with maximum reductions of approximately 8°.
• The surface tension of both liquid metals was analysed using the pendant drop method.
In the absence of oxides, a well-defined and fixed surface tension value was obtained for the bulk of the material. By contrast, when these metals are oxidized, the viscoelastic properties of the oxide layer affect the interfacial stress values, producing apparent values that may be lower or higher than those corresponding to the bulk of the material.
Thus, for small droplet volumes, surface tension values lower than that of the bulk were obtained, as the elasticity of the oxide skin contributes to the formation of drop shapes with a surface energy lower than that expected for a pure liquid. By contrast, above a certain drop volume, the surface tension overcomes that of the bulk, since the yield stress of the oxide layer is exceeded, and permanent deformations appear. Such deviations in the surface properties were identified by analysing the surface tension results of pendant drop experiments as a function of droplet volume and the Bond number.
• In the rheological study, the flow temperature ramp test demonstrated the potential usefulness of Field's metal as an appropriate substitute for Ga alloys at temperatures above 338 K, since this metal remains rheologically stable up to temperatures close to 438 K. This would make Field's metal suitable for "Drop on Demand" applications at high temperatures, as well as for combined use with liquid alloys, as already suggested by Shaikh et al. [55].
"Materials Science"
] |
The quest to exploit the Auger effect in cancer radiotherapy - a reflective review.
Abstract: To identify when the potential of the Auger effect for clinical application was first recognized and, after tracing the salient milestones towards that goal, to evaluate the status quo and future prospects. It was not until 40 years after the discovery of Auger electrons that the availability of radioactive DNA precursors enabled the biological power, and the clinical potential, of the Auger effect to be appreciated. Important milestones on the path to clinical translation have been identified and reached, but hurdles remain. Nevertheless, the potential is still evident, and there is reasonable optimism that the goal of clinical translation is achievable.
Introduction
This review is not intended to be comprehensive or up-to-date; rather, it is the reflections of two overlapping personal journeys that have at times included forays into the subject of this review. We aim to recount what seemed to us to be the major stepping stones towards the goal of translating the Auger effect into a therapeutic modality. In the introductory lecture at the Kyoto Symposium, this goal was referred to as the 'Treasure'. The allegory was further extended to represent some of the key steps towards the goal as maritime voyages to a tropical island (Auger Island) in search of the hidden treasure.
Unavoidably this is a biased view (albeit in two dimensions), and we apologise in advance for those important contributions that we have not included. Likewise, our perception of the key conceptual steps is subject to debate. Where possible we have referred the reader to more comprehensive reviews of particular aspects. We also dwell on the personalities that have been involved in this quest, using as a framework the gatherings of the Auger community every 4 years in satellite symposia of the International Congress of Radiation Research (Table 1), starting with the meeting in Oxford in 1987 (Figure 1). Also listed in Table 1 were two prior meetings which were important in germinating the Auger community. These early meetings were discussed briefly at the Jülich Symposium by one of us (Feinendegen 2012), but are described again here for the sake of cohesion.
The microdosimetry symposium in Jülich in 1975 is regarded as the foundation of the Auger symposium series, but even before that, there was a meeting in Vienna from 9 to 13 October 1967 on 'Biological effects of transmutation and decay of incorporated radioisotopes' (Figure 2). The introductory and broad review at this meeting by one of the current authors (LEF) on 'Problems associated with the use of labelled molecules in biology and medicine' (available online in the Supplementary Material) included a paragraph on the Auger effect (Feinendegen 1968). It noted a lack of knowledge regarding the consequences of Auger effects in biological systems, especially in DNA, in contrast to the available knowledge on the Auger effect in radiation chemistry. There, experiments had shown that, for instance, the induction of photon-induced cascades of electrons in iodine by soft X-irradiation of CH3I led to molecular disruption (Carlson & White 1966). The nature of this disruption fitted the hypothesis that charge transfer processes in the molecular ion, with subsequent charge transfer and distribution, decompose the molecule violently by Coulombic forces. Biological effects from the Auger emitter 125I thus need to be considered as a result of both low-energy electrons and localized molecular charge transfers. The Vienna reviewer, one of the authors (LEF), had worked for years with 125I-iododeoxyuridine (125I-UdR) as a tracer for DNA and DNA synthesis, using the Auger electrons for micro-autoradiography and the photon emission for external counting, for instance in whole-body rodent studies. From reading the results of the radiation-chemical experiments, the need to study the radiobiological consequences and usefulness of the Auger effect was recognized, especially in the DNA of mammalian cells, as represented in Figure 2. Experimental work began immediately thereafter in Jülich (Ertl & Feinendegen 1969), and elsewhere. Notice that this was more than 40 years after the description of the Auger effect.
For those readers not familiar with the details of the Auger phenomenon, a nice outline can be found in the proceedings of the Boston Symposium (Howell 2008), and in more detail elsewhere (Charlton & Booz 1981; Pomplun et al. 1987). Auger's experiments involved the study of photographs of photoelectron tracks visualized using a Wilson cloud chamber. They show the very low-energy, short tracks from the Auger cascade following photo-ionization. The experiments are described in the publication of a lecture he gave late in life (Auger 1975). Apart from reproductions of photographs of tracks of photo- and Auger electrons, his review also includes an old photograph of the equipment that he used. A much clearer picture of a Wilson cloud chamber apparatus, in the Museum at the Cavendish Laboratory in Cambridge, is accessible on the internet.
The work of Pierre Auger was not the whole story of the discovery of what we now know as the Auger effect. Lise Meitner is credited with the independent discovery of the effect in 1922 (Meitner 1922), before Auger (Auger 1923). At that time she was already working in Berlin with the physicist Otto Hahn, who in 1944 was awarded the Nobel Prize for Chemistry for the discovery of nuclear fission in December 1938. Meitner is often mentioned as one of the examples of women who were overlooked by the Nobel Committee (Bentzen 2000; Duparc 2009). We perhaps should acknowledge that Pierre Auger may have received more than his share of the credit for the discovery of what we now call Auger electrons, although this has been disputed (Duparc 2009).
Early biological experiments
The crucial link of the Auger effect with biology undoubtedly came with the ability to incorporate radioactive elements into DNA. The first example was tritiated thymidine (usually incorporated into the methyl group), which features on the cover of the proceedings of the 1967 meeting in Vienna (Figure 2). DNA polymerase hardly notices the difference when the methyl group is replaced with an iodine atom of similar radius, which thus allows the incorporation of 125I into DNA. Within 2 years after the Vienna meeting, the first experiments in the whole body of mice compared the toxicity of 125I and tritium in DNA. The degree of 125I toxicity surpassed that of tritium by a severalfold larger factor than expected on the basis of cell doses from tritium beta and Auger electrons (Ertl & Feinendegen 1969). Also early on, seminal experimental data obtained in parallel on mouse bone-marrow cells in vivo, a number of mammalian cells in culture, and bacteriophages, largely in Europe, in Israel and in the USA, unravelled the:
(1) Extraordinary biological toxicity of the Auger effect compared to that from beta-particles and external low Linear Energy Transfer (LET) irradiation;
(2) Cellular dosimetry and the particular consequences of the Auger effect in DNA of mammalian cells and microorganisms;
(3) Limited repair of Auger effect-induced DNA damage;
(4) Particular sensitivity of the late-S-phase DNA to damage by the Auger effect;
(5) Relative toxicities of the Auger effects in different molecular positions and cell sites, and
(6) Potential of the Auger effect for tumour therapy.
Amongst the earliest reports, summarized in Feinendegen (1975), showing the remarkable cytotoxicity of iodine-125 decay in DNA, was one that compared iodine-125 with iodine-131 and tritium on a per-decay-per-cell basis (Hofer & Hughes 1971), reproduced in Figure 3. Incidentally, Kurt Hofer's engaging personality and boundless enthusiasm for the Auger effect were infectious, and he certainly inspired one of the authors (RFM).
A few years later, 125 I-UdR was incorporated into bacteriophage DNA growing in Escherichia coli, and estimation of the average molecular size of DNA by sedimentation in sucrose gradients revealed that, on average, each decay event yields a double-stranded break (DSB) (Krisch & Sauri 1975). In these bacteriophages, the decay event also corresponds to a lethal event. Interestingly, this came after the first experiments in mammalian cells, showing that a lethal event corresponds to about 50 decays per cell nucleus (Burki et al. 1973).
125 I-iododeoxyuridine as a therapeutic agent . . . and then 125 I-labelled Tamoxifen
The remarkable cytotoxicity of iodine-125 decay in DNA, combined with the simple idea that 125 I-iododeoxyuridine would be incorporated selectively into dividing cells, immediately suggested a potential therapeutic strategy, assuming some measure of selectivity for rapidly dividing tumours. The results of experiments with tumour-bearing mice, published in Nature, led Bill Bloomer and Jim Adelstein (Bloomer & Adelstein 1977) to the conclusion that the idea 'provides the basis for a new approach to the treatment of cancer.' Figure 4 shows Bill Bloomer and Jim Adelstein at the Jülich meeting in October 1975, about a year before they submitted their paper to Nature. Clearly this was a landmark study and certainly the first step towards the goal of translation of the Auger effect. However, a strategy based on the rate of cell division is unlikely to provide an exploitable therapeutic ratio except in specific instances, one of which is advanced malignant meningitis (Kassis et al. 2004). The Boston group were quick to realize that more sophisticated biological targeting was required. They used 125 I-labelled Tamoxifen, a drug that binds to the estrogen receptor, which is overexpressed in some tumours (Bloomer et al. 1980). This was the first example of the use of receptor-mediated targeting to deliver Auger emitters selectively to tumours. Importantly, after the labelled Tamoxifen binds to steroid receptors, not only does the drug-receptor complex get internalized, but it binds to DNA, and thus effects receptor-dependent cell kill. Cells without steroid receptors were much less sensitive. This strategy can be thought of as double targeting. Not only was the isotope targeted to tumour cells with estrogen receptors, but once internalized, the second phase of targeting transports the isotope to chromosomal DNA. Unfortunately, the clinical potential of 125 I-labelled Tamoxifen and other oestrogen analogues was limited by high uptake in the liver following intravenous administration (Epperly et al. 1991).
Many of the receptor systems that might be useful for tumour targeting do not have the feature of chromatin targeting, for example in many cases the receptor-ligand complex is degraded after internalization. It was realized that a more generic platform would be required to develop a system for targeting Auger emitters to DNA, especially given that the decay event needs to be located very close to DNA (molecular dimensions) for maximum efficiency of induction of DSB. More generally, the Auger field needed to 'backfill' some of the basic science before progressing to the ultimate goal of clinical translation.
Microdosimetry
The labelled Tamoxifen strategy was essentially 'blind' on key questions such as the relationship between the extent of radiochemical damage and the location of the decay event relative to the DNA molecule itself. Moreover, this is a multifaceted question, and one aspect required progress in microdosimetry. The first step was the application of Monte Carlo techniques to understand the details of iodine-125 decay (Charlton & Booz 1981); later extended and elaborated (Pomplun et al. 1987). David Charlton was prominent in much of the subsequent advances in the microdosimetry of the Auger effect. Dave is in the group photo of the Oxford meeting in 1987 (Figure 1), but there is a better picture, albeit marking a sad event, in the very nice obituary (Humm & Nikjoo 2013), written by close collaborators (Charlton et al. 1989). The Charlton and Booz paper provided a solid foundation for the future: the number of emitted electrons (an average of 21 per decay in the condensed phase, and 13 for the isolated atom), their energies, and the extent of variation amongst individual decay events. Now the physics was ahead of the chemistry. Apart from the fact that the DNA was broken by the decay event, there was no information at the molecular level. In the mid-1970s, when the biological significance of the Auger effect was developing, the mainstay in DNA breakage analysis was sedimentation in sucrose gradients, using ultracentrifugation, which yielded the size distribution of DNA molecules, from which the average molecular size was calculated. This was the technology used by Krisch and Sauri in the discovery of the one-to-one relationship between 125 I decay events in DNA and double-stranded breaks (Krisch & Sauri 1975). But it did not provide information at the molecular level. As is often the case in science, progress awaited the development of new techniques. In the Auger story, the enabling new technologies were first the Wilson cloud chamber, then radiochemical synthesis of labelled nucleosides, and then, in the current context, DNA sequencing methodology.
Figure 2. The beginnings of the radiobiology of the Auger effect. Book cover of the proceedings (with permission) of the meeting in Vienna at which the early observations were discussed. The radioactively labelled counterparts of the deoxynucleosides shown at right (e.g., 3 H in the methyl group of thymidine) enabled incorporation of isotopes, notably 3 H, 125 I and 131 I, into DNA.
DNA sequencing technology; and more microdosimetry
In 1980, Fred Sanger and Walter Gilbert were jointly awarded half of the Nobel Prize for the development of DNA sequencing methodology (the other half went to Paul Berg for nucleic acid biochemistry). Fred Sanger was based in Cambridge in England, and Gilbert and his co-worker Allan Maxam were in Cambridge, Massachusetts, just across the Charles River from Boston and Harvard Medical School, where Jim Adelstein was based. They used quite different strategies: Sanger relied on DNA polymerase and chain-terminating deoxynucleotides, and Gilbert on 'cookbook chemistry' for preferential cleavage at particular bases. There was a debate in Boston at the time as to Allan Maxam's claim to join in the Nobel Award; he apparently did the painstaking optimization of the reaction conditions for base-dependent chemical cleavage of DNA. Nevertheless, the method subsequently bore the name 'Maxam-Gilbert DNA sequencing'. Both groups used polyacrylamide gel electrophoresis to separate single strands of DNA according to size/length, at single-nucleotide resolution. The key to the method was labelling one end of each DNA strand with 32 P, and locating the separated DNA in autoradiographs. One of us (RFM) visited Fred Sanger in April 1974, on the return trip from an 18-month sabbatical leave at the Karolinska Institute in Stockholm, and recalls Fred proudly showing a 1-metre-long gel autoradiograph! Three years later, in 1977, Sanger published the sequence of a bacteriophage DNA of >5000 nucleotides (Sanger et al. 1977). Maxam and Gilbert's paper came out in the same year.
Bill Haseltine was a post doc in Gilbert's lab in Cambridge (Haseltine et al. 1977) and quickly applied the DNA sequencing technique to analysis of DNA damage by the DNA-cleaving drug bleomycin (D'Andrea & Haseltine 1978). From autoradiographs of DNA sequencing gels analyzing samples of end-labelled full-length DNA molecules, fragmented by either cleavage by bleomycin, or by base specific chemical cleavage, the sites of cleavage within the sequence could be determined.
Later, Bill Haseltine established his own lab at the Dana Farber Cancer Centre at Harvard Medical School, and RFM joined his lab for a 12-month sabbatical during 1979/80. The same method was used to analyze DNA breakage by iodine-125 decay. The results showed that most of the damage is focused within a few bp of the site of decay of the iodine atom, for both the 125 I-containing strand and the opposite strand (Martin & Haseltine 1981).
The experimental data from the Boston experiments are summarized in Figure 5, together with the results of simulation experiments described in a landmark paper by Charlton and Humm that married the 125 I microdosimetry with the dimensions of a DNA molecule. They used a wonderfully simple model of the DNA molecule in which hemi-annular volumes, each representing a nucleotide, are stepped around a central axis; they mapped the tracks of electrons generated by Monte Carlo simulation, beginning at the site of decay in the DNA helix, and calculated the total energy deposited in each nucleotide volume. By comparing their theoretical data with the experimental data from the Boston experiments described above, they found that a total energy deposition of 17.5 eV per nucleotide, as the threshold for a single-stranded DNA (ssDNA) break, gave the best match of the simulation with the experimental data (Figure 5). As indicated in Figure 5, two different electron spectra were used, giving slightly different results, especially for the distribution of breaks in the strand opposite the strand containing the decaying 125 I atom, but nevertheless good correlation with the experimental data.
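The geometric idea can be illustrated with a toy calculation. The sketch below is a minimal illustration only, not the Charlton and Humm code: the synthetic energy depositions and all parameters except the 17.5 eV threshold quoted above are hypothetical placeholders. It bins point-like energy depositions into nucleotide volumes stepped along the helix axis, scores a strand break wherever the accumulated energy exceeds the threshold, and calls the event a DSB when breaks on opposite strands fall within 10 bp of each other.

import math, random

EV_THRESHOLD = 17.5      # eV per nucleotide for an ssDNA break (value quoted in the text)
N_PER_STRAND = 40        # nucleotides modelled on each strand, centred on the decay site
RISE_NM = 0.34           # axial rise per base pair in B-DNA (nm)

def bin_depositions(depositions):
    # depositions: list of (z_nm, phi_rad, energy_eV) relative to the decay site
    energy = [[0.0] * N_PER_STRAND for _ in range(2)]
    half = N_PER_STRAND // 2
    for z, phi, e in depositions:
        idx = int(round(z / RISE_NM)) + half
        if 0 <= idx < N_PER_STRAND:
            strand = 0 if math.cos(phi) >= 0.0 else 1   # crude split into two half-annuli
            energy[strand][idx] += e
    return energy

def score_breaks(energy, window=10):
    breaks = [[e >= EV_THRESHOLD for e in strand] for strand in energy]
    n_ssb = sum(map(sum, breaks))
    has_dsb = any(
        b and any(breaks[1][max(0, i - window):i + window + 1])
        for i, b in enumerate(breaks[0])
    )
    return n_ssb, has_dsb

random.seed(1)
toy_event = [(random.gauss(0.0, 1.0), random.uniform(0.0, 2.0 * math.pi),
              random.expovariate(1.0 / 30.0)) for _ in range(25)]
print(score_breaks(bin_depositions(toy_event)))

A realistic treatment would of course replace the synthetic depositions with tracks from a full Monte Carlo of the Auger cascade, which is what the original simulations did.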
It is interesting that, as well as shedding light on the theoretical basis for the biological effects of Auger decay, this study also contributed to track-structure codes that were later applied more generally in radiation physics and biology. This model was continually refined and extended by Hooshang Nikjoo, Ekkehard Pomplun (Pomplun 1991), and Michel Terrissol (Terrissol & Pomplun 1994; Kummerle & Pomplun 2005; Edel et al. 2006; Goorley et al. 2008), who were mentored by Dave Charlton. In particular, the crude hemi-annular 'target' volume that Charlton and Humm used to represent the nucleotide was refined to atomic volumes (Pomplun 1991), and Hooshang Nikjoo extended the approach from B-DNA to chromatin and nuclear structures (Nikjoo & Girard 2012), enabling the development of models for DNA repair (Taleei et al. 2013; Rahmanian et al. 2014; Taleei et al. 2015) and genetic risks (Taleei et al. 2013; Sankaranarayanan & Nikjoo 2015). In the light of these advances, Charlton's model of representing nucleotides as hemi-annular volumes now seems primitive.
Molecular fragmentation versus electron irradiation
At the time when the spectacular radiotoxicity of iodine-125 was emerging from experimental work, it was already appreciated (Figure 6) that two distinct radiochemical events were involved. The central issue is that the multiple electron emission leaves behind, on the daughter Te atom, a corresponding positive charge. The resulting molecular fragmentation was described in an early review (Wexler 1967), which includes the case of methyl iodide cited in the Introduction (Carlson & White 1966) and of methyl bromide labelled with 80m Br (Wexler & Anderson 1960). In a more contemporary example, using a model nucleotide system, an average accumulated positive charge of +6 was calculated for the first cascade (Kummerle & Pomplun 2012), which is consistent with the Charlton and Booz average figure of 13 electrons for the isolated atom, given that most decays involve two cascades. A radiation chemist would describe +6 as a very deep hole! Obviously, the question arises: 'Is the DNA strand breakage a result of this molecular fragmentation (also described as a Coulombic explosion), or is it primarily due to electron irradiation by the Auger electrons that are known to have very high LET at the end of their tracks?' The question as to the relative contributions of these two mechanisms was addressed first using the DNA sequencing analysis. Lobachevsky and Martin took advantage of advances in technology over the almost 20 years since the Boston experiments, in particular the availability of synthetic oligodeoxynucleotides, to revisit in more detail the distribution of damage from decay of 125 I in a single location. Pavel Lobachevsky's deconvolution of the data indicated approximately equal contributions from the two mechanisms: molecular fragmentation (non-radiation) and a radiation component, which itself comprises two sub-classes, scavengeable (in this case by dimethylsulfoxide [DMSO]) and non-scavengeable (Lobachevsky & Martin 2000).
Much more recently, Igor Panyutin from the National Institutes of Health (NIH), Bethesda, MD, USA, has revisited the question of charge migration, using DNA constructs incorporating charge traps, which he reported at the Jülich meeting in 2011 (Ndlebe et al. 2012). Even after all this time since the early recognition of the question of electron irradiation versus charge fragmentation (for example, the paper in 1978; Hofer et al. 1978), further research is required. Clearly this needs to be incorporated into the microdosimetry calculations; at present it is ignored because there is no theoretical treatment for it.
Targeting Auger damage to DNA with labelled DNA ligands
Returning now to Jim Adelstein's group in Boston, and the nice idea of receptor-mediated targeting of the Auger effect to tumour, the oestrogen receptor was a good choice because the receptor is translocated to the nucleus. Unfortunately, not all receptor systems that are potentially useful for targeting tumour cells have this bonus of continuation to chromatin. The requirement for a more universal system led to the question of whether DNA-binding drugs could be used as vehicles to take the Auger effect to nuclear DNA. There are two types of DNA ligands: intercalators, planar aromatic molecules that fit into the slot between adjacent base pairs in B-DNA, and minor groove binders, more elongated, flexible molecules that follow the helical twist of the minor groove. An extensive review includes images of molecular models illustrating the two modes of binding (Liu et al. 2008). Intercalators were the initial focus (Figure 7), with 125 I-3,6-diamino-4,5-diiodoaminoacridine being the first example (Martin 1977). While this ligand was readily accessible by radioiodination of proflavine (3,6-diaminoacridine), which is commercially available, the structure of the product is ambiguous. Either the mono- or the bis-iodination product can be formed, depending on reaction conditions. The subsequent variation solved this problem by using rivanol (6,9-diamino-2-ethoxyacridine) as the iodination substrate. The radioiodinated version was shown to be cytotoxic (Martin et al. 1979), but the relatively low logP restricts cellular uptake. The later Boston designs (Figure 7) are probably better from this standpoint. For example, 125 I-3-acetamido-5-iodoproflavine is taken up and retained in V79 cells, yields a high-LET survival curve (Kassis et al. 1989), and is mutagenic (Whaley et al. 1990). Of the two simple iodoacridines, only 2-iodoacridine intercalates; the 125 I-labelled isomers both induce DNA DSB in plasmid DNA, but the 2-isomer is more efficient (Sahu et al. 1997).
The general experimental approach used for studying 125 I-induced breakage of isolated DNA utilizes the plasmid DNA assay, which nicely distinguishes DNA single-strand breaks (SSB), which relax the tightly coiled parent DNA, from DSB, which produce a linear molecule. These three species are easily separated by agarose gel electrophoresis, and a good approach to data analysis has been developed by Pavel Lobachevsky (Lobachevsky & Martin 2004). In early studies, electron microscopy was used for more qualitative visualization of strand breakage (Martin 1977).
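For orientation, the usual first-order Poisson treatment of such gel data can be sketched as follows. This is a simplified illustration, not necessarily the procedure of Lobachevsky & Martin (2004), and the band fractions used in the example are hypothetical: with mean numbers of SSB and DSB per plasmid m1 and m2, the supercoiled fraction is exp[-(m1+m2)] and the linear fraction is approximately m2*exp(-m2) (exactly one DSB), from which m1 and m2 can be extracted.

from math import exp, log

def poisson_yields(frac_supercoiled, frac_linear):
    # First-order Poisson model: S = exp(-(m1 + m2)), L ~ m2 * exp(-m2),
    # valid in the low-damage regime (mean DSB per plasmid below ~1).
    m_total = -log(frac_supercoiled)                 # m1 + m2
    lo, hi = 0.0, min(m_total, 1.0)
    for _ in range(60):                              # bisection for m2
        mid = 0.5 * (lo + hi)
        if mid * exp(-mid) < frac_linear:
            lo = mid
        else:
            hi = mid
    m_dsb = 0.5 * (lo + hi)
    return m_total - m_dsb, m_dsb                    # (mean SSB, mean DSB) per plasmid

print(poisson_yields(0.40, 0.05))                    # hypothetical gel fractions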
Minor groove binders (Figure 8) have two main differences compared to intercalators. Firstly, the DNA-binding affinity is much stronger; secondly, minor groove binders have sequence selectivity: they bind to discrete sites of 3 or 4 consecutive AT base pairs. As a result, when such ligands are labelled with 125 I and allowed to decay in the DNA-ligand complex, and the breakage is analyzed on DNA sequencing gels, the damage reveals the sites of binding (Martin & Holmes 1983). The first example of a radio-iodinated minor groove binding ligand was prepared by direct iodination of Hoechst 33258 (Martin & Pardee 1985). It is known that substitution of the phenolic group of Hoechst 33258 with an ethoxy group (i.e., Hoechst 33342) markedly improves uptake into live cells (Lydon et al. 1980), so the Boston design (Harapanhalli et al. 1994) is likely to be the better of the two bibenzimidazoles shown in Figure 8 for studies with viable cells or organisms. In summary (Table 1), the studies with radioiodinated ligands showed that: (1) both intercalators and minor groove binders target 125 I to DNA; (2) decay of the bound ligand produces DNA DSB; (3) the ligands are cytotoxic and mutagenic in cell culture experiments; and (4) the location of the 125 I atom in the DNA-ligand complex determines the efficiency per decay.
This last point warrants further mention. Different designs of the 125 I ligands enable the decay event to be located at different distances from the axis of the DNA helix, and in some cases crystal structures have established the location of the iodine atom in the DNA-ligand complex (Lobachevsky et al. 2008), but molecular modelling is also useful. The outcome of all these studies was the discovery of a much steeper than expected relationship between distance and the efficiency of double-stranded breakage, compared to the prediction of the Charlton and Humm microdosimetry model. The Boston group also pursued this question, reported at the Jülich Symposium, but the flexibility of some of the ligands compromises interpretation (Balagurumoorthy et al. 2012).
The importance of these studies with labelled DNA ligands is that they provide a generic strategy for targeting the Auger effect to the DNA of tumour cells. This involves a conjugate with two key components: a peptide that binds to receptors over-expressed in tumour cells, and the radioiodinated DNA ligand, joined by a labile linker. The conjugate binds to the receptor, the complex is internalized, and the labelled DNA ligand is released and finds its way to DNA. This strategy has yet to be exemplified with an Auger emitter, but there is a proof-of-principle using a DNA-binding photosensitizer (Karagiannis et al. 2006).
Clinical examples of receptor-mediated targeting to DNA
Over a period of more than a decade, Reilly and Vallis and co-workers have sought to develop a strategy to target Auger emitters, principally 111 In, to treat Epidermal Growth Factor Receptor (EGFR)-positive breast cancer, initially using labelled EGF. Encouraging results were obtained in a xenograft model (Chen et al. 2003), and the program has now progressed to a Phase I clinical trial (Vallis et al. 2014). The enhanced toxicity of the labelled EGF for receptor-positive tumour cells, compared to normal cells, is attributed to a malfunction of the pathway of cytoplasmic degradation in normal cells, in combination with the Nuclear Localization Signal (NLS) motif in EGF that directs the peptide to the nucleus (Reilly et al. 2006). A 99m Tc-labelled anti-EGFR antibody conjugate was also investigated, progressing to a Phase I clinical trial (Vallis et al. 2002). This approach was upgraded by using an 111 In-labelled anti-EGFR antibody conjugate modified with an NLS (Costantini et al. 2007).
A concern with the labelled antibody strategy is the question as to whether the decaying Auger nuclide is close enough to DNA in the ligand-receptor-chromatin complex, given the very stringent distance (or rather, closeness) requirement for efficient induction of DNA DSB upon Auger decay in the vicinity of DNA (Lobachevsky et al. 2008). Actually, this doubt also extends to the case of the directly labelled ligand. Structural studies show Tamoxifen deeply embedded in the receptor protein (Brzozowski et al. 1997), which is difficult to reconcile with close proximity to DNA. Nevertheless, it has been reported that the radiotoxicity of a labelled oestrogen analogue, 17α-[125 I]iodovinyl-11β-methoxyestradiol, was similar to that of 125 I-UdR (Epperly et al. 1991), although in such studies the simple exponential survival curve covers a limited range (<1 log of survival). Ironically, the first of such studies provides the most convincing evidence for high-LET cell killing by a labelled oestrogen analogue; the linear survival curve on a semi-log plot extended for almost 3 logs of cell kill for 125 I-Tamoxifen (Bloomer et al. 1981). Even less direct support for high-LET damage by 125 I-labelled receptor ligands comes from the early observation that DNA DSB induced in receptor-positive cells by 125 I-triiodothyronine are difficult to repair (Sundell-Bergman & Johanson 1982). This was reiterated in the presentation of the Swedish group at the Auger Symposium in Oxford, which also showed that the nuclear uptake of 125 I-triiodothyronine was saturable, in contrast to the linear increase in cellular uptake with the concentration of added ligand (Sundell-Bergman et al. 1988). The rate of repair of DNA double-strand breaks, which can now be easily followed by the γ-H2AX assay, could provide a useful endpoint to identify the more complex lesions arising from the Auger effect. More generally, there is a need to reconcile the structural features of the oestrogen-chromatin complex with the evidence for high-LET cell kill, to be confident that the Auger effect is implicated.
Another general concern of the receptor-mediated strategy is molecular capacity: can enough of the Auger nuclide be delivered to and retained by the cell nucleus? In this regard, the innovative approach of the Vallis group to amplify the damage signal by also targeting γ-H2AX as well as nuclear EGF receptors (Cornelissen et al. 2013) could prove very important. Similarly, the investigation of the use of gold nanoparticles to increase the 'payload' (Song et al. 2007) is promising.
Clearly, the clinical exploitation of Auger-emitting nuclides has been elusive. The most widespread example is the use of 111 In-labelled peptides that target the somatostatin receptor in neuroendocrine tumours (Van Essen et al. 2007; Limouris et al. 2008; Kong et al. 2009). The evidence that the therapy is mediated by the Auger effect includes the demonstration of translocation of the labelled peptide to the nucleus (Janson et al. 2000) in patient material, and extraction of the label with DNA of cultured cells (Hornick et al. 2000). The same reservations apply as for the EGF case regarding the stereochemistry of the nuclear complex, and whether the 111 In is close enough to DNA to get the full damage effect of decay. The fact that endoradiotherapy of neuroendocrine tumours now seems to have progressed to β-emitters, in particular 177 Lu (Denoyer et al. 2015), suggests that the full advantage of the Auger effect was not obtained with 111 In. Nevertheless, new opportunities will emerge as further examples of receptor-mediated targeting to the nucleus are discovered, such as the case of the F3 peptide and nucleolin (Cornelissen et al. 2012).
Triplex DNA
In the receptor-mediated strategy, targeting relies on the elevated expression of tumour-specific cell surface receptors. A quite different strategy stems from the knowledge that the genome of tumour cells harbours specific features. The pioneering work starting in the early-1990s of Ronald Neumann and Igor Panyutin and their colleagues at NIH showed that labelled Triplex-Forming Oligonucleotides (TFO) provided the means to target the Auger effect to tumour-specific genomic sequences ( Figure 9).
The triplex DNA story seems to have started in the mid-1960s with the knowledge that poly(dA) and poly(dT) can form triple helices (Riley & Maling 1966). The early interest was to exploit triple helix formation to selectively block gene expression, for example by photoactivation of a triplex structure including an oligothymidylate conjugated to an azidophenacyl group (Praseuth et al. 1988), but later the term 'antigene strategy' emerged (Helene et al. 1992). Soon after, the NIH group realized the opportunity to target the Auger effect, first using a 125 I-labelled TFO targeted to a polypurine-polypyrimidine sequence in the nef gene of HIV (Panyutin & Neumann 1994). This paper established that the 125 I-labelled TFO-containing triple helix induced, in the unlabelled strands, the previously established 'signature' breakage pattern associated with 125 I decay intimately associated with DNA (Martin & Holmes 1983), with an overall efficiency of 0.8 DSB per decay. Subsequently, terms such as 'gene radiotherapy' (Panyutin & Neumann 1998), 'antigene radiotherapy' (Karamychev et al. 2000; Panyutin et al. 2000; Sedelnikova et al. 2000) and 'gene-targeted radiotherapy' were introduced, and later extended to 'antisense radiotherapy' for targeting to mRNA (Gaidamakova et al. 2004).
The NIH group (Figure 10) made many contributions to this field over a period of more than two decades. The early studies focused on labelled TFO, but later extended to peptide nucleic acids (PNA), especially in the context of targeting quadruplex structures in the genome, which occur for example in BCL2 (Onyshchenko et al. 2009, 2011). The targets investigated in model systems ranged from repeated sequences with thousands of copies, through mdr1, which is amplified in some drug-resistant tumours (Sedelnikova et al. 2000, 2001; Panyutin et al. 2003), to single-copy genes such as HPRT (Panyutin & Neumann 1996; Sedelnikova et al. 1999). These studies were facilitated by harnessing technical developments. An early example was the incorporation of biotin into the template strand used to synthesize the 125 I-TFO (Panyutin & Neumann 1996), enabling isolation of the 125 I-TFO by removal of the template strand with streptavidin-Dynabeads after denaturation. This was a great improvement on the laborious method of strand separation in denaturing gels, known first-hand to one of the authors (Martin & Haseltine 1981), used in the first report from the NIH group (Panyutin & Neumann 1994). Other examples include the gel shift assay used to demonstrate binding of 125 I-TFO to target sequences cloned into plasmids, and a Southern blot assay used to demonstrate cleavage by the 125 I-TFO directed to the HPRT (Panyutin & Neumann 1996; Sedelnikova et al. 2002a) and mdr1 (Sedelnikova et al. 2002a) targets.
Following on from the studies in plasmid DNA systems, delivery of 125 I-TFO to the nucleus of cultured cells proved challenging, restricting investigations to isolated nuclei or digitonin-permeabilized cells (Sedelnikova et al. 2000). Nevertheless, the demonstration of almost 80% efficiency of mutagenesis, albeit in a prokaryote system, illustrated the potential of antigene radiotherapy. Delivery using a cationic liposome system could be improved by using a nonspecific oligonucleotide as 'ballast' or by conjugation of the TFO to a nuclear localizing peptide sequence (NLS), enabling delivery to nuclei of intact cells (Sedelnikova et al. 2002b; Panyutin et al. 2003). Radiotoxicity was initially demonstrated by the clonogenic survival endpoint (Sedelnikova et al. 2000, 2004), but gamma-H2AX proved more convenient (Sedelnikova et al. 2004), at around the time when the one focus = one DSB approximation was established (Sedelnikova et al. 2002b). However, the absolute efficiency of delivery was still disappointing, with apparently only about 0.1% of decays in 125 I-TFO occurring in DNA-bound vehicle (Sedelnikova et al. 2004).
Dahmen and Kriehuber revisited the 125 I-TFO antigene radiotherapy concept and reported their findings at the Jülich symposium (Dahmen & Kriehuber 2012). They confirmed the key findings of the NIH group, except for a 40-fold higher efficiency of cell kill (D37; decays/cell), which they attributed to the improved efficiency of their transfection system, which involved electroporation. Thus, delivery of labelled TFO remains a challenge, even for intact cultured cells, let alone in vivo. Accordingly, Dahmen and Kriehuber concluded that targeting 125 I-TFO to single genes is best regarded as a useful research tool. Meanwhile, the therapeutic potential of Auger-mediated antigene radiotherapy awaits a technological advance in TFO delivery.
K-edge ionization -photon activation therapy (PAT)
Radiosensitization by incorporation of bromo- and iododeoxyuridine was established in the 1960s, initially by Szybalski and co-workers (Erikson & Szybalski 1963), but the specific idea to exploit the Auger cascade from K-edge ionization was first published by Tisljar-Lentulis, in a Jülich-Brookhaven collaboration (Tisljar-Lentulis et al. 1973). The term 'photon activation therapy' (PAT) was introduced by Ralph Fairchild et al. in 1982, in anticipation of the Brookhaven synchrotron in 1984 (Fairchild et al. 1982). Fairchild et al. clearly distinguished between the 'biological' radiosensitization, independent of photon energy, and the further enhancement from Auger electrons derived from K-edge activation. This distinction is illustrated in Figure 11, which is adapted from a paper presented at the Oxford symposium (Maezawa et al. 1988). Fairchild et al. described iodine, compared to bromine, as the 'only viable choice'. It is interesting to note (Figure 1) that Ralph Fairchild attended the Auger symposium in Oxford in 1987 (but curiously, did not present a paper), at which Humm and Charlton presented calculations of simulated DNA DSB yields for K-edge irradiation of DNA with incorporated bromine, and concluded that the Auger cascades were 'relatively unimportant'. Later calculations indicated a significant effect for iodine (Karnas et al. 2001; Moiseenko et al. 2002). Clearly, the advent of intense synchrotron photons that could be precisely tuned to energies above and below the K-edge of target atoms provided the opportunity to investigate and develop PAT. Meanwhile, while awaiting the completion of the Brookhaven facility, Fairchild et al. proposed in the 1982 paper an ingenious combination of PAT and brachytherapy, choosing isotopes with gamma emissions at energies suitable for excitation of iodine (Fairchild et al. 1982). Samarium-145 was identified as an ideal isotope for brachytherapy combined with infusion of iododeoxyuridine, and this idea was subsequently elaborated (Fairchild & Bond 1984; Goodman et al. 1990). Much later, the palladium-103/Pt combination was suggested (Laster et al. 2009), and further combinations have been considered (Bakhshabadi et al. 2013). However, this concept does not seem to have been followed up clinically, with most attention focused on synchrotron sources.
Following an investigation of iododeoxyuridine PAT comparing megavoltage and 100 keV X-rays, which reported a modest (15%) enhancement for PAT (Miller et al. 1987), the first results from Brookhaven, using photons above and below the iodine K-edge, reported an enhancement of 1.4 on top of the radiosensitization factor of 2.2 (Laster et al. 1993). Similar cell culture studies at the Grenoble synchrotron reported PAT with iododeoxyuridine-treated cells (Corde et al. 2004). This was later followed by a study in a rat glioma model, but no enhancement was observed (Rousseau et al. 2009). The Corde et al. study also reported sensitization by iodine contrast agent, which resulted in low-LET-type survival curves, whereas PAT was associated with exponential survival curves, clearly distinguishing PAT from sensitization. Unfortunately, in the more recent literature, the term PAT has been confused with sensitization, and used in conjunction with studies with iodine contrast agent and heavy-metal nanoparticles, which are unlikely to be close enough to DNA to produce Auger damage to DNA upon activation. Nevertheless, dose enhancement (albeit not PAT according to the original definition by Fairchild) is a potential strategy to improve the therapeutic ratio, given the possibility of exploiting the blood-brain barrier to enable preferential delivery of contrast agent, for example to brain tumours.
The widespread use of platinum drugs in cancer chemotherapy, often in conjunction with radiotherapy, has focused attention on the potential of PAT, with the added benefit of somewhat better photon penetration associated with the higher K-edge energy. The potential was established in early studies at the Photon Factory in Japan (Le Sech et al. 2000; Kobayashi et al. 2002), and promising results have been obtained with the rat glioma model in Grenoble, for both cisplatin (Biston et al. 2004) and carboplatin (Rousseau et al. 2007). A subsequent study with carboplatin compared 6 MeV X-rays with 78.8 keV synchrotron photons and found no difference in efficacy, from which the authors concluded that the effect was 'not due to the production of Auger electrons and photoelectrons emitted from the Pt atoms' (Bobyk et al. 2012).
Another recent study, in which glioma-bearing rats were administered thallium and irradiated with 50 keV photons, found an insignificant increase in survival for the combination compared to radiation only (Ceberg et al. 2012). In the introduction, the authors pointed out that most of the metal is taken up into the cytosol. It is interesting to reflect on the work of Apelgot, who noted at the Oxford Auger Symposium (Apelgot & Guille 1988) that metals such as Cu and Zn seem to be intimately associated with DNA, and that inclusion of the Auger emitter 64 CuCl2 in the medium for prolonged periods (>24 h) resulted in exchange and decay-induced lethal damage to DNA at close range. [Curiously, the same high-LET-type (simple exponential) survival curves were obtained for 67 Cu, which is not an Auger emitter (Apelgot et al. 1989).] However, it seems likely that the amounts of such heavy metals naturally bound to DNA would not be sufficient for PAT, nor provide a differential between tumours and normal tissues.
In conclusion, it is sobering to reflect that, in spite of the availability of synchrotron irradiation for more than 30 years, clinical PAT has not advanced beyond the Phase I-II clinical trial of iododeoxyuridine in combination with radiotherapy of anaplastic astrocytoma, which claimed a modest survival benefit relative to historical controls (Urtasun et al. 1996), a claim that was disputed in an editorial comment (Phillips 1996). In any case, these studies used conventional (high-energy) photons, so PAT does not correctly describe the modality.
Gadolinium Neutron Capture Therapy (GdNCT)
Boron Neutron Capture Therapy (BNCT) is a well-established concept for cancer radiotherapy, the crux of which is the nuclear reaction 10 B + n → 7 Li + 4 He (α), describing the fission that results from capture of a low-energy thermal neutron by a 10 B nucleus. The products are very high-LET particles: an alpha particle and a 7 Li nucleus. Compared to the nuclei of biological elements, the 10 B nucleus has an exceptionally high reactivity. This reactivity is quantified by the thermal neutron capture cross section, which for 10 B is about 4000 barns (compared to 0.0002, 0.004 and 0.3 for O, C and H, respectively). The concept of BNCT is to use a 10 B drug that accumulates in tumours and to irradiate with thermal neutrons from a nuclear reactor, so that the tumour is selectively damaged by the fission particles.
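A back-of-envelope comparison shows why this works. The snippet below is illustrative only: the soft-tissue composition and the 30 μg/g boron-10 loading are assumed values, and the cross sections are those quoted above. It compares the relative thermal-neutron capture rates per gram of tissue.

AVOGADRO = 6.022e23
# thermal-neutron capture cross sections quoted above (barns)
sigma_barn = {"H": 0.3, "C": 0.004, "O": 0.0002, "B-10": 4000.0}
# hypothetical soft-tissue mass fractions, plus an assumed 30 microgram/g boron-10 loading
mass_fraction = {"H": 0.10, "C": 0.20, "O": 0.65, "B-10": 30e-6}
atomic_mass = {"H": 1.008, "C": 12.011, "O": 15.999, "B-10": 10.013}

# relative capture rate ~ (atoms per gram) x (capture cross section)
rate = {el: mass_fraction[el] / atomic_mass[el] * AVOGADRO * sigma_barn[el] * 1e-24
        for el in sigma_barn}
total = sum(rate.values())
for el, r in sorted(rate.items(), key=lambda kv: -kv[1]):
    print(f"{el:>4}: {100.0 * r / total:5.1f}% of thermal-neutron captures")

Under these assumed numbers, even a trace boron loading captures a share of the thermal neutrons comparable to the far more abundant hydrogen, which is the essence of the binary targeting idea.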
As outlined in a recent review (Barth et al. 2005), most clinical experience has been in Japan, and initially many of these treatments were done at the Kyoto University Research Reactor by the neurosurgeon Hiroshi Hatanaka. He treated more than 120 brain tumour patients between 1968 and his death, tragically, in 1994 at the age of 62, but the claim of better outcomes compared to conventional treatment is controversial.
One of the authors (RFM) was invited to join the Australian contingent of a BNCT collaboration between Australia and Japan during the 1980s. The common interest was the high-LET feature shared by Auger electrons and the products of BNC. From the necessary background reading around BNCT, the huge thermal neutron capture cross section of gadolinium-157 (also a non-radioactive isotope) was striking: 242,000 barns! The Australian contingent was headed by Barry Allen, a nuclear physicist at the Australian Nuclear Science and Technology Organisation (ANSTO), which manages Australia's nuclear reactor in Sydney. At first, Barry was not enthusiastic about the potential of the Gadolinium Neutron Capture (GdNC) reaction, because it does not yield high-LET particles with a restricted range from the site of reaction; rather, it produces high-energy gammas. However, by analogy with the 125 I decay scheme, it seemed possible that some of the gamma energy would yield conversion electrons, and thus Auger electrons. This hypothesis led to an experiment done with a very nice thermal neutron beam developed by Barry Allen in the small experimental reactor (MOATA) then at ANSTO. The experiment involved combining plasmid DNA with gadolinium cations, with and without the metal chelator ethylenediaminetetraacetic acid (EDTA). Without EDTA, the 157 Gd cations bound to the DNA polyanion, but they were sequestered away from the DNA with the addition of EDTA. After irradiation for several hours, linear plasmid DNA was produced in the non-EDTA sample, but not in the EDTA-containing control (Martin et al. 1988). This proof-of-principle experiment demonstrated that DNA double-stranded breaks are produced by GdNC on DNA, attributable to the Auger effect.
Translating this to even just cell culture experiments required a 157 Gd-labelled DNA-binding ligand. Minor groove binding ligands were synthesized (Martin et al. 1992), but failed to produce DNA DSB when mixed with plasmid DNA and exposed in the thermal neutron beam. This can be attributed to the fact that the chelation cage cannot be accommodated in the minor groove, so the 157 Gd atom is simply not close enough to the DNA. Nevertheless, there is continuing interest in GdNCT, but until the 157 Gd is targeted to the DNA molecule, GdNCT will merely be dose enhancement, with its main feature, the Auger effect, not exploited. Maybe it is possible to design a DNA-binding molecule that incorporates the chelating cage and fits in the major groove. Such a ligand could also be useful for PAT.
Discussion and Conclusions
Translation of the features of the Auger effect to cancer radiotherapy has been elusive. The only clear example is photo-activation after infusion with bromodeoxyuridine or iododeoxyuridine, for which there were clinical trials for anaplastic astrocytoma in the 1980s, but without clear benefits. These trials involved megavoltage radiotherapy, so the contribution of photo-activation was probably minor. Synchrotron irradiation with lower energy photons would enhance the Auger contribution, but after initial preclinical studies from the Grenoble facility, there seems to be no intention to progress to clinical studies with iododeoxyuridine. Similarly, the Fairchild idea of PAT brachytherapy using appropriate gamma sources does not seem to have been taken up. The clinical photosensitization associated with infusion of halopyrimidines may be a major factor, as well as the difficulty in achieving sufficient replacement of thymidine with the halopyrimidine.
The use of 111 In-labelled octreotide to target somatostatin receptors in neuroendocrine tumours was reported to exploit the Auger effect, but this is unlikely given that β-emitters, particularly 177 Lu, now supersede 111 In in that setting. However, it is interesting that this example of clinical Auger radiotherapy stems from the pioneering studies of Bill Bloomer and Jim Adelstein with 125 I-Tamoxifen in the 1980s, which introduced the use of receptor-mediated targeting systems in which the ligand-receptor complex is translocated to the nucleus.
The work of Vallis and Reilly on targeting via the EGF receptor, which is clearly progressing to clinical studies, is also in this category, and it is particularly interesting that nuclear translocation is possibly specific to breast tumour cells. Nevertheless, there is a question that hovers over the nuclear receptor approach generally, namely the extent to which the Auger effect contributes. We now know that the efficiency of double-strand break induction depends very steeply on the distance of the decaying Auger nuclide from DNA. Precise structural information is obviously required, but it seems unlikely that the decaying nuclide in the nuclear complex would be close enough to take full advantage of the Auger effect. This hurdle (which, incidentally, is already cleared for the triplex-targeted Auger antigene radiotherapy concept) could possibly be overcome by using 'validated' labelled DNA ligands in conjugates delivered by a receptor-mediated process, but this potential has not yet advanced to clinical studies. Furthermore, the prospect of success is limited by the capacity (i.e., nuclides deliverable per nucleus) of receptor-mediated systems. In this context, the use of nanoparticles to amplify the radionuclide load per cell (for example, Song et al. 2016) could be important.
The 'traffic volume' issue is even more critical for PAT. Whereas carrier-free nuclides are accessible for Auger endoradiotherapy, PAT is constrained by the reality of photon cross-sections (or the neutron capture cross-section in the case of GdNCT). On the other hand, compared to Auger endoradiotherapy, for which off-target delivery to normal tissues is always going to be a limiting factor, the binary nature of PAT is a great advantage. It is feasible that small-molecule DNA ligands that are delivered directly, rather than by receptor-mediated endocytosis, could deliver a much larger number of photo-activatable atoms (e.g., iodine, heavy metals) to DNA for efficient photo-activation, but still under the toxicity 'ceiling'. This approach might also be open for GdNCT.
In summary, there are many obstacles for translation of the Auger effect to the clinical reality of cancer radiotherapy, but it is still possible that these hurdles can be overcome by diligence and thoughtful optimism.
"Medicine",
"Physics"
] |
Lightening gravity-mediated dark matter
We revisit the scenario of a massive spin-2 particle as the mediator for communicating between dark matter of arbitrary spin and the Standard Model. Taking the general couplings of the spin-2 particle in the effective theory, we discuss the thermal production mechanisms for dark matter with various channels and the dark matter self-scattering. For WIMP and light dark matter cases, we impose the relic density condition and various experimental constraints from direct and indirect detections, precision measurements as well as collider experiments. We show that it is important to include the annihilation of dark matter into a pair of spin-2 particles in both allowed and forbidden regimes, thus opening up the consistent parameter space for dark matter. The benchmark models of the spin-2 mediator are presented in the context of the warped extra dimension and compared to the simplified models.
Introduction
Dark matter (DM) is a complete mystery in particle physics and cosmology, although its presence can be unambiguously inferred from galaxy rotation curves, gravitational lensing, the Cosmic Microwave Background, as well as large-scale structure. Searches for dark matter beyond gravitational interactions in various direct and indirect detection experiments have given null results; thus, in particular, a lot of the parameter space for Weakly Interacting Massive Particles (WIMPs) has been ruled out [1-3].
The nature of dark matter is still an open question. To this end, it is very important to pin down the production mechanisms for dark matter in the early universe. For instance, WIMP dark matter relies on the freeze-out process, under which the DM relic density is determined in terms of weak interactions and a weak-scale DM mass. Thus, this has motivated specific target materials and technologies in the direct searches for WIMPs for more than three decades. New production mechanisms, such as those for Feebly Interacting Massive Particles (FIMPs) [4], Strongly Interacting Massive Particles (SIMPs) [5-11] and forbidden dark matter [12,13], etc., can motivate different target materials and new technologies to access sub-GeV DM masses and/or feeble interactions. It is known that light dark matter with sub-GeV mass can have large self-interactions that potentially solve small-scale problems at galaxies [14-18], and it may also call for new dynamics in the dark sector [19-21] to make the DM self-interactions velocity-dependent, as favored by galaxy clusters such as the Bullet cluster [22-24].
Moreover, dark matter is known to be neutral under electromagnetism, so it is conceivable to communicate between dark matter and the Standard Model (SM) through messenger or mediator particles. Thus, the simplified models for dark matter with mediator particles have drawn a lot of attention, providing an important guideline for direct and indirect detections of dark matter as well as collider experiments [25,26].
In this article, we consider a massive spin-2 particle as the mediator for dark matter of arbitrary spin, which couples to the SM particles and dark matter through the energy-momentum tensor, as originally proposed by one of us and collaborators [27,28]. This scenario has been dubbed "Gravity-mediated dark matter", due to the similarity to the way that the massless graviton interacts with the SM. The spin-2 mediator stems from a composite state in conformal field theories or from a Kaluza-Klein (KK) graviton in a gravity dual with a warped extra dimension [27-32]. There are other works on spin-2 mediated dark matter in similar frameworks [33-36]. We regard the massive spin-2 particle as a dark matter mediator in the effective theory with general couplings to the SM and dark matter, and we discuss the general production mechanisms for WIMP dark matter and light dark matter in this scenario.
We discuss various channels of dark matter interactions in the presence of the spin-2 mediator: direct 2 → 2 annihilations, 2 → 2 allowed and forbidden channels into a pair of spin-2 mediators, 3 → 2 assisted annihilations, as well as DM self-scattering. We perform a comprehensive check of the consistency between the correct relic density and various experimental constraints, such as direct detection, precision measurements of the muon g − 2, meson decays and collider experiments, in both the WIMP and light dark matter cases. We also introduce two benchmark models with a warped extra dimension for the spin-2 mediator, namely the Randall-Sundrum (RS) model [37] and the clockwork model [38-41]. Then, we discuss the impact of heavier KK gravitons on the aforementioned processes for dark matter, focusing on the DM s-channel annihilation into the SM particles and DM elastic scattering processes.
There is a recent work [42] in which a similar setup is studied, with the massive spin-2 particle playing the role of a mediator for dark matter, and the parameter space for heavy dark matter beyond the TeV scale is scanned in the context of a 5D linear dilaton background, based on standard WIMP 2 → 2 annihilation channels. In our work, on the other hand, we focus on the phenomenological study of the massive spin-2 mediator in the effective theory, concentrating on the production and constraints of weak-scale WIMP and sub-GeV dark matter with new production channels, and we carry out a complete analysis of DM direct detection constraints.
The paper is organized as follows. We begin with a brief description of our setup for the spin-2 mediator and its interactions. Then, we determine the DM relic density from various annihilation channels and discuss the self-scattering process for dark matter and the unitarity bounds. Next, we consider the DM-nucleon elastic scattering for WIMP and the DM-electron elastic scattering for light dark matter and provide various direct and indirect constraints on those dark matter models. We continue to show two benchmark models with the warped extra dimension and discuss how the DM processes can be modified due to extra resonances. Finally, conclusions are drawn. There are three appendices dealing with the details on DM-nucleon scattering amplitudes, decay widths of spin-2 particles as well as the KK sums.
The setup
We consider the effective interactions of a massive spin-2 field, G μν , with the SM particles as well as with dark matter of arbitrary spin, following [27,28]. Here, B μν , W μν , g μν are the field strength tensors for the U(1) Y , SU(2) L , SU(3) C gauge fields, respectively, ψ is a SM fermion, H is the Higgs doublet, and Λ is the dimensionful parameter for the spin-2 interactions. We note that c i (i = 1, 2, 3), c ψ , and c H are dimensionless couplings for the KK graviton. Depending on the spin of dark matter, s = 0, 1/2, 1, denoted as S, χ and X, respectively, the energy-momentum tensor for dark matter, T DM μν , takes the standard form for a real scalar, a Dirac fermion, or a massive vector field. In the later discussion, we focus on the couplings of the spin-2 mediator to quarks, leptons and massless gauge bosons in the SM, as well as on the dark matter couplings. We treat the SM-to-mediator couplings as independent parameters, but take them to be universal, for simplicity as well as for unitarity considerations.
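Written out, the couplings listed above correspond to the standard linear coupling of a spin-2 field to the energy-momentum tensors of the individual species. A compact schematic form consistent with this notation (not necessarily identical, term by term, to the full Lagrangian of the original references; c_DM here denotes the dark matter coupling) is

\mathcal{L}_{\rm int} \;=\; -\frac{1}{\Lambda}\, G^{\mu\nu} \left( c_1\, T^{B}_{\mu\nu} + c_2\, T^{W}_{\mu\nu} + c_3\, T^{g}_{\mu\nu} + c_\psi\, T^{\psi}_{\mu\nu} + c_H\, T^{H}_{\mu\nu} + c_{\rm DM}\, T^{\rm DM}_{\mu\nu} \right),

where T^i_{μν} is the energy-momentum tensor of the corresponding field.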
Dark matter annihilations and self-scattering
In this section, we discuss the Boltzmann equations for determining the relic density of dark matter and show the details for the cross sections for 2 → 2 direct annihilations. In particular, we obtain for the first time the new results for 2 → 2 forbidden channels, 3 → 2 assisted annihilations, and DM self-scattering.
First, we consider the Boltzmann equation for the relic density of real scalar dark matter S or vector dark matter X, given in Eq. (3.1). Similarly, for Dirac fermion dark matter χ, the corresponding Boltzmann equation for the total number density n DM = n χ + n χ̄ is given in Eq. (3.2). Henceforth, we assume that the spin-2 particle is in thermal equilibrium with the SM plasma during freeze-out, so we can take n G = n eq G , the number density in thermal equilibrium.
Direct annihilations
We focus on the cases with relatively light WIMP dark matter and light dark matter below the WW threshold, which annihilate dominantly into the SM fermions or massless gauge bosons.
If dark matter is heavier than the W W threshold, we can also take into account the DM annihilations into the electroweak sector, as shown in Ref. [27][28][29], allowing for smaller couplings of the spin-2 mediator to the SM particles for a correct relic density. In this work, however, for WIMP dark matter, we take the spin-2 mediator couplings to the SM quarks and gluons to be nonzero in simplified models. For consistency of gauge-invariant couplings, we choose c 1 = c 2 = c H = 0 in the electroweak sector and c l = 0 for SM leptons in the discussion for WIMP. On the other hand, for light dark matter below the W W threshold, we keep all the spin-2 mediator couplings to the SM to be nonzero.
In the case when dark matter is heavier than the spin-2 mediator, dark matter can also annihilate directly into a pair of spin-2 particles, reducing the dark matter abundance further together with the direct annihilations into the SM.
In the case where 2 → 2 annihilation channels are dominant, the Boltzmann equations (3.1) or (3.2) reduce to the standard form with a single effective annihilation cross section, from which the relic density for WIMP dark matter follows by the usual freeze-out calculation.
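As a rough cross-check of the numbers such a calculation produces, the textbook analytic freeze-out estimate can be coded in a few lines. This is a sketch using the standard Kolb-Turner-style approximation for an s-wave cross section, with an assumed g* = 86.25 and the usual approximate coefficients, not the full numerical treatment of this paper.

from math import log, sqrt

M_PLANCK = 1.22e19          # GeV
GEV2_TO_CM3S = 1.17e-17     # 1 GeV^-2 of sigma*v expressed in cm^3/s

def relic_density(m_dm_gev, sigma_v_cm3s, g_dof=2, g_star=86.25):
    # Standard analytic estimate of Omega h^2 for an s-wave annihilation cross section.
    sigma_v = sigma_v_cm3s / GEV2_TO_CM3S            # convert back to GeV^-2
    x_f = 20.0                                       # iterate x_f = m/T at freeze-out
    for _ in range(20):
        x_f = log(0.038 * g_dof * M_PLANCK * m_dm_gev * sigma_v / sqrt(g_star * x_f))
    omega_h2 = 1.07e9 * x_f / (sqrt(g_star) * M_PLANCK * sigma_v)
    return x_f, omega_h2

print(relic_density(100.0, 3e-26))   # roughly x_f ~ 22 and Omega h^2 ~ 0.1: the WIMP ballpark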
Scalar dark matter
The annihilation cross section for scalar dark matter into a pair of SM fermions, SS → ψψ, is given in Refs. [27-31]; here N c is the number of colors of the SM fermion ψ, and Γ G is the width of the spin-2 particle. The annihilation of scalar dark matter into the SM fermions is d-wave suppressed, so scalar dark matter is not subject to indirect constraints from cosmic rays and Cosmic Microwave Background (CMB) measurements [27-29].
When m S > m G , scalar dark matter can also annihilate into a pair of spin-2 particles through the t/u-channels [27-32], and this channel becomes dominant for sizable spin-2 couplings to dark matter. For light dark matter, the DM annihilations into photons or gluons are relevant; for sub-GeV dark matter, the DM annihilations into mesons must be considered instead of those into gluons. The annihilation cross sections of scalar dark matter into a pair of massless gauge bosons are given in Refs. [27,28]. For 2m S ≲ 1.5 GeV, instead of the annihilation into a gluon pair, we should consider the annihilation cross section of scalar dark matter into a meson pair, where c π ≃ c q in the limit of small momenta of the produced pions, because chiral perturbation theory applies. We also need to include the annihilation of scalar dark matter into charged pions and kaons, if kinematically allowed.
Fermion dark matter
The annihilation cross section for fermion dark matter, χχ → ψψ, is given in Refs. [27-31]. The annihilation of fermion dark matter into the SM fermions is p-wave suppressed. Thus, similarly to the case of scalar dark matter, fermion dark matter is not subject to indirect constraints from cosmic rays and CMB measurements [27-29].
When m χ > m G , fermion dark matter also annihilates into a pair of spin-2 particles through the t/u-channels [27-31]. The resulting annihilation cross section is s-wave, so it becomes dominant in determining the relic density for fermion dark matter.
For light fermion dark matter, the annihilation cross sections into a pair of massless gauge bosons and into a pair of mesons are given in Refs. [27,28]. For 2m χ ≲ 1.5 GeV, we need to include the annihilation channel into a pion pair. Similarly, the annihilation of fermion dark matter into charged pions and kaons, if kinematically allowed, should also be included.
Vector dark matter
The annihilation cross section for vector dark matter, X X → ψψ, is given in Refs. [27-31]. The annihilation of vector dark matter into quarks is s-wave. In this case, smaller spin-2 mediator couplings to the SM quarks or to vector dark matter can be consistent with the correct relic density, as compared to the other cases. Moreover, the CMB measurement for the recombination era can rule out vector dark matter masses below 100 GeV if the relic density is determined solely by the direct annihilation into the SM particles. On the other hand, indirect detection signals from the annihilation of vector dark matter are promising [27-29].
For m X > m G , vector dark matter also annihilates into a pair of spin-2 particles through the t/u-channels [27-32]. For light vector dark matter, the annihilation cross sections into a pair of massless gauge bosons and into a pair of mesons are given in Refs. [27,28]. For 2m X ≲ 1.5 GeV, we also need to include the annihilation into a pion pair. Similarly, the annihilation of vector dark matter into charged pions and kaons, if kinematically allowed, should also be included.
Forbidden channels
When dark matter is lighter than the spin-2 mediator but their masses are comparable, that is, m DM ≲ m G , the annihilation of dark matter into a pair of spin-2 particles is forbidden at zero temperature, but it is kinematically allowed due to the tail of the Boltzmann distribution of dark matter at finite temperature, making the so-called forbidden channels relevant for determining the DM abundance. In this subsection, we consider the forbidden channels in association with the spin-2 mediator.
In the case when the forbidden channels are dominant, the Boltzmann equations take the corresponding form with the forbidden annihilation cross sections. Here, Δ G = (m G − m DM )/m DM , and g DM is the number of degrees of freedom of dark matter, with g DM = 1, 4, 3 for real scalar, Dirac fermion and vector dark matter, respectively; we have used the detailed balance condition for the forbidden channels. Moreover, for m DM < m G , the forbidden rates are obtained from the cross sections for the inverse annihilation channels. As a result, the relic density for forbidden dark matter follows as in Ref. [13]. There is a Boltzmann suppression factor in the effective annihilation cross sections for the forbidden channels, so larger couplings of dark matter to the spin-2 mediator are needed for the correct relic density, as compared to the case with allowed 2 → 2 channels for m DM > m G .
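For orientation, the detailed-balance relation referred to above takes the standard form obtained from non-relativistic equilibrium densities (a schematic expression; the exact prefactors of the paper may differ), with g G = 5 the number of polarisation states of the massive spin-2 particle and x = m_DM/T:

\langle\sigma v\rangle_{\rm DM\,DM\to GG} \;=\; \frac{(n^{\rm eq}_G)^2}{(n^{\rm eq}_{\rm DM})^2}\,\langle\sigma v\rangle_{GG\to \rm DM\,DM} \;\simeq\; \frac{g_G^2}{g_{\rm DM}^2}\,(1+\Delta_G)^3\, e^{-2\Delta_G x}\,\langle\sigma v\rangle_{GG\to \rm DM\,DM},

which makes explicit the Boltzmann suppression factor e^{-2Δ_G x} mentioned in the text.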
Gravity-mediated 3 → 2 processes
Scalar dark matter can annihilate by SSS → SG, which can be dominant over the forbidden channels, SS → GG, for m_S < m_G < 2m_S. Similarly, the 3 → 2 processes for fermion dark matter (χχχ → χG) and vector dark matter (XXX → XG) can be important for m_χ < m_G < 2m_χ and m_X < m_G < 2m_X, respectively. Thus, we choose m_DM < m_G < 2m_DM in order for the 3 → 2 processes to be kinematically open and for the hidden-sector 2 → 2 annihilations to be forbidden. In this subsection, we consider the assisted 3 → 2 channels with the spin-2 mediator for the first time. When the 3 → 2 annihilation processes are dominant, the Boltzmann Eq. (3.1) is modified accordingly, with the corresponding 3 → 2 annihilation cross sections for scalar and fermion dark matter. As a result, the relic density for SIMP dark matter [9-11,13] can be obtained. We note that the 3 → 2 annihilation cross sections are highly suppressed for perturbative couplings in most of the parameter space, so they are sub-dominant in determining the relic density, as compared to the previously discussed 2 → 2 annihilation channels. Therefore, we do not consider the SIMP option in the later discussion.
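For orientation, the schematic form of the Boltzmann equation commonly used for 3 → 2 (SIMP-like) freeze-out is sketched below in LaTeX; this is our own illustrative form, with symmetry factors absorbed into the effective cross section, and not the paper's Eq. (3.1):

\dot n_{\rm DM} + 3H\, n_{\rm DM}
 = -\langle\sigma v^2\rangle_{3\to 2}\,
   \left(n_{\rm DM}^3 - n_{\rm DM}^2\, n_{\rm DM}^{\rm eq}\right).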
Dark matter self-scattering
The spin-2 mediator can also mediate the self-scattering of dark matter, in particular for fermion and vector dark matter, for which there is no renormalizable interaction for self-scattering. We can take the gravity-mediated processes to be dominant for dark matter self-scattering and consider the interplay between the relic density condition and the small-scale problems in galaxies.
For scalar dark matter, the self-scattering cross section for SS → SS, divided by the DM mass, is given in the Born approximation. For fermion dark matter, the self-scattering cross sections from χχ → χχ and χχ̄ → χχ̄ (and the conjugate process), divided by the DM mass, are similarly obtained. Finally, for vector dark matter, the self-scattering cross section for XX → XX, divided by the DM mass, follows in the same way. We note that for both scalar and fermion dark matter, the DM self-scattering cross section depends only weakly on the DM velocity. In the case of scalar dark matter, there is an s-channel contribution with the spin-2 mediator too, but it is velocity-suppressed by the overall factor. On the other hand, for vector dark matter, the DM self-scattering cross section could be enhanced at a particular DM velocity due to the s-channel resonance [44,45], so it would be possible to accommodate a velocity-dependent self-interaction, compatible with galaxy clusters such as the Bullet Cluster [22-24].
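As a small, self-contained numerical aid (not taken from the paper), the snippet below converts a self-scattering cross section from natural units into σ/m in cm²/g, the quantity compared against the ~1 cm²/g Bullet Cluster scale; the input numbers in the example call are purely illustrative.

import numpy as np

# Unit conversions in natural units (hbar = c = 1):
GEV2_TO_CM2 = 3.894e-28   # 1 GeV^-2 = 0.3894 mb = 3.894e-28 cm^2
GEV_TO_G = 1.783e-24      # 1 GeV/c^2 = 1.783e-24 g

def sigma_over_m(sigma_gev2, m_dm_gev):
    """sigma/m in cm^2/g for a cross section in GeV^-2 and a DM mass in GeV."""
    return sigma_gev2 * GEV2_TO_CM2 / (m_dm_gev * GEV_TO_G)

# Illustrative (hypothetical) numbers, not the paper's formulas:
print(sigma_over_m(sigma_gev2=1.0e2, m_dm_gev=0.1))   # ~0.22 cm^2/g, below the ~1 cm^2/g scale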
Unitarity bounds
As we regard the massive spin-2 particle as a mediator for dark matter in the effective theory, it is important to check the consistency of the spin-2 interactions with unitarity and perturbativity. In this subsection, we briefly discuss this issue for dark matter annihilation and self-scattering.
From the DM annihilation cross sections for DM DM → GG, given in Eqs. (3.7), (3.12) and (3.17), the corresponding scattering amplitudes grow with the dark matter mass in the limit r_DM = (m_G/m_DM)² ≪ 1, and they are bounded by partial wave unitarity as in Eq. (3.38). Similarly, for dark matter self-scattering, the corresponding scattering amplitudes also grow with the dark matter mass, so the unitarity bounds from them are less significant for scalar and vector dark matter, or comparable for fermion dark matter. Thus, it is sufficient to impose the unitarity bounds from DM DM → GG.
As a result, the unitarity bounds impose lower bounds on the spin-2 mediator mass depending on the spin of dark matter, as follows. Therefore, the case with fermion dark matter is subject to the weakest unitarity bound. Recently, there has been a similar discussion of the unitarity bound on the massive graviton [43], based on the Compton scattering process, DM G → DM G, which can set a similar unitarity bound at high energies as for DM DM → GG. In the next section, we take the above unitarity bounds into account when constraining the parameter space with the correct relic density, in particular for WIMP dark matter.
Detection of dark matter and mediator couplings
We give a phenomenological discussion of the spin-2 mediator in DM-nucleon elastic scattering, DM-electron elastic scattering, the g − 2 of leptons, meson decays and direct production at colliders. We present, for the first time, a complete discussion of DM-nucleon scattering in the presence of both quark and gluon couplings, of DM-electron scattering, and of the relevance of unitarity at colliders.
DM-nucleon elastic scattering
The scattering amplitude between DM and SM particles through the spin-2 mediator [32] is written in the limit of a small momentum transfer, as follows. First, the elastic scattering amplitude between dark matter and a nucleon [32] is given accordingly. For direct detection experiments, we can consider only the contributions from quarks and gluons in a nucleon, as follows. Then, we get the trace part in the effective theory for three quark flavors (u, d, s) and gluons, where scale anomalies from light quarks and gluons are separately taken into account. Moreover, the traceless part (twist-2 operators) for five quark flavors (u, d, s, c, b) and gluons is given as well. As a result, the nuclear matrix elements for the trace part are expressed in terms of f^N_Tq and f_TG, the mass fractions of light quarks and gluons in a nucleon, respectively, obtained in the effective theory for three quark flavors. For universal spin-2 couplings with c_q = c_g, we obtain the standard results. On the other hand, the nuclear matrix elements for the traceless part [46,47] are written in terms of q(2), q̄(2) and G(2), the second moments of the parton distribution functions (PDFs) of the quarks, antiquarks and gluon, respectively. The mass fractions are f^p_Td = 0.032 and f^p_Ts = 0.020 for a proton and f^n_Tu = 0.017, f^n_Td = 0.041 and f^n_Ts = 0.020 for a neutron [46,47]. The second moments of the PDFs are calculated at the scale μ = m_Z using the CTEQ parton distributions as G(2) = 0.48, u(2) = 0.22, ū(2) = 0.034, d(2) = 0.11, d̄(2) = 0.036, s(2) = s̄(2) = 0.026, c(2) = c̄(2) = 0.019 and b(2) = b̄(2) = 0.012 [46,47].
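As a quick sanity check on the quoted CTEQ second moments (our own check, not part of the paper's analysis), the momentum sum rule requires the quark, antiquark and gluon second moments to add up to unity; the values above do so to within about half a percent.

# Second moments of the PDFs at mu = m_Z as quoted in the text.
moments = {
    "G": 0.48,
    "u": 0.22, "ubar": 0.034,
    "d": 0.11, "dbar": 0.036,
    "s": 0.026, "sbar": 0.026,
    "c": 0.019, "cbar": 0.019,
    "b": 0.012, "bbar": 0.012,
}
total = sum(moments.values())
print(total)   # ~0.994, consistent with the momentum sum rule (= 1)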
Then, using the results in Appendix A, the total cross section for spin-independent elastic scattering between dark matter and a nucleus [32] is obtained in terms of the reduced mass of the DM-nucleus system, the target nucleus mass m_A, and Z, A, the number of protons and the mass number, respectively. The nucleon form factors are given by the same formula for all spins of dark matter, with DM = χ, S, X for fermion, scalar and vector dark matter, respectively. Here, as compared to our previous work [32], we have included the twist-2 gluon operator at tree level as well as loop effects from heavy quarks and gluons in the trace part.
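To illustrate how nucleon-level quantities of this kind enter the number reported by direct-detection experiments, here is a minimal sketch assuming the standard spin-independent form σ_SI = μ_A² [Z f_p + (A − Z) f_n]²/π; the effective couplings f_p, f_n stand in for the nucleon form factors of the omitted equations, and the numbers in the example call are hypothetical.

import numpy as np

M_P = 0.938  # proton mass in GeV

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def sigma_si_nucleus(m_dm, A, Z, f_p, f_n, m_nucleon=M_P):
    """Standard spin-independent DM-nucleus cross section in GeV^-2."""
    mu_A = reduced_mass(m_dm, A * m_nucleon)
    return mu_A**2 / np.pi * (Z * f_p + (A - Z) * f_n) ** 2

def sigma_si_per_nucleon(m_dm, f_p):
    """Per-nucleon normalization compared against experimental exclusion curves."""
    mu_p = reduced_mass(m_dm, M_P)
    return mu_p**2 / np.pi * f_p**2

# Illustrative call for a xenon target (A = 131, Z = 54) with hypothetical couplings (GeV^-2):
print(sigma_si_nucleus(m_dm=100.0, A=131, Z=54, f_p=1e-9, f_n=1e-9) * 3.894e-28, "cm^2")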
DM-electron elastic scattering
For light dark matter below the GeV scale, DM-nucleon elastic scattering loses sensitivity for dark matter searches because the nucleon recoil energy falls below the detection threshold. Then, DM-electron elastic scattering is relevant for direct detection [9-11]. The corresponding cross sections relevant for direct detection are independent of the spin of dark matter, given as follows, where we assumed m_DM ≫ m_e in the second line.
Moreover, the graviton mediator should keep dark matter in kinetic equilibrium [5-9] during freeze-out. In this case, independent of the spin of dark matter, the momentum relaxation rate for the kinetic equilibrium of light dark matter is dominated by the corresponding elastic scattering. When the spin-2 mediator couples to leptons, it gives an extra contribution to the anomalous magnetic moment of leptons, as follows [48], where A(y) is a monotonically decreasing function, given with L(x, y) = x²y² + 1 − x. For m_G ≫ m_l, the loop function A(x) is approximated as in Eq. (4.21) [49]. We note that the deviation of the anomalous magnetic moment of the muon between the experimental and SM values is given in [50,51], amounting to a 3.6σ discrepancy from the SM [51]. Furthermore, a 2.4σ discrepancy has been reported between the SM prediction for the anomalous magnetic moment of the electron and the experimental measurements [52-56].
The decay width of a down-type quark q_1 decaying into another down-type quark q_2 and G is given for m_G < m_q1 with m_q2 = 0 [63], as follows, where V_f1 and V_f2 are the relevant CKM matrix elements.
Mediator production at colliders
The massive spin-2 particle can be produced singly from gluon fusion or quark/antiquark scattering at the LHC, decaying into the SM particles or a pair of dark matter. Moreover, at intensity-beam experiments or linear colliders, we may also constrain non-universal lepton and photon couplings through the photon energy distribution from e⁺e⁻ → γG. First, we obtain the squared amplitude for e⁺e⁻ → γG as a function of the scattering angle θ. For c_γ = c_e, the squared amplitude behaves like |M|² ∼ s² for s ≫ m_G² [64-68], which is expected from the dimension-5 interaction of the spin-2 mediator, −(1/Λ) G_μν T^μν. However, for c_γ ≠ c_e, the squared amplitude becomes |M|² ∼ s³/(m_G⁴ Λ²), which signals the violation of unitarity at a lower energy. A similar phenomenon was observed in the QCD process, qq̄ → gG [64-68], for which c_g ≠ c_q would give rise to a similar dependence of the corresponding squared amplitude on the center of mass energy.
For c_γ = c_e, the production cross section for e⁺e⁻ → γG with unpolarized electrons and positrons is given as follows. Thus, the angular differential cross section becomes independent of s for s ≫ m_G², as expected from the behavior of the squared amplitude. A similar conclusion can be drawn for qq̄ → gG at the LHC. The above result will be used for imposing the bounds from invisible and visible searches at BaBar in Fig. 8 of the next section.
Bounds on WIMP
Dijet and dilepton searches at the LHC can constrain relatively heavy spin-2 resonances [69,70]. Although not yet sensitive enough, searches for a heavy dijet resonance plus an ISR photon or jet might be interesting for constraining non-universal quark and gluon couplings through the jet p_T distribution from qq̄ → gG at the LHC and future hadron colliders [64]. Direct detection bounds from XENON1T [1], LUX [2], PandaX [3], etc., are most stringent for weak-scale or heavier dark matter.
For weak-scale spin-2 resonances, the LHC dijet searches are not sensitive due to the large QCD background. Then, searches for a dijet resonance plus an ISR photon [71] or jet [72,73] can constrain this case. In the presence of a dark matter coupling to the spin-2 resonance, the invisible decay of the spin-2 particle with a mono-jet or mono-photon is also promising [25,26,74,75]. [Fig. 1 caption: Parameter space of m_G/Λ vs m_DM for WIMP dark matter. The relic density is satisfied along the red solid, blue dashed and orange dotted lines for fermion, scalar and vector dark matter, respectively. The gray region is excluded by XENON1T and the light blue region by ATLAS dijet searches. Universal spin-2 mediator couplings to the SM and dark matter are taken. The purple region is ruled out by partial wave unitarity for scalar or vector dark matter.] In Figs. 1 and 2, we depict the parameter space of m_G/Λ vs m_DM in the former and m_DM vs m_G in the latter, satisfying the correct relic density, in red solid, blue dashed and orange dotted lines for fermion, scalar and vector dark matter, respectively. We took universal couplings of the spin-2 mediator to all the SM quarks and gluons, as well as to dark matter. The light blue region is excluded by the bounds from dijet resonance + ISR photon [71] or jet [72,73] searches, and the gray region by the bound on the DM-nucleon spin-independent cross section from the XENON1T direct detection experiment [1]. Moreover, part of the parameter space (in purple) where dark matter is heavier than the spin-2 mediator is disfavored by the violation of partial wave unitarity for scalar or vector dark matter, as discussed in Eqs. (3.39)-(3.41). As shown in Fig. 2, in a wide parameter space away from the resonance, the unitarity constraints turn out to be weaker than the XENON1T bound.
We find from Fig. 1 that for a weak-scale spin-2 mediator, the relic density region below m_DM < m_G/2 is disfavored by the ATLAS dijet bounds. The XENON1T bound becomes stronger above m_DM > m_G/2, leaving only the region with m_DM ≳ 200 GeV or larger masses unconstrained, due to the dominance of the DM DM → GG channels. But in this case, the spin-2 mediator produced from the DM annihilation can decay into the SM particles, so indirect detection experiments with cosmic rays such as positrons, anti-protons and gamma-rays can constrain those large mass regions [27,28]. In Fig. 2, XENON1T rules out the non-resonance regions below m_DM ≃ 200 GeV or 160 GeV for the mediator scales Λ/c_q = 3 and 5 TeV, respectively, but leaves the resonance regions with m_G = 2m_DM untouched.
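As a rough cross-check of where the relic-density contours must lie (a standard back-of-the-envelope scaling, not the paper's full Boltzmann treatment), one can compare the annihilation cross section with the canonical thermal value:

SIGMA_V_THERMAL = 2.2e-26   # cm^3/s, canonical value giving Omega h^2 ~ 0.12 (rough)

def omega_h2_estimate(sigma_v_cm3_per_s):
    """Back-of-the-envelope WIMP relic abundance; accurate only to tens of percent."""
    return 0.12 * SIGMA_V_THERMAL / sigma_v_cm3_per_s

print(omega_h2_estimate(1.0e-26))   # ~0.26: under-annihilating, overproduces dark matter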
Bounds on light dark matter
In the case of light dark matter, we would need a light spin-2 mediator in order to make the annihilation cross section of dark matter sufficiently large. In this case, mono-photon + leptons at BaBar [76] and missing energy at BaBar [77], Belle-2 [78,79] and LHCb (for m_G > 10 GeV) [80], as well as beam dump experiments such as E137 at SLAC [81], NA64 at the CERN SPS [82], etc., can be important to constrain the light spin-2 mediator couplings, in particular the couplings to leptons and dark matter. There are also direct detection bounds on DM-electron scattering from the XENON10 [83-85], DarkSide-50 [86] and SENSEI [87] experiments, etc.
For a light spin-2 mediator, we can consider the bounds from γ + missing energy [77] or γ + leptons [76] at the BaBar experiment. For the former, the cosine of the scattering angle of the photon in the center of mass frame was restricted to |cos θ*_γ| < 0.6, and the center of mass energy was √s = 10.58 GeV. Then, we get the limits on the lepton couplings for m_G < 8 GeV from invisible and visible searches at BaBar, respectively, as follows. Here, we assumed BR(G → DM DM) = 1 in the former and BR(G → ll) = 1 in the latter; in general, the above bounds scale up by 1/√BR. The above limits, in particular from the invisible searches, will be improved by a factor of three in the lepton couplings at the Belle-2 experiment [78,79].
We remark that if we took non-universal couplings with c_γ ≠ c_e, the above bounds from BaBar would become stronger, due to the growth of the corresponding cross section.
Moreover, if the spin-2 mediator is much lighter than the K-meson or B-meson, we can approximate the above partial decay rate of a flavor-changing down-type quark from Eq. (4.24). Therefore, from the current limits on the invisible decays of K⁺ or B⁺, we can put bounds on the quark couplings. As a result, the bounds on quark couplings from meson decays are relatively weaker than those on lepton couplings from BaBar, as will be shown below. When the spin-2 mediator is heavier than the mesons but dark matter is light enough, mesons can still decay invisibly into a pair of dark matter [62]. But in this case, the bounds on quark couplings become much weaker because of the phase-space suppression for three-body decays of mesons. In Figs. 3, 4 and 5, we show the parameter space for light dark matter below the GeV scale satisfying the correct relic density, in the plane of c_e/Λ vs m_G in the former and c_DM/Λ vs m_G in the latter two. For Fig. 3, we took m_DM < m_G, such that dark matter annihilates only into the SM particles, not into a pair of spin-2 mediators. In this case, we find that the graviton couplings to the SM particles satisfying the correct relic density would be strongly constrained by BaBar and other intensity experiments, except in the region near the resonance. On the other hand, for Figs. 4 and 5, we took m_DM > m_G, for which dark matter can annihilate into a pair of spin-2 mediators. In this case, even for a small graviton coupling to the SM particles, for instance Λ/c_e = 10 TeV or 100 TeV in Figs. 4 and 5, for which the current experimental constraints are satisfied, we can achieve the correct relic density in a wide range of parameter space for the dark matter coupling and the spin-2 mediator mass. We note that the DM annihilation into a pair of spin-2 mediators is s-wave, so the spin-2 mediators produced from the DM annihilation decay into the SM particles and inject energy into electrons and photons, affecting the CMB recombination [88]. But the spin-2 mediator can couple very weakly to the SM, while still being consistent with the correct relic density, such that it is long-lived at least until the era of the CMB recombination.
We have also checked in Figs. 3, 4 and 5 that the DM self-scattering cross sections in the parameter space explaining the relic density are much below σ_self/m_DM = 1 cm²/g, the Bullet Cluster bound [22-24]. We also noted that the unitarity bounds given in Eqs. (3.39)-(3.41) are satisfied there. We note that the difference between the DM (c_DM/Λ) and lepton (c_e/Λ) couplings can be explained by the localization of dark matter and leptons at different positions in the extra dimension. For instance, in the RS model, light dark matter can be localized on the IR brane with a small IR scale, whereas the SM leptons are localized towards the UV brane [27,28].
In Figs. 6 and 7, we present the relic density as a function of the mass difference, Δ_G ≡ (m_G − m_DM)/m_DM, with forbidden channels included. These plots illustrate the role of the forbidden channels in determining the relic density when the spin-2 mediator is slightly heavier than dark matter. In this case, the annihilation of dark matter into a pair of spin-2 mediators is possible only at nonzero temperature, thus leading to a Boltzmann suppression factor for the corresponding annihilation cross section. For each of Figs. 6 and 7, we have chosen m_DM = 1 and 10 GeV on the left and right. We took Λ/c_DM = 10 GeV for both, and Λ/c_e = 10 TeV and 100 TeV for Figs. 6 and 7, respectively.
We find that the correct relic density for vector dark matter can be obtained with smaller couplings to the spin-2 mediator and sub-GeV DM masses, due to a mild phase-space suppression for m_G ≳ m_DM. On the other hand, for scalar or fermion dark matter, dark matter masses should be about 10 GeV or larger for the correct relic density to be consistent with perturbativity, due to significant phase-space suppressions for m_G ≳ m_DM. [Fig. 5 caption fragment: same as Fig. 4, except for c_e/Λ = (100 TeV)⁻¹.] The forbidden channels are s-wave but get suppressed as the velocity of dark matter decreases in the later stage of the universe and in local galaxies. Thus, the forbidden channels are safe from the indirect bounds from cosmic rays or CMB recombination. In particular, it is remarkable that sub-GeV vector dark matter with m_DM ≲ m_G can be consistent with both the relic density and indirect detection bounds, while being compatible with perturbativity.
In Fig. 8, we impose various experimental and theoretical constraints on the parameter space of c_e/Λ vs m_G. We chose the spin-2 mediator mass and dark matter coupling as m_G = m_DM/0.498 and Λ/c_DM = 1 GeV on the left, and m_G = m_DM/1.5 and Λ/c_DM = 100 GeV on the right. We note that for both plots of Fig. 8, the DM self-scattering cross sections in the parameter space of our interest are well below the Bullet Cluster bound.
In the left plot of Fig. 8, the spin-2 mediator can decay dominantly into a pair of dark matter in most of the parameter space satisfying the relic density, shown in red solid, blue dashed and orange dotted lines for fermion, scalar and vector dark matter, respectively. So, the bound from invisible searches at BaBar applies to the whole parameter space below m_G = 8 GeV, excluding the relic density region for scalar dark matter below m_G = 0.8 GeV but constraining the counterparts for fermion or vector dark matter less strongly. The future Belle-2 results [78,79] could improve the limits or probe a larger portion of the relic density regions. We also show the (g − 2)_μ favored region in green and orange at the 1σ and 2σ levels, respectively, but it is excluded by BaBar for universal lepton couplings. (For c_e ≠ c_μ, however, the (g − 2)_μ favored region can be made compatible with the bounds from BaBar; this is possible if leptons are localized at different locations in the warped extra dimension.) In the same plot, we show the gray contours of the DM-electron scattering cross section with σ_DM−e = 10⁻⁴⁴ and 10⁻⁴⁸ cm², but most of the parameter space survives the current direct detection bounds on light dark matter, such as those from the XENON10, DarkSide-50 and SENSEI experiments. We note that, as shown in the results (4.32) and (4.33), the bounds from K⁺ → π⁺ + G or B⁺ → K⁺ + G with G → invisible are much weaker than the BaBar invisible searches, so they are not shown in Fig. 8.
On the other hand, in the right plot of Fig. 8, as shown in Figs. 4 and 5, we do not need large graviton couplings to the SM particles in the region with m_DM > m_G, because dark matter can annihilate directly into a pair of spin-2 mediators. Therefore, the relic density can be determined almost independently of the graviton couplings to the SM particles, so much of the parameter space with the correct relic density is compatible with the current experiments. In this case, the spin-2 mediator decays only into the SM particles, so the mono-photon + leptons search at BaBar applies, limiting the lepton couplings to the spin-2 mediator. In the same plot, we also show the gray contours of the DM-electron scattering cross section with σ_DM−e = 10⁻⁴⁸ and 10⁻⁵² cm², so most of the parameter space is not yet constrained by direct detection. We also noted that the unitarity bounds given in Eqs. (3.39)-(3.41) are satisfied in the parameter space of the plots in Fig. 8.
Spin-2 mediators from the warped extra dimension
We can regard the spin-2 mediator as the first Kaluza-Klein (KK) mode of the graviton from a warped extra dimension, or as a composite state in a dual conformal field theory. In the case of the warped extra dimension, there are heavier KK modes of the graviton, which can be summed up to modify the DM processes, such as DM annihilation and scattering.
After compactification of the warped extra dimension, nonzero cubic self-couplings for KK gravitons appear in the low-energy theory, and they could in principle contribute to the calculations of DM annihilation and scattering processes. As the initial 5D gravity theory with the warped extra dimension is ghost-free, there must be no ghost problem in the resulting 4D effective gravity theory. The quadratic and cubic self-couplings for KK gravitons are also present in the 4D effective theory, and they could change the calculations of the DM annihilations, DM DM → GG. Moreover, a complete analysis at the non-linear level with cubic self-couplings for KK gravitons would also be relevant for constructing a consistent model of the massive spin-2 particle without a ghost problem at the non-linear level [89-93], for showing how the violation of unitarity is delayed to a higher energy [43], and for pinning down the UV nature of spin-2 mediators. Related to this issue, there are attempts in the literature to construct a consistent, ghost-free framework for a massive spin-2 particle with self-interactions, in the context of massive gravity [91,92] or bi-gravity [93].
In this work, we did not attempt to tackle the detailed calculations of the DM annihilations, DM DM → GG with KK gravitons, or the ghost problem of a massive spin-2 particle at the non-linear level. Instead, we assumed that there are only five physical degrees of freedom for a massive spin-2 particle and introduced the interactions of the massive spin-2 particle with matter in the form of energy-momentum tensors. We took the Pauli-Fierz mass term for the massive spin-2 particle and its matter couplings at the linear level, so there is no ghost problem at this level. The mass term for a massive spin-2 particle leads to the non-conservation of the energy-momentum tensor, proportional to the mass term, which is attributed to the breakdown of translational invariance in the warped extra dimension or of conformal symmetry in a dual field theory.
In this section, motivated by two benchmark models with a warped extra dimension that will be described later, we keep only the linear couplings for a tower of KK gravitons and discuss the impacts of those KK gravitons on DM s-channel annihilations into the SM particles and on DM scattering processes. For this purpose, the linear couplings of the KK gravitons are sufficient. We first summarize the KK graviton masses and couplings for the two benchmark models with the warped extra dimension and then discuss the effects of the heavier KK modes in determining the relic density, the direct detection bounds, and the direct production of KK gravitons at colliders, in that order. At the end of the section, we remark on the impacts of non-linear interactions of spin-2 mediators and the unitarity constraint on the DM annihilations, DM DM → GG, and discuss those issues in the ghost-free realization of the massive spin-2 particle.
Spin-2 mediator masses and couplings
The KK modes of the graviton in the Randall-Sundrum (RS) model [37] are spaced almost equally. So, if dark matter is lighter than about twice the mass of the first KK mode, the heavier KK modes would not change our discussion with the first KK mode only by much. Otherwise, we need to include the heavy KK resonances explicitly. On the other hand, in the 5D continuum limit of the clockwork model, the so-called linear dilaton model [38-41,94-97], the KK modes of the graviton are almost degenerate with a mass gap from the zero mode, which is challenging for experimental tests [98,99]. So, it is crucial to include the heavier KK modes in the DM processes in this case.
Suppose that m_n are the KK graviton masses, and c_DM,n, c_SM,n are the couplings of the n-th KK mode to dark matter and the SM, respectively, which depend on the localization in the extra dimension. Here, dark matter and the SM particles can be localized on the IR brane, in which case dark matter has sizable couplings to the SM particles. But, when the SM particles are localized away from the IR brane, we can simply rescale c_SM,n to small values.
In the case where dark matter and the SM particles are localized on the IR brane, the KK graviton couplings are c_DM(SM),n = 1 in the RS model and c_DM(SM),n = (k_CW R) · n/(m_n R) in the CW model. Here, for the RS model, m_G = x_1 k_RS e^{−k_RS π R} with k_RS the AdS curvature scale, where x_n are the zeros of J_1(x_n) = 0, i.e. x_n = 3.83, 7.02, 10.17, 13.32 for n = 1, 2, 3, 4, which can be approximated by x_n = (n + 1/4)π + O(n⁻¹) for n ≫ 1, and R is the radius of the warped extra dimension. For the CW model, m_G = k_CW, with k_CW the 5D curvature scale. Moreover, the overall suppression scale for the massive graviton couplings is expressed in terms of M_P and M_5, the 4D and 5D Planck masses, respectively, and the relations between them were used in the second equality in each line. Therefore, the KK graviton mass and the KK graviton coupling can be chosen independently, attributed to the choice of the 5D curvature scale (k_RS or k_CW) and the radius of the extra dimension R. We note that the ratio of the first KK graviton mass to the suppression scale is fixed in the RS and clockwork models, so the ratio is limited to m_G/Λ ≲ O(1) for k_RS ≲ M_P and k_CW ≲ M_5, respectively.
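A short numerical sketch (ours, not from the paper) reproduces the quoted RS spectrum from the zeros of J_1 and its large-n approximation; the overall IR scale k_RS e^{−k_RS π R} is an arbitrary illustrative input.

import numpy as np
from scipy.special import jn_zeros

x = jn_zeros(1, 6)                        # zeros of J_1: 3.83, 7.02, 10.17, 13.32, ...
n = np.arange(1, 7)
print(np.round(x, 2))                     # [ 3.83  7.02 10.17 13.32 16.47 19.62]
print(np.round((n + 0.25) * np.pi, 2))    # large-n approximation x_n ~ (n + 1/4) pi

ir_scale = 1.0                            # k_RS * exp(-k_RS * pi * R) in TeV (illustrative)
m_kk = x * ir_scale                       # RS KK graviton masses m_n = x_n * k_RS * e^{-k_RS pi R}
print(np.round(m_kk, 2))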
The model dependence of the widths of the heavier KK gravitons is discussed in Appendix B. The effects of KK modes of the graviton on dark matter physics were discussed in the context of the RS model [27,28] and the continuum clockwork model [42]. The impacts of the double and triple interactions of KK gravitons have been discussed in Refs. [42] and [41] for dark matter annihilations and for decays of heavy KK modes, respectively. It would also be interesting to generalize the above discussion to the case of more general warped geometries [100].
In the following, we focus on the minimal interactions of the KK gravitons at the linear level, motivated by the warped extra dimension, and study the quantitative effects of such KK modes on dark matter annihilations into the SM particles and DM elastic scattering processes.
Dark matter annihilations
First, the KK modes contribute to the s-channels of dark matter annihilation into the SM particles, where A_s denotes the resonance-independent factor in the cross section. Then, using Eqs. (C.1) and (C.4) in Appendix C, we get the modified s-channel cross sections of scalar dark matter annihilating into a pair of SM fermions, whose masses are ignored, for the RS model and for the clockwork model, where the corresponding KK enhancement is about 8 for a scale of 3 TeV. But when scalar dark matter and the first KK graviton have similar masses, the contributions from higher KK modes are not significant. Similar conclusions can be drawn for fermion and vector dark matter.
Dark matter scatterings
The contributions of KK gravitons to the t-channels of DM-nucleon scattering and DM self-scattering are given, respectively, as follows, where A_t is the factor independent of the KK graviton propagator in the cross section, and SM stands for the nucleon for WIMP dark matter or the electron for light dark matter. The KK modes contribute similarly to the t-channels of DM-electron scattering for direct detection and kinetic equilibrium, with a similar approximate KK graviton propagator for small momentum transfer. We note that in the case of DM self-scattering, the t-channel contributions are dominant in the Born limit, so the above discussion of the t-channels is sufficient. First, for the DM-nucleus scattering in direct detection, using Eqs. (C.3) and (C.8), we only have to replace the effective nucleon couplings in Eqs. (4.12) and (4.13) by the sum over KK modes. Second, for the DM-electron scattering in direct detection, we can similarly replace the corresponding cross section in Eq. (4.14) by the expression in Eq. (5.14). Moreover, the momentum relaxation rate for kinetic equilibrium is modified as in Eq. (5.15). Finally, for DM self-scattering, the corresponding t-channel cross sections in the Born limit are also modified by the KK modes. As a consequence, for the RS model, the contributions of the heavier KK modes enhance the t-channel scattering cross sections for dark matter by about a factor of 3.4 relative to keeping the first KK mode only. For the CW model, the contributions from the heavier KK modes depend on the warp factor; they can be important for k_CW R ≲ 1.1, independent of the spin of dark matter. Therefore, in both models, we can make the direct detection bounds on the couplings of the first KK graviton less stringent by including the heavier KK modes in the t-channel scattering processes.
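The quoted factor of about 3.4 for the RS model can be reproduced numerically: with universal couplings c_n = 1 and m_n ∝ x_n, the effective t-channel coupling is proportional to the sum of 1/m_n² over KK modes, and the cross section scales with its square. The following check is our own illustration, not code from the paper.

import numpy as np
from scipy.special import jn_zeros

x = jn_zeros(1, 2000)                 # RS KK masses are m_n proportional to x_n (zeros of J_1)
coupling_sum = np.sum(1.0 / x**2)     # proportional to sum_n c_n / m_n^2 with c_n = 1
ratio = coupling_sum / (1.0 / x[0]**2)
print(ratio, ratio**2)                # ~1.84 and ~3.4 (cross-section enhancement vs first mode)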
KK graviton productions
Each of the heavier KK modes of the graviton can also be singly produced at the LHC, given a sufficiently large center of mass energy, with signatures similar to those of the first KK graviton. However, in the clockwork model, the KK graviton masses can be almost degenerate, namely, the mass difference between the (n+1)-th and n-th KK graviton masses can be very small. In this case, an almost continuous set of KK gravitons can be produced simultaneously, leading to a photon or lepton energy spectrum of periodic shape [98,99].
As we discussed in Sect. 4.4, another smoking-gun signal for the spin-2 mediator would be e⁺e⁻ → γG or qq̄ → gG, which could identify the signatures of the spin-2 mediator couplings. For s ≫ m_G², the heavier KK modes can also be produced at the LHC. In the RS model, the KK graviton masses are well separated, so we could search for the heavier KK modes in the same way as for the first KK graviton, as discussed in Sect. 4. On the other hand, in the clockwork model, an almost continuous set of KK gravitons could be produced in association with a mono-jet, decaying visibly or invisibly, so the resulting experimental signatures could be significantly different from those in the effective theory with a single spin-2 mediator.
Non-linear interactions of spin-2 mediator
As mentioned at the beginning of this section, non-linear interactions of KK gravitons also appear in the 4D effective theory, contributing to the DM annihilation channels such as DM DM → GG. There have been attempts to tackle the unitarity bound on the non-linear interactions of a massive spin-2 particle in the dRGT realization of the massive spin-2 particle [43,89,90], or to include the non-linear interactions in the scattering amplitudes of KK gravitons in the RS model [101].
In this section, we briefly discuss the effects of non-linear interactions on the unitarity bound from DM DM → GG, or DM G → DM G by crossing symmetry, in a model-independent way of realizing the massive spin-2 particle.
Perturbative unitarity can give an important constraint on the effective theory for the massive spin-2 particle. In particular, for the dark matter annihilation into a pair of spin-2 mediators, the unitarity scale depends on other couplings of the spin-2 mediator, such as the quadratic couplings to dark matter and the cubic self-couplings [43,89,90]. In particular, non-linear interactions for the massive spin-2 particle are important for the ghost-free realization of a massive spin-2 particle [89-92].
Fixing the quadratic coupling to dark matter and the cubic self-couplings of the massive spin-2 mediator appropriately in the dRGT gravity [89,90], the unitarity for DM G → DM G, or DM DM → GG by crossing symmetry, can be preserved at best up to the energy scale given in [43]. This result is in contrast with the case without non-linear interactions, for which unitarity would be violated at E_max ∼ (m_G² Λ/c_DM)^{1/3} [43], which is parametrically smaller than the one in the dRGT gravity for a light spin-2 mediator. Therefore, in the dRGT realization of the ghost-free spin-2 mediator, we require E_max ≳ m_DM at least in the regime where the DM annihilation processes are relevant. As a consequence, we have checked that the above unitarity constraint is satisfied in most of the parameter space for dark matter in the previous sections. It would be interesting to perform the detailed calculations of DM DM → GG in the dRGT effective theory of the massive spin-2 mediator with non-linear interactions, or in the specific benchmark models with the warped extra dimension considered in this section, but we plan to revisit this important issue in a future work.
Conclusions
We have explored the general production mechanisms for WIMP and sub-GeV light dark matter with arbitrary spin in scenarios of gravity-mediated dark matter. The spin-2 mediator interactions of dark matter, as well as of the SM particles, are constrained by direct and indirect detection, precision measurements and collider experiments. We showed that the parameter space where dark matter annihilates dominantly into the SM fermions is disfavored, due to direct detection and LHC dijet bounds for the weak-scale WIMP case, and mono-photon searches at the BaBar experiment for light dark matter. On the other hand, we found that when dark matter annihilates dominantly into a pair of spin-2 particles, in both the allowed and forbidden regimes, the model is consistent with the current bounds from direct detection and collider experiments. In particular, light dark matter with forbidden channels is not constrained by current indirect detection and CMB measurements. As compared to the papers on this topic in the literature, the new ingredients of this article can be summarized as follows. We made a complete analysis of the DM-nucleon elastic scattering by taking into account gluon couplings at tree level and loop corrections from heavy quarks, thus extending the previous results of Ref. [32] significantly. We also provided new results for the forbidden and 3 → 2 annihilation channels for light dark matter, DM self-scattering, DM-electron elastic scattering, and spin-2 mediator production at linear colliders. The new results for the complete treatment of DM-nucleon elastic scattering are important for constraining WIMP dark matter with XENON1T. On the other hand, the new results for light dark matter are crucial for finding viable models with a light massive spin-2 mediator. In particular, the new forbidden channels make light dark matter compatible with the CMB at recombination while the spin-2 mediator has sizable couplings to the SM. Moreover, we also presented concrete benchmark models for specific masses and couplings of the spin-2 mediator from the warped extra dimension.
Data Availability Statement
This manuscript has associated data in a data repository. [Authors' comment: The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.] Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP3.
Scalar dark matter
From the results, we obtain the scattering amplitude between fermion dark matter and the nucleon as follows, with the structure 2 m_N [c_q (q(2) + q̄(2)) + c_g G(2)].
Vector dark matter
The scattering amplitude between vector dark matter and the nucleon is also given as follows, with the structures −2 p_β k_{2α} (p · k_1) [c_q (q(2) + q̄(2)) + c_g G(2)] and 9 c_g f_TG η_{αβ} ū_N(p) u_N(p).
where we used m_n² R² = (k_CW R)² + n². For s ≫ k_CW², namely 4m_DM² ≫ m_G² for the s-channel annihilations of dark matter, the above KK sum is approximated by Eq. (C.6). Furthermore, for k_CW π R ≫ 1, the result is further approximated by Eq. (C.7). The KK sum relevant for the t/u-channels in the CW model is given with m_n = √(k_CW² + n²/R²), and we used Eq. (C.9). For k_CW π R ≫ 1, the above sum becomes Eq. (C.10).
"Physics"
] |
3D ULTRASOUND STRAIN IMAGING OF PUBORECTALIS MUSCLE
The female pelvic floor (PF) muscles provide support to the pelvic organs. During delivery, some of these muscles have to stretch up to three times their original length to allow passage of the baby, frequently leading to damage and consequently to later-life PF dysfunction (PFD). Three-dimensional (3D) ultrasound (US) imaging can be used to image these muscles and to diagnose the damage by assessing quantitative, geometric and functional information on the muscles through strain imaging. In this study we developed 3D US strain imaging of the PF muscles and explored its application to the puborectalis muscle (PRM), which is one of the major PF muscles. © 2020 The Author(s). Published by Elsevier Inc. on behalf of World Federation for Ultrasound in Medicine & Biology. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
INTRODUCTION
Female pelvic floor (PF) muscles provide support to the pelvic organs by compensating for gravity and abdominal pressure (Hoyte and Damaser 2016). The set of muscles located in the PF is collectively called the levator ani muscles (LAM). During vaginal delivery, the LAM is extended by approximately 245%, allowing the levator hiatus (LH) to widen during crowning (DeLancey et al. 2007; Shek and Dietz 2010; Tubaro et al. 2011; Dixit et al. 2014; Dieter et al. 2015). Vaginal delivery is associated with multiple LAM defects, all of which are risk factors for later-life pelvic floor dysfunction (PFD) (Dalpiaz and Curti 2006; Shek and Dietz 2010; Tubaro et al. 2011; Dixit et al. 2014; Dieter et al. 2015; Notten et al. 2017; de Araujo et al. 2018). PFD comprises disorders that include stress urinary incontinence (SUI), overactive bladder and pelvic organ prolapse (POP) (Bedretdinova et al. 2016; de Araujo et al. 2018). It has been reported that the primary cause of these late-age PFDs in women is damage to one or more of the LAM, although the symptoms of this damage can manifest years after the actual occurrence (Dietz and Simpson 2008; Dietz 2013; Shek and Dietz 2013).
Imaging plays a crucial role in the diagnosis of PFD. Magnetic resonance imaging (MRI) and ultrasound (US) are most frequently used to image the PF. These techniques are mainly used to image the anatomy to diagnose POP or SUI. Segmented organs in MRI or US images are used in biomechanical analysis to gain a better understanding of various pelvic organ disorders or specifically to diagnose POP (Akhondi-Asl et al. 2014; Onal et al. 2014, 2016; Nekooeimehr et al. 2016; Wang et al. 2018). Furthermore, biomechanical modeling using imaging as input has been performed to enhance understanding of the anatomy of the PF, aid in the diagnosis of various PF disorders and aid in surgical planning for corrective surgeries (Damaser et al. 2002; Parikh et al. 2002; Bellemare et al. 2007; Pu et al. 2007; Lee et al. 2009). Diagnosis of POP and SUI is also performed using anatomically significant reference points (e.g., bladder neck and anorectal angle) (Basarab et al. 2011; Czyrnyj et al. 2016). Current studies also include deformation analysis of the PF, which has been performed using PF organs such as the bladder, uterus and rectum (Rahim et al. 2009, 2011; Ogier et al. 2019). Lastly, research on the PF also includes vaginal tactile imaging, magnetic resonance defecography and the distribution representation of PF muscles in the human motor cortex (Egorov et al. 2010; Costa et al. 2014; Yani et al. 2018). In summary, the PF as a whole has been investigated, yet the PF muscles have not been evaluated individually in detail. As a result, functional information on PF muscles is scarce.
For the detection of PFD, however, it is crucial to have functional information on the PF muscles. Studies that have investigated the PF muscles conclude that the biomechanics of these muscles can be an indicator of PF disorders such as POP (Silva et al. 2017;Hu et al. 2019). The female PF muscles provide support to the pelvic organs. Therefore, apart from understanding the biomechanics of the entire PF, it is also important to determine the function of these muscles when undamaged, to serve as a reference when investigating damaged muscles. Quantification of functional information on and damage to PF muscles might be possible with strain imaging. Strain imaging can be performed using both MRI and US. However, 3D US imaging has several benefits with respect to MRI, for example, its ease of use, portability, minimal discomfort, relatively short period required for acquisition of the data and low price (Shek and Dietz 2010;Tubaro et al. 2011;Shek and Dietz 2013;Dixit et al. 2014;Dieter et al. 2015). Furthermore, good consistency between US and MRI in the imaging of PF muscles has been established (Yan et al. 2017).
US-based strain imaging can be used to investigate muscle movement and has been used extensively to understand and investigate the complex movements of the heart and skeletal muscles (Kalam et al. 2014;Gijsbertse et al. 2017). To date, US-based strain imaging is concentrated predominantly on 2D US images, whereas the PF muscles are complex 3D structures and their movements and deformations constitute an inherent 3D phenomenon. This means that the muscle has to be tracked in three dimensions to accurately quantify the 3D motion and deformation over time. Furthermore, the deformation has to be quantified in multiple directions to fully characterize the deformation of the muscle and to be able to identify dysfunctional or damaged parts of it.
In the work described here, we developed 3D US strain imaging specifically for PF muscles and investigated the puborectalis muscle (PRM), one of the major muscles of the LAM. To the best of our knowledge, no existing study has investigated the function of individual muscles of the female PF. The reason for investigating this muscle is twofold. First, this muscle is frequently damaged during childbirth, and that damage is the primary cause of SUI or POP later in life (Dietz and Simpson 2008;Dietz 2013;Shek and Dietz 2013). Second, the PRM forms the outline of the LH when it is viewed in the top view in US data of the female PF (Grob et al. 2014). The LH is one of the most important parameters assessed in transperineal ultrasound (TPUS) studies. Therefore, studying strain of the PRM first seems to us the most logical choice and would allow us to relate our results to the existing literature.
We hypothesized in this work that it is possible to quantify strain in three dimensions for the deforming PRM using a time series of volumetric US data. Strain imaging was performed in four nulliparous women (n = 4), who were asked to contract (n = 4) their PF muscles or perform both a contraction and a Valsalva maneuver (n = 1). We chose nulliparous women, that is, women who have not yet given birth because these women are considered to have intact PF muscles. Therefore, this study provides insight into how the undamaged muscle strains in three dimensions during activation.
Summarized, the aim of this study was to develop 3D US strain imaging of the PF muscles and to explore its application to the PRM, which is one of the PF muscles.
Data acquisition
Dynamic 3D TPUS volumes were acquired using a Philips X6-1 matrix transducer connected to an EPIQ 7G US machine (Philips Healthcare, Bothell, WA, USA) at the University Medical Centre (UMC), Utrecht, The Netherlands. Total acquisition length was 11–15 s at a volume rate of 1.5 Hz. US data were obtained over time for female volunteers (n = 4) who had never given birth (nulliparous). These women had overactive PFs, that is, chronically raised pelvic floor muscle tone. The PRMs in these women were intact and undamaged, as confirmed by the clinician. US volumes were recorded during two types of exercise. During the first exercise, contraction, the women were asked to actively contract their PF muscles, commencing and ending with the muscles in a state of rest. During the second exercise, they performed a Valsalva maneuver. A Valsalva maneuver is a moderately forceful exhalation against a closed airway. It is used to increase the abdominal pressure, which causes LAM distension and allows the clinician to assess the full extent of a POP (Hoyte and Damaser 2016). Data acquisition commenced at rest and ended at maximum Valsalva maneuver. The data were stored in the Digital Imaging and Communications in Medicine (DICOM) format. Table 1 summarizes the demographic characteristics (age and body mass index) and the exercises performed by the included volunteers.
The Medical Research Ethics Committee of UMC Utrecht exempted the project from approval, and all volunteers signed appropriate research consent forms.
PF imaged through TPUS
Figures 1 and 2 illustrate US data acquired from a PF in the rest state, as imaged with TPUS. In this example, we can observe the PRM in the rest state. The transducer was positioned against the PF while the women were in the supine position.
Both the bone pubic symphysis (PS) and the PRM can be visualized with ease in the sagittal view, as can the surrounding PF organs. In Figures 1 and 2, the PRM is the yellow region bordering the anal canal.
Data processing
The block diagram in Figure 3 illustrates the processing steps performed to calculate strain volumes of the PRM. The input comprised the recorded dynamic US volumes and the region of interest (ROI) for which strain was calculated. To obtain the ROI, the PRM was manually segmented by an experienced clinician in the initial US volume (rest before exercise) (van den Noort et al. 2018). The output was a set of accumulated strain volumes as a function of time. The influence of the ROI segmentation was assessed by decreasing the ROI and recalculating the displacement estimates using these volumes; the values obtained at these different sizes were identical for corresponding voxels. The processing sequence can be divided into four steps: volumetric data preparation, intervolume displacement estimation, tracking (involving an update of the ROI) and strain calculation.
Each processing step is explained in detail hereafter.
Volumetric data preparation
The first of the two inputs was the data from the US machine, which were in DICOM format. These data were first converted to a rectilinear format with the ".fld" extension using proprietary software called QLAB, Version 10.8 (Philips Healthcare, Andover, MA, USA). Conversion of the data was performed to allow import into MATLAB R2018a (The MathWorks, Inc., Natick, MA, USA), which was the program we used to develop our 3D strain analysis software. The total number of volumes per data set was 22, and each 3D volume contained 352 × 229 × 277 (X × Y × Z) pixels, uniformly sampled at distances of 0.42 × 0.60 × 0.34 mm (dx × dy × dz).
Intervolume displacement estimations
The next step was to calculate intervolumetric displacements. Displacements were first estimated between the first two volumes within the initial ROI (illustrated in Figs. 1 and 2 for volunteer 1). Intervolumetric displacements for each pair of subsequent volumes were estimated with a 3D normalized cross-correlation algorithm (Gijsbertse et al. 2017; Hendriks et al. 2016) optimized for PF muscles and the US system used in this study. In this algorithm, two subsequently recorded volumes were subdivided into 3D blocks called kernels and templates. The kernel and template sizes used for these volumes were 111 × 81 × 41 and 51 × 51 × 11 pixels, respectively. The kernels were matched on the templates, and the locations of the 3D cross-correlation peaks were calculated. The locations of these peaks indicated the displacements between the two blocks. To estimate subsample displacements, the cross-correlation peaks were interpolated (parabolic fit) (Hendriks et al. 2016). The displacement estimates were finally filtered using a 3D median filter.
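The following Python sketch illustrates the idea of the block-matching step: a small template from one volume is matched within a larger search region from the next volume by 3D normalized cross-correlation, and the peak is refined per axis with a parabolic fit. The block sizes, the kernel/template roles and the median filtering of the actual pipeline are simplified here, and the function names are our own.

import numpy as np

def ncc_map_3d(search, template):
    """Brute-force 3D normalized cross-correlation of a small template over a
    larger search region; returns the correlation map over all integer shifts."""
    out_shape = tuple(np.array(search.shape) - np.array(template.shape) + 1)
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros(out_shape)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            for k in range(out_shape[2]):
                s = search[i:i + template.shape[0],
                           j:j + template.shape[1],
                           k:k + template.shape[2]]
                s = s - s.mean()
                denom = np.sqrt((s ** 2).sum()) * t_norm
                out[i, j, k] = (s * t).sum() / denom if denom > 0 else 0.0
    return out

def subsample_peak(corr):
    """Integer peak location plus a per-axis parabolic (3-point) subsample refinement."""
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    peak = np.array(idx, dtype=float)
    for ax in range(3):
        i = idx[ax]
        if 0 < i < corr.shape[ax] - 1:
            sel = list(idx)
            sel[ax] = slice(i - 1, i + 2)
            y = corr[tuple(sel)].ravel()
            denom = y[0] - 2.0 * y[1] + y[2]
            if denom != 0:
                peak[ax] += 0.5 * (y[0] - y[2]) / denom
    return peak  # displacement = peak minus the template offset inside the search block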
Tracking
As the PRM changes (position and shape) from volume to volume, that is, changes with time, the position and shape of the ROI for displacement estimation also need to be updated over time. Otherwise, displacement estimation would no longer be performed for PRM tissue only, but would gradually shift to the surrounding tissue. The ROI coordinates (the position of the manually segmented PRM) for the next volume were updated using the displacement estimates calculated in the previous step. In the next step, displacements were calculated between the next two subsequently acquired volumes, and the ROI was updated again. The process of estimating intervolumetric displacements and updating the ROI is called tracking (Lopata et al. 2009, 2010). Tracking began after the displacements between the first two subsequently acquired volumes had been estimated using the initial ROI.
The input to this processing step was the filtered intervolume displacement estimates from the previous step. Filtering was done using a median filter (2 × 2 × 2 cm) to smooth the displacement estimates and remove outliers (Hansen et al. 2010; Hendriks et al. 2016). This was required for tracking.
Accumulation of the filtered displacement estimates was performed using eqn (1), where accum_disp_estmts denotes the accumulated displacement estimates in the z, x or y direction, inter_vol_disp_estmts denotes the intervolume displacement estimates in the z, x or y direction and n is the number of the US volume. The accumulated displacement estimates represent the total movement of the muscle up to the (n + 1)th volume or time point. These displacement estimates were obtained as the number of US grid points that a certain index had passed. The indexes were updated with these accumulated displacement estimates.
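A minimal sketch of the accumulation in eqn (1), as we read it; this simplification ignores the coordinate tracking and interpolation described in the next paragraph.

import numpy as np

def accumulate_displacements(inter_vol_disp):
    """inter_vol_disp: array of shape (n_pairs, Z, X, Y) holding the intervolume
    displacement estimates for one direction; returns the running (accumulated)
    displacement up to each volume, i.e. the cumulative sum over volume pairs."""
    return np.cumsum(np.asarray(inter_vol_disp), axis=0)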
Because the updated indexes are subsample values, the displacement estimates at the subsample points were obtained by linear interpolation from the eight surrounding sample points (Fig. 4). In this way, the muscle could be tracked throughout its complete deformation cycle. After each update of the ROI, it was checked visually on the respective US volume to verify that it coincided with the displaced muscle.
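A small sketch of the tracking update using SciPy: sampling the displacement volumes at the (subsample) ROI coordinates with order-1 map_coordinates performs exactly the eight-neighbour linear interpolation described above, and the ROI is then advected accordingly. Variable names are ours.

import numpy as np
from scipy.ndimage import map_coordinates

def advect_roi(roi_coords, disp_z, disp_x, disp_y):
    """roi_coords: (3, N) array of (z, x, y) grid coordinates of the ROI points.
    disp_*: displacement volumes (in grid units) defined on the same grid.
    Samples each displacement field at the subsample ROI positions with
    trilinear interpolation and returns the updated ROI coordinates."""
    dz = map_coordinates(disp_z, roi_coords, order=1, mode='nearest')
    dx = map_coordinates(disp_x, roi_coords, order=1, mode='nearest')
    dy = map_coordinates(disp_y, roi_coords, order=1, mode='nearest')
    return roi_coords + np.vstack([dz, dx, dy])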
Therefore, for this processing step, the inputs were the intervolume displacement estimates, and the outputs were the updated ROIs.
Strain calculations
Strain calculation was the last processing step. Accumulated displacements were calculated by summing the intervolumetric displacements up to each time point. The non-filtered intervolumetric displacements were used and filtered with a median filter kernel (1 × 1 × 1 cm) before accumulation (Hansen et al. 2010; Hendriks et al. 2016). A kernel size smaller than that in the tracking step was applied to avoid too much smoothing of the displacements, which would result in the absence of a gradient in the strain calculations. As the displacement estimates were filtered using a different kernel, interpolation was again performed, now for the updated indexes. In the next step, the interpolated displacement estimates were accumulated using eqn (1). These accumulated displacement estimates were used to calculate the 3D strain tensor using a 3D least-squares strain estimator (LSQSE) (Kallel and Ophir 1997).
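A one-dimensional sketch of the least-squares strain estimator idea (after Kallel and Ophir 1997): the local normal strain is the slope of a straight line fitted to the displacement within a sliding window along the corresponding axis. The 3D estimator applies the same fit along each axis of the accumulated displacement volumes; the window size and spacing below are placeholders.

import numpy as np

def lsq_strain_1d(displacement, window_samples, sample_spacing_mm):
    """Least-squares strain along one axis: slope of a linear fit to the
    displacement profile within a sliding window (units of mm/mm)."""
    half = window_samples // 2
    x = np.arange(-half, half + 1) * sample_spacing_mm
    strain = np.full(displacement.shape, np.nan)
    for i in range(half, len(displacement) - half):
        seg = displacement[i - half:i + half + 1]
        strain[i] = np.polyfit(x, seg, 1)[0]   # d(displacement)/d(position)
    return strain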
The contraction direction of the PRM is not aligned with the rectilinear coordinate system. To determine the major or principal component of the strain induced during contraction or Valsalva maneuver of the PRM, principal strains were calculated from the individual strain values in the z, x and y directions (Tuttle 2012). As we observed from the LSQSE results that strain during contraction is negative and strain during the Valsalva maneuver is positive, we chose the largest negative principal strain component for the data during contraction and the largest positive strain component for the data during the Valsalva maneuver.
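A short sketch of the principal-strain step at a single voxel: the symmetric 3 × 3 strain tensor is assembled from the normal and shear components and diagonalized; for contraction the most negative eigenvalue (and its eigenvector) is kept, for the Valsalva maneuver the most positive one. The component names are ours.

import numpy as np

def principal_strain(e_zz, e_xx, e_yy, e_zx, e_zy, e_xy, mode="contraction"):
    """Eigen-decompose the symmetric strain tensor at one voxel and return the
    principal strain magnitude and direction selected for the given exercise."""
    E = np.array([[e_zz, e_zx, e_zy],
                  [e_zx, e_xx, e_xy],
                  [e_zy, e_xy, e_yy]])
    vals, vecs = np.linalg.eigh(E)          # eigenvalues in ascending order
    i = 0 if mode == "contraction" else -1  # most negative vs. most positive
    return vals[i], vecs[:, i]

# Example: pure shortening of -20% along z with mild positive lateral strains.
val, direction = principal_strain(-0.20, 0.05, 0.03, 0.0, 0.0, 0.0)
print(val, direction)   # -0.20 and a unit vector along the z axis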
RESULTS
Accumulated displacement estimates are illustrated in Figures 5 and 6, and principal strain results in Figures 7 and 8, for two of the four volunteers. For volunteer 1, the time points shown are, respectively, muscle at rest, muscle at maximum contraction and muscle at rest post-contraction. In the case of volunteer 4, the time points are rest and maximum Valsalva maneuver. The principal strain magnitudes and principal strain directions are illustrated in the figures.
Accumulated displacement estimates
In volunteer 1, at the time point at which the muscle is at rest (Figure 5a, 5d, 5g, first column), the estimated displacements between the first two volumes are quite low in all directions, which is expected at rest. We observed this for displacement estimates of all volunteers.
Figure 5b, 5e, 5h (second column) shows the accumulated displacement results at maximum contraction. The displacement estimates in the z-direction are the highest, followed by the displacement estimates in the y-direction. In the x-direction, the displacement estimates are almost zero.
During contraction, in the z-direction, negative displacement estimates mean that the PRM is moving toward the bone PS and, thus, toward the US transducer. In the y-direction, displacement estimates are positive, which means that in this direction the muscle is moving away from the US transducer. There is very little lateral or side-to-side movement of the muscle, that is, in the x-direction. These movements, or lack of movement, with respect to the muscle at rest are illustrated in Figure 9b–d. In Figure 5c, 5f, 5i (third column), we see that the muscle is almost back at rest, and so the accumulated displacement estimates are again back to approximately zero values. The muscle does not completely return to the rest position, because these data sets were acquired in women who have overactive PFs. Thus, these women might take longer to return to the rest position post-contraction.
In the data sets acquired during the Valsalva maneuver, data acquisition was stopped when the muscle reached maximum Valsalva maneuver. The accumulated displacement results are illustrated in Figure 6. We observe that displacements are initially close to zero, as the muscle is at rest.
During the maximum Valsalva maneuver, the accumulated displacements are predominantly positive in the z-direction and negative in the x- and y-directions. This indicates elongation of the muscle during the Valsalva maneuver, as opposed to shortening during contraction. This deformation with respect to the muscle at rest is illustrated in Figure 10b–d.
The movements of the PRM during contraction and the Valsalva maneuver, in axial, sagittal and coronal views, are illustrated in the supplementary videos in the Supplementary Data (online only). In these videos, the dark gray area represents the position of the muscle at rest, and the yellow area, the muscle during contraction/Valsalva maneuver. The movement during contraction with respect to the bone PS is complementary in direction to that during the Valsalva maneuver.
Principal strain values
During contraction, as illustrated in Figure 7, it is observed that the major principal strain becomes more negative with increasing contraction. As the muscle returns to the rest position, the negative strain decreases but does not become zero.
The principal strain component directions change for all volunteers when the muscle contracts from rest and become predominantly aligned with the muscle fiber direction at maximum contraction. In Figure 7e, 7f, it can be seen that the direction of strain changes further when the muscle returns to the rest state after contraction.
As illustrated in Figure 8, the data for volunteer 4 contain a Valsalva maneuver. In this case, the strain at rest remains close to zero, as it did at rest in the contraction data sets, whereas during the maximum Valsalva maneuver the strain is positive, with a peak value of 60%.
The bottom row of Figure 8 illustrates the principal strain component directions. Once again, it is observed that the directions change when the muscle goes from rest to maximum Valsalva maneuver. Table 2 lists the spatial means of the principal strain (%) values over the PRM for all data sets. Mean principal strain values in the rest position were less than 3% for all five volunteers. The principal strains at maximum contraction ranged between −8.9% and −41.5%. For the Valsalva maneuver data set, the maximum principal strain was 38.6%. At the last time point, rest post-contraction, the strain had decreased with respect to the earlier time points but had not returned to its pre-contraction value.
DISCUSSION AND SUMMARY
To our knowledge, this is the first study in which 3D displacement and strain were estimated in the PRM. Normally, when clinicians examine the PF with TPUS, they can only visually examine the motion of the PF and measure the (relative) motion of certain specific anatomic landmarks; in other words, only a qualitative assessment can be made. The proposed algorithm, in contrast, allows quantitative determination of strain locally within the PRM.
The obtained results show that the strain during contraction and the strain during the Valsalva maneuver are complementary, which allows the algorithm to distinguish between these two opposite movements of the muscle. We also observe that the deformation of the muscle differs between contraction and the Valsalva maneuver. Because of the short data acquisition time, we can observe the muscle returning almost, but not completely, to rest post-contraction. For this initial study, we focused on strain estimation in the undamaged, intact PRM of nulliparous women performing a voluntary contraction or Valsalva maneuver, before focusing on strain estimation in patients with a complex pathology, for example, avulsions.
We observe in the last column of displacement estimates in Figure 5 that the muscle does not return to the exact rest position post-contraction. There are three possible explanations for this: a clinical one, one related to the hardware and a technical one. The clinical explanation is that the volunteers from whom the data were acquired had overactive PFs; the PRM may therefore take more time to return to its rest position post-contraction. For example, in volunteer 1, the maximum contraction occurs at volumes 13 and 14, which means that 7 of the 11 s of total data acquisition time had already passed, and the PRM does not return to rest within the remaining approximately 4 s. As the volunteers were supine and contracted their PF muscles only during data acquisition, motion of the volunteer, or global motion, can be ignored. The hardware explanation is that the data acquisition time was too short and ended before the muscle had returned to its rest position. Lastly, the technical explanation is that the tracking might not be ideal: small inaccuracies in the displacement estimates accumulate over time and introduce error in the tracking.
The complementary sign is visible in the displacement estimates and strain during the Valsalva maneuver (Figs. 6 and 8) compared with those observed during contraction (Figs. 5 and 7). It can be observed that the muscle has moved away from the US transducer in the z-direction, and there is clearly a change in the shape of the muscle. In the x-direction, one end of the muscle has moved more than the other end. This differs from the results for contraction, where there is little or no movement in the x-direction. This might be because the contracting muscle is expected to move toward the bone PS (in effect, the US transducer) to which it is attached and is not expected to move laterally or from side to side during contraction. In the Valsalva maneuver, while the muscle is elongating, deformation occurs in all three directions (z, x and y). In this volunteer, one arm of the muscle exhibits more displacement than the other in the x-direction, and the muscle has moved toward the transducer in the y-direction. Because we studied the Valsalva maneuver in only one volunteer, these observations cannot be generalized; the presence of these trends needs to be investigated in a larger sample.

The PRM is almost uniformly strained during contraction (Fig. 7a–c, first row), whereas it exhibits non-uniform strain when it returns to rest after contraction. A possible explanation is that, in the case of an overactive PF, different parts of the muscle take longer to return to zero strain. As the sample size was small in this study, it should be extended in future studies to investigate whether this trend is present in a larger group of women. Also, as illustrated in Table 2, there is a large variation between the strain (%) values obtained during contraction from the four volunteers. A possible reason is that different women have different levels of control over their PRMs during contraction; in addition, the deformation of the muscle during contraction might vary per woman. Larger sample sizes in future studies could provide a range of strain values, from minimum to maximum, for the undamaged PRM.
The directions of the principal strains can be observed in Figures 7d–f and 8d–f (second row). It can be observed that the strains are in the direction of the muscle fiber orientation.
Clinical significance
First, knowledge of the exact length by which the muscle has moved and deformed is beneficial because it allows us to assess numerically how far a woman can move her muscle voluntarily. We can then compare how much movement is expected in a normal, undamaged muscle with that in a damaged muscle. Moreover, it would provide clinicians and pelvic physiotherapists with a quantitative tool to follow patients' improvement during treatment (e.g., PF muscle training). Second, when the muscle is observed in the image displayed on the US machine, it is difficult to assess quantitatively which part of the muscle is displaced more and which is displaced less. As illustrated in the results, for undamaged muscles it is possible to identify which part of the muscle is displaced more from the different colors of the displacement estimates in the figures. For example, for the displacement estimates in the z-direction in volunteer 1 (Fig. 5e), the "sling" of the PRM moved more than the two ends that are attached to the bone PS. In the results for volunteer 4 (Fig. 6), during the Valsalva maneuver, different parts of the PRM move dissimilarly in all three directions (z, x and y). These observations give us an idea of how the undamaged muscle moves; a comparison can then be made with the movement of damaged muscle. Lastly, the exact time point, or more specifically the volume, at which the muscle reaches maximum contraction/Valsalva maneuver can be determined for a given data set. This can be useful in TPUS as a means of automatically arriving at the specific volume of maximum contraction/Valsalva maneuver, thus reducing a source of variability in TPUS assessments.
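As a hedged illustration of this last point, the volume at maximum contraction or Valsalva maneuver could be selected automatically by averaging the principal strain over a PRM segmentation mask for every volume and taking the extreme value; the input arrays and names below are hypothetical and not part of the paper's pipeline.

```python
import numpy as np


def peak_volume(principal_per_volume, prm_mask, maneuver="contraction"):
    """principal_per_volume: (n_volumes, nz, nx, ny) voxelwise principal strain
    (fractional); prm_mask: boolean (nz, nx, ny) segmentation of the PRM.
    Returns the index of the peak volume and the mean strain (%) per volume."""
    means = np.array([100.0 * vol[prm_mask].mean() for vol in principal_per_volume])
    if maneuver == "contraction":
        idx = int(np.argmin(means))   # most negative mean strain: peak shortening
    else:                             # "valsalva"
        idx = int(np.argmax(means))   # most positive mean strain: peak elongation
    return idx, means
```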
In this study, the primary reason for calculating strain induced in the PRM was to quantify the deformation and strain in undamaged PRM. When there is damage or scar tissue formation in the muscle, it might be of clinical significance to assess the exact position of the damage through the different strain (%) values in different parts of the muscle along with the directions of the strain values. Thereafter, it might also be possible to quantify in three dimensions which part of the muscle is damaged.
To further investigate PF muscles through 3D strain, future studies should include larger sample sizes for both undamaged PRM and complex PRM pathologies. Other LAMs should also be investigated to learn how the muscles behave in relation to each other.
"Medicine",
"Engineering"
] |